
Yeah, the explanation is just shallow enough to seem correct and deceive someone who doesn't grasp the subject really well. No clue how they let it pass; that, without even mentioning the subpar diagram it created, really didn't seem like something miles better than what previous models could already do.


> No clue how they let it pass

It’s very common to see AI evangelists taking its output at face value, particularly when it’s about something that they are not an expert in. I thought we’d start seeing less of this as people get burned by it, but it seems that we’re actually just seeing more of it as LLMs get better at sounding correct. Their ability to sound correct continues to increase faster than their ability to be correct.


> Their ability to sound correct continues to increase faster than their ability to be correct

Sounds like a core skill for management. Promote this man (LLM).


This is just like the early days of Google search results, "It's on the Internet, it must be true".


Hilarious how much time the team spent promising GPT-5 had fewer hallucinations and deceptions.

Meanwhile the demo seems to suggest business as usual for AI hallucinations and deceptions.


> Yeah, the explanation is just shallow enough to seem correct and deceive someone who doesn't grasp the subject really well.

This is the problem with AI in general.

When I ask it about things I already understand, it’s clearly wrong quite often.

When I ask it about something I don’t understand, I have no way to know if its response is right or wrong.


This is the headline for all LLM output past "hello world".



