Discussion about this post

John Sikorski:

Over the years I've worked with plenty of people who "sound like confident, knowledgeable humans, often ones who tell us what we most want to hear" but don't actually know anything.

LLMs' shortcoming is right there in the name: Large Language Models. They don't actually know anything; they just string words together in confident, believable ways that are usually correct. The more you use LLMs, the more you realize AGI feels so close but is still far away. It's like full self-driving cars: Tesla was "almost there" ten years ago, and now it works in "almost all" situations, and the same will be said years from now.

Loarre:

"They can’t efficiently learn algorithms on their own. They don’t seem able to create novel inferences. They can and do confidently hallucinate."

Depending on how you define hallucinate, novel, algorithm, etc., doesn't this describe many people? I include myself, at least some of the time.

3 more comments...
