Over the years I've worked with plenty of people who "sound like confident, knowledgeable humans, often ones who tell us what we most want to hear" but who don't actually know anything.
LLMs' shortcoming is in their name: Large Language Models. They don't actually know anything; they just string words together in confident and believable ways that are usually correct. The more you use LLMs, the more you realize AGI feels so close but is still far away. It's like full self-driving cars: Tesla was "almost there" 10 years ago, and now it works in "almost all" situations, but the same will be said years from now.
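For what it's worth, here's a toy sketch of what "stringing words together" means mechanically: the model only ever picks a plausible next token given the tokens so far. The probability table below is invented purely for illustration and has nothing to do with a real model's learned weights.

```python
import random

# Illustrative only: a made-up table of "given these tokens so far,
# how likely is each next token". Real LLMs learn this implicitly
# over an enormous vocabulary and context, but the loop is the same idea.
NEXT_TOKEN_PROBS = {
    ("the",): {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    ("the", "cat"): {"sat": 0.6, "ran": 0.4},
    ("the", "dog"): {"barked": 0.7, "slept": 0.3},
    ("the", "answer"): {"is": 1.0},
}

def generate(prompt, max_tokens=3):
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if dist is None:
            break
        # Sample the next token in proportion to its probability --
        # fluent, confident-sounding output, with no check against the world.
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the"]))
```

The point of the sketch is just that nothing in the loop models truth or the world; it only models what text tends to follow other text.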
Agreed, this is a sticking point for me too -- I'm having stronger and stronger feelings about this, especially about the claims that LLMs and their engineers have somehow "solved" language. I think it's pernicious. I don't think LLMs point to AGI. Maybe we'll be proven wrong.
"They can’t efficiently learn algorithms on their own. They don’t seem able to create novel inferences. They can and do confidently hallucinate."
Depending on how you define hallucinate, novel, algorithm, etc., doesn't this describe many people? I include myself, at least some of the time.
Well, people definitely do hallucinate. But in contrast to deep learning models, people DO learn algorithms on their own, and create novel inferences all the time. Now, in fairness, in my most recent post I discuss an LLM comparing itself to a flight simulator in a way that provoked a really interesting exchange. But I don't know if that was truly a novel inference, or if it "heard it somewhere." I think the key point, which I talk about in my most recent post, is that LLMs don't really "understand things" in the sense of forming coherent, stable mental models of the world. They very effectively reproduce patterns in the existing body of human-written text. This lets them look very much AS IF they are forming mental models, even when they really aren't.
See my comment on "What is a Large Language Model (part 2)" for ongoing discussion.