Recently, I asked readers here to share how they're using large language models (LLMs) like ChatGPT to learn and study. Today, I'm rounding up some of those suggestions and trying to synthesize the advice for when (and when not) to use these tools for learning.

Using applications like ChatGPT requires some care. Part of the difficulty is that ChatGPT's human-like conversation abilities can be deceptive. Feeling like you're talking to a real person encourages you to rely on conversational expectations that may not hold with a machine. For instance, we generally expect that most people do not make up facts. Large language models, however, routinely violate this expectation by providing fluent answers that may be totally wrong.

Another expectation we have is that verbal fluency tracks other aspects of intelligence. We expect that someone who can spout lines from Shakespeare, explain quantum computing, and give a proof of the prime number theorem in rhyming verse would also be able to count. The metacognitive ability to know what you don't know is underdeveloped in these applications. Thus, naively treating LLMs like a really smart and knowledgeable person is likely to backfire. Simon Willison suggests thinking of LLMs as a "calculator for words," something that can do useful things with text, rather than as a general-purpose intelligence or smart person.

Those caveats aside, ChatGPT is clearly helpful for a range of tasks.