Like humans, artificial minds can ‘learn by thinking’

Some of the greatest discoveries come not merely from observation but from thinking. Einstein developed his theories of relativity through thought experiments, and Galileo derived insights about gravity through mental simulations. A review published September 18 in the journal Trends in Cognitive Sciences shows that this kind of learning is not exclusive to humans: artificial intelligence, too, can correct itself and arrive at new conclusions through “learning by thinking.”

“There are some recent demonstrations of what looks like learning by thinking in AI, particularly in large language models,” says author Tania Lombrozo, a professor of psychology and co-director of the Natural and Artificial Minds initiative at Princeton University. “Sometimes ChatGPT will correct itself without being explicitly told. That’s similar to what happens when people are engaged in learning by thinking.”

Lombrozo identifies four routes to learning by thinking shared by humans and AI: through explanation, simulation, analogy, and reasoning, learners can acquire new information without external input. In humans, explaining how a microwave works to a child might reveal gaps in our own understanding. Rearranging furniture in the living room often involves forming a mental image to simulate different layouts before making any physical changes. Downloading pirated software may initially seem morally acceptable until one draws an analogy to the theft of physical goods. And if you know that a friend’s birthday falls on a leap day and that tomorrow is a leap day, you can reason that your friend’s birthday is tomorrow.

AI shows similar learning processes. When asked to elaborate on a complex topic, a model may correct or refine its initial response based on the explanation it produces. The gaming industry uses simulation engines to approximate real-world outcomes, and models can take the outputs of those simulations as inputs for further learning. Asking a language model to draw analogies can lead it to answer questions more accurately than it would if asked directly. And prompting a model to reason step by step can lead it to answers it would fail to reach with a direct query.
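To make that last point concrete, the sketch below contrasts a direct query with a prompt that asks the model to reason step by step before answering. It is a minimal illustration only: the ask helper, the stub model, and the example question are assumptions made for this sketch, not part of the review or of any particular provider’s API.

```python
# Minimal sketch: direct query vs. a prompt that asks the model to reason
# step by step first. `model_call` stands in for any function that maps a
# prompt string to a response string (e.g., a wrapper around a chat API).

def ask(model_call, question: str, step_by_step: bool = False) -> str:
    """Send `question` to `model_call`, optionally asking the model to lay out
    its reasoning before stating a final answer."""
    prompt = question
    if step_by_step:
        prompt += "\nThink through the problem step by step, then give your final answer."
    return model_call(prompt)


if __name__ == "__main__":
    # Stub model so the sketch runs on its own; swap in a real client call.
    def echo_model(prompt: str) -> str:
        return f"[model response to: {prompt!r}]"

    question = "A train departs at 2:40 pm and the trip takes 95 minutes. When does it arrive?"
    print(ask(echo_model, question))                     # direct query
    print(ask(echo_model, question, step_by_step=True))  # prompted to reason first
```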

“This poses the question of why both natural and artificial minds have these characteristics. What function does learning by thinking serve? Why is it valuable?” says Lombrozo. “I argue that learning by thinking is a kind of ‘on-demand learning.’”

When you learn something new, you don’t know how the information may serve you in the future. Lombrozo says people can squirrel away the knowledge for later, deferring the cognitive effort of thinking it through until a context makes that effort relevant and worthwhile.

Lombrozo acknowledges the challenge of defining the boundaries between reasoning, learning, and other high-level cognitive functions, an ongoing area of debate within cognitive science. The review also raises questions of its own, some of which Lombrozo plans to explore further, such as whether AI systems are actually “thinking” or simply mimicking the outputs of such processes.

“AI has gotten to the point where it’s so sophisticated in some ways, but limited in others, that we have this opportunity to study the similarities and differences between human and artificial intelligence,” says Lombrozo. “We can learn important things about human cognition through AI and improve AI by comparing it to natural minds. It’s a pivotal moment where we’re in this new position to ask these interesting, comparative questions.”