How AI Knows Things No One Told It

Researchers are still struggling to understand how AI models trained to parrot Internet text can perform advanced tasks such as running code, playing games and trying to break up a marriage

Pixelated illustration of a parrot and owl. Credit: Chris Gash

No one yet knows how ChatGPT and its artificial-intelligence cousins will transform the world, and one reason is that no one really knows what goes on inside them. Some of these systems' abilities go far beyond what they were trained to do—and even their inventors are baffled as to why. A growing number of tests suggest these AI systems develop internal models of the real world, much as our own brain does, although the machines' technique is different.

“Everything we want to do with them in order to make them better or safer or anything like that seems to me like a ridiculous thing to ask ourselves to do if we don't understand how they work,” says Ellie Pavlick of Brown University, one of the researchers working to fill that explanatory void.

At one level, she and her colleagues understand GPT (short for “generative pre-trained transformer”) and other large language models, or LLMs, perfectly well. The models rely on a machine-learning system called a neural network. Such networks have a structure modeled loosely after the connected neurons of the human brain. The code for these programs is relatively simple and fills just a few screens. It sets up an autocorrection algorithm, which chooses the most likely word to complete a passage based on laborious statistical analysis of hundreds of gigabytes of Internet text. Additional training ensures the system will present its results in the form of dialogue. In this sense, all it does is regurgitate what it learned—it is a “stochastic parrot,” in the words of Emily Bender, a linguist at the University of Washington. (Not to dishonor the late Alex, an African Grey Parrot who understood concepts such as color, shape and “bread” and used corresponding words intentionally.) But LLMs have also managed to ace the bar exam, write a sonnet about the Higgs boson and make an attempt to break up their users' marriage. Few had expected a fairly straightforward autocorrection algorithm to acquire such broad abilities.
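
To make the “autocorrection” description concrete, here is a deliberately toy sketch, in Python, of the loop such a program sets up. The vocabulary and probabilities are invented for illustration; a real LLM computes these scores with billions of learned connections and typically samples from them rather than always taking the single most likely word.

```python
# Toy sketch of the autocomplete loop described above, not GPT's architecture.
# `next_word_probabilities` stands in for the trained neural network, which
# assigns a probability to every candidate next word given the text so far.
def next_word_probabilities(context: list[str]) -> dict[str, float]:
    # Invented numbers; a real model derives these from its training data.
    if context and context[-1] == "the":
        return {"cat": 0.6, "dog": 0.3, "theorem": 0.1}
    return {"the": 0.5, "a": 0.3, "it": 0.2}

def generate(prompt: list[str], n_words: int) -> list[str]:
    text = list(prompt)
    for _ in range(n_words):
        probs = next_word_probabilities(text)
        text.append(max(probs, key=probs.get))  # append the most likely word
    return text

print(generate(["feed", "the"], 2))  # ['feed', 'the', 'cat', 'the']
```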


That GPT and other AI systems perform tasks they were not trained to do, giving them “emergent abilities,” has surprised even researchers who have been generally skeptical about the hype over LLMs. “I don't know how they're doing it or if they could do it more generally the way humans do—but they've challenged my views,” says Melanie Mitchell, an AI researcher at the Santa Fe Institute.

“It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world—although I do not think that it is quite like how humans build an internal world model,” says Yoshua Bengio, an AI researcher at the University of Montreal.

At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further and showed that GPT can execute code, too. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. “It's multistep reasoning of a very high degree,” he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong, which suggests the system wasn't just parroting the Internet. Rather, it was performing its own calculations to reach the correct answer.
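
For readers curious what such a prompt looks like, here is a minimal sketch, in Python, of the kind of program one might paste into the chat window and ask the model to trace. It is an illustrative stand-in, not Millière's exact code.

```python
# Illustrative stand-in for the kind of program typed into the chatbot
# (not Millière's exact prompt): compute the 83rd Fibonacci number.
def fib(n):
    """Return the n-th Fibonacci number, with fib(1) = fib(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, a + b
    return b

print(fib(83))  # the model is asked to predict what this program prints
```

Getting the right output requires simulating the loop step by step rather than looking the answer up, which is why the demonstration counts as multistep reasoning.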

Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgment that GPT on its own should not be able to run code, its inventor, tech company OpenAI, has since introduced a specialized plug-in—a tool ChatGPT can use when answering a query—that allows it to do so. But that plug-in was not used in Millière's demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context—a situation similar to how nature repurposes existing capacities for new functions.

This impromptu ability demonstrates that LLMs develop an internal complexity that goes well beyond a shallow statistical analysis. Researchers are finding that these systems seem to achieve genuine understanding of what they have learned. In one study presented in May at the International Conference on Learning Representations, doctoral student Kenneth Li of Harvard University and his AI researcher colleagues—Aspen K. Hopkins of the Massachusetts Institute of Technology; David Bau of Northeastern University; and Fernanda Viégas, Hanspeter Pfister and Martin Wattenberg, all at Harvard—spun up their own smaller copy of the GPT neural network so they could study its inner workings. They trained it on millions of matches of the board game Othello by feeding in long sequences of moves in text form. Their model became a nearly perfect player.

To study how the neural network encoded information, they adopted a technique that Bengio and Guillaume Alain, also at the University of Montreal, devised in 2016. They created a miniature “probe” network to analyze the main network layer by layer. Li compares this approach to neuroscience methods. “This is similar to when we put an electrical probe into the human brain,” he says. In the case of the AI, the probe showed that its “neural activity” matched the representation of an Othello game board, albeit in a convoluted form. To confirm this, the researchers ran the probe in reverse to implant information into the network—for instance, flipping one of the game's black marker pieces to a white one. “Basically we hack into the brain of these language models,” Li says. The network adjusted its moves accordingly. The researchers concluded that it was playing Othello roughly like a human: by keeping a game board in its “mind's eye” and using this model to evaluate moves. Li says he thinks the system learns this skill because it is the most parsimonious description of its training data. “If you are given a whole lot of game scripts, trying to figure out the rule behind it is the best way to compress,” he adds.
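
As a rough illustration of the probing idea (a sketch of the general technique, not the study's code), a probe is typically just a small classifier trained to read a property of interest—here, the contents of each board square—out of one layer's hidden activations while the language model itself stays frozen. The layer width and board encoding below are assumed values chosen for the example.

```python
import torch
import torch.nn as nn

# Assumed sizes for illustration: a 512-dimensional hidden state and an
# 8 x 8 Othello board whose squares are empty, black or white (3 classes).
HIDDEN_DIM, N_SQUARES, N_STATES = 512, 64, 3

# The probe: a small network mapping one layer's activations to a predicted
# board. If it can be trained to do this accurately, the board state must be
# encoded somewhere in those activations.
probe = nn.Sequential(
    nn.Linear(HIDDEN_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, N_SQUARES * N_STATES),
)

def probe_loss(hidden, board):
    """hidden: (batch, HIDDEN_DIM) activations captured from the language model.
    board: (batch, N_SQUARES) integer labels giving each square's true contents."""
    logits = probe(hidden).view(-1, N_STATES)
    return nn.functional.cross_entropy(logits, board.view(-1))

# Only the probe's parameters are trained; the language model is left untouched.
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
```

Running the probe “in reverse,” as the researchers did, means nudging the captured activations until the probe's predicted board changes—flipping one piece, say—and then watching whether the model's subsequent moves respond to the edited board.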

This ability to infer the structure of the outside world is not limited to simple game-playing moves; it also shows up in dialogue. Belinda Li (no relation to Kenneth Li), Maxwell Nye and Jacob Andreas, all at M.I.T., studied networks that played a text-based adventure game. They fed in sentences such as “The key is in the treasure chest,” followed by “You take the key.” Using a probe, they found that the networks encoded within themselves variables corresponding to “chest” and “you,” each with the property of possessing a key or not, and updated these variables sentence by sentence. The system had no independent way of knowing what a box or key is, yet it picked up the concepts it needed for this task. “There is some representation of the state hidden inside of the model,” Belinda Li says.
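
To make that concrete, here is a toy, explicit version (a constructed example, not the authors' code) of the kind of state the probes appeared to recover: a few entities, each with a property for holding the key, updated sentence by sentence.

```python
# Explicit toy state tracker mirroring what the probes suggest the network
# represents implicitly; the entities and update rule are invented examples.
state = {"chest": {"has_key": True}, "you": {"has_key": False}}

def apply_sentence(sentence: str, state: dict) -> dict:
    if sentence == "You take the key.":
        state["chest"]["has_key"] = False
        state["you"]["has_key"] = True
    return state

apply_sentence("You take the key.", state)
print(state)  # {'chest': {'has_key': False}, 'you': {'has_key': True}}
```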

Researchers marvel at how much LLMs are able to learn from text. For example, Pavlick and her then Ph.D. student Roma Patel found that these networks absorb color descriptions from Internet text and construct internal representations of color. When they see the word “red,” they process it not just as an abstract symbol but as a concept that has certain relations to maroon, crimson, fuchsia, rust, and so on. Demonstrating this was somewhat tricky. Instead of inserting a probe into a network, the researchers studied its response to a series of text prompts. To check whether it was merely echoing color relations from online references, they tried misdirecting the system by telling it that red is in fact green—like the old philosophical thought experiment in which one person's red is another person's green. Rather than parroting back an incorrect answer, the system's color evaluations changed appropriately to maintain the correct relations.
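
One way to picture such a test (an illustrative sketch, not the authors' actual protocol): pose prompts such as “Which color is more similar to red: maroon or green?” and check whether the model's answers track distances in an ordinary RGB color space. The snippet below computes only the geometric reference side of that comparison; the coordinates are standard approximations.

```python
# Reference side of an illustrative consistency check (not the study's code).
colors = {
    "red": (255, 0, 0),
    "maroon": (128, 0, 0),
    "crimson": (220, 20, 60),
    "green": (0, 128, 0),
}

def rgb_distance(a: str, b: str) -> float:
    """Euclidean distance between two named colors in RGB space."""
    return sum((x - y) ** 2 for x, y in zip(colors[a], colors[b])) ** 0.5

# The model's one-word answers to prompts like the one above can then be
# scored against the geometric answer.
print(min(["maroon", "green"], key=lambda c: rgb_distance("red", c)))  # maroon
```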

Picking up on the idea that to perform its autocorrection function the system seeks the underlying logic of its training data, machine-learning researcher Sébastien Bubeck of Microsoft Research suggests that the wider the range of the data, the more general the rules the system will discover. “Maybe we're seeing such a huge jump because we have reached a diversity of data, which is large enough that the only underlying principle to all of it is that intelligent beings produced them,” he says. “And so the only way to explain all of the data is [for the model] to become intelligent.”

In addition to extracting the underlying meaning of language, LLMs can learn on the fly. In the AI field, the term “learning” is usually reserved for the computationally intensive process in which developers expose the neural network to gigabytes of data and tweak its internal connections. By the time you type a query into ChatGPT, the network should be fixed; unlike humans, it should not continue to learn. So it came as a surprise that LLMs do, in fact, learn from their users' prompts—an ability known as in-context learning. “It's a different sort of learning that wasn't really understood to exist before,” says Ben Goertzel, founder of AI company SingularityNET.

One example of how an LLM learns comes from the way humans interact with chatbots such as ChatGPT. You can give the system examples of how you want it to respond, and it will obey. Its outputs are determined by the last several thousand words it has seen. What it does, given those words, is prescribed by its fixed internal connections—but the word sequence nonetheless offers some adaptability. Entire websites are devoted to “jailbreak” prompts that overcome the system's “guardrails”—restrictions that stop the system from telling users how to make a pipe bomb, for example—typically by directing the model to pretend to be a system without guardrails. Some people use jailbreaking for sketchy purposes, yet others deploy it to elicit more creative answers. “It will answer scientific questions, I would say, better” than if you just ask it directly, without the special jailbreak prompt, says William Hahn, co-director of the Machine Perception and Cognitive Robotics Laboratory at Florida Atlantic University. “It's better at scholarship.”
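
As a concrete, benign illustration of steering by example (a generic prompt in the style of well-known few-shot demonstrations, not one taken from this article), the frozen network infers the task purely from the text it is given:

```python
# A classic few-shot prompt: no weights change, yet the model picks up the task.
prompt = """Translate English to French.
sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>"""
# A capable LLM typically completes this with "fromage," even though nothing
# in its fixed internal connections was updated by the two worked examples.
```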

Another type of in-context learning happens via “chain of thought” prompting, which means asking the network to spell out each step of its reasoning—a tactic that makes it do better at logic or arithmetic problems requiring multiple steps. (But one thing that made Millière's example so surprising is that the network found the Fibonacci number without any such coaching.)
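
A standard example of the technique (generic, not drawn from Millière's demonstration) simply appends an instruction to reason aloud:

```python
# Chain-of-thought prompting: the same question, with and without the nudge
# to spell out intermediate steps.
question = "A shop has 23 apples. It uses 20 of them and buys 6 more. How many does it have now?"
plain_prompt = question
cot_prompt = question + " Let's think step by step."
# With the added sentence, the model tends to write out the intermediate
# arithmetic (23 - 20 = 3, then 3 + 6 = 9) before answering 9, which tends to
# improve accuracy on multistep problems.
```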

In 2022 a team at Google Research and the Swiss Federal Institute of Technology in Zurich—Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov and Max Vladymyrov—showed that in-context learning follows the same basic computational procedure as standard learning, known as gradient descent. This procedure was not programmed; the system discovered it without help. “It would need to be a learned skill,” says Blaise Agüera y Arcas, a vice president at Google Research. In fact, he thinks LLMs may have other latent abilities that no one has discovered yet. “Every time we test for a new ability that we can quantify, we find it,” he says.
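
For reference, one step of ordinary gradient descent on a simple least-squares problem looks like the sketch below; the team's result is that, in a stylized setting, a transformer processing the examples in its prompt can be shown to carry out updates of essentially this form internally. The code is a generic refresher, not the paper's construction.

```python
import numpy as np

def gradient_descent_step(w, X, y, lr=0.1):
    """One gradient-descent update for least-squares regression.
    X, y play the role of the 'in-context examples' given in a prompt."""
    grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * mean squared error
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))            # eight example inputs
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                         # their labels
w = np.zeros(3)
for _ in range(2000):
    w = gradient_descent_step(w, X, y)
print(np.round(w, 2))                  # converges toward [ 1. -2.  0.5]
```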

Although LLMs have enough blind spots not to qualify as artificial general intelligence, or AGI—the term for a machine that attains the resourcefulness of animal brains—these emergent abilities suggest to some researchers that tech companies are closer to AGI than even optimists had guessed. “They're indirect evidence that we are probably not that far off from AGI,” Goertzel said in March at a conference on deep learning at Florida Atlantic University. OpenAI's plug-ins have given ChatGPT a modular architecture a little like that of the human brain. “Combining GPT-4 [the latest version of the LLM that powers ChatGPT] with various plug-ins might be a route toward a humanlike specialization of function,” says M.I.T. researcher Anna Ivanova.

At the same time, though, researchers worry the window may be closing on their ability to study these systems. OpenAI has not divulged the details of how it designed and trained GPT-4, in part because it is locked in competition with Google and other companies—not to mention other countries. “Probably there's going to be less open research from industry, and things are going to be more siloed and organized around building products,” says Dan Roberts, a theoretical physicist at M.I.T., who applies the techniques of his profession to understanding AI.

And this lack of transparency does not just harm researchers, says Mitchell of the Santa Fe Institute. It also hinders efforts to understand the social impacts of the rush to adopt AI technology. “Transparency about these models is the most important thing to ensure safety.”