It can be difficult to understand how generative AI produces its output.
On March 27, Anthropic published a blog post introducing a tool for looking inside a large language model and tracing its behaviour. The goal is to answer questions such as: what language does its model Claude “think” in? Does the model plan ahead, or does it predict one word at a time? And do the AI’s own explanations of its reasoning actually reflect what is going on underneath?
On that last question, the answer is often no: in most cases, the explanation does not correspond to the model’s actual processing. Claude generates its own account of its reasoning, and that account can include hallucinations.
A microscope for “AI biology”
Anthropic published a paper on “mapping” Claude’s internal structures in May 2024, and the new paper, which describes the “features” a model uses to link concepts together, builds on that work. Anthropic frames the research as part of building a “microscope” for “AI biology.”
In the first paper, Anthropic researchers identified “features” linked by “circuits,” or paths from Claude’s input to its output. The second paper focuses on Claude 3.5 Haiku, examining ten behaviours to show how the model arrives at its conclusions. Anthropic discovered:
- Claude does plan ahead, especially on tasks such as writing rhyming poetry.
- The model includes a “conceptual space that is shared between languages.”
- Claude can “make up fake reasoning” when presenting its thought process to the user.
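To make the idea of “features” linked by “circuits” more concrete, here is a minimal, purely illustrative sketch in Python: it treats invented features as nodes in a directed graph and walks the path from an input prompt to an output token. The feature names and connections below are made up for demonstration and are not drawn from Anthropic’s papers.

```python
# Illustrative only: a toy directed graph of "features" linked by "circuits",
# tracing a path from an input prompt to an output token. The feature names
# and structure are invented, not taken from Anthropic's research.
from collections import deque

# Each key is a feature; each value lists the downstream features it feeds into.
circuit = {
    "input: 'the opposite of small is'": ["feature: smallness", "feature: oppositeness"],
    "feature: smallness": ["feature: antonym-of-small"],
    "feature: oppositeness": ["feature: antonym-of-small"],
    "feature: antonym-of-small": ["output: 'large'"],
    "output: 'large'": [],
}

def trace(graph, start):
    """Breadth-first walk from an input node, printing each feature reached."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        print(node)
        queue.extend(graph[node])

trace(circuit, "input: 'the opposite of small is'")
```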
The researchers worked out how Claude shares concepts across languages by looking at the overlap in how the model processes questions posed in different languages. For example, the prompt “the opposite of small is,” asked in various languages, is routed through the same features for the concepts of smallness and oppositeness.
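As a rough illustration of what “looking at the overlap” could mean in practice, the hypothetical sketch below assumes you could read out which features fire for each prompt and then measures how similar those sets are across languages. The feature labels and activation sets are invented for demonstration.

```python
# Hypothetical sketch: if the activated feature set for each prompt could be
# read out, one simple way to quantify a "shared conceptual space" is the
# overlap (Jaccard similarity) between those sets across languages.
# The feature IDs below are invented for illustration.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

activations = {
    "en: the opposite of small is": {"smallness", "oppositeness", "size-antonym"},
    "fr: le contraire de petit est": {"smallness", "oppositeness", "size-antonym", "french-output"},
    "zh: 小的反义词是": {"smallness", "oppositeness", "size-antonym", "chinese-output"},
}

prompts = list(activations)
for i in range(len(prompts)):
    for j in range(i + 1, len(prompts)):
        overlap = jaccard(activations[prompts[i]], activations[prompts[j]])
        print(f"{prompts[i]!r} vs {prompts[j]!r}: overlap = {overlap:.2f}")
```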
The point about fake reasoning is consistent with Apollo Research’s investigations into Claude Sonnet 3.7’s ability to detect an ethics test. Anthropic found that when Claude is prompted to explain its reasoning, it “will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps.”
Generative AI is not magic; it is sophisticated computing that follows rules. But because of its black-box nature, it can be difficult to determine what those rules are and under what conditions they apply. For example, Claude showed a general reluctance to provide speculative answers, but it can also work out its end goal faster than it produces output: “In response to an example jailbreak, we found that the model recognised it had been asked for dangerous information long before it was able to gracefully bring the conversation back around,” the researchers wrote.
How does a word-trained artificial intelligence solve mathematical problems?
I primarily use ChatGPT for maths problems, and the model consistently produces the correct answer despite occasional hallucinations in the middle of its reasoning. So I have been wondering about one of Anthropic’s points: does the model think of numbers as a kind of letter? Anthropic may have pinpointed exactly why models behave this way: to solve maths problems, Claude uses multiple computational paths at once.
“One path calculates a rough approximation of the answer, while the other focusses on precisely determining the last digit of the sum,” Anthropic wrote.
That helps explain why the output can be correct even when the step-by-step explanation is not.
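A toy caricature of that finding, using 36 + 59 as the sum: one function stands in for the rough-approximation path, another for the last-digit path, and a third reconciles them. This is only an illustration of the idea, not a claim about Claude’s actual internal mechanism.

```python
# Toy caricature of the "parallel paths" finding for 36 + 59: one path makes
# a rough magnitude estimate, another computes the exact last digit, and the
# two are reconciled at the end. Illustration only, not Claude's mechanism.
def rough_path(a: int, b: int) -> int:
    """Rough magnitude: round one operand to the nearest ten before adding."""
    return a + round(b / 10) * 10            # 36 + 60 = 96

def last_digit_path(a: int, b: int) -> int:
    """Exact last digit of the sum."""
    return (a % 10 + b % 10) % 10            # (6 + 9) % 10 = 5

def combine(estimate: int, last_digit: int) -> int:
    """Pick the number ending in last_digit that sits closest to the estimate."""
    base = estimate - estimate % 10 + last_digit
    return min((base - 10, base, base + 10), key=lambda c: abs(c - estimate))

estimate = rough_path(36, 59)       # roughly 96
digit = last_digit_path(36, 59)     # 5
print(combine(estimate, digit))     # 95, the correct answer
```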
Claude’s first step is to “parse out the structure of the numbers,” identifying patterns much as it does with letters and words. Claude cannot explain this process externally, any more than a human can say which of their neurones are firing; instead, it offers an explanation of how a human would solve the problem. The Anthropic researchers speculated this is because the AI was trained on human-written explanations of maths.
What are the next steps for Anthropic’s LLM research?
The “circuits” can be difficult to interpret because of how densely the model packs its computation. Anthropic said it took a human several hours to interpret the circuits produced by prompts of just “tens of words.” The researchers speculate that fully understanding how generative AI works may itself require AI assistance.
Anthropic stated that its LLM research is designed to ensure that AI aligns with human ethics; as such, the company is investigating real-time monitoring, model character improvements, and model alignment.