
Anthropic Explores How Claude ‘Thinks’

by Brandon Duncan


It can be difficult to determine how generative AI arrives at its output.

On March 27, Anthropic published a blog post introducing a tool for looking inside a large language model and tracing its behavior. The research seeks to answer questions such as what language its model Claude "thinks" in, whether the model plans ahead or predicts one word at a time, and whether the AI's own explanations of its reasoning actually reflect what's happening under the hood.

In many cases, the explanation does not match the actual processing. Claude generates its own explanations for its reasoning, so those explanations can feature hallucinations, too.

A ‘microscope’ for ‘AI biology’

Anthropic published a paper on "mapping" Claude's internal structures in May 2024, and the new paper, which describes the "features" a model uses to link concepts together, builds on that work. Anthropic calls the research part of the development of a "microscope" into "AI biology."

In the first paper, Anthropic researchers identified “features” connected by “circuits,” which are paths from Claude’s input to output. The second paper focused on Claude 3.5 Haiku, examining 10 behaviors to diagram how the AI arrives at its result. Anthropic found:

  • Claude definitely plans ahead, particularly on tasks such as writing rhyming poetry.
  • Within the model, there is “a conceptual space that is shared between languages.”
  • Claude can “make up fake reasoning” when presenting its thought process to the user.

The researchers discovered how Claude translates concepts between languages by examining the overlap in how the AI processes questions in multiple languages. For example, the prompt “the opposite of small is” in different languages gets routed through the same features for “the concepts of smallness and oppositeness.”

The third finding, that Claude can fabricate reasoning, dovetails with Apollo Research's studies into Claude 3.7 Sonnet's ability to detect an ethics test. When asked to explain its reasoning, Claude "will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps," Anthropic found.


Generative AI isn't magic; it's sophisticated computing that follows rules. Its black-box nature, however, means it can be difficult to determine what those rules are and under what conditions they arise. For example, Claude showed a general hesitation to provide speculative answers, yet it can recognize where a conversation is heading well before it finishes producing its output: "In a response to an example jailbreak, we found that the model recognized it had been asked for dangerous information well before it was able to gracefully bring the conversation back around," the researchers found.

How does an AI trained on words solve math problems?

I mostly use ChatGPT for math problems, and the model tends to come up with the right answer despite some hallucinations in the middle of the reasoning. So, I’ve wondered about one of Anthropic’s points: Does the model think of numbers as a sort of letter? Anthropic might have pinpointed exactly why models behave like this: Claude follows multiple computational paths at the same time to solve math problems.

“One path computes a rough approximation of the answer and the other focuses on precisely determining the last digit of the sum,” Anthropic wrote.

So, it makes sense if the output is right but the step-by-step explanation isn’t.
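
To make that division of labor concrete, here is a short Python sketch written for this article rather than taken from Anthropic's research; every function name in it is hypothetical. It splits 36 + 59 into a coarse magnitude path and an exact last-digit path and then merges the two. In Claude, the "rough approximation" path is described as fuzzy rather than exact, so treat this only as an analogy for how two partial paths can still land on the right answer.

    # Toy illustration only: not Anthropic's circuit analysis and not how Claude
    # actually computes. One path tracks rough magnitude, another nails the last
    # digit, and the final answer comes from merging the two.

    def magnitude_path(a: int, b: int) -> tuple[int, int]:
        """Coarse path: add the tens parts to get a rough sense of the answer's size."""
        tens_sum = a // 10 + b // 10          # 36 + 59 -> 3 + 5 = 8, i.e. "somewhere near 80-100"
        return tens_sum, tens_sum * 10

    def last_digit_path(a: int, b: int) -> tuple[int, int]:
        """Precise path: work out the exact final digit (and whether the units carry)."""
        units = a % 10 + b % 10               # 6 + 9 = 15
        return units % 10, units // 10        # last digit 5, carry 1

    def add_via_two_paths(a: int, b: int) -> int:
        """Merge the coarse magnitude with the exact last digit."""
        tens_sum, estimate = magnitude_path(a, b)
        digit, carry = last_digit_path(a, b)
        result = (tens_sum + carry) * 10 + digit
        assert abs(result - estimate) <= 20   # the coarse path already put us in the right ballpark
        return result

    print(add_via_two_paths(36, 59))          # 95: the answer is right even though neither
                                              # path ever "saw" the whole sum at once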

Claude’s first step is to “parse out the structure of the numbers,” finding patterns similarly to how it would find patterns in letters and words. Claude can’t externally explain this process, just as a human can’t tell which of their neurons are firing; instead, Claude will produce an explanation of the way a human would solve the problem. The Anthropic researchers speculated this is because the AI is trained on explanations of math written by humans.

What’s next for Anthropic’s LLM research?

Interpreting the "circuits" can be very difficult because of how densely the model packs its computations. It took a human a few hours to interpret circuits produced by prompts of just "tens of words," Anthropic said. The researchers speculate that AI assistance may be needed to interpret how generative AI works.

Anthropic said its LLM research is intended to help make sure AI aligns with human ethics; to that end, the company is looking into real-time monitoring, model character improvements, and model alignment.


