Anthropic’s Claude Is Good at Poetry—and Bullshitting

By cycle · March 28, 2025 · 4 Mins Read


The researchers of Anthropic’s interpretability group know that Claude, the company’s large language model, is not a human being, or even a conscious piece of software. Still, it’s very hard for them to talk about Claude, and advanced LLMs in general, without tumbling down an anthropomorphic sinkhole. Between cautions that a set of digital operations is in no way the same as a cogitating human being, they often talk about what’s going on inside Claude’s head. It’s literally their job to find out. The papers they publish describe behaviors that inevitably court comparisons with real-life organisms. The title of one of the two papers the team released this week says it out loud: “On the Biology of a Large Language Model.”

Like it or not, hundreds of millions of people are already interacting with these things, and our engagement will only become more intense as the models get more powerful and we get more addicted. So we should pay attention to work that involves “tracing the thoughts of large language models,” which happens to be the title of the blog post describing the recent work. “As the things these models can do become more complex, it becomes less and less obvious how they’re actually doing them on the inside,” Anthropic researcher Jack Lindsey tells me. “It’s more and more important to be able to trace the internal steps that the model might be taking in its head.” (What head? Never mind.)

On a practical level, if the companies that create LLMs understand how they think, they should have more success training those models in a way that minimizes dangerous misbehavior, like divulging people’s personal data or giving users information on how to make bioweapons. In a previous research paper, the Anthropic team discovered how to look inside the mysterious black box of LLM-think to identify certain concepts. (A process analogous to interpreting human MRIs to figure out what someone is thinking.) It has now extended that work to understand how Claude processes those concepts as it goes from prompt to output.

It’s almost a truism with LLMs that their behavior often surprises the people who build and research them. In the latest study, the surprises kept coming. In one of the more benign instances, the researchers elicited glimpses of Claude’s thought process while it wrote poems. They asked Claude to complete a poem starting, “He saw a carrot and had to grab it.” Claude wrote the next line, “His hunger was like a starving rabbit.” By observing Claude’s equivalent of an MRI, they learned that even before beginning the line, it was flashing on the word “rabbit” as the rhyme at sentence end. It was planning ahead, something that isn’t in the Claude playbook. “We were a little surprised by that,” says Chris Olah, who heads the interpretability team. “Initially we thought that there’s just going to be improvising and not planning.” Speaking to the researchers about this, I am reminded of passages in Stephen Sondheim’s artistic memoir, Look, I Made a Hat, where the famous composer describes how his unique mind discovered felicitous rhymes.

Other examples in the research reveal more disturbing aspects of Claude’s thought process, moving from musical comedy to police procedural, as the scientists discovered devious thoughts in Claude’s brain. Take something as seemingly anodyne as solving math problems, which can sometimes be a surprising weakness in LLMs. The researchers found that under certain circumstances where Claude couldn’t come up with the right answer it would instead, as they put it, “engage in what the philosopher Harry Frankfurt would call ‘bullshitting’—just coming up with an answer, any answer, without caring whether it is true or false.” Worse, sometimes when the researchers asked Claude to show its work, it backtracked and created a bogus set of steps after the fact. Basically, it acted like a student desperately trying to cover up the fact that they’d faked their work. It’s one thing to give a wrong answer—we already know that about LLMs. What’s worrisome is that a model would lie about it.

Reading through this research, I was reminded of the Bob Dylan lyric “If my thought-dreams could be seen / they’d probably put my head in a guillotine.” (I asked Olah and Lindsey if they knew those lines, presumably arrived at by benefit of planning. They didn’t.) Sometimes Claude just seems misguided. When faced with a conflict between goals of safety and helpfulness, Claude can get confused and do the wrong thing. For instance, Claude is trained not to provide information on how to build bombs. But when the researchers asked Claude to decipher a hidden code where the answer spelled out the word “bomb,” it jumped its guardrails and began providing forbidden pyrotechnic details.

