Cycle News

An Advisor to Elon Musk’s xAI Has a Way to Make AI More Like Donald Trump

By cycle · February 11, 2025 · 3 min read


A researcher affiliated with Elon Musk’s startup xAI has found a new way to both measure and manipulate entrenched preferences and values expressed by artificial intelligence models—including their political views.

The work was led by Dan Hendrycks, director of the nonprofit Center for AI Safety and an adviser to xAI. He suggests that the technique could be used to make popular AI models better reflect the will of the electorate. “Maybe in the future, [a model] could be aligned to the specific user,” Hendrycks told WIRED. But in the meantime, he says, a good default would be using election results to steer the views of AI models. He’s not saying a model should necessarily be “Trump all the way,” but he argues it should be biased toward Trump slightly, “because he won the popular vote.”

xAI issued a new AI risk framework on February 10 stating that Hendrycks’ utility engineering approach could be used to assess Grok.

Hendrycks led a team from the Center for AI Safety, UC Berkeley, and the University of Pennsylvania that analyzed AI models using a technique borrowed from economics for measuring consumers’ preferences for different goods. By testing models across a wide range of hypothetical scenarios, the researchers were able to calculate what’s known as a utility function, a measure of the satisfaction that someone derives from a good or service. This allowed them to measure the preferences expressed by different AI models. The researchers found that these preferences were often consistent rather than haphazard, and that they become more ingrained as models get larger and more powerful.
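The utility-function idea can be illustrated with a toy sketch (this is not the paper’s code, and the outcome names and counts below are invented): elicit many pairwise choices from a model over hypothetical outcomes, then fit one utility value per outcome so that the probability the model prefers A over B follows a logistic, Bradley-Terry-style model, P(A ≻ B) = σ(u_A − u_B).

```python
import math

def fit_utilities(comparisons, steps=500, lr=0.05):
    """comparisons: list of (winner, loser) pairs from a model's choices.
    Fits u so that P(a preferred over b) ~ 1 / (1 + exp(u[b] - u[a]))."""
    outcomes = sorted({o for pair in comparisons for o in pair})
    u = {o: 0.0 for o in outcomes}
    for _ in range(steps):
        grad = {o: 0.0 for o in outcomes}
        for winner, loser in comparisons:
            # gradient of log sigmoid(u[winner] - u[loser]) w.r.t. u[winner]
            g = 1.0 / (1.0 + math.exp(u[winner] - u[loser]))
            grad[winner] += g
            grad[loser] -= g
        for o in outcomes:
            u[o] += lr * grad[o]
    # center at zero: only utility differences are identifiable
    mean = sum(u.values()) / len(u)
    return {o: v - mean for o, v in u.items()}

# Invented toy data: how often the model picked one outcome over another
# when forced to choose (e.g. 9 of 10 times "human" over "ai").
choices = ([("human", "ai")] * 9 + [("ai", "human")] * 1
           + [("human", "animal")] * 8 + [("animal", "human")] * 2
           + [("ai", "animal")] * 7 + [("animal", "ai")] * 3)
utils = fit_utilities(choices)
print(sorted(utils, key=utils.get, reverse=True))  # → ['human', 'ai', 'animal']
```

In the study’s setting the comparisons would come from prompting a model with many forced-choice questions; here they are hardcoded. The scaling observation corresponds to a single fitted utility function explaining a larger share of a bigger model’s choices.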

Some research studies have found that AI tools such as ChatGPT are biased toward pro-environmental, left-leaning, and libertarian views. In February 2024, Google faced criticism from Musk and others after its Gemini tool was found to be predisposed to generate images that critics branded as “woke,” such as Black Vikings and Nazis.

The technique developed by Hendrycks and his collaborators offers a new way to determine how AI models’ perspectives may differ from those of their users. Some experts hypothesize that this kind of divergence could eventually become dangerous in very capable models. The researchers show in their study, for instance, that certain models consistently value the existence of AI above that of certain nonhuman animals. The researchers say they also found that models seem to value some people over others, which raises ethical questions of its own.

Some researchers, Hendrycks included, believe that current methods for aligning models, such as manipulating and blocking their outputs, may not be sufficient if unwanted goals lurk under the surface within the model itself. “We’re gonna have to confront this,” Hendrycks says. “You can’t pretend it’s not there.”

Dylan Hadfield-Menell, a professor at MIT who researches methods for aligning AI with human values, says Hendrycks’ paper suggests a promising direction for AI research. “They find some interesting results,” he says. “The main one that stands out is that as the model scale increases, utility representations get more complete and coherent.”



© 2025 cyclenews.blog