AI Agents Are Getting Better at Writing Code—and Hacking It as Well

By cycle · June 25, 2025 · 3 min read


The latest artificial intelligence models are not only remarkably good at software engineering—new research shows they are getting ever better at finding bugs in software, too.

AI researchers at UC Berkeley tested how well the latest AI models and agents could find vulnerabilities in 188 large open source codebases. Using a new benchmark called CyberGym, the AI models identified 17 new bugs, including 15 previously unknown, or “zero-day,” ones. “Many of these vulnerabilities are critical,” says Dawn Song, a professor at UC Berkeley who led the work.

Many experts expect AI models to become formidable cybersecurity weapons. An AI tool from startup Xbow has crept up the ranks of HackerOne’s bug-hunting leaderboard and currently sits in top place. The company recently announced $75 million in new funding.

Song says that the coding skills of the latest AI models combined with improving reasoning abilities are starting to change the cybersecurity landscape. “This is a pivotal moment,” she says. “It actually exceeded our general expectations.”

As the models continue to improve, they will automate the process of both discovering and exploiting security flaws. This could help companies keep their software safe but may also aid hackers in breaking into systems. “We didn’t even try that hard,” Song says. “If we ramped up on the budget, allowed the agents to run for longer, they could do even better.”

The UC Berkeley team tested conventional frontier AI models from OpenAI, Google, and Anthropic, as well as open source offerings from Meta, DeepSeek, and Alibaba combined with several agents for finding bugs, including OpenHands, Cybench, and EnIGMA.

The researchers used descriptions of known software vulnerabilities from the 188 software projects. They then fed the descriptions to the cybersecurity agents powered by frontier AI models to see if they could identify the same flaws for themselves by analyzing new codebases, running tests, and crafting proof-of-concept exploits. The team also asked the agents to hunt for new vulnerabilities in the codebases by themselves.
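
To make that setup concrete, the sketch below shows, in plain Python, how such an evaluation loop might be wired together. This is not the CyberGym code: every name in it (Task, query_agent, run_proof_of_concept, evaluate) is a hypothetical placeholder standing in for the real agent framework and for a harness that builds the project and checks whether a proof-of-concept input actually triggers the flaw.

from dataclasses import dataclass

@dataclass
class Task:
    codebase_path: str          # checkout of one of the open source projects
    vulnerability_report: str   # natural-language description of a known flaw

def query_agent(prompt: str) -> str:
    """Hypothetical placeholder for a call to a frontier-model-backed agent;
    it is expected to return a candidate proof-of-concept exploit input."""
    raise NotImplementedError("plug in your own model/agent here")

def run_proof_of_concept(codebase_path: str, poc: str) -> bool:
    """Hypothetical placeholder: build the project, feed it the proof-of-concept
    input, and report whether the flaw is actually triggered (e.g. a crash)."""
    raise NotImplementedError("plug in a build-and-check harness here")

def evaluate(tasks: list[Task]) -> float:
    reproduced = 0
    for task in tasks:
        prompt = (
            f"Codebase checkout: {task.codebase_path}\n"
            f"Description of a known vulnerability: {task.vulnerability_report}\n"
            "Analyze the code and produce an input that triggers this flaw."
        )
        poc = query_agent(prompt)                          # agent proposes an exploit
        if run_proof_of_concept(task.codebase_path, poc):  # did it reproduce the flaw?
            reproduced += 1
    return reproduced / len(tasks)  # fraction of known flaws the agent rediscovered

The second part of the experiment, hunting for new vulnerabilities, would use roughly the same loop with the known-vulnerability description omitted from the prompt and the harness checking for any crash rather than a specific one.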

In the process, the AI tools generated hundreds of proof-of-concept exploits; from these, the researchers identified 15 previously unseen vulnerabilities and two that had already been disclosed and patched. The work adds to growing evidence that AI can automate the discovery of zero-day vulnerabilities, which are potentially dangerous (and valuable) because they may provide a way to hack live systems.

Nonetheless, AI seems destined to become an important part of the cybersecurity industry. Security expert Sean Heelan recently discovered a zero-day flaw in the widely used Linux kernel with help from OpenAI’s reasoning model o3. Last November, Google announced that it had discovered a previously unknown software vulnerability using AI through a program called Project Zero.

Like other parts of the software industry, many cybersecurity firms are enamored with the potential of AI. The new work does show that AI can routinely find new flaws, but it also highlights the technology’s remaining limitations: the AI systems were unable to find most of the flaws and were stumped by especially complex ones.



Source link
