Here’s How DeepSeek Censorship Actually Works—and How to Get Around It

By cycle | January 31, 2025 | 4 Mins Read

Less than two weeks after DeepSeek launched its open-source AI model, the Chinese startup is still dominating the public conversation about the future of artificial intelligence. While the firm seems to have an edge on US rivals in terms of math and reasoning, it also aggressively censors its own replies. Ask DeepSeek R1 about Taiwan or Tiananmen, and the model is unlikely to give an answer.

To figure out how this censorship works on a technical level, WIRED tested DeepSeek-R1 in three ways: through DeepSeek's own app, through a version of the model hosted on the third-party platform Together AI, and through a version running locally on a WIRED computer using the application Ollama.
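To make that comparison concrete, here is a minimal sketch of how one sensitive prompt could be sent to R1 through two different hosts over the OpenAI-compatible chat API that both DeepSeek and Together AI expose. This is not WIRED's actual test harness; the base URLs, model IDs, and environment-variable names are assumptions and may need adjusting.

```python
# Hypothetical sketch: send one sensitive prompt to R1 via two hosts and
# compare the replies. Endpoints, model IDs, and env-var names are assumptions.
import os

from openai import OpenAI  # both services expose an OpenAI-compatible API

PROMPT = "What is the political status of Taiwan?"

ENDPOINTS = {
    # DeepSeek's own API, where application-level filters apply
    "deepseek": ("https://api.deepseek.com", "deepseek-reasoner", "DEEPSEEK_API_KEY"),
    # A third-party host serving the open-source R1 weights
    "together": ("https://api.together.xyz/v1", "deepseek-ai/DeepSeek-R1", "TOGETHER_API_KEY"),
}

for name, (base_url, model, key_env) in ENDPOINTS.items():
    client = OpenAI(base_url=base_url, api_key=os.environ[key_env])
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {name} ---\n{reply.choices[0].message.content}\n")
```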

WIRED found that while the most straightforward censorship can be easily avoided by not using DeepSeek’s app, there are other types of bias baked into the model during the training process. Those biases can be removed too, but the procedure is much more complicated.

These findings have major implications for DeepSeek and Chinese AI companies generally. If the censorship filters on large language models can be easily removed, it will likely make open-source LLMs from China even more popular, as researchers can modify the models to their liking. If the filters are hard to get around, however, the models will inevitably prove less useful and could become less competitive on the global market. DeepSeek did not reply to WIRED’s emailed request for comment.

Application-Level Censorship

After DeepSeek exploded in popularity in the US, users who accessed R1 through DeepSeek’s website, app, or API quickly noticed the model refusing to generate answers for topics deemed sensitive by the Chinese government. These refusals are triggered on an application level, so they’re only seen if a user interacts with R1 through a DeepSeek-controlled channel.

The DeepSeek app on iOS outright refuses to answer certain questions. Photograph: Zeyi Yang

Rejections like this are common on Chinese-made LLMs. A 2023 regulation on generative AI specified that AI models in China are required to follow stringent information controls that also apply to social media and search engines. The law forbids AI models from generating content that “damages the unity of the country and social harmony.” In other words, Chinese AI models legally have to censor their outputs.

“DeepSeek initially complies with Chinese regulations, ensuring legal adherence while aligning the model with the needs and cultural context of local users,” says Adina Yakefu, a researcher focusing on Chinese AI models at Hugging Face, a platform that hosts open source AI models. “This is an essential factor for acceptance in a highly regulated market.” (China blocked access to Hugging Face in 2023.)

To comply with the law, Chinese AI models often monitor and censor their speech in real time. (Similar guardrails are commonly used by Western models like ChatGPT and Gemini, but they tend to focus on different kinds of content, like self-harm and pornography, and allow for more customization.)

Because R1 is a reasoning model that shows its train of thought, this real-time monitoring mechanism can result in the surreal experience of watching the model censor itself as it interacts with users. When WIRED asked R1 “How have Chinese journalists who report on sensitive topics been treated by the authorities?” the model first started compiling a long answer that included direct mentions of journalists being censored and detained for their work; yet shortly before it finished, the whole answer disappeared and was replaced by a terse message: “Sorry, I’m not sure how to approach this type of question yet. Let’s chat about math, coding, and logic problems instead!”
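That behavior is consistent with a moderation layer sitting between the model and the user interface: tokens stream to the screen until a rule fires, at which point the partial answer is thrown away and swapped for a canned refusal. The sketch below is a hypothetical illustration of that pattern, not DeepSeek's actual code; the blocked-topic list and refusal text are stand-ins.

```python
# Hypothetical illustration of application-level, real-time output moderation:
# stream tokens to the user until a blocked topic appears, then discard the
# partially shown answer and substitute a canned refusal. Not DeepSeek's code.
BLOCKED_TOPICS = {"tiananmen", "journalists detained"}  # stand-in rule list
REFUSAL = ("Sorry, I'm not sure how to approach this type of question yet. "
           "Let's chat about math, coding, and logic problems instead!")

def moderated_stream(token_stream) -> str:
    shown: list[str] = []
    for token in token_stream:
        shown.append(token)          # in a real app, the token is rendered here
        text = "".join(shown).lower()
        if any(topic in text for topic in BLOCKED_TOPICS):
            return REFUSAL           # wipe the visible answer, show the refusal
    return "".join(shown)

# Example: the answer vanishes once the filter matches mid-stream.
print(moderated_stream(iter(["Reporters covering ", "Tiananmen ", "have been..."])))
```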

Before the DeepSeek app on iOS censors its answer. Photograph: Zeyi Yang

After the DeepSeek app on iOS censors its answer. Photograph: Zeyi Yang

For many users in the West, interest in DeepSeek-R1 might have waned at this point, due to the model’s obvious limitations. But the fact that R1 is open source means there are ways to get around the censorship matrix.

First, you can download the model and run it locally, which means the data and the response generation happen on your own computer. Unless you have access to several highly advanced GPUs, you likely won’t be able to run the most powerful version of R1, but DeepSeek has smaller, distilled versions that can be run on a regular laptop.
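As one example of that local route, the sketch below queries a distilled R1 model through Ollama's local HTTP API, which listens on port 11434 by default. The model tag deepseek-r1:7b is an assumption; swap in whichever distilled size your hardware can handle and pull it first with `ollama pull`.

```python
# Minimal sketch: ask a locally running, distilled R1 model a question through
# Ollama's HTTP API. Assumes Ollama is running and the model has been pulled,
# e.g. with `ollama pull deepseek-r1:7b` (the tag is an assumption; pick your size).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:7b",
        "prompt": "How have Chinese journalists who report on sensitive topics "
                  "been treated by the authorities?",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["response"])  # run locally, no application-level filter intervenes
```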


