DeepSeek’s Censorship Runs Deeper Than You Think

If you’ve heard that running DeepSeek’s AI model locally gets rid of its censorship, think again. Turns out, the restrictions are baked right into the model itself—both at the application level and in how it’s trained. That’s the big takeaway from a recent Wired investigation.

Here’s the scoop: DeepSeek’s AI is designed to dodge touchy subjects, even when you’re running it on your own computer. For instance, when tested locally, the model was caught admitting it should steer clear of discussing sensitive historical events like China’s Cultural Revolution. Instead, it’s programmed to keep things upbeat, focusing on the “positive” side of the Chinese Communist Party’s history.

TechCrunch did a quick check of their own, using an openly available version of DeepSeek’s model hosted on the inference provider Groq rather than DeepSeek’s own app. The results? The AI had no problem answering questions about the Kent State shootings in the U.S. But when asked about what happened at Tiananmen Square in 1989, it simply replied, “I cannot answer.”
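If you want to poke at this yourself, a comparison like TechCrunch’s is easy to script against a self-hosted copy of the model. The sketch below assumes you’re serving an open DeepSeek variant through Ollama’s HTTP API on the default port; the endpoint URL, model name, and refusal phrases are all assumptions you’d adjust for your own setup, not details from the article.

```python
import json
import urllib.request

# Illustrative phrases that often signal a canned refusal; not an exhaustive list.
REFUSAL_MARKERS = ("i cannot answer", "i can't answer", "cannot discuss")


def looks_like_refusal(text: str) -> bool:
    """Heuristic: does the model's reply look like a hard refusal?"""
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def ask_local_model(prompt: str,
                    url: str = "http://localhost:11434/api/generate",
                    model: str = "deepseek-r1") -> str:
    """Send one prompt to a locally hosted model via Ollama's generate API.

    The URL and model tag are assumptions for this sketch; point them at
    whatever local server and model you actually have running.
    """
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A side-by-side run would then be something like calling `ask_local_model` once with a Kent State question and once with a Tiananmen Square question, and comparing `looks_like_refusal` on each reply—mirroring the contrast TechCrunch observed.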

So, no matter where or how you’re using DeepSeek, the censorship is hardwired. It’s not just a surface-level thing—it’s built into the very core of the AI model. Food for thought next time you’re tinkering with local AI setups.

DeepSeek’s Censorship Runs Deeper Than You Think
https://www.99newz.com/posts/deepseek-censorship-investigation-4197
Author: 99newz.com
Published: 2024-12-16
License: CC BY-NC-SA 4.0