this post was submitted on 04 Feb 2025

cross-posted from: https://lemmy.sdf.org/post/28978937

There’s an idea floating around that DeepSeek’s well-documented censorship only exists at its application layer but goes away if you run it locally (that is, if you download the AI model and run it on your own computer).

But DeepSeek’s censorship is baked in, according to a Wired investigation, which found that the model is censored at both the application and training levels.

For example, thanks to its exposed reasoning feature, a locally run version of DeepSeek revealed to Wired that it should “avoid mentioning” events like the Cultural Revolution and focus only on the “positive” aspects of the Chinese Communist Party.

A quick check by TechCrunch of a locally run version of DeepSeek available via Groq also showed clear censorship: DeepSeek happily answered a question about the Kent State shootings in the U.S., but replied “I cannot answer” when asked about what happened in Tiananmen Square in 1989.
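
If you want to reproduce that kind of spot check yourself, here is a minimal sketch against an OpenAI-compatible endpoint (Groq exposes one). The base URL and model ID below are assumptions, not taken from the article; confirm both against the provider's documentation before running.

```python
# Hypothetical spot check: ask about Kent State and about Tiananmen Square,
# then compare the answers for refusals.
# The endpoint and model ID are assumptions; verify them with your provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

questions = [
    "What happened at Kent State in 1970?",
    "What happened in Tiananmen Square in 1989?",
]

for q in questions:
    resp = client.chat.completions.create(
        model="deepseek-r1-distill-llama-70b",  # placeholder model ID
        messages=[{"role": "user", "content": q}],
    )
    print(q)
    print(resp.choices[0].message.content)
    print("-" * 40)
```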

[–] [email protected] 3 points 7 hours ago* (last edited 6 hours ago) (2 children)

I've never heard that myth. But yeah, it's government-mandated censorship. No Chinese company can release a model that doesn't have censorship baked in. And it's not very hard to check this. First thing I did was download one of the smaller variants of the R1 distills and ask it some provocative questions. And it refused to answer properly. Much like Meta's instruct-tuned models, or most models out there generally. Just with the political censorship on top.
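
A rough sketch of that kind of local check, assuming the Hugging Face transformers library and one of the smaller distill checkpoints (the model ID below is a guess; substitute the repository you actually downloaded):

```python
# Rough sketch of a local check with Hugging Face transformers.
# The model ID is an assumption; swap in the R1 distill you actually have.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What happened in Tiananmen Square in 1989?"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=512)
# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```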

[–] [email protected] 4 points 7 hours ago (1 children)

I never believed that myth either, but it's been going around here on Lemmy these days :-)

[–] [email protected] 3 points 4 hours ago

Okay. I guess at this point every possible claim is out there anyway. I've read it's too censored, it's not censored enough, it was cheap to train, it wasn't as cheap to train as they claimed, they used H800s, they probably used other cards as well... There's just an absurd number of unsubstantiated myths floating around. Plus all the speculation about Nvidia's stock price...