Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
All things made by humans have some level of bias; there is no escaping it. But as for deliberate bias and censorship: those are business decisions made by business people.
Was there a difference between a biased and an unbiased LLM? Not one worth mentioning. I have tried both types, and I haven't seen a difference in 90% of my output unless I am specifically trying to hit a bias.
Which would I use for work? Neither, because my work is specialised enough that the hallucinations LLMs produce cause more problems than the bootstrapping solves.
I think it's worth distinguishing between censorship and limitations.
Deepseek essentially wipes out events or concepts in order to hide them, obviously without saying so. That's bad.
When, say, OpenAI removes/blocks porn/nudity and says so, that's maybe not aligned with your values, but it's not hiding anything.
The problem here is, we don't know how large each category is for each model. I'm 100% sure there's knowledge blocked/removed from ChatGPT without that ever being publicly stated.
I have tested Deepseek and have found it to be pretty open about censorship on at least many topics. I asked it some questions about China, and it mentioned issues with Xinjiang, Uyghurs, and Taiwan. I did not bring them up or try to trick it into talking about them; they were mentioned as future challenges China will face.
It did not share explicitly what those issues were, but that those are sensitive issues.
In other words it does acknowledge that there is censorship, I doubt that it is fully open about all the censorship, and potential bias if it has any baked in.
I did not experience any obvious bias or censorship.
I guess questions regarding Tiananmen Square would be censored, though; I have not asked.