this post was submitted on 22 Feb 2024
283 points (87.7% liked)

Lemmy Shitpost

[–] [email protected] 15 points 8 months ago (2 children)

The problem is that these answers are hugely incorrect, and if a child learning about the history of England saw this, they would come away believing that England was always diverse.
The same is true for a recent post, where people who know nothing about Scottish history could conclude from the images that half of Scotland's population in the 18th century was Black.
So from my perspective these images are just completely wrong, and it should be fixed.
Also if you want diversity, what about handicapped people?

[–] [email protected] 24 points 8 months ago (2 children)

Repeat after me:

"Current AI is not a knowledge tool. It MUST NOT be used to get information about any topic!"

If your child is learning Scottish history from AI, you have failed as a teacher/parent. This isn't even about bias, just about what an AI model is. It's not even supposed to be correct; that's not what it's for. It is for appearing as correct as the things it has been trained on. And as long as there are two opinions in the training data, the AI will gladly make up a third.
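
To make the "appearing as correct" point concrete, here's a toy Python sketch (purely illustrative, no real model involved, with made-up candidates and probabilities) of why generation is sampling rather than lookup: the model assigns probabilities to continuations and draws one, so the same question can yield different "facts" on different runs.

```python
import random

# Toy next-answer distribution for some factual question,
# e.g. "In what year did X happen?" (made-up values, purely illustrative).
candidates = ["1745", "1746", "1752"]
weights = [0.5, 0.3, 0.2]

def sample_answer(rng):
    # Sampling from a distribution over plausible-looking strings:
    # there is no fact lookup anywhere in this process.
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random(0)
answers = {sample_answer(rng) for _ in range(100)}
print(answers)  # multiple different "answers" to the same question
```

Ask it the same question a hundred times and you get several mutually exclusive answers, all delivered with identical confidence.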

[–] [email protected] 5 points 8 months ago

That doesn’t matter, though. People will definitely use it to acquire knowledge; they are already doing it now. Which is why it’s so dangerous to let these "moderate" inaccuracies fly.

You even perfectly summed up why that is: LLMs are made to give a plausible-sounding answer in the most convincing way.

[–] [email protected] 10 points 8 months ago* (last edited 8 months ago) (2 children)
  • it's true that this could mislead children, but the model could hallucinate about literally anything. Especially at this stage, no one-- children or adults-- should uncritically accept what the model states as fact. That said, I agree LLMs need to improve their factual accuracy

  • Although it is highly debated, some scholars suggest Queen Charlotte may have had African ancestry, or that she would be considered a person of color by today's standards. She reigned in the 1700s-1800s, of course, but it isn't entirely outlandish to get a "Queen of Color" if we aren't requesting a specific queen or a specific race

  • People of color did live in England in the Middle Ages. Not diverse in the way we conceive of it now, but there are papers discussing the racial diversity of the time. It was surely less intermingled than today, but it's not as if these images are impossible

  • Other things about these images are anachronistic or fantastical, such as the clothing. Are we worried about children getting the wrong impression of history in that sense, too?

  • Of course increasing the visibility and representation of all kinds of marginalized people is important. I, myself, am disabled, so I care about that representation too-- thanks for pointing out how we could improve the model further. I do kinda feel like people would be groaning if the model had produced a queen with a visible disability, though... I would be delighted to be wrong on this front :)

[–] [email protected] 4 points 8 months ago (1 children)

I know that POC lived in England and it was possible to meet someone like that. But I would prefer the model to give the most probable, most general answer. If I ask for an image of a car, I would like it to give me a four-wheeled red or gray or green car, not a three-wheeled pink car just because some car like that exists.

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

That's valid! I agree. I think in this case it would be reasonable for the model to give multiple (or like, at least one, jeez) images with white queens. I don't disagree with anyone in that sense. I just also don't think it's worth pitching a fit when the dumbass model that has been trained to show more racial diversity produces (frankly comical) hallucinations.

The ethos of the trainers is a good one. Attempting to counter the (demonstrated, measurable) bias of many models toward whiteness is a good choice. I prefer that the trainers choose to address the bias even if it (sometimes, in early versions) makes the model make silly mistakes like this. That's all.
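
For what it's worth, the "demonstrated, measurable" bias mentioned above can be quantified with nothing fancier than a frequency count. A minimal sketch, assuming each generated image has already been tagged with a (hypothetical) demographic label:

```python
from collections import Counter

def label_frequencies(labels):
    """Fraction of generations carrying each demographic label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: n / total for label, n in counts.items()}

# Hypothetical labels attached to 8 generations of "a person"
# (made-up data, purely illustrative):
sample = ["white"] * 6 + ["black", "asian"]
freqs = label_frequencies(sample)
print(freqs)  # {'white': 0.75, 'black': 0.125, 'asian': 0.125}
```

Comparing such frequencies against a target distribution is the kind of measurement that motivates the counter-bias training in the first place.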

[–] [email protected] 0 points 8 months ago

These are not hallucinations. The image generator's system prompt has been heavily altered to mix all races and genders. The model itself is probably not inaccurate until it is misused, and the misuse could happen at any level of interaction, so it's very misleading to judge it by an example like this.
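
A rough sketch of the kind of prompt-level augmentation being described, all names and rules here are hypothetical and not any vendor's actual pipeline: a rewriting layer sits between the user and the image model, and the model's weights are never touched.

```python
# Hypothetical sketch of prompt augmentation at the application layer.
# Nothing here reflects any real vendor's system prompt or API.

DIVERSITY_SUFFIX = ", depicting people of diverse ethnicities and genders"

def augment_prompt(user_prompt: str) -> str:
    """Rewrite the user's prompt before it reaches the image model.

    The behavior comes entirely from this string manipulation in
    front of the model, not from the model itself.
    """
    if "person" in user_prompt.lower() or "queen" in user_prompt.lower():
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt

print(augment_prompt("a medieval English queen"))
```

If the augmentation layer fires unconditionally on prompts where it doesn't belong, the output looks like a model error even though the model faithfully rendered the rewritten prompt it was given.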