this post was submitted on 11 Dec 2023
362 points (98.4% liked)
SNOOcalypse - document, discuss, and promote the downfall of Reddit.
SNOOcalypse is closing down. If you wish to talk about Reddit, check out [email protected], [email protected] and [email protected].
This community welcomes anyone who wants to see Reddit gone. Nuke the Snoo!
When sharing links, please also share an archived version of the target of your link.
Rules:
- Follow lemmy.ml's global rules and code of conduct.
- Keep it on-topic.
- Don't promote illegal stuff here.
- Don't be stupid, noisy, obnoxious or obtuse (S.N.O.O.)
- Have fun, and enjoy the popcorn! 🍿
The frustrating thing is that pretty much anyone who has interacted with these systems has encountered this. Whether it’s photos or social media posts, there are some “memories” that only make things worse for people when they’re resurfaced, and in extreme cases could trigger depression or worse.
And it could be fixed (or at least mitigated) fairly easily. First, obviously, remove from the candidate set any references to something that’s obviously triggering: death, SA, violence, abuse, and so on. Those items will still be there for the person to look back through at a time of their own choosing. They don’t want to wake up to this kind of thing; it’s not a boost to user experience.

Photos would need a bit more work, but image and facial recognition are good enough that you could come up with a heuristic along the lines of “if there are a lot of photos of a specific person or animal and then photos of them just stop, remove them from the data set.” You could do something similar for car accidents, burning buildings, scenes containing injury or violence, and so on. On the other hand, you can boost scores for things like pictures of parties and concerts.

And I’m just talking about simple heuristics here, not invoking an ML model or anything at this point. There would be a lot of false positives, perhaps, but hopefully few false negatives. It’s better to skip a potentially “good” photo when you have a thousand other good photos than it is to show a “bad” one, so we’d bias our error function accordingly.
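To make that concrete, here’s a toy Python sketch of what those heuristics could look like. Every field name, threshold, and keyword in it is invented for illustration — this isn’t any real product’s API:

```python
from datetime import datetime

# Toy sketch of the heuristics above. All metadata fields, thresholds,
# and keywords are assumptions made up for this example.

TRIGGER_KEYWORDS = {"funeral", "memorial", "accident", "hospital"}  # assumed blocklist

def days_since_last_seen(history, now):
    """Days since the subject last appeared in any photo."""
    latest = max(p["taken_at"] for p in history)
    return (now - latest).days

def score_candidate(photo, photos_by_subject, now):
    """Return a score for surfacing, or None to drop the photo entirely."""
    # 1. Hard-exclude anything tagged as obviously triggering.
    if TRIGGER_KEYWORDS & set(photo["tags"]):
        return None

    # 2. If a frequently photographed person/pet abruptly stops appearing,
    #    assume a loss and drop their photos from the surfaced set
    #    (they stay browsable whenever the user chooses to look).
    for subject in photo["subjects"]:
        history = photos_by_subject.get(subject, [])
        if len(history) >= 50 and days_since_last_seen(history, now) > 365:
            return None

    # 3. Boost likely-happy scenes. Returning None for a good photo is
    #    cheap when thousands remain; surfacing a painful one is not.
    score = 1.0
    if {"party", "concert"} & set(photo["tags"]):
        score += 0.5
    return score

if __name__ == "__main__":
    now = datetime.now()
    photo = {"tags": ["party"], "subjects": ["person_a"], "taken_at": now}
    print(score_candidate(photo, {"person_a": [photo]}, now))  # -> 1.5
```

Note how the asymmetry from the last paragraph shows up directly: two of the three rules can only return None, and nothing ever rescues a photo once a rule has dropped it.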
I don't disagree with your main point, but what do you think facial recognition is, if not an ML model?
Fair point. I meant you could avoid having to do any specific development and training. Facial/object recognition is an off-the-shelf function these days.
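For instance, the open-source face_recognition library (pip install face_recognition, built on dlib) gets you face matching in a handful of lines — the file names here are just placeholders:

```python
import face_recognition

# Encode a known face from a reference photo (placeholder file names).
known = face_recognition.load_image_file("person.jpg")
known_encoding = face_recognition.face_encodings(known)[0]

# Compare every face found in a candidate photo against it.
candidate = face_recognition.load_image_file("new_photo.jpg")
for encoding in face_recognition.face_encodings(candidate):
    same = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("match" if same else "no match")
```

There's still an ML model underneath, of course — the point is just that someone else already trained it.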