this post was submitted on 21 Nov 2024
Fediverse

 

For any social network, not just a federated one.

My thoughts: the way discovery works in big tech social networks is like this:

  1. **The organic methods:**
  • your followee shares something from a poster you don't follow
  • someone you don't follow comments on a post from someone you follow
  • you join a group or community and find others you don't currently follow
  2. **The recommendation engine methods:** content you don't follow shows up because a statistical model predicts you are likely to engage with it. Big tech is pushing this more and more.
  3. **Search:** you deliberately try to find what you're looking for through some search capability. Big tech is pushing against this more and more.

In my opinion, the fediverse covers #1 well already. But #1 has a bubble effect: your followees are unlikely to share something drastically different from what you already see.

The fediverse is principally opposed to #2, at least the way it is done in big tech. But maybe some variation of it could be done well.

#3 is a big weakness for the fediverse. But I am curious how it would ideally manifest. Would it be full-text search? Semantic search? Or something with more machine learning?
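To make the contrast between those search options concrete, here is a toy sketch, with entirely made-up posts and hand-made 2-D "embedding" vectors standing in for what a real sentence-embedding model would produce. It is not any existing fediverse API, just an illustration of why full-text matching misses posts that a vector-based semantic search would find:

```python
import math

# Toy corpus; the "vec" fields are hand-made stand-ins for real embeddings.
posts = [
    {"text": "self-hosting a mastodon server", "vec": [0.9, 0.1]},
    {"text": "running your own fediverse instance", "vec": [0.8, 0.2]},
    {"text": "sourdough bread recipe", "vec": [0.1, 0.9]},
]

def full_text_search(query, posts):
    """Return posts containing every query word verbatim."""
    words = query.lower().split()
    return [p for p in posts if all(w in p["text"] for w in words)]

def semantic_search(query_vec, posts, k=1):
    """Rank posts by cosine similarity to the query embedding."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    return sorted(posts, key=lambda p: cos(query_vec, p["vec"]), reverse=True)[:k]

# No single post contains both words, so full-text matching finds nothing:
print([p["text"] for p in full_text_search("mastodon instance", posts)])  # → []
# ...but a query embedded near [0.85, 0.15] surfaces both hosting posts:
print([p["text"] for p in semantic_search([0.85, 0.15], posts, k=2)])
```

The trade-off mirrors the question above: full-text search is cheap and predictable, while semantic search needs an embedding model (machine learning) but matches meaning rather than exact words.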

[–] [email protected] 1 points 5 hours ago

Something I've thought about a bunch re: recommendation engines is the idea of a "sweet spot" that balances exploration and safety.

Though I should start by saying that recommendation engines tend to aim to maximise engagement, which is why manosphere-type content is so prevalent on places like YouTube if you go in with a fresh account — outrage generates engagement far more reliably than other content. I'm imagining a world where recommendation algorithms can be individually tailored and trained, where I can let my own goals shape the recommendations. I did some tinkering with a concept like this in the context of a personal music recommender: I gave it an "exploration" slider where, at maximum, it'd suggest some really out-there stuff, while lower settings might give me new songs from familiar artists. That project worked quite well, but there's a lot of untangling to do before I can figure out how and why it worked so well.

That was a super individualistic program I made there, in that it was trained exclusively on data I gave it. You can pursue individual goals without relying on the data of just one person, though — ListenBrainz is very cool: it's open source, and they are working on recommendation stuff (I've used ListenBrainz as a user, but not yet as a contributor/developer).

Anyway, that exploration slider is one aspect of the "sweet spot" I mentioned at the start. Imagine a "benevolent" recommendation engine (one aligned with the goals of its user), and say the goal you're after is listening to more diverse music. For a random set of songs that are new to you, we could estimate how close each one is to your current taste (getting this stuff into matrices is a big chunk of the work, in my experience). Maybe one of the songs is 10 arbitrary units away from the boundary of your "musical comfort zone": too much too soon. But maybe the song that's only 1 unit away is too similar to what you already like, and doesn't feel stimulating and exciting in the way you want the algorithm to feel. So maybe we try what we think is a 4 or 5: something novel enough to be exciting, but that still feels safe.
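That "aim a few units past the comfort zone" rule can be sketched in a few lines of Python. Everything here is an illustration under assumed numbers: the 2-D taste space, the comfort radius of 2, and the target novelty of 4 units are made up, not the actual recommender described above:

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_song(candidates, taste_centroid, comfort_radius, target_novelty):
    """Pick the candidate whose distance beyond the comfort zone
    is closest to the desired novelty (the 'exploration slider')."""
    def novelty(vec):
        # How far outside the comfort zone this song sits (0 if inside).
        return max(0.0, distance(vec, taste_centroid) - comfort_radius)
    return min(candidates, key=lambda s: abs(novelty(s["vec"]) - target_novelty))

# Toy data: hypothetical songs placed in a 2-D "taste space".
taste_centroid = [0.0, 0.0]
comfort_radius = 2.0
candidates = [
    {"title": "too familiar",  "vec": [1.0, 0.0]},   # inside the zone, novelty 0
    {"title": "sweet spot",    "vec": [6.0, 0.0]},   # novelty 4
    {"title": "way out there", "vec": [12.0, 0.0]},  # novelty 10
]

print(pick_song(candidates, taste_centroid, comfort_radius, 4.0)["title"])
# → sweet spot
```

The slider from the earlier paragraph would simply move `target_novelty`: near 0 it returns safe picks, turned up high it favours the out-there stuff.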

Research has shown that recommendation algorithms can affect our beliefs and our tastes [citation needed]. I got onto the music thing because I was thinking about the power in a recommendation algorithm, which is currently mostly used to keep us consuming content like good cash cows. It's reasonable that so many people have developed an aversion to algorithmic recommendations, but I wish I could have a dash of algorithmic exploration with me in control (though not quite so in control as in your option #3). As someone who is decently well versed in machine learning (by scientist standards — I have never worked properly in software development or ML), I think it's definitely possible.