
Since Meta announced they would stop moderating posts, much of the mainstream discussion surrounding social media has centered on whether a platform has a responsibility for the content posted on its service. I think that is a fair discussion, though I favor the side of less moderation in almost every instance.

But as I think about it, the problem is not moderation at all: we had very little moderation in the early days of the internet and social media and yet people didn’t believe the nonsense they saw online, unlike nowadays where even official news platforms have reported on outright bullshit being made up on social media. To me the problem is the goddamn algorithm that pushes people into bubbles that reinforce their correct or incorrect views; and I think anyone with two brain cells and an iota of understanding of how engagement algorithms work can see this. So why is the discussion about moderation and not about banning algorithms?

[–] [email protected] 6 points 19 hours ago (1 children)

Nah. It's just that people, including me, don't want to think too much about information when it's presented to us. Most just read the headline and jump to a conclusion. It's the laziness in thinking and the emotional reactions that make this whole situation worse.

Algorithms (recommendation engines) are just a catalyst.

[–] [email protected] 2 points 18 hours ago

Manipulative algorithms, yes.

[–] [email protected] 7 points 19 hours ago

I think you're making a lot of assumptions here, many of which I take issue with.

we had very little moderation in the early days of the internet and social media

It differed from site to site, but in my experience of the Internet in the '90s and '00s, a lot of forums were heavily moderated, and even Facebook was kept pretty clean when I got on it in ~2006/2007.

and yet people didn’t believe the nonsense they saw online,

I fully dispute this. People have always believed hearsay. They're just exposed to more of it through the web instead of it coming verbally from your family, friends, and coworkers.

unlike nowadays where even official news platforms have reported on outright bullshit being made up on social media.

  1. We live in a world of 24-hour news cycles and sensationalism, which has escalated over the past few decades. This often rewards ratings over quality.

  2. Mainstream media has always had problems with fact-checking. I'm not trying to attack the news media or anything; I think most reporters do their best and strive to be factual, but they sometimes make mistakes. I can't remember the name of it, but there's some sort of phenomenon where if you watch a news broadcast and they cover a subject you have expertise in, you're likely to find inaccuracies in it and become more skeptical of the rest of the broadcast.

To me the problem is the goddamn algorithm that pushes people into bubbles that reinforce their correct or incorrect views

Polarization is not limited to social media. The news media has become more and more tribal over time. Companies that sell products and services have become more likely to present a political worldview.

Overall, I think you're ignoring a lot of other things that have changed over the years. It's not as if the only thing that has changed in the world is the algorithmic feed. We are perpetually online now, and that's where most people get their news, so it's only natural that it would also be their source of disinformation. I think algorithmic feeds that push people into their bubbles are a response to this polarization, not the source of it.

[–] [email protected] 8 points 20 hours ago (2 children)

I don't think the problem is strictly the existence of algorithms, but rather how they are used against the users.

To some extent, on Lemmy users are free to sort posts how they wish, such as by New or Hot. Should these kinds of algorithm-like features be banned too?

Meanwhile, YouTube and Facebook pretty much just decide for you what gets pitched to you.

[–] [email protected] 3 points 20 hours ago (2 children)

You think the Meta algorithm just sorts the feed for you? It is way more complex: it basically puts you into some very fine-grained clusters, then decides what to show you, then collects your clicks and reactions and adjusts itself. For scale, no academic "research with human subjects" would be approved with mechanics like that under the hood. It is deeply unethical, invasive, and outright dangerous for individuals (e.g. teen self-esteem issues, anorexia, etc.). So "algorithm-like features" is apples to oranges here.
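
To make that loop concrete, here is a minimal Python sketch of the pattern (cluster the user, show what similar users engaged with, record the reaction, adjust). This is emphatically not Meta's actual system; every name and number below is invented for illustration:

```python
import random
from collections import defaultdict

# Crude sketch of the feedback loop described above -- NOT Meta's real code.
# It only illustrates the pattern: cluster the user, exploit what similar
# users engaged with, record every reaction, adjust the weights, repeat.

class EngagementFeed:
    def __init__(self, candidate_posts):
        self.posts = candidate_posts               # hypothetical post IDs
        self.weights = defaultdict(lambda: 1.0)    # learned per (cluster, post)

    def cluster_of(self, user_id):
        # Stand-in for fine-grained clustering on demographics, inferred
        # interests and behavior; here it is just a hash bucket.
        return hash(user_id) % 1000

    def pick(self, user_id):
        c = self.cluster_of(user_id)
        if random.random() < 0.1:                  # occasionally explore
            return random.choice(self.posts)
        # Otherwise exploit: show what this cluster engaged with before.
        return max(self.posts, key=lambda p: self.weights[(c, p)])

    def observe(self, user_id, post, engaged):
        # Every click and reaction feeds straight back into the model.
        c = self.cluster_of(user_id)
        self.weights[(c, post)] *= 1.1 if engaged else 0.9
```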

[–] [email protected] 5 points 20 hours ago (1 children)

Exactly my point. On Lemmy I can still see all the posts; Meta’s algorithm will remove stuff from the feed, push other stuff, and even hide comments. It is literally a reality-warping engine.

[–] [email protected] 3 points 20 hours ago

a reality-warping engine.

Now you're talking.

[–] [email protected] 1 points 20 hours ago (1 children)

It's not as cut and dried, obviously, but Meta certainly does take away control from the user compared to Fediverse-based platforms when it comes to algorithms. And however complex it is, the algorithm will still sort your feed for you based on that data.

I think algorithms that the user can control are good. But when the algorithm is used against users, like with Meta, it's bad. It's about how it's used, not simply whether an algorithm exists.

[–] [email protected] 3 points 19 hours ago

Fancier algorithms are not bad per se. They can be ultra-productive for many purposes. In fact, we take no issue with fancy algorithms when they are published as software libraries. But then only specially trained folks can seize their fruit, which in practice means people working for Big Tech. Now, if we had user interfaces that let the user control several free parameters of the algorithms and experience different feeds, that would be kinda nice (a sketch of what I mean follows after this list). The problem boils down to these areas:

  • near-universal social graphs (they have virtually everyone enlisted)
  • total control over the algorithm's parameters
  • inference of personal and sensitive data points (user modeling)
  • no informed consent on the part of the user
  • total behavioral surveillance (they collect every click)
  • manipulation of the feed while observing all behavioral responses (essentially human-subject research for ads)
  • profiteering from all of the above while harming the user's well-being (unethical)

Political interference and the proliferation of fascist "ideas" are possible if and only if all of the above are in play. If you take all this destructive shit away, software that let you explore vast amounts of data with cool algorithms through a user-friendly interface would not be bad in itself.
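
Here's that sketch: a rough Python illustration of user-owned ranking knobs. All field and parameter names are my own invention, not any real platform's API:

```python
import math
import time
from dataclasses import dataclass

# The ranking formula is open and the user owns the knobs.
# Nothing here requires behavioral surveillance to work.

@dataclass
class FeedKnobs:
    recency_weight: float = 1.0     # favor newer posts
    score_weight: float = 1.0       # favor upvoted posts
    diversity_weight: float = 0.5   # damp communities you've already seen

def rank(post, knobs, seen_communities):
    age_hours = (time.time() - post["published"]) / 3600
    recency = knobs.recency_weight / (age_hours + 2)
    popularity = knobs.score_weight * math.log(max(post["score"], 1))
    penalty = knobs.diversity_weight if post["community"] in seen_communities else 0.0
    return recency + popularity - penalty

def build_feed(posts, knobs, seen_communities=frozenset()):
    # The user, not the platform, decides what "good" means here.
    return sorted(posts, key=lambda p: rank(p, knobs, seen_communities), reverse=True)
```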

But you see, that is why we say "the medium is the message" and that "television is not a neutral technology". As a media system, television is constructed so that a few corporations can address the masses, not the other way around, and not so that people can interact with their neighbors. For a brief moment the internet promised to subvert that, until centralized social media brought back the exertion of control over the messaging by a few corporations. The current alternative is the Fediverse and P2P networks. This is my analysis.

[–] [email protected] 0 points 20 hours ago (1 children)

Like I said below, I think the distinction is that (a) I have access to an algorithm-free feed here, and (b) Lemmy (as far as I understand it) simply sorts content rather than outright removing content from my feed if it thinks it will make me spend less time on it. I could be wrong about that second point though.

[–] [email protected] 3 points 20 hours ago* (last edited 20 hours ago) (1 children)

An algorithm isn't just about whether or not it shows you the content. The sorting is an algorithm too. Well, plus the show-it-or-not part.

[–] [email protected] 1 points 20 hours ago

Through the discussion I’ve had here I can see that I should have been more specific and defined what kind of algorithm is the problem. But that was the point of making the post in the first place: to understand why the narrative is not moving in that direction. Now I can see why: it’s a nuanced discussion. But I think it’s well worth it to steer things that way.

[–] [email protected] 8 points 21 hours ago (1 children)

Algorithms can be useful - and at a certain scale they’re necessary. Just look at Lemmy: even as small as it is, there’s already some utility in algorithms like “Active”, “Hot” and “Scaled”, and as the number of communities and instances grows they’ll be even more useful. The trouble starts when there are perverse incentives to drive users toward one type of content or another, and the absence of those incentives is, I think, one of the fediverse’s key strengths.

[–] [email protected] -1 points 20 hours ago (2 children)

But correct me if I’m wrong (I’m not a programmer): Lemmy’s algorithm is basically just sorting; it doesn’t choose between two pieces of media to show me, but rather how to order them. Facebook et al. will simply not show content that I won’t engage with or that would make me spend less time on the platform.

I agree that they are useful, but at a certain point we as a society need to weigh the usefulness of certain technologies against their potential for harm. If the potential for harm is greater than the benefit, then maybe we should curb that potential somewhat or remove it altogether.

So maybe we could refine the argument: we need to limit what signals algorithms can use to push content. Or maybe all social media users should have access to an algorithm-free feed, with the algorithm-driven feed hidden by default and customizable by users.

[–] [email protected] 5 points 20 hours ago

"Algorithm" is just a fancy word for rules to sort by. "New" is an algorithm that says "sort by the timestamp of the submissions". That one is pretty innocuous, I think. Likewise "Active", which just says "sort by the last time someone commented" (or whatever). "Hot" and "Scaled", though, involve business logic -- rules that don't have one technically correct solution, but involve decisions and preferences made by people to accomplish a certain aim. Again, in Lemmy's case I don't think either the "Hot" or "Scaled" algorithms should be too controversial -- and if they are, you can review the source code, make comments or a PR for changes, or stand up your own Lemmy instance that does it the way you want. For walled-garden SM sites like TikTok, Facebook and Twitter/X, though, we don't know what the logic behind the algorithm says. We can speculate that it's optimized to keep people using the service for longer, or to encourage them to come back more frequently, but for all intents and purposes those algorithms are black boxes, and we have to assume they're working only for the benefit of the companies, not the users.
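
To make the distinction concrete, here's a rough Python sketch (Lemmy itself is written in Rust, and the hot-rank formula below is only in the general style of Lemmy's and Reddit's published ones, not the exact production code):

```python
import math
import time

# "New" and "Active" are purely mechanical; "Hot" embeds business logic.
# Posts are plain dicts with "published" and "last_comment_at" as Unix
# timestamps and "score" as net votes (all hypothetical field names).

def sort_new(posts):
    return sorted(posts, key=lambda p: p["published"], reverse=True)

def sort_active(posts):
    return sorted(posts, key=lambda p: p["last_comment_at"], reverse=True)

def hot_rank(post):
    age_hours = (time.time() - post["published"]) / 3600
    # log() damps runaway scores; the 1.8 decay is a policy choice, not math.
    return math.log(max(post["score"], 1) + 2) / (age_hours + 2) ** 1.8

def sort_hot(posts):
    return sorted(posts, key=hot_rank, reverse=True)
```

sort_new and sort_active each have one obviously correct implementation; the log() and the 1.8 exponent in hot_rank are exactly the kind of human decisions I mean by business logic.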

[–] [email protected] 3 points 20 hours ago

Every algorithm is just sorting. Facebook sorts which posts get shown on top and which get pushed down. Posts with a bad rank aren't shown to many people, and eventually aren't shown at all.

algorithm-free feed

Sorting by new?

[–] [email protected] 6 points 20 hours ago* (last edited 20 hours ago) (1 children)

How would you identify the kinds of algorithms that should be banned, as opposed to all the other kinds of algorithms? I have a feeling that would be tricky.

[–] [email protected] 1 points 20 hours ago (2 children)

The easy answer for me would be to ban algorithms that have the specific intent of maximizing user time spent on the app. I know that’s very hard to define legally. Maybe, like I suggested below, we could restrict which signals algorithms are allowed to use to suggest and push content?

[–] [email protected] 5 points 20 hours ago (1 children)

To do it based on intent would create some difficult grey areas. For example, video game creators would have to make their games as compelling as possible without crossing a more or less vague threshold and breaking the law. The second approach, regulating the ways different types of data can be used, sounds more promising.

[–] [email protected] 0 points 19 hours ago

Exactly. Even Meta and their thousands of lawyers would immediately say this: How does it harm people? Prove it does. Why are they being singled out? They're just showing content they think is relevant, and I'm guessing they honestly are. The problem is that political groups take advantage of that and make slop that enrages and inflames. But Meta would just say "you can't punish us for trying to make our platform successful". A mess all around.

[–] [email protected] 0 points 20 hours ago (1 children)

ban algorithms that have the specific intent of maximizing user time spent on the app.

That just means "make the app shitty". You can optimize for engagement without trying to make users angry; making users angry at each other just happens to be an extremely effective way to boost engagement.

[–] [email protected] 1 points 20 hours ago

I dunno, old forums were fun as fuck, and they had no algorithm beyond sorting by most popular, new, etc. Hey, if it makes people spend less time looking at their phones, it’s still a win in my book (I type, as I spend hours on my tablet). I’m a hypocrite, won’t lie.

[–] [email protected] 3 points 19 hours ago

Those mega-corporations have intentionally misused the term “algorithm”, which implies an unbiased method of ranking or sorting. What they actually use is more like a human-curated list of items to promote in support of their self-serving goals.

[–] [email protected] 3 points 20 hours ago (1 children)

It would be really nice if, at the very least, we could get some insight into how these algorithms are tuned. It seems obvious that Facebook and X want users to get pissed off. That does not seem ethical at all and deserves scrutiny.

[–] [email protected] 2 points 20 hours ago* (last edited 20 hours ago)

While transparency would help the discussion, I don’t think it would do much to stop propaganda, misinformation and outright bullshit from being disseminated to the masses, because people just don’t care. Even if the algorithm were transparently made to push false narratives, people would just shrug and keep using it. The average person doesn’t care about the who, what or why as long as they are entertained. But yes, transparency would be a good first step.

[–] [email protected] 2 points 20 hours ago

Non-consensual user-modeling systems should be heavily regulated.

[–] [email protected] 2 points 20 hours ago (2 children)

I participated in a discussion similar to this recently here on the German-language community: https://discuss.tchncs.de/post/28281369/15510510

Topics that were raised there by various people, some by me (read the full discussion if you can read German):

  • an "algorithm" is really just a way of manipulating data, it's meaningless to say you are banning "algorithms" because all software is based on "algorithms", even reverse-chronological sorting of things you're subscribed to is an algorithm
  • algorithms are mainly intended to keep people on the platform for as long as possible (but I raised the issue that I actually found old web forums more engaging than today's Facebook)
  • how do you define "an algorithm" legally? I suggested a definition based on transparency and objectivity, others raised the issue that this would mean that misinformation could be easily manipulated to be shown at the top, and that if you require "transparency", the platforms will just disclose how their algorithms work instead of abolishing them

One important aspect that nobody raised in that discussion is that moderation is different from censorship.

[–] [email protected] 2 points 20 hours ago* (last edited 20 hours ago)

I think the point of that article is closer to my own argument than I would have expected. I do still think that the problem is the design of the algorithm: a simple algorithm that just sorts content is not a problem. One that decides what to omit and what to push based on what it thinks will make me spend more time on the platform is problematic, and that is the kind of algorithm we should ban. So maybe the premise is: algorithms designed to make people spend more time on social media should be banned.

Engaging with another idea in there: I absolutely think that people should be able to say that Joe Biden is a lizard person and have that come up on everyone’s feed, because ridiculous claims like that are easily shut down when everyone can see them and comment on how fucking dumb they are. But when the message only makes the rounds in communities that are primed to believe that Joe Biden is a lizard person, the message gains credibility for them the more it is suppressed. We used to bring the Ku Klux Klan onto TV to embarrass themselves in front of all of America, and it worked very, very well; it was a social sanity check. We no longer have this, and now we have bubbles in every part of the political spectrum believing all kinds of oversimplifications, lies and propaganda.

[–] [email protected] 1 points 20 hours ago

If you model and infer some aspect of the user that is considered personal (e.g. de-anonymization) or sensitive (e.g. inferred sexuality) by means of an inference system, then you are in GDPR territory. Further use of that inferred data down the pipeline can be construed as unethical. If they want to be transparent about it, they would have to open-source their user-modeling and decision-making systems.

[–] [email protected] 2 points 20 hours ago

The real question is: how do you ban algorithms without banning the editorial discretion of the press?