this post was submitted on 03 Feb 2025
563 points (98.5% liked)

Technology


Originality.AI looked at 8,885 long Facebook posts made over the past six years.

Key Findings

  • 41.18% of current Facebook long-form posts are Likely AI, as of November 2024.
  • Between 2023 and November 2024, the average percentage of monthly AI posts on Facebook was 24.05%.
  • This reflects a 4.3x increase in monthly AI Facebook content since the launch of ChatGPT. In comparison, the monthly average was 5.34% from 2018 to 2022.
top 50 comments
[–] [email protected] 10 points 58 minutes ago (1 children)

FB has been junk for more than a decade now, AI or no.

I check mine every few weeks because I'm a sports announcer and it's one way people get in contact with me, but it's clear that FB designs its feed to piss me off and try to keep me doomscrolling, and I'm not a fan of having my day derailed.

[–] [email protected] 4 points 37 minutes ago (1 children)

I deleted Facebook around 2010 because I hardly ever used it anyway. It wasn't really bad back then, just not for me. Six or so years later, a friend of mine wanted to show me something on FB but couldn't find it, so he was just scrolling, and I was blown away by how bad it had become: just ads, autoplayed videos, and absolute garbage. From what I understand, it has only gotten worse since. Everyone I know who still uses Facebook is there for Marketplace.

[–] [email protected] 2 points 33 minutes ago

It's such a cesspit.

I'm glad we have the Fediverse.

[–] [email protected] 1 points 7 minutes ago

Not my Annie! No! Not my Annie!

[–] [email protected] 1 points 19 minutes ago

If you want to visit your old friends in the dying mall, go to Feeds, then Friends. That should filter everything else out.

[–] [email protected] 1 points 26 minutes ago (1 children)

That's an extremely small sample size for this.

[–] [email protected] 2 points 15 minutes ago (1 children)

8,885 long-form Facebook posts from various users, collected via a third party. The dataset spans 2018 to November 2024, with a minimum of 100 posts per month, each containing at least 100 words.

Seems like that's a reasonable baseline rule, and that was about the total number of posts that matched it.

[–] [email protected] 1 points 11 minutes ago* (last edited 10 minutes ago) (1 children)

Facebook apparently has around 3 billion active users.

Only turning up ~9k posts of over 100 words across a six-year stretch feels like a reach problem. You could draw the conclusion that bots get better reach.

[–] [email protected] 1 points 7 minutes ago

Each post has to be at least 100 words, with at least 100 such posts per month.

How many actual users write that much?
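For what it's worth, pure sampling error at this sample size is actually small; the real question is whether posts passing the 100-word filter represent Facebook as a whole. A quick sketch of the 95% binomial margin of error on the study's headline figure:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% binomial margin of error for an observed proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Headline figure: 41.18% of 8,885 sampled long-form posts flagged as likely AI.
moe = margin_of_error(0.4118, 8885)
print(f"+/- {moe * 100:.2f} percentage points")  # about +/- 1 percentage point
```

So ~9k posts pins the rate down to within about a point, assuming the sample is representative and the detector is accurate — both of which the thread is right to question.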

[–] [email protected] 10 points 3 hours ago (1 children)

This kind of just looks like an ad for that company's AI-detection software, NGL.

[–] [email protected] 6 points 2 hours ago

This whole concept relies on the idea that we can reliably detect AI, which is just not true. None of these "AI detector" apps or services actually works reliably; they have terribly low success rates. The whole point of LLMs is to be indistinguishable from human text, so if they're working as intended, you can't really "detect" them.

So all of these claims, especially given the precision with which they're stated (24.05%, etc.), are almost meaningless unless the "detector" can be proven to work reliably.

[–] [email protected] 0 points 55 minutes ago (1 children)

If you could reliably detect "AI" using an "AI" you could also use an "AI" to make posts that the other "AI" couldn't detect.

[–] [email protected] 4 points 44 minutes ago (1 children)

Sure, but then the generator AI is no longer optimised to generate whatever you wanted initially, but to generate text that fools the detector network, thus making the original generator worse at its intended job.

[–] [email protected] 1 points 23 minutes ago

I see no reason why "post right-wing propaganda" and "write so you don't sound like AI" should be conflicting goals.

The actual reason I don't find such results credible is that the "creator" is trained to sound like a human, so the "detector" has to be trained to spot what doesn't sound like a human. That means both basically have to solve the same task: deciding whether something sounds human.

To find the "AI" content, the "detector" would have to be better at deciding what sounds human than the "creator" is. So for the results to have any kind of accuracy, you're banking on the "detector" company having more processing power, better training data, or more money than, say, OpenAI or Google.

But also, if the "detector" were better at the job, it could itself be used as a better "creator". Then how would we distinguish the content it created?
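The arms-race dynamic being described can be sketched in a toy model. Nothing here is any real system's code; a single made-up feature stands in for "sounds human", and each side retrains against the other:

```python
import random

random.seed(0)

def human_sample() -> float:
    # Toy stand-in feature (think "burstiness"); human text clusters high.
    return random.gauss(0.7, 0.1)

def accuracy(thr, humans, fakes):
    # Detector calls anything above the threshold "human".
    hits = sum(h > thr for h in humans) + sum(f <= thr for f in fakes)
    return hits / (len(humans) + len(fakes))

gen_mean = 0.3  # the generator starts out easy to spot
for _ in range(50):
    humans = [human_sample() for _ in range(200)]
    fakes = [random.gauss(gen_mean, 0.1) for _ in range(200)]
    # Detector retrains: split at the midpoint of the two class means.
    thr = (sum(humans) / 200 + sum(fakes) / 200) / 2
    # Generator retrains: every detector update is a training signal,
    # so it nudges its output distribution toward the decision boundary.
    gen_mean += 0.5 * (thr - gen_mean)

print(round(gen_mean, 2))            # ends up near 0.7, matching humans
print(accuracy(thr, humans, fakes))  # detection collapses toward 0.5 (chance)
```

Each detector update hands the generator something to train against, so the fake distribution converges on the human one and detection accuracy falls to a coin flip — which is exactly the equilibrium argument above.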

[–] [email protected] 5 points 4 hours ago (1 children)

The title says 40% of posts, but the article says 40% of long-form posts, and it doesn't in any way specify what counts as a long-form post. My understanding is that the vast majority of Facebook posts are about the length of a tweet, so I doubt the title is even remotely accurate.

[–] [email protected] 1 points 1 hour ago

Yeah, the company that wrote the article is plugging its own AI-detection service, which I'm sure needs at least a couple of paragraphs to be at all accurate. For something in the range of a sentence or two, it's usually not going to be possible to detect an LLM.

[–] [email protected] 36 points 7 hours ago (1 children)

Keep in mind this is for AI generated TEXT, not the images everyone is talking about in this thread.

Also, they used an automated detection tool, and all of those have very high error rates, because detecting AI text is a fundamentally impossible task.

[–] [email protected] 2 points 5 hours ago (1 children)

AI does give itself away over "longer" posts, and if the tool makes about an equal number of false positives and false negatives, it should even itself out in the long run. (I'd have liked more than 9K "tests" for it to average out, but even so.) If they had the edit history for each post, which they didn't, it would be more obvious: AI will either copy-paste the whole thing in one go, or generate a word at a time at a fairly constant rate, while humans stop and think, go back and edit things, all of that.

I was asked to do some job interviews recently; the tech test had such an "animated playback", and the difference between a human doing it legitimately and someone using AI to copy-paste the answer was surprisingly obvious. The tech test questions were nothing to do with the job role at hand and were causing us to select for the wrong candidates completely, but that's more a problem with our HR being blindly in love with AI and "technical solutions to human problems".

"Absolute certainty" is impossible, but balance of probabilities will do if you're just wanting an estimate like they have here.
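Strictly speaking, false positives and false negatives only cancel at one particular prevalence. If a detector's sensitivity and specificity were known, the observed flag rate could be corrected; a minimal sketch using made-up error rates (not anything the study reports):

```python
def corrected_prevalence(observed: float, sensitivity: float, specificity: float) -> float:
    """Rogan-Gladen estimator: back out the true prevalence from the
    flag rate of an imperfect classifier."""
    return (observed + specificity - 1) / (sensitivity + specificity - 1)

# Hypothetical detector that catches 90% of AI text and wrongly flags 10% of human text:
print(corrected_prevalence(0.4118, 0.90, 0.90))  # ~0.39, close to the raw figure
# A weaker detector (70% sensitivity, 80% specificity) implies a different number:
print(corrected_prevalence(0.4118, 0.70, 0.80))  # ~0.42
```

The point is that the "estimate" moves with the assumed error rates, so without published sensitivity/specificity figures the headline percentage is hard to interpret.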

[–] [email protected] 2 points 5 hours ago (2 children)

I have no idea whether the probabilities are balanced. They claim 5% was AI even before ChatGPT was released, which seems pretty off. No one but researchers was using LLMs before ChatGPT went viral.

[–] [email protected] 1 points 2 hours ago

"Chatbots" doesn't mean they held real conversations. Some just spammed links from a list of canned responses, upvoted other chatbots to get more visibility, or simply reposted a comment from another user.

[–] [email protected] 1 points 4 hours ago

I'm pretty sure chatbots were a thing before LLMs. They certainly weren't as smart, but they did exist.

[–] [email protected] 16 points 8 hours ago (2 children)

> uses ai slop to illustrate it

[–] [email protected] 10 points 7 hours ago (1 children)

The most annoying part of that is the shitty render. I actually have an account on one of those AI image generating sites, and I enjoy using it. If you're not satisfied with the image, just roll a few more times, maybe tweak the prompt or the starter image, and try again. You can get some very cool-looking renders if you give a damn. Case in point:

[–] [email protected] 5 points 7 hours ago

😍 This is awesome!

A friend of mine made this with the method you described:

PS: 😆 The laptop in the illustration in the article! Someone didn't want to pay for a high-end model and didn't want to take any extra time either…

[–] [email protected] 5 points 7 hours ago

Seems like an appropriate use of the tech

[–] [email protected] 62 points 11 hours ago (2 children)

It's incredible. For months now I've been seeing suggested groups with an AI-generated picture of a pet or animal, and the text is always "Great photography". I block them, but I still see new groups like this every day. Incredible...

[–] [email protected] 27 points 9 hours ago (3 children)

I have a hard time understanding Facebook's endgame here: if they just have a bunch of AI readers reading AI posts, how do they monetize that? Why on earth is the stock market so bullish on them?

[–] [email protected] 14 points 8 hours ago (1 children)

Engagement.

It's all they measure: what makes people reply and react to posts.

People in general are stupid, and can't tell or don't care whether something is AI generated.

[–] [email protected] 4 points 8 hours ago (3 children)

They measure engagement, but they sell human eyeballs to advertisers.

[–] [email protected] 4 points 7 hours ago (1 children)

But if half of the engagement is from AI, isn't that a grift on advertisers? Why should I pay for an ad on Facebook that is going to be "seen" by AI agents? AIs don't buy products (yet?).

[–] [email protected] 3 points 7 hours ago

yes, exactly.

[–] [email protected] 24 points 9 hours ago (2 children)

As long as they can convince advertisers that enough of the activity is real, or enough of the bot-driven manipulation of public opinion serves Facebook's interests, bots aren't a problem at all in the short term.

[–] [email protected] 5 points 7 hours ago (2 children)

For me it's some kind of cartoon with the caption "Great comic funny 🤣" and sometimes "funny short film" (even though it's a picture)

Like, Meta has to know this is happening. Do they really think this is what will keep their userbase? And nobody would think it's just a little weird?

[–] [email protected] 1 points 4 hours ago

Engagement is engagement, sustainability be damned.

[–] [email protected] 1 points 4 hours ago

Considering that the analysis is automated, 8k posts doesn't seem like a lot. But still, very interesting.

[–] [email protected] 5 points 7 hours ago (1 children)

Probably on par with the junk human users are posting

[–] [email protected] 28 points 11 hours ago (5 children)

I’ve posted a notice to leave next week. I need to scrape my photos off, get any remaining contacts, and turn off any integrations. I was only there to connect with family. I can email or text.

FB is a dead husk that keeps feeding some rich assholes. If it's a coin flip whether any given post is AI, what's the point?

[–] [email protected] 21 points 11 hours ago* (last edited 11 hours ago) (10 children)

The bigger problem is AI “ignorance,” and it’s not just Facebook. I’ve reported more than one Lemmy post the user naively sourced from ChatGPT or Gemini and took as fact.

No one understands how LLMs work, not even on a basic level. Can’t blame them, seeing how they’re shoved down everyone’s throats as opaque products, or straight up social experiments like Facebook.

…Are we all screwed? Is the future a trippy information wasteland? All this seems to be getting worse and worse, and everyone in charge is pouring gasoline on it.
