submitted 1 month ago by [email protected] to c/[email protected]
[-] [email protected] 31 points 1 month ago

You'd expect them to control for that.

But who the hell knows with these people.

[-] [email protected] 46 points 1 month ago

How could they? The Internet is flooded with that garbage, and not just from their own model.

[-] [email protected] 21 points 1 month ago

Yeah, they would have to have a surefire way to identify the content as 100% AI-generated so they could ditch it. If they're training on Reddit comments then they're fucked.

[-] [email protected] 19 points 1 month ago

To be fair, they do thanks the gold for the stranger fuck CCP bacon.

[-] [email protected] 9 points 1 month ago* (last edited 1 month ago)

ChatGPT doesn't have access to verified not-bots. Google and Facebook do (they can read all of your messages). Expect them to become the privacy-invasive leaders on this.

[-] [email protected] 4 points 1 month ago

Exactly it.

Meta will be able to have bots dominate their platforms while being able to distinguish and train on the real human users.

Human interaction with bots will also allow labelling some bot-generated data as acceptable for reuse as training data when real human data isn't enough.

Social media platforms will have a massive advantage for LLMs in the long run. Glorified search engine platforms such as ChatGPT are a relic of this current era.

[-] [email protected] 3 points 1 month ago

They could use human reviewers to discard obvious bots... but that would make this shit even more expensive and unprofitable, so they cut corners and make their own project fail.

this post was submitted on 06 May 2025
85 points (100.0% liked)