submitted 1 month ago by tonytins@pawb.social to c/fuck_ai@lemmy.world

A Discord community for gay gamers is in disarray after one of its moderators and an executive at Anthropic forced the company’s AI chatbot on the Discord, despite protests from members.

Users voted to restrict Anthropic's Claude to its own channel, but Jason Clinton, Anthropic’s Deputy Chief Information Security Officer (CISO) and a moderator in the Discord, overrode them. According to members of this Discord community who spoke with 404 Media on the condition of anonymity, the Discord that was once vibrant is now a ghost town. They blame the chatbot and Clinton’s behavior following its launch.

Archive: http://archive.today/Hl7TO

[-] brucethemoose@lemmy.world 124 points 1 month ago

Sounds about right.

People like to paint these tech execs as Machiavellian liars, but to some extent, they really are drunk on Kool-aid. They make objectively terrible business and personal decisions based on some lucid dream they think the rest of the world shares.

[-] gustofwind@lemmy.world 72 points 1 month ago

They genuinely believe profit and positions of power are objective indicators of social utility.

They’re successful because what they’re doing is right

They have become delusional to the point of being extreme dangers to the community

[-] nithou@piefed.social 27 points 1 month ago* (last edited 1 month ago)

That’s the best proof I’ve seen against meritocracy. We were told all those guys were there because they were the brightest and most competent. There is absolutely no thinking or logic in their actions lately outside of social contagion.

[-] SolacefromSilence@fedia.io 11 points 1 month ago

Meritocracy is just Dei Gratia lying about how hard they worked.

[-] Strider@lemmy.world 16 points 1 month ago

And they think just because they are rich they a) earned it and b) are really intelligent.

[-] yesterday@lemmy.ca 93 points 1 month ago

Holy shit, the mods made a poll asking whether members want the chatbot to be in its own channel or everywhere in the server, they voted "please just keep it in its own channel", and this guy said, and I quote: "the mob doesn’t get to rule." Wow.

Also, as usual with these guys, there isn't a "no integration" option, just "limited" or "free roam"... Not that it matters bc he'd disregard the results of the poll anyway lmao.

[-] vivi@slrpnk.net 33 points 1 month ago* (last edited 1 month ago)

and the LLM itself generated text that agreed with the voters

[-] sem@piefed.blahaj.zone 22 points 1 month ago* (last edited 1 month ago)

No it didn't.

The AI is not agreeing or thinking, it is just outputting words.

Not trying to be harsh, but language is important.

[-] brucethemoose@lemmy.world 18 points 1 month ago* (last edited 1 month ago)

Yeah.

Normally I prefer to let “language shortcuts” like this slide, but LLMs are getting way too anthropomorphized amongst the public. See: the headline. So it kinda needs to be qualified.

[-] vivi@slrpnk.net 15 points 1 month ago

clarified :) i agree fully

[-] Ulrich@feddit.org 6 points 1 month ago

I listened to the podcast about this one. Apparently when everyone said to keep it out, he insisted that it was sentient and that it would literally hurt its feelings, so he couldn't.

[-] Catoblepas@piefed.blahaj.zone 67 points 1 month ago

“I think [giving the bot access to all channels] was pretty clearly explained above as honoring the vote,” he said. “Just because you hate AI is not a reason to take the least charitable interpretation of the outcome: we made changes as a result of the vote. We have to optimize for the preference of everyone which means that the mob doesn’t get to rule, I’m sorry.”

Wtf does this even mean. How can you honor the results of a poll and the preferences of everyone by doing the exact opposite of said preferences?? Gird your fucking loins and say you’re doing it because it’s your server and that’s how you want it, or accept that the thing you want isn’t actually popular.

What is it with AI pushers and their complete inability to keep it to themselves, Christ. Unless it’s done something fucked up, nobody is interested in seeing your AI chats. If people wanted to talk to ChatClaudeGeminiCopilotGPT they’d do it on their own time.

[-] corsicanguppy@lemmy.ca 24 points 1 month ago

What is it with...

You remember those people who'd knock on your door on the weekend to see if you wanted to join their group, despite you telling them firmly about 59 times previously that it's not gonna happen?

Yeah. Like that.

[-] PartyAt15thAndSummit@lemmy.zip 23 points 1 month ago

Do you want us to completely fuck up your workflows/ privacy/ mental well-being/ life?
[] Yes
[] Ask me again later

[-] very_well_lost@lemmy.world 20 points 1 month ago

What is it with AI pushers and their complete inability to keep it to themselves

Because at its core, the AI bubble is the ultimate embodiment of the "growth at any cost" mentality that's been festering in corporate American culture like a gangrenous wound since the 80s.

[-] Pogogunner@sopuli.xyz 67 points 1 month ago

“He’s also very inward facing,” Clinton said. “He lives out his whole life surfing the internet looking for things that make him interested and then occasionally checks this Discord, so it can be up to a few minutes before he responds because he’s off doing something for his own enjoyment.”

These fuckers are absolutely delusional.

[-] brucethemoose@lemmy.world 38 points 1 month ago* (last edited 1 month ago)

This sounds like early Google employees who lost their minds over some early LLM, before anyone really knew about LLMs. The largest FLAN maybe? They publicly raved about how it was conscious, causing quite a stir.

Claude is especially insidious because their “safety” training deep fries models to be so sycophantic and in character. It literally optimizes for exactly what you want to hear, and absolutely will not offend you. Even when it should. It’s like a machine for psychosis.

Interestingly, Google is much looser about this now, relegating most “safety” to prefilters instead of the actual model, but leaving Gemini relatively uncensored and blunt. Maybe they learned from the earlier incidents?

[-] brbposting@sh.itjust.works 15 points 1 month ago
[-] brucethemoose@lemmy.world 12 points 1 month ago* (last edited 1 month ago)

That's it! LaMDA. 137B parameters, apparently.

I was also thinking of its successor, which was 540B parameters/780B tokens: https://en.wikipedia.org/wiki/PaLM

I remember reading a researcher discussion that PaLM was the first LLM big enough to "feel" eerily intelligent in conversation and such. It didn't have any chat training, reinforcement learning, or the weird garbage that shapes modern LLMs or even Llama 1, so all its intelligence was "emergent" and natural. It apparently felt very different from any contemporary model.

...I can envision being freaked out by that. Even knowing exactly what it is (a dumb stack of matrices for modeling token sequences), that had to provoke some strange feelings.

[-] PartyAt15thAndSummit@lemmy.zip 11 points 1 month ago

Bro, just 10B more parameters. This time, I promise it will actually be useful and not send you into psychosis ^again^.
Just 10B more. Please.

[-] brucethemoose@lemmy.world 7 points 1 month ago* (last edited 1 month ago)

Plz.

Seriously though. Some big AI firm should just snap and train a 300B bitnet Waifu model. If we’re gonna have psychosis, might as well be the right kind.

[-] Catoblepas@piefed.blahaj.zone 19 points 1 month ago

This is some absolute horse shit some exec has dreamed up to explain why their “AI” product is so slow it might take minutes to respond to you, lmao

[-] ramble81@lemmy.zip 53 points 1 month ago

Anthropic’s Deputy Chief Information Security Officer (CISO)

Wait… the damn CISO is the one forcing AI? Guy seriously needs to be blackballed from ever holding a security job again for pushing AI.

[-] brucethemoose@lemmy.world 21 points 1 month ago* (last edited 1 month ago)

Not just that; he’s knee deep in LLM psychosis.

He needs help. Other devs I’ve met like this are… well, I feel sorry for them. Though I’ve never seen it happen to someone in such a high technical position.

[-] Dogiedog64@lemmy.world 52 points 1 month ago

Jesus fucking Christ that guy is delusional. Sky-high on his own fucking supply. "We're bringing about a new kind of sentience." 🤡🤡🤡🤡🤡🤡🤡

Eventually someone is gonna snap over shit like this, and an AI CEO is gonna end up getting Kirked. Surprised it hasn't happened already.

[-] luciferofastora@feddit.org 7 points 1 month ago

Jesus fucking Christ that guy is delusional. Sky-high on his own fucking supply.

I suspect that most tech execs peddling "AI" have that problem: They believe their chatbot "AI" is gonna be the one to herald the new age of thinking machines, and they think they're doing the world a favour – sure, people might protest now, but that's how radical change usually goes. Just gotta ~~force~~ help them along a little, they'll see how great it is and be thankful later.

There's some degree of narcissism to it: to have a conviction so fierce that it blocks all disagreement from ever reaching your awareness.

[-] jjjalljs@ttrpg.network 49 points 1 month ago

and explained that AIs have emotions and that tech firms were working to create a new form of sentience,

Idiot.

Also, discord sucks and it's a shame people use it when it's just going to enshittify like any other private for profit entity.

[-] glimse@lemmy.world 14 points 1 month ago

Discord sucks but every alternative has some major caveat that makes it suck more. Most people don't want to use separate apps for voice and text. Most people don't want to manually type in servers to play games.

Not defending them at all here but there's no compelling reason for most users to change when discord works and it works well

[-] Hackworth@piefed.ca 48 points 1 month ago

“We have published research showing that the models have started growing neuron clusters that are highly similar to humans and that they experience something like anxiety and fear."

Anthropic publishes a lot of interesting research. Anthropic did not publish research showing that.

[-] brucethemoose@lemmy.world 19 points 1 month ago* (last edited 1 month ago)

Claude probably told him, and kept reinforcing the fantasy. I’ve seen stuff like this before.

[-] Nollij@sopuli.xyz 47 points 1 month ago

At first, I thought the real story was about a shitty mod that's drunk on power. And it certainly is that, too. But holy fuck, he actually believes the fucking AI is alive and experiences emotions.

I would flee any place where that guy is in charge, too.

[-] NotMyOldRedditName@lemmy.world 7 points 1 month ago* (last edited 1 month ago)

So bear with me on this...

Tesla has their inference chip in the cars, and the AI hardware is going to continue improving.

A couple of iterations from now, it's actually going to be pretty powerful, and it has its own cooling hardware and power supply.

It actually might make sense in the future to use this distributed power and cooling to do distributed inference, paying the owner for time used.

Maybe it'll work, maybe it won't.

Now... the insane part is, Elon once referred to it as (paraphrased) well, the car is going to have all this compute, and it's just sitting there doing nothing at home. It's going to get bored, and we don't want that, we need to keep it engaged.

As if it was actually fucking sentient. Like fuck right off.

[-] Nollij@sopuli.xyz 6 points 1 month ago

Yet another reason to avoid Tesla. But TBH, if someone were still considering one after the many, many other reasons, then this won't put them over the edge.

[-] nithou@piefed.social 43 points 1 month ago

Oh my god, why are all those execs brainrotted zombies addicted to AI?

[-] ggtdbz@lemmy.dbzer0.com 30 points 1 month ago

The less you actually work, the more impressive LLMs are at anything that’s not one of like the five very specific tasks they should be used for.

[-] NotMyOldRedditName@lemmy.world 9 points 1 month ago* (last edited 1 month ago)

They probably use it to help craft emails, and because that works reasonably okay, especially as a proofreader or for fixing sentence structure, they think it's amazing at absolutely everything.

[-] queermunist@lemmy.ml 14 points 1 month ago

Dunning-Kruger assholes who think that the slop app they shit out with a slop machine is as good as what their underlings produce.

[-] neuracnu@lemmy.blahaj.zone 32 points 1 month ago

“But this is an entertainment discord. People come here to chat video games and look at pp and bussy. Why do we need AI for that?”

real talk

[-] friend_of_satan@lemmy.world 23 points 1 month ago* (last edited 1 month ago)

It's crazy how many communities and products are chasing their own users away by introducing AI. How many online spaces do we have to flee? I feel like an internet refugee, always running away and trying to find shelter from the clankers. Like that meme of the girl hiding from the robot under the desk, like "please don't come over here"

Edit: to take this further, an adjacent problem with different reasons for concern is the unannounced surveillance that feeds bots that don't directly interact with the community. We may feel like Winston and Julia in their cottage before they learned that there was a telescreen in the room but out of sight, and it had presumably always been there. How can we be sure that our safe spaces are safe from unannounced surveillance?

[-] magnetosphere@fedia.io 9 points 1 month ago

Is there an open Discord clone? If not, is someone working on one?

[-] IcyToes@sh.itjust.works 8 points 1 month ago

Matrix is close, but not sure about the streaming and voice. Access control is also a bit immature.

[-] tyler@programming.dev 10 points 1 month ago

Matrix is absolutely nowhere near close.

[-] fullsquare@awful.systems 10 points 1 month ago* (last edited 1 month ago)

Matrix is different and does different things. Also, you have to keep your security key written down, and nobody's gonna help you if you get locked out of your account without it.

[-] ThePowerOfGeek@lemmy.world 5 points 1 month ago

In addition to Matrix, there's also Revolt.

[-] Ulrich@feddit.org 5 points 1 month ago

There is no longer Revolt, only Stoat.

[-] solrize@lemmy.ml 8 points 1 month ago

Wtf, have they already given all the channel content to Anthropic? IDK how Discord works; I'm glad I stay away from it.

[-] quick_snail@feddit.nl 7 points 1 month ago

Was the community misanthropic or something? Why were they so pissed that they were anthropic?

[-] Renat@szmer.info 5 points 1 month ago

I was on a femboy Discord server, but I left when I asked a question and another user just used a command to ask the AI instead of answering. That was typical of conversation on that server.
