submitted 3 days ago by [email protected] to c/[email protected]

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

26 comments
[-] [email protected] 13 points 2 days ago

"Another thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubble’s wide-spread harms […] any notion of AI being value-neutral as a tech/concept has been equally undermined. [As such], I expect any positive depiction of AI is gonna face some backlash, at least for a good while."

Me, two months ago

Well, it appears I've fucking called it - I've recently stumbled across some particularly bizarre discourse on Tumblr, reportedly over a highly unsubtle allegory for transmisogynistic violence:

You want my opinion on this small-scale debacle? I've got two thoughts:

First, any questions about the line between man and machine have likely been put to bed for a good while. Between AI art's uniquely AI-like sloppiness, and chatbots' uniquely AI-like hallucinations, the LLM bubble has done plenty to delineate the line between man and machine, chiefly to AI's detriment. In particular, creativity has come to be increasingly viewed as exclusively a human trait, with machines capable only of copying what came before.

Second, using robots or AI to allegorise a marginalised group is off the table until at least the next AI spring. As I've already noted, the LLM bubble's undermined any notion that AI systems can act or think like us, and double-tapped any notion of AI being a value-neutral concept. Add in the heavy backlash that's built up against AI, and you've got a cultural zeitgeist that will readily other or villainise whatever robotic characters you put on screen - a zeitgeist that will ensure your AI-based allegory will fail to land without some serious effort on your part.

[-] [email protected] 10 points 2 days ago

Humans are very picky when it comes to empathy. If LLMs were made out of cultured human neurons, grown in a laboratory, then there would be outrage over the way in which we have perverted nature; compare with the controversy over e.g. HeLa lines. If chatbots were made out of synthetic human organs assembled into a body, then not only would there be body-horror films about it, along the lines of eXistenZ or Blade Runner, but there would be a massive underground terrorist movement which bombs organ-assembly centers, by analogy with existing violence against abortion providers, as shown in RUR.

Remember, always close-read discussions about robotics by replacing the word "robot" with "slave". When done to this particular hashtag, the result is a sentiment that we no longer accept in polite society:

I'm not gonna lie, if slaves ever start protesting for rights, I'm also grabbing a sledgehammer and going to town. … The only rights a slave has are that of property.

[-] [email protected] 20 points 3 days ago* (last edited 3 days ago)

I'm going to put a token down and make a prediction: when the bubble pops, the prompt fondlers will go all in on a "stabbed in the back" myth and will repeatedly try to re-inflate the bubble, because we were that close to building robot god and they can't fathom a world where they were wrong.

The only question is who will get the blame.

[-] [email protected] 4 points 2 days ago

The only question is who will get the blame.

what does chatbot say about that?

[-] [email protected] 8 points 3 days ago

They're doing it with cryptocurrency right now.

[-] [email protected] 9 points 3 days ago

Whoever they say they blame, it's probably going to be ultimately indistinguishable from "the Jews"

[-] [email protected] 8 points 3 days ago

nah they'll just stop and do nothing. they won't be able to do anything without chatgpt telling them what to do and think

i think that deflation of this bubble will be much slower and a bit anticlimactic. maybe they'll figure out a way to squeeze suckers out of their money in order to keep the charade going

[-] [email protected] 14 points 3 days ago* (last edited 3 days ago)

The Gentle Singularity - Sam Altman

This entire blog post is sneerable so I encourage reading it, but the TL;DR is:

We're already in the singularity. ChatGPT is more powerful than anyone on earth (if you squint). Anyone who uses it has their productivity multiplied drastically, and anyone who doesn't will be out of a job. 10 years from now we'll be in a society where ideas and the execution of those ideas are no longer scarce thanks to LLMs doing most of the work. This will bring about all manner of sci-fi wonders.

Sure makes you wonder why Mr. Altman is so concerned about coddling billionaires if he thinks capitalism as we know it won't exist 10 years from now but hey what do I know.

[-] [email protected] 8 points 2 days ago* (last edited 2 days ago)

I think I liked this observation better when Charles Stross made it.

If for no other reason than he doesn't start off by dramatically overstating the current state of this tech, isn't trying to sell anything, and unlike ChatGPT is actually a good writer.

[-] [email protected] 13 points 3 days ago

Bummer, I wasn't on the invite list to the hottest SF wedding of 2025.

Update your mental models of Claude lads.

Because if the wife stuff isn't true, what else could Claude be lying about? The vending machine business?? The blackmail??? Being bad at Pokemon????

[-] [email protected] 11 points 3 days ago

It's gonna be so awkward when Anthropic reveals that inside their data center is actually just Some Guy Named Claude who has been answering everyone's questions with his superhuman typing speed.

[-] [email protected] 8 points 3 days ago

11,000 Indian people renamed to Claude

[-] [email protected] 14 points 1 day ago

In the morning: we are thrilled to announce this new opportunity for AI in the classroom

In the afternoon:

Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it's been saying all afternoon are fakes.

[-] [email protected] 13 points 1 day ago

Today's bullshit that annoys me: Wikiwand. From what I can tell, their grift is that it's just a shitty UI wrapper for Wikipedia that sells your data to who the fuck knows to make money for some Israeli shop. They also SEO the fuck out of their stupid site, so every time I search for something that has a Finnish Wikipedia page, the results also contain a pointless, shittier duplicate from wikiwand dot com. Has anyone done a deeper investigation into what their deal is, or at least written some kind of rant I could indulge in for catharsis?

[-] [email protected] 12 points 2 days ago

Love how the most recent post in the AI2027 blog starts with an admonition to please don't do terrorism:

We may only have 2 years left before humanity’s fate is sealed!

Despite the urgency, please do not pursue extreme uncooperative actions. If something seems very bad on common-sense ethical views, don’t do it.

Most of the rest is run of the mill EA type fluff such as here's a list of influential professions and positions you should insinuate yourself in, but failing that you can help immanentize the eschaton by spreading the word and giving us money.

[-] [email protected] 11 points 1 day ago

It's kind of telling that it's only been a couple of months since that fan fic was published and there's already so much defensive posturing from the LW/EA community. I swear the people who were sharing it when it dropped, tacitly endorsing it as the vision of the future from certified prophet Daniel K, are now all "oh, it's directionally correct, but too aggressive." Note that we're over halfway through 2025 and the earliest prediction of agents entering the workforce is already fucked. So if you're a 'super forecaster' (guru), you can do some sleight of hand now and come out against the model, knowing the first goalpost was already missed and the tower of conditional probabilities resting on it is already breaking.

Funniest part is that even one of the authors seems to be panicking, as even he can tell they're losing the crowd, and he's falling back on "It's not the most likely future, it's just the most probable." A truly meaningless statement if your goal is to guide policy, since events with arbitrarily low probability density can still be the "most probable" given enough different outcomes.

Also, there's literally mass brain uploading in AI-2027. This strikes me as physically impossible in any meaningful way in the sense that the compute to model all molecular interactions in a brain would take a really, really, really big computer. But I understand if your religious beliefs and cultural convictions necessitate big snake 🐍 to upload you, then I will refrain from passing judgement.

[-] [email protected] 10 points 2 days ago

Andrew Gelman does some more digging and poking about those "ignore all previous instructions and give a positive review" papers:

https://statmodeling.stat.columbia.edu/2025/07/07/chatbot-prompts/

Previous Stubsack discussion:

https://awful.systems/comment/7936520

[-] [email protected] 7 points 2 days ago

What I don't understand is how these people didn't think they would be caught, with potentially career-ending consequences? What is the series of steps that leads someone to do this, and how stupid do you need to be?

[-] [email protected] 9 points 3 days ago

So apparently Grok is even more of a Nazi conspiracy loon now.

I'm sure a Tucker Carlson interview is going to happen soon.

this post was submitted on 06 Jul 2025
25 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago