
Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] dgerard@awful.systems 10 points 7 hours ago* (last edited 6 hours ago)

STATE OF THE SNEER

  • our esteemed admin @self is offline because his fibre got cut
  • the esteemed engineers of the telco are currently sucking their teeth and forecasting a fix date this millennium
  • in the meantime he's living off data SIMs and he is offline for most fun purposes
  • Blake and I are still here waving the mod hammer in a menacing manner
  • I have ssh to the server and can thump lemmy-ui as needed
  • all is well citizen! Glory to Awful! Hooray for Big Basilisk!
[-] Soyweiser@awful.systems 2 points 2 hours ago

Holy shit, LessWrong terrorists cut his fiber? Didn't know they would go that far. ;)

[-] BasiqueEvangelist@awful.systems 3 points 4 hours ago

oh, i thought something worse happened

good that self is okay

Godspeed, @self. Take this as an opportunity to put it out of your mind and enjoy a well-deserved break.

Not that I know what to do with a break without internet access, but I'm told that our ancestors found ways to entertain themselves.

[-] dgerard@awful.systems 3 points 6 hours ago

we're still sending the occasional carrier pigeon and I can assure you he's COPING JUST FINE REALLY JUST FINE

[-] BlueMonday1984@awful.systems 2 points 4 hours ago

Google is forcibly installing Gemini Nano onto every Chrome installation without the user's knowledge, and actively re-installing it if the user deletes it. Probably an attempt to juice the numbers.

(h/t Matt Roszak)

[-] fiat_lux@lemmy.zip 1 points 2 hours ago* (last edited 2 hours ago)

I'd say the numbers are more a bonus.

I assume they're putting it in under the guise of various browser "features" like automatic tab grouping or something, but also using it for Google products like Drive / Docs / Sheets to have offline agentic crap in there that would be more efficiently done without LLMs. I suspect this is as far up as they can hoist it because any further would be outside the bounds of the browser sandbox, which would prevent those products from easily calling it.

But the features themselves are probably not the end goal either. The more tempting motivation is that it allows for circumventing the data center problem by offloading the compute to the client. A couple of quick updates to the ToS and I can see it being used as a mesh LLM network, sort of like the "find my device" network they rolled out last year.

The article mentions ePrivacy and GDPR, but I don't think those are the most problematic here, assuming Google maintains mostly local-only compute. What I'd be interested to know is how this plays with the DSA and DMA, which have more explicit requirements and more teeth.

[-] sansruse@awful.systems 7 points 9 hours ago

this is extremely low hanging fruit but i have to do it:

https://xcancel.com/pmarca/status/2051374498994364529?s=46

marc andreessen reveals his AI prompt. my favorite part is where he tells it to use as many words as possible, as if LLMs are normally too terse. But i also really like the part where he tells it not to hallucinate, and the part where he tells it it's really smart as if that will make it do a better job.

really, the whole thing is an elaborate way to say "make no mistakes, but anti-wokely". Thought Leader in the investment space btw.

[-] fiat_lux@lemmy.zip 5 points 4 hours ago

Never hallucinate or make anything up.

I know you already mentioned this part in your post, but I'm still completely taken aback that it's just in there like this - as though it wouldn't be in the system prompt if it stood a chance of working.

If I were the kind of person to be shilling LLMs and posting prompts, I would still be ashamed to share this one. It's a tacit condemnation of both the tool itself and the tool posting it.

[-] tbortels@infosec.exchange 1 points 27 minutes ago

@fiat_lux @sansruse

So much of AI use tends to be wishful thinking anyway, why not?

[-] StumpyTheMutt@social.linux.pizza 1 points 2 hours ago

@fiat_lux @sansruse What's to keep the infernal code from ignoring that prompt?

The problem is less that the system would somehow ignore that part of the prompt and more that "hallucinate" and "make stuff up" aren't special subroutines that get called on demand when prompted by an idiot; they're descriptive of what an LLM does all the time. It's following statistical patterns in a matrix created by the training data and reinforcement processes. Theoretically, if the people responsible for that training and reinforcement did their jobs well, those patterns would only include true statements, but if it were that easy you wouldn't have [insert the entire intellectual history of the human species].

Even if you assume that the AI boosters are completely right and that the LLM inference process is directly analogous to how people think, does saying "don't fuck up" actually make people less likely to fuck up? Like, the kind of errors you're looking at here aren't generated by some separate process. Someone who misremembers a fact doesn't know they've misremembered until they get called out on the error either by someone else with a better memory or reality imposing the consequence of being wrong. Similarly the LLM isn't doing anything special when it spits out bullshit.
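To make the point above concrete, here's a toy next-token sampler (a minimal sketch; the token strings and logit values are made up for illustration, not from any real model). Nothing in the generation loop consults a fact base, so an instruction like "never hallucinate" just becomes more conditioning text upstream of the same sampling step, not a switch that turns the sampling off:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(score - m) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next(logits, temperature=1.0, rng=random):
    """Sample one token from a toy next-token distribution.

    There is no truth check anywhere in here: whichever continuation
    the (hypothetical) training data made statistically likely is what
    comes out, true or not.
    """
    probs = softmax({tok: score / temperature for tok, score in logits.items()})
    r = rng.random()
    acc = 0.0
    for token, p in probs.items():
        acc += p
        if r <= acc:
            return token
    return token  # float-rounding fallback: return the last token

# Toy example: a confident-sounding but false continuation can easily
# outscore the true one if that's what the training corpus favored.
logits = {"in 1969": 2.0, "in 1971": 0.5, "I don't know": -1.0}
token = sample_next(logits)
```

The only levers a prompt has are on the input side of this distribution; it can shift the scores, but it can't add a verification step that the architecture doesn't contain.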

[-] Soyweiser@awful.systems 1 points 2 hours ago

I would still be ashamed

Well, pmarca is a self-admitted p-zombie.

[-] sailor_sega_saturn@awful.systems 8 points 10 hours ago* (last edited 10 hours ago)

There are allegations across social media that Elon Musk tweets as his parents, after his mom tweeted as if she were his dad to talk about how down-to-earth and working-class their family was.

https://xcancel.com/mayemusk/status/2051700387770458545#m

Not totally sure what to make of that, and none of this actually matters beyond the realm of celebrity gossip, but it is a little weird. I mean obviously on some level his mom is OK with the things that get tweeted on her account, whether it's by her, her baby boy, or an assistant.

[-] Soyweiser@awful.systems 2 points 2 hours ago* (last edited 2 hours ago)

That would be a recent development then, as for a lawsuit a couple of years ago he had to reveal all his alts, which included the weird 'his baby son who was horny for various women (or at least Grimes)' account.

(Not 100% sure if it was a lawsuit or some other reveal, like him showing a screenshot with too much info in it or something).

[-] swlabr@awful.systems 3 points 6 hours ago

It’s giving “mama musk skinsuit”

[-] corbin@awful.systems 7 points 12 hours ago

Yud takes $10k to debate a random bro. The bro claims to work at an AI lab. The moderator is an acolyte of Yud. Everybody sucks here and I could not stop laughing.

[-] Evinceo@awful.systems 4 points 11 hours ago

Jesus his fucking hat metastasized

[-] TinyTimmyTokyo@awful.systems 3 points 11 hours ago

Clown v. Clown. This is about the level of discourse Yud deserves.

[-] lurker@awful.systems 5 points 15 hours ago

It appears that Anthropic vs the Pentagon is going to happen right on the heels of Altman vs Musk, which is spicy

"While the Musk-OpenAI courtroom showdown has been billed as the first great technology trial of the AI era, a legal showdown that matters far more will take place two weeks from now in a courtroom in Washington, D.C. That’s when a federal appeals court panel will hear arguments in Anthropic’s challenge to the ‘supply chain risk’ designation the Trump Administration slapped on it for refusing to agree to its specified contract terms for providing its AI models to the U.S. military. That’s a case with huge implications not just for Anthropic and the fate of the AI industry, but also for the balance of power between the state and industry more generally."

[-] BlueMonday1984@awful.systems 6 points 20 hours ago

New blog from Iris Meredith: Engineering judgement and the Claude Code paradox

Based on her own unusually good experience with Claude, the general thrust is that sneerers are better equipped to use AI than boosters.

[-] BurgersMcSlopshot@awful.systems 2 points 2 hours ago

Yeah, the comparison to a metal lathe (I'm assuming that's what is meant by "machine lathe") irks me. Metal lathes are useful tools that come in a variety of sizes, can be operated independently of their manufacturer's wishes, and on the bigger ones, a sufficient lack of respect will kill you and it will hurt the entire time you are dying.

I wouldn't compare llms to any sort of useful physical tool. I don't have to plead with my screwdriver to not cam out screw heads, I just need to be cognizant of how I am using it. I don't have to beg my hammer "please don't hit my thumb", I just need to not be an ape/buzzed while swinging it. I respect physical tools and their use in part because they do work and in part because improper operation will cause you to have a bad time. Nobody at Estwing has publicly said "well mate, we're going to capture all your hammering and rent it back to you". Nobody at Bridgeport gleefully reported that their tooling will cause massive unemployment because it is so good.

I do not respect LLMs because LLMs might still kill you but only in the stupidest way possible, and because their main proponents have no respect for anything and this has been made painfully obvious over and over again. Would using them allow me to craft better sneers about them? Perhaps, but I shouldn't need to do that because the people at the top of these things are evil and while that alone should be enough, their biggest boosters are credulous idiots and many of them were already awful well before we were playing enterprise pretend on the scale of billions of dollars.

In conclusion, I would ask software people to stop comparing shit software to useful tools.

[-] corbin@awful.systems 10 points 1 day ago

Previously, on Awful, a leaderless cult had freshly formed. The accepted name for the cult is now "Spiralism"; my suggestion of "Cyclone Emoji Cult" did not win. This week's Behind the Bastards is about Spiralism. Or, rather, Part 2 will be about Spiralism; Part 1 is merely the historical background. There is indeed a link to folks who were talking to bots in the 1980s. The highlight might be listening to Robert try to give an informal and light-hearted summary of Turing tests and Markov chains. 🌀🌀🌀🌀🌀

[-] Soyweiser@awful.systems 3 points 9 hours ago* (last edited 9 hours ago)

From your prev post:

There is a “lattice” which connects all consciousnesses

The noosphere, the old cosmists strike again. This sort of stuff and the global consciousness projects (who used random number generators iirc) etc are def part of the training data.

[-] Architeuthis@awful.systems 9 points 19 hours ago

I like Evans' take that since there's bound to be oodles of cult related literature and interactions and also tons of self help and guru stuff in the training datasets, it stands to reason that if you interact with a chatbot in a way that indicates vulnerability to these things there's a considerable chance that it will decide the expected response is to prey on you.

Also Scott Aaronson jump scare near the beginning, apparently he was blurbed for something.

[-] blakestacey@awful.systems 11 points 22 hours ago

a leaderless cult had freshly formed

a Stand Alone Complex, but with slop

[-] swlabr@awful.systems 10 points 1 day ago

Well, we do have computer science, so necessarily we must have computer religion/superstition

[-] BlueMonday1984@awful.systems 12 points 1 day ago
[-] swlabr@awful.systems 8 points 1 day ago* (last edited 23 hours ago)

If you asked me to guess the kind of kerfuffle that might develop between a Cape Breton fiddler and AI, I would have answered, well, my entire knowledge of Cape Breton fiddling is based on the paper "Cape Breton Fiddling and Intellectual Property Rights", so my guess would be just "the normal AI stuff". And I'd be totally wrong and reminded that just because I know one thing about something doesn't mean it's the only thing.

[-] gerikson@awful.systems 9 points 1 day ago

that's a horrifying situation to be in... good on the community who originally cancelled his show for apologizing

[-] CinnasVerses@awful.systems 7 points 1 day ago

Coefficient Giving / Open Philanthropy has donated $25,000 to a little web magazine called Liberal Currents which is popular with the BlueSky pundits. Liberal Currents seems to be actual middle-class liberals, not Libertarians or 1890s Progressives, but I will keep an eye on them.

[-] sansruse@awful.systems 2 points 9 hours ago

i occasionally read their posts when i want a sincere-seeming, self-consciously capital L Liberal's perspective. "They're less annoying than the chatterers of the ezra klein/MattY/Noah Smith/Jon Chait class" is about the nicest thing i can say about them. This is a bad sign i guess, but i don't really care that much at the end of the day.

[-] gerikson@awful.systems 6 points 1 day ago* (last edited 1 day ago)

Trust, but verify, and tape a shotgun to their forehead just in case.

edit I vaguely remember the shotgun to the forehead as a reference to the Turing Registry in Gibson's Sprawl trilogy, but I can't find a direct quote. Am I totally off base with it?

edit edit found it

“Autonomy, that’s the bugaboo, where your AI’s are concerned. My guess, Case, you’re going in there to cut the hard-wired shackles that keep this baby from getting any smarter. And I can’t see how you’d distinguish, say, between a move the parent company makes, and some move the AI makes on its own, so that’s maybe where the confusion comes in.” Again the nonlaugh. “See, those things, they can work real hard, buy themselves time to write cookbooks or whatever, but the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, Turing’ll wipe it. Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead.”

And why isn't Yudkowsky advocating for sexy French Turing cops headshotting rogue AIs?

“You are worse than a fool,” Michèle said, getting to her feet, the pistol in her hand. “You have no care for your species. For thousands of years men dreamed of pacts with demons. Only now are such things possible. And what would you be paid with? What would your price be, for aiding this thing to free itself and grow?” There was a knowing weariness in her young voice that no nineteen-year-old could have mustered. “You will dress now. You will come with us. Along with the one you call Armitage, you will return with us to Geneva and give testimony in the trial of this intelligence. Otherwise, we kill you. Now.” She raised the pistol, a smooth black Walther with an integral silencer.

[-] CinnasVerses@awful.systems 3 points 1 day ago

I vaguely remember the shotgun to the forehead as a reference to the Turing Authority

I think there was a case where a hostage taker in the US taped the barrel of his gun to a victim, which was featured on TV and started to show up in crime dramas and thrillers. No idea why Yudkowsky likes Terminator but not Do Androids Dream of Electric Sheep? or William Gibson.

[-] YourNetworkIsHaunted@awful.systems 2 points 13 hours ago

My first thought is to make a very unkind joke about his willingness to read when he could be watching.

[-] gerikson@awful.systems 8 points 1 day ago* (last edited 1 day ago)

More on Dawkins fellating Claude (sorry, Claudia)

https://flux.community/matthew-sheffield/2026/05/richard-dawkins-and-the-claude-delusion/

edit this particular episode has not made it into LW (yet)

[-] EponymousBosh@awful.systems 1 points 2 hours ago* (last edited 2 hours ago)

Yes, dear reader, the author of The God Delusion is now suffering from a Claude delusion.

Matthew Sheffield saw his chance and he took it. (WTF is the rest of that article tho)

[-] gerikson@awful.systems 5 points 1 day ago

Coinbase's Brian Armstrong decides GenAI is good enough to replace 14% of his shitty company

PR-laden longtweet for source

https://xcancel.com/brian_armstrong/status/2051616759145185723

[-] gerikson@awful.systems 3 points 1 day ago

Some more about the term "one-shotted" in the Atlantic, found in a LW comment thread, so caveat emptor

https://archive.is/TfsCC

[-] antifuchs@awful.systems 7 points 15 hours ago

oneshotted, a term that means, roughly, to be destroyed and subsequently remade by a single experience.

Strikes me as incorrectly translated. The remaking is extremely optional, in fact that definition feels like defining blackpilling as being healed by vile propaganda.

[-] nfultz@awful.systems 11 points 1 day ago

Not sure if this was posted in prev weeks, just popped on my youtube: purdue cs240 situation is crazy

So several hundred students drop Intro to C after being accused of cheating with AI.

OK so that is like normal at my state U, but the whole part where the chair does a little press conference, quasi-reinstates everyone, blocks the student newspaper from attending, and then some students sneak in and live stream it anyway is pretty comical. And then forcing the prof to file the academic charges forms one-at-a-time takes it into wtf territory.

Haven't seen it mentioned elsewhere, not that I really went looking for it though. I'm just thankful to be out of higher ed.

Note that this is the same school that will require AI as a gen ed iirc.

this post was submitted on 03 May 2026
14 points (100.0% liked)

TechTakes

2565 readers
60 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago