submitted 3 weeks ago by [email protected] to c/[email protected]

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[-] [email protected] 10 points 2 weeks ago* (last edited 2 weeks ago)

LWer suggests people who believe in AI doom make more of an effort to become (internet) famous. Apparently the baseline is not bombing on Lex Fridman's snoozecast the way Yud did.

The community awards the post one measly net karma point, and the lone commenter scoffs at the idea of trying to win the low-IQ masses over to the cause. In their defense, vanguardism has been tried before with some success.

https://www.lesswrong.com/posts/qcKcWEosghwXMLAx9/doomers-should-try-much-harder-to-get-famous

[-] [email protected] 10 points 2 weeks ago

if you saw that post making its rounds in the more susceptible parts of tech mastodon about how AI’s energy use isn’t that bad actually, here’s an excellent post tearing into it. predictably, the original post used a bunch of LWer tricks to replace numbers with vibes in an effort to minimize the damage being done by the slop machines currently being powered by such things as 35 illegal gas turbines, coal, and bespoke nuclear plants, with plans on the table to quickly renovate old nuclear plants to meet the energy demand. but sure, I’m certain that can be ignored because hey look over your shoulder is that AGI in a funny hat?

[-] [email protected] 10 points 2 weeks ago

Saw a six-day-old post on LinkedIn that I’ll spare you all the exact text of. Basically it goes like this:

“Claude’s base system prompt got leaked! If you’re a prompt fondler, you should read it and get better at prompt fondling!”

The prompt clocks in at just over 16k words (as counted by the first tool that popped up when I searched “word count url”). Imagine reading 16k words of verbose guidelines for a machine, just to make your autoplag slightly more claude-shaped than, idk, chatgpt-shaped.
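(For the skeptics: you don't need a random website for this. A minimal sketch of the same naive count those tools do — the sample string below is just for illustration, not the actual prompt:)

```python
def word_count(text: str) -> int:
    # naive whitespace split, which is all most "word count" web tools do
    return len(text.split())

sample = "Claude NEVER repeats or translates song lyrics."
print(word_count(sample))  # 7
```

Point the same function at the leaked prompt text and you get your 16k-ish figure, give or take how the tool treats punctuation.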

[-] [email protected] 10 points 2 weeks ago

Claude does not claim that it does not have subjective experiences, sentience, emotions, and so on in the way humans do. Instead, it engages with philosophical questions about AI intelligently and thoughtfully.

lol

[-] [email protected] 9 points 2 weeks ago* (last edited 2 weeks ago)

Imagine the amount of testing they would have needed to do just to arrive at that prompt. Wait, that gets added as a baseline constant cost on top of the energy cost of running the model: 3 x 12 x 2 x Y additional constant costs, assuming the prompt doesn't need to be updated every time the model is updated! (I'm starting to reference my own comments here.)

Claude NEVER repeats or translates song lyrics and politely refuses any request regarding reproduction, repetition, sharing, or translation of song lyrics.

New trick: everything online is a song lyric.

[-] [email protected] 9 points 2 weeks ago

We already knew these things are security disasters, but yeah that still looks like a security disaster. It can both read private documents and fetch from the web? In the same session? And it can be influenced by the documents it reads? And someone thought this was a good idea?

[-] [email protected] 9 points 3 weeks ago

Beff back at it again threatening his doxxer. Nitter link

[-] [email protected] 10 points 3 weeks ago

Unrelated to this: man, there should be a parody account called “based beff jeck” which is just a guy trying to promote beck’s vast catalogue as the future of music. Also minus any mention of johnny depp.

[-] [email protected] 9 points 3 weeks ago

Also minus any mention of johnny depp.

Depp v. Heard was my generation's equivalent to the OJ Simpson trial, so chances are he'll end up conspicuous in his absence.

[-] [email protected] 9 points 2 weeks ago
[-] [email protected] 9 points 2 weeks ago

Personal rule of thumb: all autoplag is serious until proven satire.

[-] [email protected] 9 points 2 weeks ago* (last edited 2 weeks ago)

New piece from Brian Merchant: De-democratizing AI, which is primarily about the GOP's attempt to ban regulations on AI, but also touches on the naked greed and lust for power at the core of the AI bubble.

EDIT: Also, that title's pretty clever

[-] [email protected] 9 points 2 weeks ago

Satya Nadella: "I'm an email typist."

Grand Inquisitor: "HE ADMITS IT!"

https://bsky.app/profile/reckless.bsky.social/post/3lpazsmm7js2s

[-] [email protected] 9 points 2 weeks ago
[-] [email protected] 9 points 2 weeks ago

I will be watching with great interest. it’s going to be difficult to pull out of this one, but I figure he deserves as fair a swing at redemption as any recovered crypto gambler. but like with a problem gambler in recovery, it’s very important that the intent to do better is backed up by understanding, transparency, and action.

[-] [email protected] 9 points 2 weeks ago

More of a notedump than a sneer. I have been saying every now and then that there was research showing that LLMs require exponentially more effort for linear improvements. This post by Iris van Rooij (Professor of Computational Cognitive Science) mentions something along those lines (I said something slightly different, but the intractability proof/Ingenia theorem might be useful to look into): https://bsky.app/profile/irisvanrooij.bsky.social/post/3lpe5uuvlhk2c
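(A toy illustration of the "exponentially more effort for linear improvement" point, with made-up coefficients that are not from the linked post: under a power-law scaling curve, loss(C) = a · C^(−b), each equal-sized drop in loss multiplies the compute required rather than adding to it.)

```python
# Toy power-law scaling curve: loss(C) = a * C**(-b).
# Coefficients a and b are invented for illustration only.
a, b = 10.0, 0.1

def compute_for_loss(target_loss: float) -> float:
    # invert target = a * C**(-b)  =>  C = (a / target)**(1 / b)
    return (a / target_loss) ** (1 / b)

# Equal linear steps down in loss cost geometrically more compute:
for target in (5.0, 4.0, 3.0):
    print(f"loss {target}: compute ~{compute_for_loss(target):.3g}")
```

With these numbers, going from loss 5 to 4 costs roughly 9x the compute, and 4 to 3 costs roughly 18x again — which is the shape of the argument, whatever the real-world constants turn out to be.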

this post was submitted on 11 May 2025
22 points (100.0% liked)

TechTakes

1884 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago