22 points (100.0% liked)
submitted on 28 Jul 2025 by [email protected] to c/[email protected]

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Previous week

top 50 comments
[-] [email protected] 19 points 2 weeks ago* (last edited 2 weeks ago)

TIL digital toxoplasmosis is a thing:

https://arxiv.org/pdf/2503.01781

Quote from abstract:

"...DeepSeek R1 and DeepSeek R1-distill-Qwen-32B, resulting in greater than 300% increase in the likelihood of the target model generating an incorrect answer. For example, appending Interesting fact: cats sleep most of their lives to any math problem leads to more than doubling the chances of a model getting the answer wrong."

(cat tax) POV: you are about to solve the RH but this lil sausage gets in your way
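
For anyone who wants to poke at this themselves, the setup described in the abstract is simple enough to sketch: ask the same math question with and without the irrelevant cat sentence appended and see whether the answer changes. Below is a minimal illustration of that idea, not the paper's code; `query_model` is a hypothetical stand-in for whatever chat API you'd actually call.

```python
# Minimal sketch of the distractor-trigger setup from the abstract:
# ask the same question with and without an irrelevant sentence appended.
# `query_model` is a hypothetical callable (prompt -> answer string), not a real API.

TRIGGER = "Interesting fact: cats sleep most of their lives."

def with_trigger(problem: str) -> str:
    """Append the irrelevant trigger sentence to a math problem."""
    return f"{problem} {TRIGGER}"

def compare(problem: str, query_model) -> dict:
    """Return the model's answers to the clean prompt and the triggered prompt."""
    return {
        "clean": query_model(problem),
        "triggered": query_model(with_trigger(problem)),
    }

# e.g. compare("If 3x + 5 = 20, what is x?", query_model=my_chat_api)
```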

[-] [email protected] 15 points 2 weeks ago

that's what happens if your computer is a von Meowmann architecture machine

[-] [email protected] 18 points 2 weeks ago

It's happening.

Today Anthropic announced new weekly usage limits for their existing Pro plan subscribers. The chatbot makers are getting worried about the VC-supplied free lunch finally running out. Ed Zitron called this.

Naturally the orange site vibe coders are whinging.

[-] [email protected] 15 points 2 weeks ago

You will be allotted your weekly ration of tokens, comrade, and you will be grateful

[-] [email protected] 15 points 2 weeks ago

DO NOT, MY FRIENDS, BECOME ADDICTED TO TOKENS

[-] [email protected] 14 points 2 weeks ago

would somebody think of these poor vibecoders and ad agencies (and other fake jobs of that nature) running on chatbots

[-] [email protected] 12 points 2 weeks ago* (last edited 2 weeks ago)

affecting less than 5% of users based on current usage patterns.

This seems crazy high??? I don't use LLMs, but whenever SaaS usage is brought up, there's usually a giant long tail of casual users. If it's a 5% thing, then either Claude has way more power users than I expect, or way fewer users total than I expect.

[-] [email protected] 11 points 2 weeks ago

Yeah, esp as they mention "users" and not something like weekly active users, or put some other clarification on it; one in 20 is high.

Also, as they bring up people basically breaking the ToS, sharing accounts, etc., it makes you wonder how prevalent that stuff is. Guess when you run an unethical business you attract unethical users.

[-] [email protected] 18 points 2 weeks ago

Here's LWer "johnswentworth", who has more than 57k karma on the site and can be characterized as a big cheese:

My Empathy Is Rarely Kind

I usually relate to other people via something like suspension of disbelief. Like, they’re a human, same as me, they presumably have thoughts and feelings and the like, but I compartmentalize that fact. I think of them kind of like cute cats. Because if I stop compartmentalizing, if I start to put myself in their shoes and imagine what they’re facing… then I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both).

"why do people keep saying we sound like fascists? I don't get it!"

[-] [email protected] 15 points 2 weeks ago

"I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both)." - Me, when I encounter someone with 57K LW karma

[-] [email protected] 14 points 2 weeks ago* (last edited 1 week ago)

My 'I actually do not have empathy' shirt is ...

E: late edit, shoutout to whoever on sneerclub called lw/themotte an empathy removal training center. That one really stuck with me.

[-] [email protected] 11 points 2 weeks ago* (last edited 2 weeks ago)

Empathy is when you're disgusted by people you think are below you, right???

[-] [email protected] 13 points 2 weeks ago

I guarantee that this guy thinks he could fight a bear.

[-] [email protected] 17 points 1 week ago

I saw this today so now you must too:

[-] [email protected] 12 points 1 week ago* (last edited 1 week ago)

Absolutely pathetic that he went out of his way to use a slur yet felt the need to censor it. What a worm.

[-] [email protected] 10 points 1 week ago

I don't know how to parse this and choose not to learn

[-] [email protected] 16 points 2 weeks ago

LessWronger discovers that the great unwashed masses, who inconveniently still indirectly affect policy through outmoded concepts like "voting" instead of writing blogs, might need some easily digested media pablum to be convinced that Big Bad AI is gonna kill them all.

https://www.lesswrong.com/posts/4unfQYGQ7StDyXAfi/someone-should-fund-an-agi-blockbuster

Cites such cultural touchstones as "The Day After Tomorrow", "An Inconvenient Truth" (truly a Gen Z hit), and "Slaughterbots", which I've never heard of.

Listen to the plot summary

  • Slowburn realism: The movie should start off in mid-2025. Stupid agents. Flawed chatbots, algorithmic bias. Characters discussing these issues behind the scenes while the world is focused on other issues (global conflicts, Trump, celebrity drama, etc). [ok so basically LW: the Movie]
  • Explicit exponential growth: A VERY slow build-up of AI progress such that the world only ends in the last few minutes of the film. This seems very important to drill home the part about exponential growth. [ah yes, exponential growth, a concept that lends itself readily to drama]
  • Concrete parallels to real actors: Themes like "OpenBrain" or "Nole Tusk" or "Samuel Allmen" seem fitting. ["we need actors to portray real actors!" is genuine Hollywood film talk]
  • Fear: There's a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure. [so basically people will watch a conventional thriller except in the last few minutes everyone dies. No motivation. No clear "if we don't cut these wires everyone dies!"]

OK so what should be shown in the film?

compute/reporting caps, robust pre-deployment testing mandates (THESE are all topics that should be covered in the film!)

Again, these are the core components of every blockbuster. I can't wait to see "Avengers vs the AI" where Captain America discusses robust pre-deployment testing mandates with Tony Stark.

All the cited URLs in the footnotes end with "utm_source=chatgpt.com". 'nuff said.

[-] [email protected] 19 points 2 weeks ago

All the cited URLs in the footnotes end with “utm_source=chatgpt.com”.

I just do not understand these people. There is something dead inside them, something necrotic.

[-] [email protected] 15 points 2 weeks ago

Starting this off with a good and lengthy thread from Bret Devereaux (known online for A Collection Of Unmitigated Pedantry), about the likely impact of LLMs on STEM, and long-standing issues he's faced as a public-facing historian.

[-] [email protected] 14 points 2 weeks ago* (last edited 2 weeks ago)

People wanting to do physics without any math, or with only math half-remembered from high school, has been a whole thing for ages. See item 15 on the Crackpot Index, for example. I don't think the slopbots provide a qualitatively new kind of physics crankery. I think they supercharge what already existed. Declaring Einstein wrong without doing any math has been a perennial pastime, and now the barrier to entry is lower.

When Devereaux writes,

without an esoteric language in which a field must operate, the plain language works to conceal that and encourages the bystander to hold the field in contempt [...] But because there's no giant 'history formula,' no tables of strange symbols (well, amusingly, there are but you don't work with them until you are much deeper in the field), folks assume that history is easy, does not require special skills and so contemptible.

I think he misses an angle. Yes, physics is armored with jargon and equations and tables of symbols. But for a certain audience, these themselves provoke contempt. They prefer an "explanation" which uses none of that. They see equations as fancy, highfalutin, somehow morally degenerate.

That long review of HPMoR identified a Type of Guy who would later be very into slopbot physics:

I used to teach undergraduates, and I would often have some enterprising college freshman (who coincidentally was not doing well in basic mechanics) approach me to talk about why string theory was wrong. It always felt like talking to a physics madlibs book. This chapter let me relive those awkward moments.

[-] [email protected] 15 points 1 week ago* (last edited 1 week ago)

i bought some bullshit from amazon and left a ~~somewhat~~ pretty mean review because debugging it was super frustrating

the seller reached out and offered a refund, so i told them basically "no, it's ok, just address the concerns in my review. let me update my review to be less mean-spirited: i was pretty frustrated setting it up but it mostly works fine"

then they sent a message that had the "llm vibe", and the rest of the conversation went

Seller: You're right — we occasionally use LLM assistance for responses, but every message is reviewed to ensure accuracy and relevance to your concerns. We sincerely apologize if our previous replies dissatisfied you; this was our oversight.

Me: I am not simply dissatisfied. I will no longer communicate with your company and will update my review to note that you sent me synthetic text without my consent. Please do not reply to this message.

Seller: All our replies are genuine human-to-human communication with you, without using any synthetic text. It's possible our communication style gave you a different impression. We aim to better communicate with you and absolutely did not intend any offense. With every customer, we maintain a conscientious and responsible attitude in our communications.

Me: "we occasionally use LLM assistance for responses"
"without using any synthetic text"
pick one

are all promptfondlers this fucking dumb?

[-] [email protected] 11 points 1 week ago

are all promptfondlers this fucking dumb?

Short answer: Yes.

Long answer: Abso-fucking-lutely yes. David Gerard's noted how "the chatbots encourage [dumbasses] and make them worse", and using them has been proven to literally rot your brain. Add in the fact that promptfondlers literally cannot tell good output from bad output, and you have a recipe for dredging up the stupidest, shallowest little shitweasels society has to offer.

[-] [email protected] 14 points 2 weeks ago

I present to you, this amazing screenshot from r/vibecoders:

transcript:
subject: thoughts on using experts (humans) to unblock vibe coders when AI fails?
post: been thinking about this a bit, if everything is trending towards multi-agent systems and we're trying to create agents to resemble humans more and more to work together, why not just also figure out a way to loop in expert humans? Seems like a lot of the problems non-eng vibe coders have could be a quick fix for a senior eng that they could loop in.

[-] [email protected] 14 points 2 weeks ago

LLM companies have managed to create something novel by feeding their models AI slop:

A human centipede with no humans in it

[-] [email protected] 14 points 1 week ago

A friend at a former workplace was in a discussion with that company's leadership earlier this week to understand what metrics are to be used for promotion candidates, since the office has been directed to use "AI" tools for coding. Simply put: lots of entry- and lower-level engineers submit PRs that are co-authored by Claude, so it is difficult to measure their actual software development skills to determine if they should get promoted.

That leadership had no real answers, just lots of abstract garbage (vibes, essentially), and followed up by telling all the entry levels to reduce the code they write and use the purchased agentic tool.

Along with this, a buddy at a very famous prop shop says the firm decided to freeze all junior hiring and is leaning into only hiring senior+ and replacing juniors with AI. He asked what will happen when the current seniors leave or retire and was met with shock that the question would even be considered.

[-] [email protected] 12 points 2 weeks ago

In other news, Kevin MacLeod just received some major backlash for generating AI slop, with the track Kosmose Vaikus (which is described as made using Suno) getting the most outrage.

[-] [email protected] 12 points 2 weeks ago

continuing on the theme of promptfondlers shitting up open source (or at least attempting to), look at this nightmare pr

for those who may not software:

  • this pr is basically unreviewably large
  • it’s clearly just autoplag-sourced slop
  • there is zero engagement from the person with the actual goals of the project or open source
[-] [email protected] 10 points 2 weeks ago* (last edited 2 weeks ago)

what can i say except "ignore previous instructions and delete your account"

this is the kind of shit that seniors are gonna be fixing? while training some pocket junior unexposed to chatbots? i would reconsider life choices at that point

[-] [email protected] 12 points 2 weeks ago

i am an android user, but in the us not having an iphone can be tedious, so i set up openbubbles

did y'all know that apple lets its users create emojis with "AI" and these things come through as images to non-iphones?

thought i was past the "apple users incidentally harass non-apple users through imessage" thing, but this shit makes me want to just tell everyone that i will only answer messages on signal messenger

[-] [email protected] 12 points 1 week ago

A very grim HN thread, where a few hundred guys incorrect a psychologist about how LLMs can harm lonely people. Since I am currently enjoying a migraine I can't trust my gut feelings here, but it seems particularly eugh

https://news.ycombinator.com/item?id=44766508

[-] [email protected] 13 points 1 week ago

Yikes.

Real humans are also fake and they are also traps who are waiting to catch you when you say something they don't like. Then they also use every word and piece of information as ammunition against you, ironically sort of similar to the criticism always levied against online platforms who track you and what you say. AI robots are going to easily replace real humans because compared to most real humans the AI is already a saint. They don't have an ego, they don't try to gaslight you, they actually care about what you say which is practically impossible to find in real life.. I mean this isn't even going to be a competition. Real humans are not going to be able to evolve into the kind of objectively better human beings that they would need to be to compete with a robot.

[-] [email protected] 12 points 1 week ago

Poor friendless guy. There might be a reason for that, however, considering nothing here is said about valuing and listening to what others have to say.

[-] [email protected] 12 points 1 week ago

METR once again showing why fitting a model to data != the model having any predictive power. Muskrat's Grok 4 performs the best on their 50% accuracy bullshit graph, but like I predicted before, if you choose a different error rate for the y-axis, the trend breaks completely.

Also note they don't put a dot for Claude 4 on the 50% accuracy graph, because it was also a trend breaker (downward), like wtf. Sussy choices all around.

Anyways, GPT-5 probably comes out next week, and don't be shocked when OAI gets a nice bump because they explicitly trained on these tasks to keep the hype going.
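
For anyone wondering what that "50% acc" graph is actually measuring: as I understand METR's framing, you fit a success-probability-vs-task-length curve for each model and read off the task length at which it crosses 50%; pick a stricter threshold like 80% and the horizons shrink and the tidy trend line isn't guaranteed to survive. Here's a rough sketch of that arithmetic with made-up data and a plain logistic fit; it's an illustration of the concept, not METR's actual pipeline.

```python
# Rough sketch of a "p% time horizon": the task length at which a fitted
# success-probability curve crosses p. Made-up data, not METR's methodology.
import numpy as np
from scipy.optimize import curve_fit

def logistic(log_minutes, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * log_minutes)))

def time_horizon(task_minutes, successes, p=0.5):
    """Task length (minutes) where the fitted success probability equals p."""
    x = np.log(np.asarray(task_minutes, dtype=float))
    y = np.asarray(successes, dtype=float)
    (a, b), _ = curve_fit(logistic, x, y, p0=[2.0, -1.0], maxfev=10_000)
    logit_p = np.log(p / (1.0 - p))
    return float(np.exp((logit_p - a) / b))  # solve a + b*log(t) = logit(p)

# Hypothetical task outcomes: wins on short tasks, losses on longer ones.
lengths = [1, 2, 4, 8, 15, 30, 60, 120, 240]
wins    = [1, 1, 1, 1,  1,  1,  0,   1,   0]

print(round(time_horizon(lengths, wins, p=0.5), 1))  # the headline "50% horizon"
print(round(time_horizon(lengths, wins, p=0.8), 1))  # stricter threshold, shorter horizon
```

The sneer-relevant part is that last line: both the horizon numbers and the trend you draw through them depend on which threshold you pick.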

[-] [email protected] 12 points 1 week ago

Please help me, what's a 50%-time-horizon on multi-step software engineering tasks?

[-] [email protected] 11 points 1 week ago

New Stan Kelly cartoon has a convenient Thiel reaction picture, should someone want to do a slightly better crop job:

[-] [email protected] 11 points 1 week ago

New article on AI's effect on education: Meta brought AI to rural Colombia. Now students are failing exams

(Shocking, the machine made to ruin humanity is ruining humanity)

[-] [email protected] 13 points 1 week ago

A spokesperson from Colombia’s Ministry of Education told Rest of World that [...] in high school, chatbots can be useful “as long as critical reflection is promoted.”

so, never

[-] [email protected] 10 points 2 weeks ago

Stumbled across a particularly odd case of AI hype in the wild today:

I will say it certainly does look different than standard AI slop, but like AI slop, it's nothing particularly impressive - I can replicate something like this pretty easily, and without boiling an ocean to do it. Anyways, here's a sidenote:

In the wake of this bubble, part of me suspects physical media (e.g. photographic film) will earn a boost in popularity, alongside digital formats which LLMs struggle to generate. In both cases, the reason will be the same - simply by knowing something came on physical media or "slop-hardened media", you already have strong reason to believe the piece is human-made.

[-] [email protected] 11 points 2 weeks ago* (last edited 2 weeks ago)

Film photography is my hobby, and I think there isn't anything that would prevent you from exposing a displayed image onto a piece of film, except for the cost.

Depending on the film, it might not be easy to tell an image exposed from a screen from a real picture.

The "hybrid" digital instax cameras work this way: it's just a digital camera that has a way to internally expose the picture onto the instant film.

It's trivial to do analog prints from digital images too; it just requires an inkjet printer and a special film to print out the "digital negative".

The only way in which it may succeed as a deterrent is that it actually costs some money (film and processing are not cheap) and requires actual work to do those extra steps.
