
Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine's Day!)

[-] nfultz@awful.systems 2 points 42 minutes ago

https://old.reddit.com/r/indieheads/comments/1r6x1ix/fresh_failure_the_air_is_on_fire_from_location/

I looked it up, and this one is credited to Glen Wexler, who is an actual artist with a pretty distinct style and yes, he's been incorporating AI into his process lately, and I guess he did use it here (those windows on those buildings are sus as hell, and the overall sharpness of the image just screams AI).

So it's not outright slop, but still pretty disappointing and incongruous coming from this band. Their last two records examined our society's alienation through technology, at times to the point of "phone bad!"-level nagging, but using the most literally destructive technology of them all is fine, as long as it helps keep the costs down, I guess?

And it just doesn't look good, but come to think of it, most of their albums have bad cover art, it's almost like they do it on purpose. Love the music, though.

It's too bad if true; I can't unsee it now. For reference: https://failureband.bandcamp.com/album/location-lost

[-] CinnasVerses@awful.systems 2 points 59 minutes ago* (last edited 38 minutes ago)

Posting for archival and indexing purposes: u/GorillasAreForEating found an Urbit post titled "Quis cancellat ipsos cancellores?" which complains that Aella takes it on herself to exclude people and movements from the broader LessWrong/Effective Altruist community. The poster says that Aella was the anonymous person who pushed CFAR to finally do something about Brent Dill, because she was roommates with "Persephone." He or she does not quite say that any of the accusations were untrue, just that "an anonymous, unverified report" says that some details were changed by an editor, and that her Medium post was of "dramatically lower fidelity, but higher memetic virulence" than Brent's buddies investigating him behind closed doors (Dill posted about domming a 16-year-old who he met when she was 15). The poster accuses Aella of using substances and BDSM games to blur the line of consent.

The post names Joscha Bach as someone Aella tried to exclude. We recently talked about Bach's attempt to get Jeffrey Epstein to fund an event where our friends would speak.

Often, people in messed-up situations point at a very similar situation and say "at least we are not like that." I hope that all of these people find friends who can give them the perspective that none of these communities are healthy or just. Whether you are into bull sessions or polyamory, there are healthy communities to explore in any medium-sized city!

[-] nfultz@awful.systems 4 points 3 hours ago* (last edited 3 hours ago)
[-] CinnasVerses@awful.systems 4 points 4 hours ago

Do we have any idea why some of the Zizians ended up in Vermont? The only thing in their network that comes to mind is the Monastic Academy for the Preservation of Life on Earth (MAPLE, a Buddhist-flavoured CFAR offshoot with the usual Medium post accusing leaders of sexual and psychological abuse).

Vermont and New Hampshire have clusters of generic Libertarians.

[-] lurker@awful.systems 3 points 4 hours ago* (last edited 4 hours ago)
[-] self@awful.systems 4 points 3 hours ago

In March 2025, the large language model (LLM) GPT-4.5, developed by OpenAI in San Francisco, California, was judged by humans in a Turing test to be human 73% of the time — more often than actual humans were. Moreover, readers even preferred literary texts generated by LLMs over those written by human experts.

do you know how hard it is to write something that aged poorly months before it was written? it’s in the public consciousness that LLMs write like absolute shit in ways that are very easy to pick out once you’ve been forced to read a bunch of LLM-extruded text. inb4 some asshole with AI psychosis pulls out “technically ChatGPT’s more human than you are, look at the statistics” regarding the 73% figure I guess. but that’s exactly when statistics don’t count, you know!

A March 2025 survey by the Association for the Advancement of Artificial Intelligence in Washington DC found that 76% of leading researchers thought that scaling up current AI approaches would be ‘unlikely’ or ‘very unlikely’ to yield AGI

[…] What explains this disconnect? We suggest that the problem is part conceptual, because definitions of AGI are ambiguous and inconsistent; part emotional, because AGI raises fear of displacement and disruption; and part practical, as the term is entangled with commercial interests that can distort assessments.

no you see it’s the leading researchers that are wrong. why are you being so emotional over AGI. we surveyed Some Assholes and they were pretty sure GPT was a human and you were a bot so… so there!

[-] BlueMonday1984@awful.systems 4 points 5 hours ago

Baldur Bjarnason gives his thoughts on the software job market, predicting a collapse regardless of how AI shakes out:

If you model the impact of working LLM coding tools (big increase in productivity, little downside) where the bottlenecks are largely outside of coding, increases in coding automation mostly just reduce the need for labour. I.e. 10x increase means you need 10x fewer coders, collapsing the job market

If you model the impact of working LLM coding tools with no bottlenecks, then the increase in productivity massively increases the supply of undifferentiated software and the prices you can charge for any software drops through the floor, collapsing the job market

If the models increase output but are flawed, as in they produce too many defects or have major quality issues, Akerlof's market for lemons kicks in, bad products drive out good, value of software in the market heads south, collapsing the job market

If the model impact is largely fictitious, meaning this is all a scam and the perceived benefit is just a clusterfuck of cognitive hazards, then the financial bubble pop will be devastating, tech as an industry will largely be destroyed, and trust in software will be zero, collapsing the job market

I can only think of a few major offsetting forces:

  • If the EU invests in replacing US software, bolstering the EU job market.
  • China might have substantial unfulfilled domestic demand for software, propping up their job market
  • Companies might find that declining software quality harms their bottom-line, leading to a Y2K-style investment in fixing their software stacks

But those don't seem likely to do more than partially offset the decline. Kind of hoping I'm missing something
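Bjarnason's first scenario is, at bottom, just division; a stylized back-of-the-envelope sketch (all figures hypothetical, assuming total demand for software stays fixed and coding is the only bottleneck):

```python
# Stylized model of Bjarnason's first scenario: with fixed demand
# for software, a productivity multiplier from tooling divides the
# number of coders needed. All numbers here are made up.
def coders_needed(annual_demand_units: float,
                  output_per_coder: float,
                  productivity_multiplier: float) -> float:
    return annual_demand_units / (output_per_coder * productivity_multiplier)

baseline = coders_needed(1_000_000, 10, 1)    # no tooling gains
with_llms = coders_needed(1_000_000, 10, 10)  # the claimed "10x" tools

print(baseline, with_llms)  # 100000.0 10000.0 -> 10x fewer coders
```

The other three scenarios collapse the market through prices or trust rather than headcount, so they don't reduce to a one-liner like this.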

[-] sc_griffith@awful.systems 5 points 6 hours ago* (last edited 5 hours ago)

need a word for the sort of tech 'innovation' that consists of inventing and monetizing new types of externalities which regulators aren't willing to address. like how bird scooters aren't a scam, but they profit off of littering sidewalk space so that ppl with disabilities can't get around

EDIT: a similar, perhaps the same concept is innovation which functions by capturing or monopolizing resources that aren't as yet understood to be resources. in the bird example, we don't think of sidewalk space as a capturable resource, and yet

In economic terms it's less rent seeking and more rent creation. Like, taking advantage of public sidewalk space may not be a rent in the strictest sense given that the revenue model is still people paying for the service, but the ability to provide that service is absolutely predicated on taking over and monopolizing this public resource to the maximal degree possible.

By historical allegory, harkening back to the original destruction of the Commons, we're looking at Enclosure 2: Frisco Drift.

Let's also not lose sight of the fact that those sidewalks aren't a natural formation, and that it's the city government who ultimately takes on the burden of their construction and maintenance. This kind of neo-enclosure of public resources is then another kind of invisible subsidy.

[-] froztbyte@awful.systems 4 points 6 hours ago

I guess that doesn’t emphasise the “innovation” aspect much

[-] sc_griffith@awful.systems 3 points 5 hours ago

maybe "parasitic innovation"?

[-] sc_griffith@awful.systems 6 points 6 hours ago

new episode of odium symposium. we look at rousseau's program for using universal education to turn women into drones

https://www.patreon.com/posts/project-1789-150782184

[-] blakestacey@awful.systems 6 points 9 hours ago
[-] BlueMonday1984@awful.systems 3 points 6 hours ago

The phrase "ambient AI listening in our hospital" makes me hear the "Dies Irae" in my head.

I'm personally hearing "Morceaux" myself.

[-] blakestacey@awful.systems 5 points 10 hours ago

A longread on AI greenwashing begins thusly:

The expansion of data centres - which is driven in large part by AI growth - is creating a shocking new demand for fossil fuels. The tech companies driving AI expansion try to downplay AI’s proven climate impacts by claiming that AI will eventually help solve climate change. Our analysis of these claims suggests that rather than relying on credible and substantiated data, these companies are writing themselves a blank cheque to pollute on the empty promise of future salvation. While the current negative effects of AI on the climate are clear, proven and growing, the promise of large-scale solutions is often based on wishful thinking, and almost always presented with scant evidence.

(Via.)

[-] lurker@awful.systems 8 points 11 hours ago
[-] sinedpick@awful.systems 6 points 8 hours ago

can all of rationalism be reduced to logorrhea with load-bearing extreme handwaving (in this case, agentic self preservation arises through RL scaling)?

[-] fullsquare@awful.systems 7 points 6 hours ago

no there's also racist twitter

[-] fullsquare@awful.systems 9 points 18 hours ago* (last edited 17 hours ago)

i've collided with an article* https://harshanu.space/en/tech/ccc-vs-gcc/

you might be wondering why it doesn't highlight that it fails to compile the linux kernel, or why it states that using pieces of gcc where vibecc fails is "fair", or why it neglects to say that a failing linker means it's not useful in any way, or why just relying on "no errors" isn't enough when it's already known that vibecc will happily eat invalid c. it's explained by:

Disclaimer

Part of this work was assisted by AI. The Python scripts used to generate benchmark results and graphs were written with AI assistance. The benchmark design, test execution, analysis and writing were done by a human with AI helping where needed.

even with all this slant, by their own vibecoded benchmark, vibecc is still complete dogshit, with sqlite compiled by it being up to 150,000x slower in some cases

[-] lagrangeinterpolator@awful.systems 11 points 17 hours ago

This is why CCC being able to compile real C code at all is noteworthy. But it also explains why the output quality is far from what GCC produces. Building a compiler that parses C correctly is one thing. Building one that produces fast and efficient machine code is a completely different challenge.

Every single one of these failures is waved away because supposedly it's impressive that the AI can do this at all. Do they not realize the obvious problem with this argument? The AI has been trained on all the source code that Anthropic could get their grubby hands on! This includes GCC and clang and everything remotely resembling a C compiler! If I took every C compiler in existence, shoved them in a blender, and spent $20k on electricity blending them until the resulting slurry passed my test cases, should I be surprised or impressed that I got a shitty C compiler? If an actual person wrote this code, they would be justifiably mocked (or they're a student trying to learn by doing, and LLMs do not learn by doing). But AI gets a free pass because it's impressive that the slop can come in larger quantities now, I guess. These Models Will Improve. These Issues Will Get Fixed.

[-] V0ldek@awful.systems 7 points 13 hours ago* (last edited 13 hours ago)

Building a compiler that parses C correctly is one thing. Building one that produces fast and efficient machine code is a completely different challenge.

Ye, the former can be done in a month of non-full-time work by an undergrad who took Compilers 101 this semester or in literally a single day by a professional, and the latter is an actual useful product.

So of course AI will excel at doing the first one worse (vibecc doesn't even reject invalid C) and at an insane resource cost.
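For a sense of scale on "parsing is the easy part": a toy recursive-descent parser-evaluator for integer arithmetic fits in a few dozen lines (a minimal Python sketch, hypothetical and obviously nothing like a real C frontend — though note that even this toy rejects invalid input, which is more than vibecc manages):

```python
import re

# Toy recursive-descent parser/evaluator for + - * / with parens.
# Illustrates why parsing alone is the "undergrad semester project"
# half of a compiler: it does no optimization and no codegen.
TOKEN = re.compile(r"\s*(\d+|[()+\-*/])")

def tokenize(src):
    src = src.rstrip()
    pos, out = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at {pos}")
        out.append(m.group(1))
        pos = m.end()
    return out

def parse(tokens):
    def expr():                      # expr := term (('+'|'-') term)*
        val = term()
        while tokens and tokens[0] in "+-":
            op = tokens.pop(0)
            rhs = term()
            val = val + rhs if op == "+" else val - rhs
        return val
    def term():                      # term := atom (('*'|'/') atom)*
        val = atom()
        while tokens and tokens[0] in "*/":
            op = tokens.pop(0)
            rhs = atom()
            val = val * rhs if op == "*" else val // rhs
        return val
    def atom():                      # atom := NUMBER | '(' expr ')'
        tok = tokens.pop(0)
        if tok == "(":
            val = expr()
            if tokens.pop(0) != ")":
                raise SyntaxError("missing )")
            return val
        if tok.isdigit():
            return int(tok)
        raise SyntaxError(f"unexpected {tok!r}")
    result = expr()
    if tokens:
        raise SyntaxError("trailing junk")  # reject invalid input
    return result

print(parse(tokenize("2 * (3 + 4)")))  # 14
```

The "actual useful product" half — register allocation, instruction selection, the optimization pipeline — is where the decades of engineering live, and where vibecc's 150,000x slowdowns come from.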

[-] istewart@awful.systems 7 points 14 hours ago

spent $20k on electricity blending them

They would probably be even more impressed that you only spent $20k

[-] Soyweiser@awful.systems 10 points 20 hours ago* (last edited 15 hours ago)

AI bros do new experiments in making themselves even stupider. Going from 'explain what you did but dumb it down for me and my degraded attention span' to 'just make a simplified cartoon out of it'.

Proud of not understanding what is going on. None of these people could hack the Gibson.

E: If they all hate programming so much, perhaps a change of job is in order; sure, it might not pay as much, but it might make them happier.

[-] istewart@awful.systems 4 points 14 hours ago

E: If they all hate programming so much, perhaps a change of job is in order; sure, it might not pay as much, but it might make them happier.

Surely at least a few of them have worked up enough seed capital to try their hand at used-car dealerships. I can attest that the juicier markets just outside the Bay Area are fairly saturated, but maybe they could push into lesser-served locales like Lost Hills or Weaverville.

[-] lagrangeinterpolator@awful.systems 6 points 16 hours ago* (last edited 16 hours ago)

my current favorite trick for reducing "cognitive debt" (h/t @simonw ) is to ask the LLM to write two versions of the plan:

  1. The version for it (highly technical and detailed)
  2. The version for me (an entertaining essay designed to build my intuition)

I don't know about them, but I would be offended if I was planning something with a collaborator, and they decide to give me a dumbed down, entertaining, children's storybook version of their plan while keeping all the technical details to themselves.

Also, this is absolutely not what "cognitive debt" means. I've heard technical debt refers to bad design decisions in software where one does something cheap and easy now but has to constantly deal with the maintenance headaches afterwards. But the very concept of working through technical details? That's what we call "thinking". These people want to avoid the burden of thinking.

[-] Architeuthis@awful.systems 6 points 16 hours ago

Eh, one might say that going by the broad strokes version while letting the expert do their thing is basically what management is all about, especially if they ignore the part where he wants his version to be light and entertaining.

This isn't about managing subordinates though, this is about devising ways to be complacent about not double checking what the LLM generates in your name.

[-] slowe@mastodon.me.uk 5 points 18 hours ago

@Soyweiser @BlueMonday1984 I like* how the structure of the boat changes from moment to moment. I like* how the radio dishes just beam from some random place between the transmitter and the dish. I like* that the original person who was waiting for a live stream doesn't get it (because it goes to a different group of people) and is just eating popcorn watching the mess unfold. I like* how the "audience" have their backs to the "live stream" screen and are excited to be looking away from it.

[-] JFranek@awful.systems 6 points 19 hours ago

I think I understand it. Think of an alcoholic that's trying every sort of miracle hangover "cure" instead of drinking less.

[-] BurgersMcSlopshot@awful.systems 5 points 17 hours ago* (last edited 17 hours ago)

Scott Shambaugh mulls over an AI alignment issue following his run-in with a bot last week

[-] froztbyte@awful.systems 5 points 19 hours ago* (last edited 19 hours ago)

in today's news about magical prompts that super totes give you superpowers:

We introduced SKILLSBENCH, the first benchmark to systematically evaluate Agent Skills as first-class artifacts. Across 84 tasks, 7 agent-model configurations, and 7,308 trajectories under three conditions (no Skills, curated Skills, self-generated Skills), our evaluation yields four key findings: (1) curated Skills provide substantial but variable benefit (+16.2 percentage points average, with high variance across domains and configurations); (2) self-generated Skills provide negligible or negative benefit (–1.3pp average), demonstrating that effective Skills require human-curated domain expertise

I am jack's surprised face

...and given I have other yaks, I shall not step on my "software and tools don't have to suck" soapbox right now

[-] istewart@awful.systems 6 points 14 hours ago* (last edited 14 hours ago)

This reminds me of when Steve Jobs would introduce every new Mac release by talking about how fast it could render in Photoshop. I wonder how he would do in our brave new era of completely ass-pulling your own bespoke benchmark frameworks.

this post was submitted on 16 Feb 2026
22 points (89.3% liked)

TechTakes
