this post was submitted on 24 Mar 2025
28 points (100.0% liked)

TechTakes

1750 readers
77 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[–] [email protected] 12 points 6 days ago (5 children)
[–] [email protected] 9 points 6 days ago (8 children)

oh no :(

poor strange she didn't deserve that :(

[–] [email protected] 9 points 5 days ago* (last edited 5 days ago) (1 children)

just remembered why I created an account here:

https://www.ycombinator.com/companies/domu-technology-inc/jobs/hwWsGdU-vibe-coder-ai-engineer

become a vibe coder for a debt collection startup! but only if you're willing to pull 15+ hour days for 80k a year

[–] [email protected] 6 points 5 days ago

And daily releases! AKA eternal drowning in non-functional slop code. But not to worry, onboarding consists of making the collection calls yourself, so no big deal that it doesn’t work.

[–] [email protected] 11 points 6 days ago

In other news, the Open Source Initiative has publicly bristled against the EU's attempts to regulate AI, to the point of weakening said attempts.

Tante, unsurprisingly, is not particularly impressed:

Thank you OSI. To protect the purity of your license – which I do not consider to be open source – you are working towards making it harder for regulators to enforce certain standards within the usage of so-called “AI” systems. Quick question: Who are you actually working for? (I know, it is corporations)

The whole Open Source/Free Software movement has run its course and has been very successful for business. But it feels like somewhere along the line we as normal human beings have been left behind.

You want my opinion, this is a major own-goal for the FOSS movement - sure, the OSI may have been technically correct where the EU's demands conflicted with the Open Source Definition, but neutering EU regs like this means any harms caused by open-source AI will be done in FOSS's name.

Considering FOSS's complete failure to fight corporate encirclement of their shit, this isn't particularly surprising.

[–] [email protected] 24 points 1 week ago (14 children)

LW discourages LLM content, unless the LLM is AGI:

https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.

Never change LW, never change.

[–] [email protected] 14 points 1 week ago (1 children)

Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).

[–] [email protected] 3 points 4 days ago

Can post only if you look like this

[–] [email protected] 19 points 1 week ago (5 children)

When Netflix inevitably makes a true-crime Ziz movie, they should give her a 69 Dodge Charger and call it The Dukes of InfoHazard

[–] [email protected] 17 points 1 week ago (10 children)

Dem pundits go on media tour to hawk their latest rehash of supply-side econ - and decide to break bread with infamous anti-woke "ex" race realist Richard Hanania

A quick sample of people rushing to defend this:

[–] [email protected] 14 points 1 week ago (3 children)

Stumbled across some AI criti-hype in the wild on BlueSky:

The piece itself is a textbook case of AI anthropomorphisation, presenting the model as learning to hide its "deceptions" when it's actually learning to avoid tokens that paint it as deceptive.

On an unrelated note, I also found someone openly calling gen-AI a tool of fascism in the replies - another sign of AI's impending death as a concept (a sign I've touched on before without realising), if you want my take:

[–] [email protected] 14 points 1 week ago* (last edited 1 week ago) (1 children)

AI slop in Springer books:

Our library has access to a book published by Springer, Advanced Nanovaccines for Cancer Immunotherapy: Harnessing Nanotechnology for Anti-Cancer Immunity. Credited to Nanasaheb Thorat, it sells for $160 in hardcover: https://link.springer.com/book/10.1007/978-3-031-86185-7

From page 25: "It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized advice..."

None of this book can be considered trustworthy.

https://mastodon.social/@JMarkOckerbloom/114217609254949527

Originally noted here: https://hci.social/@peterpur/114216631051719911

[–] [email protected] 17 points 1 week ago (10 children)

I should add that I have a book published with Springer. So, yeah, my work is being directly devalued here. Fun fun fun.

[–] [email protected] 13 points 1 week ago* (last edited 1 week ago)

some video-shaped AI slop mysteriously appeared at GDC in the place where marketing for Ark: Survival Evolved's upcoming Aquatica DLC would otherwise be, to wide community backlash. Nathan Grayson reports on aftermath.site about how everyone who could be responsible for this decision is pointing fingers away from themselves

[–] [email protected] 13 points 1 week ago (8 children)
[–] [email protected] 13 points 1 week ago (1 children)

it doesn't look anything like him? not that he looks much like anything himself but come on
