strange æons takes on hpmor :o
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
just remembered why I created an account here:
https://www.ycombinator.com/companies/domu-technology-inc/jobs/hwWsGdU-vibe-coder-ai-engineer
become a vibe coder for a debt collection startup! but only if you're willing to pull 15+ hour days for 80k a year
And daily releases! AKA eternal drowning in non-functional slop code. But not to worry, onboarding consists of making the collection calls yourself, so no big deal that it doesn’t work.
Bill Gates is having a normal one.
https://www.cnbc.com/2025/03/26/bill-gates-on-ai-humans-wont-be-needed-for-most-things.html
In other news, the Open Source Initiative has publicly bristled against the EU's attempts to regulate AI, to the point of weakening said attempts.
Tante, unsurprisingly, is not particularly impressed:
Thank you OSI. To protect the purity of your license – which I do not consider to be open source – you are working towards making it harder for regulators to enforce certain standards within the usage of so-called “AI” systems. Quick question: Who are you actually working for? (I know, it is corporations)
The whole Open Source/Free Software movement has run its course and has been very successful for business. But it feels like somewhere along the line we as normal human beings have been left behind.
You want my opinion, this is a major own-goal for the FOSS movement - sure, the OSI may have been technically correct where the EU's demands conflicted with the Open Source Definition, but neutering EU regs like this means any harms caused by open-source AI will be done in FOSS's name.
Considering FOSS's complete failure to fight corporate encirclement of their shit, this isn't particularly surprising.
LW discourages LLM content, unless the LLM is AGI:
https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.
Never change LW, never change.
Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).
Can post only if you look like this
When Netflix inevitably makes a true-crime Ziz movie, they should give her a 69 Dodge Charger and call it The Dukes of InfoHazard
Dem pundits go on media tour to hawk their latest rehash of supply-side econ - and decide to break bread with infamous anti-woke "ex" race realist Richard Hanania
A quick sample of people rushing to defend this:
- Some guy with the same last name as a former Google CEO who keeps spamming the same article about IQ
- Our good friend Tracy
Stumbled across some AI criti-hype in the wild on BlueSky:
The piece itself is a textbook case of AI anthropomorphisation, presenting it as learning to hide its "deceptions" when it's actually learning to avoid tokens that paint it as deceptive.
On an unrelated note, I also found someone in the replies openly calling gen-AI a tool of fascism - if you want my take, that's another sign of AI's impending death as a concept (a sign I've touched on before without realising it).
AI slop in Springer books:
Our library has access to a book published by Springer, Advanced Nanovaccines for Cancer Immunotherapy: Harnessing Nanotechnology for Anti-Cancer Immunity. Credited to Nanasaheb Thorat, it sells for $160 in hardcover: https://link.springer.com/book/10.1007/978-3-031-86185-7
From page 25: "It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized advice..."
None of this book can be considered trustworthy.
https://mastodon.social/@JMarkOckerbloom/114217609254949527
Originally noted here: https://hci.social/@peterpur/114216631051719911
I should add that I have a book published with Springer. So, yeah, my work is being directly devalued here. Fun fun fun.
some video-shaped AI slop mysteriously appeared at GDC in the place where marketing for Ark: Survival Evolved's upcoming Aquatica DLC would otherwise be, to wide community backlash. Nathan Grayson reports on aftermath.site about how everyone who could be responsible for this decision is pointing fingers away from themselves
>sam altman is greentexting in 2025
>and his profile is an AI-generated Ghibli picture, because Miyazaki is such an AI booster
it doesn't look anything like him? not that he looks much like anything himself but come on