The comment section seems to be 50% that dude by word count. He must be a perfectly healthy amount online.
You're certainly hitting some nails on their heads here. The normalisation of AI is absolutely happening, mostly because the buttons start showing up on Google/Microsoft/etc. products with massive market share.
Also the manlyman blogs bitching about badger hair brushes. I was looking up safety razors in German ("Rasierhobel" btw, totally unaware of that until now), and Wikipedia was referencing one such archived blog, bitching about pig bristle brushes being "drug store" garbage. I might still get one. (The razor that is, not the brush. Spray-on foam will do for me.)
Not so sure about the scythe/mower thing. My battery powered mower & motor scythe slap. (Stihl btw)
I can see tante's point. Besides AI datacenters being used for surveillance tech, I can also see LLM tech itself used nefariously post-bubble. Maybe maintaining an up-to-date LLM as a product is not viable, but a custom-trained model to snipe public online discourse around a crucial election could remain affordable for a wealthy fascist.
On the bright side, I am hoping for a brief period of powerful yet affordable gaming PCs thanks to retrofitted, slightly singed Blackwells.
be me, super genius autodidact
be deeply moved by the prospect of defeating death with technology, write some sick prose about it because am eloquent as fuck
proceed to punt this goal decades or centuries into the future by helping to justify a tech bubble which consumes tons of R&D resources for no apparent benefit and will bind further resources in the future to adapt to an aggravated climate crisis, and also inspiring a slew of technofascists too dumb to tell the difference between tech that benefits mankind and tech that exploits and oppresses
mein face when
Coroner says it was sudoku.
"In popular culture" section coming in clutch per usual:
The two Argentine developers, Juan Linietsky & Ariel Manzur, were repeatedly tasked with updating the engine over a period from 2001 to 2014, and chose the name "Godot" due to its relation to the play, as it represents the never-ending wish of adding new features to the engine, which would bring it closer to an exhaustive product but would never actually be completed.
His Wikipedia article is quite a ride. Apparently he and one Stephen Chamberlain were recently acquitted of a bunch of fraud charges, which boiled down to inflating the value of a SW company he sold to Hewlett-Packard. They died within a day of each other in unrelated accidents. Must be rough.
The countersuit went so far as to ask the court to force Altman to “change its deceptive and misleading name to ClosedAI or a different more appropriate name.”
top kek
The guy (pun not intended) seems honestly as decent as you might hope for in a serial entrepreneur. Maybe a bit naive for expecting better from the players involved, but to me he comes off as endearingly earnest.
Phase 3: Reality Distortion (2041-2050)
2041-2043: Financial Multiverse Modeling
- Quantum computers simulate multiple financial realities simultaneously
- Development of "Schrödinger's Ledger" prototype, allowing superposition of financial states
https://www.linkedin.com/pulse/roadmap-quantum-accounting-milestones-evolution-garrett-irmsc
I feel like this has to be built on a lack of appreciation for words as a facilitator of human connection. By finding means of expression and being understood we manage to link our brains together on a conceptual level. By building these skills communally we expand the possible bandwidth of connection and even the range and fidelity of our own thoughts.
This has to be motivated by a view of words as Authoritative Things that sit on shelves and bestseller lists and are authored by Smart And Successful People.
Not sure that any time someone posts cringe on HN counts as a tech take. Maybe we can bring AI into this?
What about, enslaving prisoners is not controversial if an AI is giving the orders, since it's not a person oppressing another person. I'll take my 500M VC now please.
Might be semi-related: the German aerospace/automotive/industrial research agency has an "AI Safety" institute (institute = top-level department).
I got a rough impression from their website. They don't seem to be doing anything too successful. Mostly fighting the unwinnable battles of putting AI in everything without it sucking, and twiddling machine learning models to make them resilient against malicious data. Besides trying to keep the torch of self-driving cars alive for the German car industry. Oh, and they're doing the quantum AI bit.
They're a fairly new institute, and I heard rumors they're not doing great. Maybe the organization resists the necessary insanity to generate new AI FOMO at this point. One can dream.