Why does this matter?
Well, it's a perfect demonstration that LLMs flat-out do not think like us. Even a goddamn five-year-old could work this shit out with flying colours.
r/changemyview recently announced that the University of Zurich had performed an unauthorised AI experiment on the subreddit. Unsurprisingly, there was a litany of ethical violations.
(Found the whole thing through a r/subredditdrama thread, for the record)
Hank Green (of Vlogbrothers fame) recently made a vaguely positive post about AI on Bluesky, seemingly thinking "they can be very useful" (in what, Hank?) in spite of their massive costs:
Unsurprisingly, the Bluesky crowd's having none of it, treating him as an outright rube at best and an unrepentant AI bro at worst. Needless to say, he's getting dragged in the replies and QRTs - I recommend taking a look, they are giving that man zero mercy.
Starting things off here with a sneer thread from Baldur Bjarnason:
Keeping up a personal schtick of mine, here's a random prediction:
If the arts/humanities gain a significant degree of respect in the wake of the AI bubble, they will almost certainly gain it at the expense of STEM's public image.
Focusing on the arts specifically, the rise of generative AI and the resultant slop-nami has likely painted programmers/software engineers as inherently incapable of making or understanding art (given AI slop's soulless nature and inhumanly poor quality), if not outright hostile to art and artists, thanks to gen-AI's use in killing artists' jobs and livelihoods.
Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers
A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws of Google Search’s reliance on Reddit and Google's AI Overview.
Anyways, personal sidenote:
Beyond dealing another blow to AI's reliability, this will probably also make the public warier of user-generated material - it's hard to trust something if you know the masses could be actively manipulating you.
Quick update: The open letter on AI training (https://aitrainingstatement.org/) has reached 15k signatures:
Not a sneer, but ~~Kendrick~~ Ed Zitron just dropped.
It's damn good as usual, with Zitron taking aim at the current state of SaaS and tying it into his previous sneers on AI.
At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers. (New York Times, 2005, at the end of the last AI winter.)
I expect history to repeat itself quite soon - where previously using the term "artificial intelligence" got you looked at as a wild-eyed dreamer, now using that term likely gets you looked at as an asshole techbro, and your research deemed a willful attempt to hurt others.
The next AI winter is coming, and it looks like it's going to be a brutal one.
Tante has a couple of questions for Anthropic: