I poked around the IMDB page, and there are reviews! Currently it's sitting at an 8.5/10 with 31 ratings (though no written reviews, it seems), the Metacritic score is a 51/100 with 4 reviews, and there are 4 external reviews.
I read a review (the one hosted on the Ebert site), and it seems like this just falls into one of the patterns we’ve already seen when people not steeped in the X-risk miasma engage with it. As in, what should be a documentary about how the AI industry is a bubble and all the AI CEOs are grifters or deluded or both is instead a “somehow I managed to fall for Yud’s whole thing and am now spreading the word” type deal. Big sigh!
Strange that Big Yud is missing from the cast on IMDB, but this could be a simple oversight since the movie is not fully out yet. Unfortunately, the rest of the cast contains a lot of familiar faces to put it mildly.
Sam Altman and the other CEOs being there is such a joke: “this technology is so dangerous, guys! Of course I’m gonna keep blocking regulation for it, I need to make money after all!” Also, I’m shocked Emily Bender and Timnit Gebru are there, aren’t they AI skeptics?
@lurker I don’t know that I’d call them skeptics universally; they’re experts in the AI field who are EXTREMELY skeptical of the TESCREAL complex and of the *hype* around the current fad LLM and image-generation tools.
Whatever you call them, it’s *positive* that a documentary includes conflicting viewpoints from the people who hold them. The plausible range of near-term AI developments is smaller than the range of widely-held expectations. A documentary has to address the crazies & the skeptics.
I took a deeper look into the documentary, and it does go into both the pessimist and optimist perspectives, so their inclusion makes more sense. And yeah, I was trying to get at how they’re skeptical of the TESCREAL stuff and of current LLM capabilities.
Directed by a Tyrell? Not suspicious at all...
what’s the lore with Tyrell?
I believe it's the evil megacorp in Blade Runner.
My god, I just cringed so hard. I thought the book would be the end…
Also yeah, someone pointed this out on old SneerClub, but Yud loves using kids to illustrate his AI fears, and, to beat a very dead horse here, that’s a weird thing to do in his case.
If anyone here wants to jump on the grenade and watch it/acquire a transcript for the rest of us to sneer at, you’ll be my hero.
Also, what the fuck does “apocaloptimist” mean???? Does it mean he’s optimistic about our chances of apocalypse??? (Which makes no sense, just say pessimist.) Has he finally gone crazy and is now saying that apocalypse is the optimistic outcome?
It's someone who learned to stop worrying and love the Bomb.
Oh god, it is pop culture references all the way down. TVTropes is Skynet! You gotta tell them!
I mean, they mostly don't have a problem with AI instances inheriting the earth as long as they're sufficiently rationalist.
Pure speculation: my guess is that an “apocaloptimist” is just someone fully bought into all of the rationalist AI delulu. Specifically:
- AGI is possible
- AGI will solve all our current problems
- A future where AGI ends humanity is possible/probable
and they take the extra belief, steeped in the grand tradition of liberal optimism, that we will solve the alignment problem and everything will be ok. Again, just guessing here.
According to a site: https://apocaloptimist.net/the-apocaloptimist/
"An Apocaloptimist sees the trouble, but is optimistic we can do anything–including fixing all the world’s problems"
So if Jesus wins the war during the Second Coming, all problems are fixed.
(~~The thing is also nuts: "we are the people actually working on fixing things [by hoping AGI will fix it all for us]", my brother in Eschatology, you are running a podcast~~ sorry, the guy is unrelated to the AGI people, they are just using his term).
E: it does seem the site itself isn't about AI, so they just took this clean energy guy's term. Sorry about sneering at him; he seems to actually want to introduce clean energy and works hard for it (though that work seems to be a lot of conventions and blogging, so buying ourselves out of the capitalist problems), as far as I can tell.
Surprised it’s a term they stole and not one they made up. But yeah, the whole idea of “AGI will solve all our problems” is just silly.
I feel like I nailed my guess
Think you did; I only followed it up with a Google search and found that site.
My personal guess is that “apocaloptimist” is just them trying to make a “better” term for “pessimist”.