this post was submitted on 04 Jan 2024
19 points (100.0% liked)

SneerClub

1011 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 2 years ago

Eliezer Yudkowsky @ESYudkowsky If you're not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entire legal code -- which no human can know or obey -- and threatens to enforce it, via police reports and lawsuits, against anyone who doesn't comply with its orders. Jan 3, 2024 · 7:29 PM UTC

all 31 comments
[–] [email protected] 30 points 11 months ago (2 children)

An AI reads the entire legal code – which no human can know or obey – and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders.

what. eliezer what in the fuck are you talking about? this is the same logic that sovereign citizens use to pretend the law and courts are bound by magic spells that can be undone if you know the right words

[–] [email protected] 17 points 11 months ago (2 children)

Well, if you think that's a dumb scenario, by all means go back to worrying about the utter extinction of humanity!

no thanks? like, I’m seriously having trouble understanding what yud’s even going for here. “if you think this utter bullshit I made up on the spot is stupid, please return to the older bullshit I’ve been feeding you?”

That makes it significantly less threatening

I mean, to you and me, yes, but there's lakes and seas of people in the world who think that superintelligences are only allowed to attack them in small, survivable ways that they understand.

the problem isn’t that I’ve said something that doesn’t even work on a surface level, it’s that people aren’t impressed when I ramble about extraordinarily unlikely nonsense anymore

is yud ok? I feel like this is incoherent and shallow even by his standards

[–] [email protected] 12 points 11 months ago

Maybe he’s having an Interaction with The Law and finding out that it isn’t in fact some perfectly rational sphere of uniform distribution but is in fact made of (gasp, horror, revulsion) human experience

He strikes me as exactly the kind of person that’d vaguepost tangentially instead of saying “hmm well fuck, I’m getting sued”. At least until waaaaaay down the line

(this is conjecture, of course, just to be clear)

[–] [email protected] 11 points 11 months ago (1 children)

lakes and seas of people

clearly the AI is going to hug us all and then we turn into TANG

[–] [email protected] 9 points 11 months ago

Ah, evangelion IS a documentary after all

[–] [email protected] 10 points 11 months ago

What could be more intimidating or fearsome than a sovcit?

-- A sovcit, probably

[–] [email protected] 25 points 11 months ago (1 children)

If you're not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entirety of AO3, which no human can comprehend, and threatens to leave scathing comments on your self-insert fic

[–] [email protected] 18 points 11 months ago (1 children)

“[ignoring all other scary prospects like irreversible climate change or a third world war etc.] consider this scarier prospect: An AI” - AI doomers in a nutshell

[–] [email protected] 14 points 11 months ago

Trying to stoke fear of bureaucracy is classic annoying libertarian huckster AKA yud energy

[–] [email protected] 12 points 11 months ago (2 children)

Zapped from AI orbit for jaywalking.

[–] [email protected] 11 points 11 months ago (1 children)

Stop jaywalking!

You have 15 seconds to comply!

10 seconds!

5 seconds!

Sidewalk turned into a smoking crater

[–] [email protected] 8 points 11 months ago

I'd buy that for a dollar!

[–] [email protected] 11 points 11 months ago

the yellow light turns red when im in the middle of the intersection and my car immediately autopilots to the nearest police station

[–] [email protected] 12 points 11 months ago

Why does it feel like Yud is a magician trying to coax an increasingly uninterested audience by pulling handkerchiefs from his sleeve, because his big saw-the-assistant-in-half trick doesn't get applause in 2024?

[–] [email protected] 11 points 11 months ago

Consider this, Yud m'lud: what if a dog had a square ass

[–] [email protected] 10 points 11 months ago

Both this new dumb shit and the extinction risk are predicated on the concept of omnipotent AI, which he just takes as a given. Now with just an added layer of dumb. Oh no, the God AI will not kill me outright, just subject me to inscrutable matrices of bureaucracy!

[–] [email protected] 10 points 11 months ago* (last edited 11 months ago)

When all you have is computer code, all mentions of code look like computer code. (see DNA, and now the law).

Anyway, the law isn't a video game, you cannot just go 'negative objection!' and cause an underflow in objections.

(An intelligent AGI would prob understand this, and if it doesn't it prob just sucks (and is more AI than AGI) and lawyers/judges would object. I know for a fact that people in law have been thinking about subjects like this (automatization of the law) for 25 years at least. I have no idea where the discussions went, but they prob have a lot higher quality than Yudkowsky's writings about it, so I suggest anybody interested try to contact the law profs of a local university.)

this sounds net good because then we will simplify the law to one that makes sense and not one where literally everyone is a criminal

if humanity was capable of doing that we'd have done it already

AAAA (I also wonder about Godel here)

E: I also note that Yud and most of the thread have now given up on calling AGI AGI and are just calling it AI. Another point scored for learning to reason better using Rationalism. Vaguely related link (I only mention it here because I liked the term Epistemic Injustice and this is about our current AI innovation wave).

[–] [email protected] 9 points 11 months ago (2 children)

Coincidence that this fear occurs the same day the Epstein list is released?

[–] [email protected] 5 points 11 months ago (3 children)

So which big name TREACLES are gonna be on it?

[–] [email protected] 8 points 11 months ago

Not to discount the possibility, but it might be none/few. I think a lot of them are too new-money / too-fringe-when-epstein-was-applicable to have been a part of that orbit

[–] [email protected] 7 points 11 months ago (1 children)

@Evinceo @Shitgenstein1

None, unless dead old Marvin Minsky had his head frozen and that counts somehow.

[–] [email protected] 3 points 11 months ago

he fucking would

[–] [email protected] 6 points 11 months ago* (last edited 11 months ago)

Epstein did donate some money to SIAI, not sure if it was before or after his first conviction

EDIT: Rob Bensinger says it was seed funding for OpenCog that SIAI was collecting, and that they turned him down in 2016 https://www.lesswrong.com/posts/3JjKWWrKWJ8nysD9r/question-about-a-past-donor-to-miri?commentId=i49RZQgoQZYrXdpis

[–] [email protected] 8 points 11 months ago

@Shitgenstein1 did he just watch Robocop?

[–] [email protected] 7 points 11 months ago

Xitter share and like numbers seem to be smaller and smaller lately.

[–] [email protected] 6 points 11 months ago

"Well, here we are facing the utter extinction of humanity but at least we don't have to pay taxes or wear seatbelts".

[–] [email protected] 4 points 11 months ago

instead of utter destruction of humanity, consider this scarier prospect: me needing to get a real job

[–] [email protected] 2 points 11 months ago

Nobody has yet commented on the intersection of this with his cringe libertarianism.