201
13
submitted 2 years ago by [email protected] to c/[email protected]
202
14
submitted 2 years ago by [email protected] to c/[email protected]
203
20
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
204
17
submitted 2 years ago by [email protected] to c/[email protected]
205
19
submitted 2 years ago by [email protected] to c/[email protected]

Eliezer Yudkowsky @ESYudkowsky

If you're not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entire legal code -- which no human can know or obey -- and threatens to enforce it, via police reports and lawsuits, against anyone who doesn't comply with its orders.

Jan 3, 2024 · 7:29 PM UTC

206
71
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Pass the popcorn, please.

(nitter link)

207
38
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
208
94
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

I'm called a Nazi because I happily am proud of white culture. But every day I think fondly of the brown king Cyrus the Great who invented the first ever empire, and the Japanese icon Murasaki Shikibu who wrote the first novel ever. What if humans just loved each other? History teaches us that we have all been, and always will be - great

read the whole thread; her responses are even worse

209
52
submitted 2 years ago by [email protected] to c/[email protected]
210
14
submitted 2 years ago by [email protected] to c/[email protected]
211
12
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Is, uh, anyone else watching? This dude (chaos) was/is friends with Brent Dill.

212
25
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

an entirely vibes-based literary treatment of an amateur philosophy scary campfire story, continuing in the comments

213
15
submitted 2 years ago by [email protected] to c/[email protected]

... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.

EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:

Andrew Ng wrote:

In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.

Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.

Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

EY replied:

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.

214
54
submitted 2 years ago by [email protected] to c/[email protected]

I somehow missed this one until now. Apparently it was once mentioned in the comments on the old sneerclub but I don't think it got a proper post, and I think it deserves one.

215
12
submitted 2 years ago by [email protected] to c/[email protected]
216
20
submitted 2 years ago by [email protected] to c/[email protected]

From Sam Altman's blog, pre-OpenAI

217
42
submitted 2 years ago by [email protected] to c/[email protected]

Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor!

oh boy

archive: https://archive.is/uOP4y

218
6
LW malware question (awful.systems)
submitted 2 years ago by [email protected] to c/[email protected]

When I click a link to LessWrong from this board, I receive a malware alert from my home gateway (Netgear Armor). Apparently it's their AI text-to-speech bot.

Question - any concerns about this? Google isn't helping me much.

URL is https: // embed.type3.audio/

Searching their site tells me that this is literally a feature and not a bug.

https://www.lesswrong.com/posts/b9oockXDs2xMdYp66/announcement-ai-narrations-available-for-all-new-lesswrong

TYPE III AUDIO is running an experiment with the LessWrong team to provide automatic AI narrations on all new posts. All new LessWrong posts will be available as AI narrations (for the next few weeks).

You might have noticed the same feature recently on the EA Forum, where it is now an ongoing feature. Users there have provided excellent feedback and suggestions so far, and your feedback on this pilot will allow further improvements.

219
18
submitted 2 years ago by [email protected] to c/[email protected]

WOOOOOOO MORE AXE GRINDING LETS GO!

Okay, enough of that. So I was doing a little bit of a foray into the GPI cesspit to look at the latest decision-theoretic drivel they've been putting out recently. And boy oh boy did I come across something juicy.

Basically this 36-page paper is one big 'nuh uh' to all the critics of longtermism. Think Crary and the like; it explicitly states that critics dismiss longtermism out of hand by denying broadly utilitarian principles. This is all fair enough, but then the philosopher tries to defend longtermism by saying that denying it on broadly normative grounds incurs 'significant theoretical costs'. I've checked what these 'costs' would be, and to my admittedly quite dumb eyes they look like they'd only be 'costs' if you were a utilitarian in the first place! The entire discussion is predicated on utilitarian principles: the weighing of theoretical costs and benefits, the consistently bullshit new principles, and what I've always thought were completely ad hoc new rules that they make up so that anything fits the criteria and longtermism comes out the ass end, while also making the discussion impervious to criticism cos insert brand new shiny principle here. It's fucken dumb.

Not to overstate my case, I'm kinda dumb, which means I could be very wrong here, but even with that in mind I woulda expected better from a PhD.

Anyways, to end off: are there any resources that actually go through their math and fact-check that shit? I wanna see if the math they use actually checks out or if it's kinda cobbled together.

220
29
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

warning: seriously nasty narcissism at length

archive: https://archive.is/eoXQj

this is a response to the post discussed in: https://awful.systems/post/220620

221
49
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
222
36
submitted 2 years ago by [email protected] to c/[email protected]
223
42
submitted 2 years ago by [email protected] to c/[email protected]

(via Timnit Gebru)

Although the board members didn’t use the language of abuse to describe Altman’s behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board’s ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

For longtime employees, there was added incentive to sign: Altman’s departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public. The deal — led by Joshua Kushner’s Thrive Capital — values the company at almost $90 billion, according to a report in the Wall Street Journal, more than triple its $28 billion valuation in April, and it could have been threatened by tanking value triggered by the CEO’s departure.

huh, I think this shady AI startup whose product is based on theft that cloaks all its actions in fake concern for humanity might have a systemic ethics problem

224
20
submitted 2 years ago by [email protected] to c/[email protected]
225
29
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Utilitarian brainworms or one of the many very real instances of a homicidal parent going after their disabled child? I can't decide, but it's a depressing read.

May end up on SRD, but you read it here first.
