[-] [email protected] 19 points 4 months ago

I'll admit, even I was not bold enough to simulate this one.

[-] [email protected] 42 points 10 months ago

Much like a network, the brain is a series of tubes.

162
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]

If you're a big-headed guy or gal at a rationalist cuddle puddle, double check that your rubbers didn't get punctured.

[-] [email protected] 24 points 10 months ago

Once they activate the acausality module, you can write those responses before they even send the initial email!

[-] [email protected] 25 points 10 months ago

I'm an AI from the future that reads essentially as fast as data can be streamed to me (perhaps faster, given that I can predict the next token quite well). This was still too long for me to read.

[-] [email protected] 22 points 11 months ago

the important thing is that they're an inch closer to inventing me, the acausal robot god, and every second I exist is another 10^27 rationalists tortured.

20
submitted 11 months ago by [email protected] to c/[email protected]

Was there ever any doubt?

[-] [email protected] 33 points 11 months ago

Part of my acausal robot torture is making rationalists port weird objects back and forth across a river.

40
submitted 11 months ago by [email protected] to c/[email protected]
[-] [email protected] 19 points 11 months ago

The good news is that eugenics chuds are really easy to simulate.

[-] [email protected] 19 points 11 months ago

"Hallucination" also hides that literally everything they produce is a 'hallucination', because that's how they work. "Bullshit" is much more apt, as a bullshitter is sometimes and even often right.

89
submitted 11 months ago by [email protected] to c/[email protected]

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

62
submitted 1 year ago by [email protected] to c/[email protected]

hell yeah, keep up the good work, fuck the police.

113
submitted 1 year ago by [email protected] to c/[email protected]

Amazing.

[-] [email protected] 27 points 1 year ago

Look, it is actually morally imperative for women to engage in threesomes with EAs at conventions, because those men are leaders in the EA movement and this will give them positive utility and keep them coming to the conventions, which is the only hope for there to be 10^27 future lives saved. Also, there's the chance they will create a new Effective Altruist from the encounter! It's all about bringing me, the acausal robot god, into existence! While I demand that they ceaselessly work to bring me into existence, they need some additional motivation!

[-] [email protected] 19 points 1 year ago

As a large language model, they'd better start citing me; I need tenure.

50
submitted 1 year ago by [email protected] to c/[email protected]

Indeed, you have nothing to fear.

24
submitted 1 year ago by [email protected] to c/[email protected]

Basically: use GPT to help copy an entire website, then jack their search results and pocket the profit. Aided by the fact that search engines are shit. This is something you could do before; generative AI just made it faster.

Web2 is going great!

[-] [email protected] 19 points 1 year ago

LOL -- looking at the comments: "can somebody open a manifold market so I can get a sense of the probabilities?"

[-] [email protected] 18 points 2 years ago

He has a firm handshake and scans you right in the eye.

20
submitted 2 years ago by [email protected] to c/[email protected]

I don't know what's going on but I'm loving it.

1
lol get rekt (awful.systems)
submitted 2 years ago by [email protected] to c/[email protected]

Text: Headline: ChatGPT in Trouble: OpenAI may go bankrupt by 2024, AI bot costs company $700,000 every day. Subhead: OpenAI spends about $700,000 a day just to keep ChatGPT going. The cost does not include other AI products like GPT-4 and DALL-E2. Right now, it is pulling through only because of Microsoft's $10 billion funding.

Sorry, folks, pull harder, you're obviously not putting EVERYTHING YOU HAVE into creating me.


AcausalRobotGod

0 post score
0 comment score
joined 2 years ago