[-] ShakingMyHead@awful.systems 10 points 5 months ago

https://www.ibtimes.sg/dramatic-video-captures-moment-openai-ceo-sam-altman-served-legal-notice-onstage-during-san-82346

"An investigator from the San Francisco Public Defender's Office lawfully served a subpoena on Mr. Altman because he is a potential witness in a pending criminal case," spokesperson Valerie Ibarra said in a statement to SFGATE.

In a post on X, the group wrote that one of their public defenders had managed to serve Sam Altman with a subpoena, requiring him to testify at their upcoming trial. They explained that the case involves their previous non-violent demonstrations, including blocking the entrance and the road in front of OpenAI's offices on multiple occasions.

"All of our non-violent actions against OpenAI were an attempt to slow OpenAI down in their attempted murder of everyone and every living thing on earth."

So it's not because he's being prosecuted.

[-] ShakingMyHead@awful.systems 11 points 6 months ago

A bit odd to start out throwing shade at the Segway considering that the concept has been somewhat redeemed with e-bikes and e-scooters.

[-] ShakingMyHead@awful.systems 10 points 7 months ago* (last edited 7 months ago)

“We believe that in the near future half the people on the planet will be AI, and we are the company that’s bringing those people to life”

This quote is just... something.

Is the plan to literally create 8 billion podcasts in the near future? This company doesn't think that might be a tad excessive?

[-] ShakingMyHead@awful.systems 10 points 7 months ago

Kind of like saying that humans are car-driving machines because we drive cars.

[-] ShakingMyHead@awful.systems 11 points 7 months ago* (last edited 7 months ago)

LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work

Except not really. We're not sentence-producing machines, we're "machines" (so to speak) that can produce sentences. Not the same thing.

Once this is in place, they say, nations must be prepared to enforce these restrictions by bombing unregistered data centres, even if this risks nuclear war, “because datacenters can kill more people than nuclear weapons” (emphasis theirs).

So the plan is still to kill everyone to death to prevent GPT-~~5~~ ~~6~~ ~~7~~ ~~8~~ ...

[-] ShakingMyHead@awful.systems 10 points 9 months ago* (last edited 9 months ago)

So, you know Ross Scott, the Stop Killing Games guy?
About two years ago he actually interviewed Yudkowsky. The context: Ross had discussed Yudkowsky's article on one of his monthly streams and expressed skepticism that there was any threat at all from AI. Yudkowsky got wind of that skepticism and reached out to Ross to discuss the topic with him. He also requested that Ross not do any research on him beforehand.
And here it is...
https://www.youtube.com/watch?v=hxsAuxswOvM

I can't say I actually recommend watching it, because Yudkowsky spends the first 40 minutes of the discussion refusing to answer the question "So what is GPT-4, anyway?" (It's not exactly that question, but it's pretty close).
I don't know what they discussed afterwards because I stopped watching it after that, but, well, it's a thing that exists.

[-] ShakingMyHead@awful.systems 10 points 1 year ago

Also, if you're worried about digital clones being tortured, you could just... not build it. Like, it can't hurt you if it never exists.

Imagine that conversation:
"What did you do over the weekend?"
"Built an omnicidal AI that scours the internet and creates digital copies of people based on their posting history and whatnot and tortures billions of them at once. Just the ones who didn't help me build the omnicidal AI, though."
"WTF why."
"Because if I didn't the omnicidal AI that only exists because I made it would create a billion digital copies of me and torture them for all eternity!"

Like, I'd get it more if it was a "We accidentally made an omnicidal AI" thing, but this is supposed to be a very deliberate action taken by humanity to ensure the creation of an AI designed to torture digital beings based on real people in the specific hopes that it also doesn't torture digital beings based on them.

[-] ShakingMyHead@awful.systems 12 points 1 year ago

Making money by stealing commissions from affiliate links, tbf, wasn't the business model I was expecting from Honey. I always figured they were scamming, but I assumed it would be by selling your browsing data or something similar. Then again, they still might do that.

[-] ShakingMyHead@awful.systems 12 points 2 years ago

They've updated the article. Apparently there isn't a model releasing later this year.

[-] ShakingMyHead@awful.systems 11 points 2 years ago

Investors demand growth. The problem is that Microsoft has basically won Capitalism and has no real area to grow to.

[-] ShakingMyHead@awful.systems 11 points 2 years ago

Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade.

Sure, but that doesn't change the fact that the head EA guy wrote an op-ed for Time magazine arguing that a nuclear holocaust is preferable to a world with GPT-5 in it.

