this post was submitted on 25 Oct 2024
313 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 44 points 4 weeks ago (2 children)

really stretching the meaning of the word release past breaking if it’s only going to be available to companies friendly with OpenAI

Orion has been teased by an OpenAI executive as potentially up to 100 times more powerful than GPT-4; it’s separate from the o1 reasoning model OpenAI released in September. The company’s goal is to combine its LLMs over time to create an even more capable model that could eventually be called artificial general intelligence, or AGI.

so I’m calling it now, this absolute horseshit’s only purpose is desperate critihype. as with previous rounds of this exact same thing, it’ll only exist to give AI influencers a way to feel superior in conversation and grift more research funds. oh of course Strawberry fucks up that prompt but look, my advance access to Orion does so well I’m sure you’ll agree with me it’s AGI! no you can’t prompt it yourself or know how many times I ran the prompt why would I let you do that

That timing lines up with a cryptic post on X by OpenAI CEO Sam Altman, in which he said he was “excited for the winter constellations to rise soon.” If you ask ChatGPT o1-preview what Altman’s post is hiding, it will tell you that he’s hinting at the word Orion, which is the winter constellation that’s most visible in the night sky from November to February (but it also hallucinates that you can rearrange the letters to spell “ORION”).

there’s something incredibly embarrassing about the fact that Sammy announced the name like a lazy ARG based on a GPT response, which GPT proceeded to absolutely fuck up when asked about. a lot like Strawberry really — there’s so much Binance energy in naming the new version of your product after the stupid shit the last version fucked up, especially if the new version doesn’t fix the problem

[–] [email protected] 24 points 4 weeks ago (2 children)

You forgot the best part, the screenshot of the person asking ChatGPT's "thinking" model what Altman was hiding:

Thought for 95 seconds ... Rearranging the letters in "they are so great" can form the word ORION.
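For the record, you don't need a 95-second "think" to check that claim. A throwaway Python letter count (purely an illustration, not anything from the screenshot) shows the phrase doesn't even contain the letters ORION needs:

```python
from collections import Counter

phrase = "they are so great"
target = "orion"

# Tally the letters actually available in the phrase (ignoring spaces)
available = Counter(c for c in phrase if c.isalpha())
needed = Counter(target)

# Letters ORION needs that the phrase can't supply:
# it has a single 'o' and no 'i' or 'n' at all.
print(needed - available)  # Counter({'o': 1, 'i': 1, 'n': 1})
```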

AI is a complete joke, and I have no idea how anyone can think otherwise.

[–] [email protected] 27 points 4 weeks ago (1 children)

I'm already sick and tired of the "hallucinate" euphemism.

It isn't a cute widdle hallucination, it's the damn product being wrong. Dangerously, stupidly, obviously wrong.

In a world that hadn't already gone well to shit, this would be considered an unacceptable error and a demonstration that the product isn't ready.

Now I suddenly find myself living in this accelerated idiocracy where wall street has forced us - as a fucking society - to live with a Ready, Fire, Aim mentality in business, especially tech.

[–] [email protected] 15 points 4 weeks ago (2 children)

I think it's weird that "hallucination" would be considered a cute euphemism. Would you trust something that's perpetually tripping balls and confidently announcing whatever comes to them in a dream? To me that sounds worse than merely being wrong.

[–] [email protected] 12 points 4 weeks ago (1 children)

I think the problem is that it portrays them as weird exceptions, possibly even echoes from some kind of ghost in the machine, rather than a statistical inevitability when you're asking for the next predicted token instead of meaningfully examining a model of reality.

"Hallucination" applies only to the times when the output is obviously bad, and hides the fact that it's doing exactly the same thing when it incidentally produces a true statement.

[–] [email protected] 2 points 4 weeks ago (1 children)

I get the gist, but also it's kinda hard to come up with a better alternative. A simple "being wrong" doesn't exactly communicate it either. I don't think "hallucination" is a perfect word for the phenomenon of "a statistically probable sequence of language tokens forming a factually incorrect claim" by any means, but in terms of the available options I find it pretty good.

I don't think the issue here is the word, it's just that a lot of people think the machines are smart when they're not. Not anthropomorphizing the machines is a battle that was lost no later than the time computer data representation devices were named "memory", so I don't think that's really the issue here either.

As a side note, I've seen cases of people (admittedly, mostly critics of AI in the first place) call anything produced by an LLM a hallucination regardless of truthfulness.

[–] [email protected] 1 points 1 week ago

Obvious bullshit is a good way to put it. It even implies the existence of less obvious bullshit.

[–] [email protected] 3 points 4 weeks ago

Reminds me of A Scanner Darkly a bit. Yeah, I would not trust someone like that

[–] [email protected] 19 points 4 weeks ago

[ChatGPT interrupts a Scrabble game, spills the tiles onto the table, and rearranges THEY ARE SO GREAT into TOO MANY SECRETS]

[–] [email protected] 15 points 4 weeks ago (2 children)

teased by an OpenAI executive as potentially up to 100 times more powerful

"potentially up to 100 times" is such a peculiar phrasing too... could just as well say "potentially up to one billion trillion times!"

[–] [email protected] 9 points 4 weeks ago

I'd love to get an interview with saltman and ask him to explain how they measure the "power" of those things. What's the methodology? Do you have charts? Or does it just somehow consume 100x more power, as in watts?