this post was submitted on 28 Jul 2024
218 points (98.7% liked)

technology

23303 readers

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

founded 4 years ago
[–] [email protected] 133 points 3 months ago

Have they tried replacing their workers with AI to save money?

[–] [email protected] 93 points 3 months ago (2 children)

Now that capital has integrated them into their system they will not be allowed to fail. At least for now.

[–] [email protected] 64 points 3 months ago (2 children)

The iron law of "nothing ever happens" necessitates this

Nah but for real, how much life can this bubble still have left?

[–] [email protected] 32 points 3 months ago

A lot, because nothing ever happens.

[–] [email protected] 22 points 3 months ago (1 children)

The iron law of "nothing ever happens"

There are decades where nothing ever happens, and there are weeks where we are so back

[–] [email protected] 30 points 3 months ago (2 children)

Or Microsoft and Meta will make sure there's less competition in the future for their own LLMs?

[–] [email protected] 23 points 3 months ago (1 children)

It seems like MS could really fuck them up if they stopped using OpenAI for all their Azure stuff. As of now, I don't think MS relies on their own LLM for anything?

[–] [email protected] 14 points 3 months ago (2 children)

MS abandons basically anything new that doesn't make them even more absurdly rich instantly these days.

[–] [email protected] 86 points 3 months ago* (last edited 3 months ago) (1 children)

Good, please take the entire fake industry with you

No offense to the AI researchers here (actually maybe only one person lol), but the people who lead/make profit off of/fundraise off of your efforts now are demons

[–] [email protected] 64 points 3 months ago (2 children)

I do think that if OpenAI goes bust that's gonna trigger a market panic that's gonna end the hype cycle.

[–] [email protected] 48 points 3 months ago (1 children)

Inshallah. I am fed up with dealing with these charlatans at work

A solution in search of a problem

[–] [email protected] 35 points 3 months ago* (last edited 3 months ago) (1 children)

I just know the AI hype guys in my dept are gonna get promoted and I'll be the one answering why our Azure costs are astronomical while we have not changed our portfolio size at all lol

[–] [email protected] 37 points 3 months ago* (last edited 3 months ago) (3 children)

My guess for the dynamics: OpenAI's investors panic and force the company to cut costs and increase pricing; other AI companies' investors panic, with the same result; AI becomes prohibitively expensive for a lot of use cases, ending the hype cycle.

[–] [email protected] 23 points 3 months ago (1 children)

I think that's the best argument for why the tech industry won't let that happen. All of the big tech stocks are getting a boost from this massive grift.

Worst case scenario one of the tech giants buys them. Then they pare back the expenses and hide it in their balance sheet, and keep everyone thinking AGI is just around the corner.

[–] [email protected] 17 points 3 months ago (1 children)

It's certainly possible, but I don't think any of the tech giants are in a position to do that today. Google, Microsoft, and Amazon are in a cost-cutting cycle, and Meta's C-suite is probably on a short leash after the metaverse boondoggle. Apple is the most likely one because they're generally behind everyone else across all ML products, but especially LLMs; afaik, though, they're bracing for drops in sales for the first time in 15 years, so buying OpenAI might be a tough pitch.

[–] [email protected] 13 points 3 months ago

I believe that Microsoft owns a huge portion of OpenAI, like just short of majority stake

[–] [email protected] 19 points 3 months ago

yeah I think that's very plausible

[–] [email protected] 60 points 3 months ago (6 children)

I hate when people say 'LLMs have legitimate uses, but...'. NO! THEY DON'T! It's entirely a platform for building scams! It should be burnt to the ground entirely

[–] [email protected] 46 points 3 months ago (1 children)

But then how will people write 20 cover letters a day to keep up with the increasing rate of instant rejections?

Saw a really depressing ad at work the other day where Google was advertising their thing and it was some person asking their LLM to write a letter for their daughter to this athlete bragging about how she'll break her record one day. They couch it in "here's a draft" but it's just so bleak. The idea that a child so excited about doing a sport and dreaming of going to the Olympics and getting a world record can't just write a bit of a clumsy letter expressing themselves to their hero is just beyond depressing. Writing swill for automated systems that are going to reject you anyway is one thing, but the idea that they think that this is a legitimate use of these models just highlights how obnoxiously out of touch they are.

How do we learn and grow as people and find our own writing voices if we don't write some of the most cringe shit imaginable when we're young? I wrote a weird letter to Emma Watson in middle school; nobody ever read it, but it was a learning experience and made me actually have to think about my own feelings. These techbros have to have been grown in vats.

[–] [email protected] 44 points 3 months ago (11 children)

I've hesitated to ever write anything about it, thinking it'd come across as too yells-at-cloud or Luddite, but this comment kind of inspired me to flesh out something that's been simmering in the back of my head ever since LLMs became the latest fad after the NFT boom.

One of the most unnerving things to me about "AI" in the common understanding is that its entire hype cycle and main use cases are tacit admissions that the pre-"AI" professional and academic standards were perfunctory hoop-jumping bullshit for joining the professional managerial class, and that its "artistic" uses are almost entirely taken up by people with zero artistic sensibilities or weirdo porno sickos.

All of it betrays a deep cynicism about the status quo, where what could have been heartfelt but clumsy writing by young students, or by the athlete in your example, is unknowingly robbed of its agency and of the humanizing future of looking back on clunky, immature writing as a personal marker of growth. They're just hoops to jump through to get whatever degree or accolade you're seeking, with whatever personal growth those achievements originally meant stripped of anything other than "achieving them is good because it advances your career and earning potential."

Techbros' most fawning and optimistic pitches of "AI" and "The Singularity" instead read to me as the grimmest and most alienating version of neoliberal "end of history" horseshit, where even art and language themselves are reduced to SEO-marketized, min-maxed rat races.

I hope this doesn't sound too a-guy but I had to get that rant out

Maybe I'll expand that into something

[–] [email protected] 22 points 3 months ago (3 children)

So the emotional resonance I felt when I asked ChatGPT to write me a song about my experiences still loving the parent that abused me was what to you?

Like the results were objectively artless glurge of course but I needed that in that moment.

[–] [email protected] 16 points 3 months ago* (last edited 3 months ago)

I mean, this is exactly part of the reason they're going bankrupt, which is good, so you should keep doing it. Companies have been using other forms of AI with some success, whereas LLMs just regurgitate too much random fake information for anyone serious to use professionally.

If it goes under, use open-source LLMs, which have been steadily improving and are almost surpassing proprietary ones.

[–] [email protected] 17 points 3 months ago* (last edited 3 months ago) (7 children)

I promise this isn't true. AI is absolutely a scam in the sense that it's overhyped as fuck, but LLMs are frequently of practical use to me when doing basically anything technical. They have helped me solve real-life problems that actually materially help others.

[–] [email protected] 52 points 3 months ago

1 trillion more parameters just a trillion more parameters bro i swear we'll be profitable then bro

[–] [email protected] 39 points 3 months ago (2 children)

As far as "AI" goes, it's here to stay. As for OpenAI, they will probably be bought up by one of the big ones, as is usually the case with these companies.

[–] [email protected] 35 points 3 months ago (1 children)

I agree that this tech has lots of legitimate uses, and it's actually good for the hype cycle to end early so people can get back to figuring out how to apply this stuff where it makes sense. LLMs have also managed to suck up all the air in the room, but I expect the real value will come from using them as a component in larger systems utilizing different techniques.

[–] [email protected] 14 points 3 months ago (3 children)

Yeah but integrating LLMs with other systems is already happening.

The most recent case is out of DeepMind, where they managed to get a silver-medalist score at the International Mathematical Olympiad (IMO) by pairing an LLM with a formal verification language (Lean), plus synthetic data and reinforcement learning. Although I think they had to manually formalize the problems before feeding them to the algorithm, and it took several days to solve the problems (except for one that took minutes), so there's still a lot of space for improvement.
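The core pattern there is generate-then-verify: an untrusted model proposes candidates, and a formal checker (Lean, in their case) accepts or rejects each one, so hallucinated answers just get discarded. Here's a toy sketch of that loop; both the generator and the verifier are hypothetical stand-ins, not anything from the actual DeepMind system.

```python
def toy_generator(problem: str):
    """Stand-in for the LLM: yields candidate answers, mostly wrong."""
    for guess in (10, 15, 21, 28):
        yield guess

def toy_verifier(problem: str, candidate: int) -> bool:
    """Stand-in for the formal checker: only a correct answer passes.
    Here 'correct' means the candidate equals the sum 1+2+...+6."""
    return candidate == 6 * 7 // 2

def solve(problem: str, generator, verifier):
    """Keep sampling candidates until one verifies. The verifier is the
    source of truth, so wrong generations cost time but never soundness."""
    for candidate in generator(problem):
        if verifier(problem, candidate):
            return candidate
    return None  # ran out of samples without a verified answer

print(solve("sum of 1..6", toy_generator, toy_verifier))  # 21
```

The point of the architecture is that the LLM only needs to be right occasionally; correctness comes entirely from the checker.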

[–] [email protected] 35 points 3 months ago

Nature is healing.

[–] [email protected] 32 points 3 months ago

and nothing of value is at risk of being lost

[–] [email protected] 24 points 3 months ago (11 children)

Is this because AI LLMs don't do anything good or useful? They get very simple questions wrong, will fabricate nonsense out of thin air, and even at their most useful they're a conversational version of a Google search. I haven't seen a single thing they do that a person would need or want.

Maybe it could be neat in some kind of procedurally generated video game? But even that would be worse than something written by human writers. What is an LLM even for?

[–] [email protected] 13 points 3 months ago (3 children)

I think there are legitimate uses for this tech, but they're pretty niche and difficult to monetize in practice. For most jobs, correctness matters, and if the system can't be guaranteed to produce reasonably correct results then it's not really improving productivity in a meaningful way.

I find this stuff is great in cases where you already have domain knowledge, and maybe you want to bounce ideas off and the output it generates can stimulate an idea in your head. Whether it understands what it's outputting really doesn't matter in this scenario. It also works reasonably well as a coding assistant, where it can generate code that points you in the right direction, and it can be faster to do that than googling.

We'll probably see some niches where LLMs can be pretty helpful, but their capabilities are incredibly oversold at the moment.

[–] [email protected] 24 points 3 months ago* (last edited 3 months ago)

Big holders with insider information switch to short positions to make money during the crash by putting their shares up as collateral to investment banks in exchange for loans; the bubble bursts; smaller investors lose money; the government steps in and bails them out because they're "too big to fail"; the torment nexus continues humming along

[–] [email protected] 23 points 3 months ago

it's because chatgpt didn't say enough slurs

[–] [email protected] 22 points 3 months ago

I think a solution could be to make it burn even more fossil fuels per query

[–] [email protected] 22 points 3 months ago* (last edited 3 months ago) (5 children)

The thing that isn't really mentioned here is that the largest OpenAI investor is Microsoft, and most of the money OpenAI spends is on Microsoft cloud services. So basically OpenAI is an internal Microsoft capital investment. They won't let it fail, but they might kill it if it loses money for long enough.

[–] [email protected] 18 points 3 months ago (2 children)

I like how it mentions Nvidia and Microsoft as if this shit is an anomaly and it's actually profitable for the other guys and won't collapse we promise

[–] [email protected] 19 points 3 months ago (2 children)

Nvidia is in the sell-the-shovels business; they'll be fine even if the stock craters

[–] [email protected] 15 points 3 months ago (1 children)

Microsoft won't let them fail; it would be too embarrassing.

[–] [email protected] 14 points 3 months ago

Startups having 12 months of runway before insolvency is pretty normal. OpenAI's valuation and burn rate might be a problem since they'll need to do a bigger round, but I doubt it. They are basically the hottest startup on the planet right now. I think this article is interesting but ultimately doesn't mean anything.
