this post was submitted on 27 Jan 2025
654 points (97.7% liked)

Technology

[–] [email protected] 6 points 2 days ago (8 children)

Looks like it is not any smarter than the other junk on the market. That people mistake AI for "intelligence" may be rooted in deficits of their own in that area.

And now people exchange one piece of American junk-spitting spyware for a Chinese one. Hurray! Progress!

[–] [email protected] 6 points 2 days ago* (last edited 2 days ago)

> Looks like it is not any smarter than the other junk on the market. That people mistake AI for “intelligence” may be rooted in deficits of their own in that area.

Yep, because they believed that OpenAI's (two lies in a name) models would magically digivolve into something that goes well beyond what they were designed to be. Trust us, you just have to feed it more data!

> And now people exchange one piece of American junk-spitting spyware for a Chinese one. Hurray! Progress!

That's the neat bit, really. With that model being free to download and run locally, it's potentially disruptive to OpenAI's business model. They don't need to do anything malicious to hurt the US economy.

[–] [email protected] 10 points 2 days ago* (last edited 2 days ago)

It is progress, in a sense. The West put the spotlight on its shiny new expensive toy and banned the export of toy-maker parts to rival countries.

One of those countries made a cheap toy out of janky, unwanted parts for much less money, and it's on par with, or better than, the West's.

As for why we're having an arms race over AI, I genuinely don't know. It feels like a race to the bottom, with the fallout being the death of the internet (for better or worse).

[–] [email protected] 8 points 2 days ago (1 children)

Through understanding LLMs, I've started to understand some people and their "reasoning" better. That's how they work.

[–] [email protected] 1 points 2 days ago

That's a silver lining, at least.

[–] [email protected] 8 points 2 days ago

The difference is that you can actually download this model and run it on your own hardware (if you have sufficient hardware). In that case it won't send any data to China. These models are still useful tools, as long as you're not interested in particular parts of Chinese history, of course ;p
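To make the "run it locally" point concrete, here's a minimal sketch of talking to a locally hosted model. It assumes an Ollama install serving its default HTTP API on `localhost:11434`; the model tag `deepseek-r1:7b` is just an example of what you might have pulled. The request targets localhost only, so prompts never leave your machine.

```python
import json
import urllib.request

# Assumed local Ollama endpoint; adjust host/port if your setup differs.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for a locally hosted model.

    Because it targets localhost, no prompt data is sent off-machine.
    """
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of chunks
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("deepseek-r1:7b", "Why does local inference avoid telemetry?")

# To actually send it (requires `ollama serve` running with the model pulled):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["response"])
```

Whether any given local runner phones home is a separate question, but with the request built this way you can verify the destination yourself before sending anything.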

[–] [email protected] 4 points 2 days ago* (last edited 2 days ago) (2 children)

> And now people exchange one piece of American junk-spitting spyware for a Chinese one.

LLMs aren't spyware; they're graphs that organize large bodies of data for quick and user-friendly retrieval. The Wikipedia schema fills a similar, albeit more primitive, role. There's nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.

If you no longer need to boil down half a Great Lake to create the next iteration of Shrimp Jesus, that's good whether or not you think Meta should be dedicating millions of hours of compute to this mind-eroding activity.

[–] [email protected] 3 points 2 days ago (1 children)

I think maybe it's naive to think that if the cost goes down, shrimp jesus won't just be in higher demand. Shrimp jesus has no market cap; bullshit has no market cap. If you make it more efficient to flood cyberspace with bullshit, cyberspace will just be flooded with more bullshit. Those Great Lakes will still boil, don't worry.

[–] [email protected] 1 points 2 days ago

> I think maybe it’s naive to think that if the cost goes down, shrimp jesus won’t just be in higher demand.

Not that demand will go down, but that the economic cost of generating this nonsense will. The number of people shipping this stuff back and forth to each other isn't going to change meaningfully, because Facebook has already saturated the social media market.

> If you make it more efficient to flood cyberspace with bullshit, cyberspace will just be flooded with more bullshit.

The efficiency gain is in the real cost of running the model, not in how it is applied. The real bottleneck for AI right now is human adoption. Guys like Altman keep insisting that one more iteration (one that requires a few hundred miles of nuclear power plants to run) will finally get us a model people want to use. And speculators in the financial sector seem willing to cut him a check to go through with it.

Knocking down the real physical cost of this boondoggle is going to de-monopolize this awful idea, which means Altman won't have a trillion-dollar line of credit to fuck around with exclusively. We'll still do it, but Wall Street won't have Sam leading them around by the nose when they can get the same thing for 1/100th of the price.

[–] [email protected] 0 points 2 days ago

> There’s nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.

Westoids? Are you the type of guy I feel like I need to take a shower after talking to?

[–] [email protected] 5 points 2 days ago

It is open source, so it can be audited, and if there are back doors they can be plugged in a fork.

[–] [email protected] 3 points 2 days ago (2 children)

> artificial intelligence

AI has been used in game development for a while, and I haven't seen anyone complain about the name before it became synonymous with image/text generation.

[–] [email protected] 4 points 2 days ago

It was a misnomer there too, but at least people didn't think a bot playing C&C would save the world by evolving into a real, greater-than-human intelligence.

[–] [email protected] 3 points 2 days ago

Well, that is where the problems started.

[–] [email protected] 1 points 2 days ago

I'm tired of this uninformed take.

LLMs are not a magical box you can ask anything of and get answers from. If you get lucky when blindly asking questions, they can give you some accurate general information, but just as with human brains, you aren't going to accurately recreate random trivia verbatim from a neural net.

What LLMs are useful for, and how they should be used, is as a non-deterministic context-parsing tool. When people talk about feeding them more data, they're thinking of how these things are trained. But you also need to give the model grounding context beyond the prompt itself. Give it a PDF manual, a website link, documentation, whatever, and it will use that as context for what you ask it. You can even set it up to link back to its references.
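The grounding workflow described above can be sketched roughly like this. It's a toy illustration, not any particular tool's API: the file name, prompt template, and instruction wording are all made up. The idea is just that a local document gets stuffed into the prompt, so the model answers from the supplied text instead of free-associating from its training data.

```python
from pathlib import Path

def build_grounded_prompt(doc_path: str, question: str) -> str:
    """Prepend a local document as grounding context, so the model
    answers from the supplied material rather than from memory."""
    context = Path(doc_path).read_text(encoding="utf-8")
    return (
        "Answer using ONLY the reference material below. "
        "If the answer is not in it, say so.\n\n"
        f"--- REFERENCE ---\n{context}\n--- END REFERENCE ---\n\n"
        f"Question: {question}"
    )

# Toy example: ground the model in a short manual excerpt.
Path("manual.txt").write_text(
    "Press and hold POWER for 5 seconds to reset.", encoding="utf-8"
)
prompt = build_grounded_prompt("manual.txt", "How do I reset the device?")
```

Real tools do more (chunking, retrieval, citation links), but the principle is the same: the context travels with the prompt, so the model doesn't have to "remember" the manual.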

You still have to know enough to be able to validate the information it is giving you, but that's the case with any tool. You need to know how to use it.

As for the spyware part, that only matters if you are using the hosted instances the vendors provide. Even for OpenAI stuff you can run models locally with open-source software and maintain control over all the data you feed in. As far as I have found, none of the models you run with Ollama or other local, open-source AI software have been caught pushing data to a remote server.