this post was submitted on 06 Feb 2024
119 points (98.4% liked)

top 13 comments
[–] [email protected] 14 points 8 months ago (2 children)

Companies like OpenAI and Google believe that this technology is so powerful, they can release it to the public only in the form of an online chatbot after spending months applying digital guardrails that prevent it from spewing disinformation, hate speech and other toxic material.

Google Bard is free to use for now, so the danger is not locking tech behind a subscription (though Google will 100% do that eventually).

[–] [email protected] 21 points 8 months ago (1 children)

The only reason Bard is free for now is that Google is building up a base of users who become invested in the service before turning it into a subscription. The business model will clearly be to sell access to the service, and people being able to run their own models is the core danger for them.

[–] [email protected] 4 points 8 months ago (2 children)

I absolutely agree with you. That is the internet platform business model after all.

Still though, OpenAI and Google, I think, have a legitimate argument that LLMs without limitation may be socially harmful.

That doesn't mean a $20 subscription is the one and only means of addressing that problem though.

In other words, I think we can take OpenAI and Google at face value without also saying their business model is the best way to solve the problem.

[–] [email protected] 12 points 8 months ago (1 children)

Personally, I think it's far more socially harmful to allow a handful of megacorps to control this technology going forward.

[–] [email protected] 1 points 8 months ago (1 children)

I agree, but I think that computational power requirements already do that – complex models that do interesting things need racks of specialized GPUs training for days, and they need a lot of data to train on – so it's natural that those who already have the data and the money to process it get there first.

I think their argument is not even about their monopoly, but about shutting down the question of why and how we should trust THEM to police their LLMs before it's even asked. An open system can be investigated: we could find out that they over- or under-regulated some things, or made the model biased, or discover copyrighted material, personal information, gore or CSAM in their training samples, et cetera. They save themselves metric tons of potential lawsuits by making it an industry rule that no one gets to look under the hood of their machines.

[–] [email protected] 1 points 8 months ago

Initial training of the models is expensive, but once trained, a model can be run on a laptop. The cost of initial training can also be addressed by doing it in a distributed fashion. There are open source projects, such as Petals, that let you run distributed models BitTorrent-style. Other approaches like LoRA allow taking existing models and tuning them for a particular task without training from scratch. There's a pretty good article from Steve Yegge on the recent advances in open source models.
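To make the LoRA point concrete: the trick is that instead of updating a full d×d weight matrix W, you train two much smaller matrices A (r×d) and B (d×r) with rank r much less than d, and the adapted weight is W + B·A. This toy sketch (hypothetical numbers, plain Python, not a real training loop) just shows the arithmetic and the parameter savings:

```python
# Toy illustration of the LoRA idea: freeze W, learn a low-rank delta B @ A.
# With d = 4 and r = 1, the adapter has 2*d*r = 8 trainable numbers
# instead of d*d = 16 for a full update.

def matmul(X, Y):
    """Plain nested-list matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, scale=1.0):
    """Return W + scale * (B @ A), the weight actually used at inference."""
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

d, r = 4, 1
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weights
A = [[0.1, 0.2, 0.3, 0.4]]           # r x d, trainable
B = [[1.0], [0.0], [0.0], [0.0]]     # d x r, trainable
W_adapted = lora_effective_weight(W, A, B)
```

In real use (e.g. the Hugging Face PEFT library), only A and B are stored and shared, which is why LoRA adapters are tiny compared to the base model.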

I do agree that avoiding regulation and scrutiny are most definitely additional goals these companies have. They want to keep this tech opaque and frame themselves as responsible guardians of the technology that shouldn't fall into the hands of unwashed masses who can't be trusted with it.

[–] [email protected] 5 points 8 months ago

Still though, OpenAI and Google, I think, have a legitimate argument that LLMs without limitation may be socially harmful.

I actually do agree with this. Because of the massive potential for harm if it goes wrong, AI development is one of very, very few types of technology where it does actually make some sense to try to restrict its development to the big entities, so that you have some vaguely-realistic hope of placing regulation on it and having it developed safely and responsibly. (Whether the big entities will develop it safely and responsibly is a separate, though related, issue.)

That said, good fuckin luck, the genie's pretty much out at this point.

[–] [email protected] 2 points 8 months ago

for now

(Mistral is actually open source (code and models), which is very much not true of GPT or Bard.)

[–] [email protected] 8 points 8 months ago* (last edited 8 months ago) (1 children)

Strange how much Google used to loooove open source when it hurt Microsoft....

Google supporting an open source project ain't a life raft....

It's a warning sign.

[–] [email protected] 1 points 8 months ago
[–] [email protected] 5 points 8 months ago

This is the best summary I could come up with:


Mistral is also among the companies that believe in sharing this technology as open-source software — computer code that can be freely copied, modified and reused — providing outsiders with everything they need to quickly build chatbots of their own.

Rival companies like OpenAI and Google argue that the open-source approach is dangerous and that the raw technology could be used to spread disinformation and other harmful material.

Mistral’s fate has taken on considerable importance in France, where leaders like Bruno Le Maire, the finance minister, have pointed to the company as providing the nation a chance to challenge U.S. tech giants.

Europe has not produced many meaningful tech companies dating back to the dot-com boom and sees artificial intelligence as a field where it can gain ground.

Companies like OpenAI and Google believe that this technology is so powerful, they can release it to the public only in the form of an online chatbot after spending months applying digital guardrails that prevent it from spewing disinformation, hate speech and other toxic material.

Widely sharing the underlying code for A.I., Mr. Midha said, is the safest path because more people can review the technology, find its flaws and work to remove or mitigate them.


The original article contains 745 words, the summary contains 204 words. Saved 73%. I'm a bot and I'm open source!

[–] [email protected] 5 points 8 months ago

CAN'T YOU SEE THAT THEY'RE HURTING THE DEFENSELESS MONEY???

[–] [email protected] 4 points 8 months ago

Site doesn't load without JavaScript