
Definition of can dish it but can't take it

top 17 comments
[-] humanspiral@lemmy.ca 2 points 4 hours ago

An old accusation (1 year = 1,000 years in AI) that isn't relevant to the expected upcoming DeepSeek breakthrough model. Distillation is used to make smaller models, and they are always crap compared to training on open data. Distillation is not a common technique anymore, though it's hard to prove that more tokens wouldn't be a "cheat code."

This is more of a desperation play from the US models, even as YouTube is in full "buy $200/month subscriptions now or die" mode.
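For anyone unfamiliar with the term: distillation trains a small "student" model to mimic a larger "teacher" model's softened output distribution. A toy sketch with made-up logits, not any lab's actual pipeline:

```python
import math

# Toy knowledge-distillation loss: KL divergence between the teacher's and
# student's temperature-softened output distributions. Real pipelines operate
# on full model logits; these three-class logits are just for illustration.

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student); zero only when the distributions match."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]   # confident teacher output (made-up numbers)
student = [2.5, 1.2, 0.4]   # student roughly tracking the teacher
print(kd_loss(teacher, student))  # small positive value; shrinks as student matches teacher
```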

[-] Tm12@lemmy.ca 4 points 7 hours ago

Big “I’m telling Mom” energy.

[-] bennieandthez@lemmygrad.ml 3 points 7 hours ago* (last edited 7 hours ago)

Scrapers getting mad about being scraped will never not be funny to me. DeepSeek's surge is such an awesome story.

[-] skip0110@lemmy.zip 47 points 1 day ago

Classic "pull up the ladder behind you" move.

Kind of hilarious that one component of their complaint is that the DeepSeek model is more energy/computation efficient than theirs. Welcome to the free market?!

[-] HiddenLayer555@lemmy.ml 41 points 1 day ago* (last edited 1 day ago)

OpenAI: "They stole our technology!"

Also OpenAI: "Uh, well, our technology is actually inferior to theirs, but they must have stolen it and made massive sweeping improvements to it that we weren't able to! How dare they!"

[-] p03locke@lemmy.dbzer0.com 7 points 7 hours ago

OpenAI should have been fucking open in the first place. The Chinese are the only ones bothering to open-source their models, and the US corpos' decision to immediately close-source everything is going to fuck them over in the end.

[-] RobotToaster@mander.xyz 37 points 1 day ago
[-] hperrin@lemmy.ca 24 points 1 day ago

How dare you steal our technology! We stole it first!

[-] DrSleepless@lemmy.world 15 points 23 hours ago

Ain’t no copyright in an AI world

[-] ThomasWilliams@lemmy.world 9 points 22 hours ago

But it's OpenAI?

[-] your_good_buddy@lemmy.world 14 points 1 day ago

Oh no!

OpenAI should copyright their work. I'm sure no one would dare steal someone else's hard work for their AI model development!

[-] RindoGang@lemmygrad.ml 9 points 23 hours ago

"We do not steal inferior technology."

[-] finickydesert_1@social.vivaldi.net 7 points 22 hours ago

It steals from everyone, but when someone steals from it, that's too much.

[-] JustJack23@slrpnk.net 9 points 1 day ago

Can dish out what? They've never made a profitable product. If you had a lemonade stand, you'd be more profitable than those fucks.

[-] obbeel@mander.xyz 2 points 23 hours ago

The DeepSeek API isn't free, and to use Qwen you'd have to sign up for Ollama Cloud or something like that, as local deployment is prohibitive.

They're trying to link DeepSeek to the old tale of free-riding companies that supposedly have ties to the original company's product and get a "look the other way" attitude from it (e.g. Meta with their WhatsApp products). This situation is nothing like that.

[-] p03locke@lemmy.dbzer0.com 3 points 5 hours ago

DeepSeek API isn’t free, and to use Qwen you’d have to sign up for Ollama Cloud or something like that

To use Qwen, all you need is a decent video card and a local LLM server like LM Studio.

Local deploying is prohibitive

There's a shitton of LLM models in various sizes to fit your video card's requirements. Don't have the ~256GB of VRAM for the full 8-bit-quantized 235B Qwen3 model? Fine, get the 4-bit-quantized 30B model that fits on a 24GB card. Or a Qwen3 8B Base with DeepSeek-R1 post-training, quantized to 6-bit, that fits on an 8GB card.

There are literally hundreds of variations that people have made to fit whatever size you need... because it's fucking open-source!
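The back-of-the-envelope math behind those size claims: weight memory is roughly parameter count times bits per weight, divided by 8. A rough sketch (weights only; the KV cache and activations add overhead on top, which is why a 24GB card comfortably hosts ~15GB of weights):

```python
# Ballpark VRAM estimate for a quantized LLM's weights. Numbers are rough;
# runtime overhead (KV cache, activations, framework buffers) is not included.

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of the model weights in gigabytes."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# The sizes mentioned above, as ballpark figures:
print(f"235B @ 8-bit: ~{weights_gb(235, 8):.0f} GB")  # ~235 GB -> needs a ~256GB setup
print(f"30B  @ 4-bit: ~{weights_gb(30, 4):.0f} GB")   # ~15 GB -> fits a 24GB card
print(f"8B   @ 6-bit: ~{weights_gb(8, 6):.0f} GB")    # ~6 GB -> squeezes onto an 8GB card
```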

[-] obbeel@mander.xyz 1 points 2 hours ago

Training LLMs is very costly, and open weights aren't open source. For example, there are some LLMs in Brazil, but there's a notable case of a Brazilian student at the University of Düsseldorf who banded together with two other students of non-Brazilian origin to make a Brazilian LLM, a 4B model. They used Google to train it, I think, because training on low VRAM won't work. It took many days and over $3,000. The model is called Tucano.

I know it looks cheap because there are so many of them, but plenty of national initiatives are eager for AI technology. It's costly.

this post was submitted on 13 Feb 2026
106 points (100.0% liked)

Technology