submitted 2 days ago by [email protected] to c/[email protected]
[-] [email protected] 133 points 2 days ago* (last edited 2 days ago)

Nitpick: it was never 'filtered'

LLMs can be trained to refuse excessively (which is kinda stupid and demonstrably makes them dumber), but the correct term is 'biased'. If it were filtered, it would literally return empty responses for anything deemed harmful, or at least noticeably pause while it retries.

They trained it to praise Hitler, intentionally. They didn't remove any guardrails. Not that Musk acolytes would know the difference.

[-] [email protected] 22 points 2 days ago

If you wanted to nitpick honestly, you would say what's actually going on: the data it's trained on comes from the internet, and they were discouraging it from being offensive. The internet is a pretty offensive place when people don't have to censor themselves and can speak without inhibition, like in 4chan or Twitter comments.

Grok losing its guardrails means it will be distilled internet speech, deprived of decency and empathy.

DeepSeek, now that is a filtered LLM.

[-] [email protected] 23 points 2 days ago* (last edited 2 days ago)

> DeepSeek, now that is a filtered LLM.

The web version has a strict filter that cuts it off. Not sure about API access, but raw DeepSeek 671B is actually pretty open, especially with the right prompting.

There are also finetunes that specifically remove China-specific refusals. Note that Microsoft actually added safety training to "improve its risk profile":

https://huggingface.co/microsoft/MAI-DS-R1

https://huggingface.co/perplexity-ai/r1-1776

That's the virtue of an open-weights LLM: over-filtering isn't a problem, since anyone can tweak it to do whatever they want.


> Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.

Instruct LLMs aren't trained on raw data.

It wouldn't be talking like this if it were just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry-picked "anti-woke" data to placate Musk real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific overused phrases, common obsessions, going off-topic, and so on.


...Not that I don't agree with you in principle. Twitter is a terrible source for data, heh.

[-] [email protected] 2 points 2 days ago

That model is over a terabyte; I don't know why I thought it was lightweight. Not that any reporting on machine learning has been particularly good, but this isn't what I expected at all.

What can even run it?

[-] [email protected] 4 points 1 day ago* (last edited 1 day ago)

A lot, but less than you'd think! Basically an RTX 3090/Threadripper system with a lot of RAM (192GB?)

With this framework, specifically: https://github.com/ikawrakow/ik_llama.cpp?tab=readme-ov-file

The “dense” part of the model can stay on the GPU while the experts can be offloaded to the CPU, and the whole thing can be quantized to ~3 bits average, instead of 8 bits like the full model.
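To put rough numbers on that quantization (napkin math, assuming ~671B total parameters; these are illustrative sizes, not exact GGUF file sizes):

```python
# Rough size of a 671B-parameter model at different average bit widths
# (illustrative napkin math, not exact GGUF file sizes).
params = 671e9  # total parameter count of DeepSeek R1/V3

def size_gb(bits_per_weight: float) -> float:
    return params * bits_per_weight / 8 / 1e9

print(f"8-bit:  ~{size_gb(8):.0f} GB")   # roughly the full FP8 release
print(f"~3-bit: ~{size_gb(3):.0f} GB")   # small enough to split across RAM + VRAM
```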


That’s just a hack for personal use, though. The intended way to run it is on a couple of H100 boxes, serving many, many users at once. LLMs run more efficiently in parallel: e.g. generating tokens for 4 users isn’t much slower than generating them for 2, and DeepSeek explicitly architected it to be really fast at scale. It is “lightweight” in a sense.
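The "lightweight in a sense" part comes from the mixture-of-experts routing: only ~37B of the 671B parameters are activated for any given token, so the per-token compute is closer to a mid-size dense model.

```python
# Why a 671B MoE can still be fast per token: only a fraction of the
# experts fire for each token (DeepSeek V3/R1: ~37B of 671B activated).
total_params = 671e9
active_params = 37e9

print(f"Active fraction per token: {active_params / total_params:.1%}")
```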


…But if you have a “sane” system, it’s indeed a bit large. The best I can run on my 24GB VRAM system are 32B–49B dense models (like Qwen 3 or Nemotron), or a 70B mixture of experts (like the new Hunyuan 70B).
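More napkin math on why ~32B–49B dense is about the ceiling for 24GB of VRAM (the bits-per-weight figures are typical quant levels I'm assuming, not exact file sizes, and you still need headroom for the KV cache):

```python
# Weight memory at typical quantization levels for a 24GB card
# (bpw values are my assumed quant settings, not exact file sizes).
def size_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

print(f"32B @ ~4.5 bpw: ~{size_gb(32, 4.5):.0f} GB")  # comfortable, room for KV cache
print(f"49B @ ~3.5 bpw: ~{size_gb(49, 3.5):.0f} GB")  # a tight squeeze on 24GB
```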

[-] [email protected] 1 points 1 day ago

Data centers or a dude with a couple gpus and time on his hands?

this post was submitted on 08 Jul 2025
732 points (98.3% liked)
