submitted 2 weeks ago by [email protected] to c/[email protected]

I'm curious what the consensus is here on which models people use for general-purpose stuff (coding assist, general experimentation, etc.)

What do you consider the "best" model under ~30B parameters?

all 18 comments
[-] [email protected] 18 points 2 weeks ago

In my opinion, Qwen3-30B-A3B-2507 would be the best here. The thinking version is likely best for most things, as long as you don't mind a slight speed penalty in exchange for more accuracy. I use the quantized IQ4_XS models from Bartowski or Unsloth on HuggingFace.

I’ve seen the new OSS-20B models from OpenAI ranked well in benchmarks but I have not liked the output at all. Typically seems lazy and not very comprehensive. And makes obvious errors.

If you want even smaller and faster, the DeepSeek R1 0528 distill onto Qwen3 8B is great for its size (especially if you're trying to free up some VRAM to use larger context lengths).
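
If you haven't played with these quants before, here's a minimal sketch of grabbing one and running it with llama-cpp-python. The repo and file names below are illustrative, not exact; check Bartowski's or Unsloth's HuggingFace pages for the real ones:

```python
# Sketch: fetch a GGUF quant from HuggingFace and chat with it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo and filename are illustrative -- check the uploader's page for the real ones.
model_path = hf_hub_download(
    repo_id="bartowski/Qwen_Qwen3-30B-A3B-Instruct-2507-GGUF",
    filename="Qwen_Qwen3-30B-A3B-Instruct-2507-IQ4_XS.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,       # context window; raise it if you have spare VRAM
    n_gpu_layers=-1,  # offload every layer to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a two-line summary of GQA."}]
)
print(out["choices"][0]["message"]["content"])
```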

[-] [email protected] 3 points 2 weeks ago

That's what I'm using, and it's pretty nice. Thanks for your input!

[-] [email protected] 7 points 2 weeks ago* (last edited 2 weeks ago)

Qwen 2.5 VL and Coder. I have the VL model doing image captions for LoRA training right now. The 14B is okay for basic code. A Q6_K_L GGUF quant of the 32B Qwen 2.5 Coder model runs on 16GB, but at a third of the speed of the 14B in bitsandbytes 4-bit. The latter is reasonably fast for a couple of layers of agentic stuff in emacs with gptel, and it lands thinking or function calls out of a llama.cpp server better than 50% of the time.
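
For the curious, here's roughly what a function-call round trip against llama.cpp's OpenAI-compatible server looks like. The tool definition is a made-up example, and recent llama-server builds want the --jinja flag for tool-call parsing, so treat this as a sketch rather than gospel:

```python
# Sketch: one tool-call round trip against llama-server's OpenAI-compatible API.
# The tool definition is a made-up example; the server listens on :8080 by default.
import json
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_file_summary",  # hypothetical tool
        "description": "Summarize a file on disk",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Summarize ./notes.txt"}],
        "tools": tools,
    },
    timeout=120,
)
msg = resp.json()["choices"][0]["message"]
# On a good run the model answers with a tool_calls entry instead of prose.
print(json.dumps(msg.get("tool_calls") or msg.get("content"), indent=2))
```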

I still haven't tried the new 20B from OpenAI yet.

[-] [email protected] 4 points 2 weeks ago* (last edited 2 weeks ago)

I really liked Mistral-Nemo-Instruct for its all-round capabilities, but it's old and hardly the best any more. Still, I feel lots of newer models are sycophants, tuned more for question answering and assistant stuff, and their ability to write long-form prose or role-play as a podcast host hasn't really improved substantially. These days I switch models: something more creative when I want that, or a dedicated coding model when I want autocomplete. Though to be honest, coding isn't on my list of requirements any more. I've tried AI coding and it's not really helping with what I do; I regularly spend an extra 30-100% of my time if I do it with AI, and that's with the big commercial services like AI Studio or ChatGPT.

[-] [email protected] 2 points 2 weeks ago

Yeah, a true Nemo successor is long overdue

[-] [email protected] 3 points 2 weeks ago

I'm a big fan of NousResearch; their DeepHermes release was awesome, and now I'm trying out Hermes 4. On my 8GB 1070 Ti I was able to fully offload a medium quant of Hermes 4 14B with an okay amount of context.

I'm a big fan of the hybrid reasoning models; I like being able to turn thinking on or off depending on the scenario.

I also had a vision-model document scanner + TTS pipeline going, built on a finetune of Qwen 2.5 VL and OuteTTS.

If you care more about character emulation for writing and creativity, then Mistral 2407 and Mistral Nemo are other models to check out.

[-] [email protected] 3 points 2 weeks ago

I use Qwen Coder 30B, am testing Venice 24B, and am also going to play with Qwen embedding 8B and Qwen reranker(?) 8B. All at Q4.

They all run pretty well on the new MacBook I got for work. My Linux home desktop has far more modest capabilities, and I generally run 7B models, though gpt-oss-20B-Q4 runs decently. It's okay for a local model.

None of them really blow me away, though Cline running in VSCode with Qwen 30B is okay for certain tasks. Asking it to strip all the irrelevant HTML out of a table and format it as Markdown or AsciiDoc had it thinking for about 10 minutes before it asked which one I wanted. My fault, I should've picked one: I wanted Markdown but thought adoc would reproduce it with better fidelity (the table had embedded code blocks), so I left it open to interpretation.

By comparison, ChatGPT ingested it and popped an answer back out in seconds that was wrong. So Idk, nothing ventured, nothing gained. Emphasis on the latter.

[-] [email protected] 2 points 1 week ago

The Qwen3-30B-A3B-2507 family is an absolute beast. The reasoning models are seriously chatty in their chain of thought, but the results speak for themselves. I'm running a Q4 on a 5090, and with a Q8 KV quant I can fit a 60k-token context entirely in VRAM, which gets me up to 200 tokens per second.
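
Rough back-of-envelope on why that fits in 32GB; the architecture numbers here are my assumptions (48 layers, 4 KV heads via GQA, head dim 128), so double-check against the model config:

```python
# Back-of-envelope: Q4 Qwen3-30B-A3B weights plus a q8_0 KV cache at 60k context.
# Architecture numbers below are assumptions -- verify against the model card.
params      = 30.5e9   # total parameters (MoE, all experts resident in VRAM)
bpw         = 4.25     # ~bits per weight for an IQ4_XS-class quant
layers      = 48
kv_heads    = 4        # grouped-query attention
head_dim    = 128
ctx         = 60_000
kv_bytes_el = 1.0625   # q8_0: ~8.5 bits per element including scales

weights_gb = params * bpw / 8 / 1e9
# K and V each store layers * kv_heads * head_dim elements per token.
kv_gb = 2 * layers * kv_heads * head_dim * kv_bytes_el * ctx / 1e9

print(f"weights ~{weights_gb:.1f} GB, KV cache ~{kv_gb:.1f} GB, "
      f"total ~{weights_gb + kv_gb:.1f} GB")  # ~16.2 + ~3.1, roughly 19 GB
```

That leaves plenty of headroom on a 32GB card for the compute buffers and the OS.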

[-] [email protected] 0 points 1 week ago

Not sure I want to name any names... 😂

[-] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago)

Unlike most of you reading this, I don't allow a corporate executive/billionaire or a distant nation-state to tell me what I'm permitted to say or what my model is allowed to output, so I use an uncensored general model from here (first uncheck the "proprietary model" box).

[-] [email protected] 3 points 1 week ago

How do you remove all the propaganda they are already trained on? You reject DeepSeek, but you're just allowing yourself to be manipulated by a throng of old propaganda/censorship from the normal internet: garbage, manipulative information that is stored in the weights of your 'uncensored' model. 'Freeing' a model to say "shit" is not the same as an uncensored model that you can trust. I think we need a dataset cleansed of the current popular ideology and of all the propaganda against 'evil nation-states' that have simply rejected western/US dominance (giving the middle finger to western oligarchs).

[-] [email protected] 3 points 2 weeks ago

That is awesome, thank you for that link!

[-] [email protected] 1 points 2 weeks ago

This leaderboard is a gem! This should be a separate post, thank you!

[-] [email protected] 2 points 2 weeks ago

The votes here seem to disagree with you

[-] [email protected] 1 points 2 weeks ago

Oh no, I must be wrong then :(
