
cross-posted from: https://lemmy.ml/post/45766694

Hey :) For a while now I've been using gpt-oss-20b on my home lab for lightweight coding tasks and some automation. I'm not up to date with the current self-hosted LLMs, and since the model I'm using was released at the beginning of August 2025 (from an LLM development perspective, that feels like an eternity to me), I wanted to tap the collective wisdom of Lemmy and maybe replace my model with something better out there.

Edit:

Specs:

GPU: RTX 3060 (12GB vRAM)

RAM: 64 GB

gpt-oss-20b does not fit into VRAM completely, but with partial offloading it is reasonably fast (enough for me)
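Partial offloading like this can be reasoned about with simple arithmetic. A minimal sketch of estimating how many layers fit on a 12GB card (every figure below is an illustrative assumption, not an official gpt-oss-20b spec):

```python
# Rough sketch: estimate how many transformer layers fit in VRAM when
# partially offloading a quantized model, llama.cpp "-ngl"-style.
# All numbers here are illustrative assumptions, not official specs.

def layers_that_fit(vram_gb, model_gb, n_layers, overhead_gb=1.5):
    """Estimate how many of n_layers fit in vram_gb, reserving
    overhead_gb for KV cache, CUDA context, etc."""
    per_layer_gb = model_gb / n_layers  # assume layers are roughly equal in size
    usable = max(vram_gb - overhead_gb, 0)
    return min(n_layers, int(usable / per_layer_gb))

# Hypothetical figures: a ~12 GB quantized model with 24 layers
# on an RTX 3060 (12 GB VRAM).
print(layers_that_fit(vram_gb=12, model_gb=12, n_layers=24))  # 21
```

The leftover layers run on the CPU from system RAM, which is why 64 GB of RAM makes this workable at acceptable speed.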

[-] Eyekaytee@aussie.zone 6 points 1 month ago

I’m using was released at the beginning of August 2025 (From an LLM development perspective, it feels like an eternity to me)

I mean yeah, more than 6 months in AI world is an eternity 🤣

the big ones are gemma 4 and qwen 3.5

I'm using Gemma 4 and it works really really well, it's sad to me that I'm using big tech's model but it's just so far ahead of mistral and others that I have no choice

Qwen is really good with thinking turned off, turned on it has a massive overthinking problem, like you say "hi" and it'll think for 3 minutes on how best to reply

Still waiting for DeepSeek to come out with v4 at this stage, but Gemma 4 is my current SOTA self-hosted model

[-] HelloRoot@lemy.lol 3 points 1 month ago* (last edited 1 month ago)

I think people are sleeping on GLM.

Tried it out recently and I like the results a lot so far.

GLM 4.5 and 4.7 were good already, and now they've released 5 and 5.1 https://github.com/zai-org/GLM-5

It says it's for vibecoding, but I use it like I would use ChatGPT and it gives usable answers to all of my varied questions. (Of course you always have to check for correctness, even if it's correct most of the time, which I do because I'm paranoid.)

I guess the only downside is how frigging huge it is.

[-] Eyekaytee@aussie.zone 0 points 1 month ago* (last edited 1 month ago)

I guess the only downside is how frigging huge it is.

Yep :D

I saw 5.1 came out however it required a data centre to run :X

Hoping they release smaller models so I can see how they do

[-] sobchak@programming.dev 5 points 1 month ago* (last edited 1 month ago)

I tried some new ones recently (though I have a 24GB GPU). Qwen3.5 9B is pretty impressive at agentic stuff like Claude Code for such a small model. (I can run the Opus-distilled model quantized to 6-bit with the full 256k context and no CPU offloading.) Gemma4 26B is good if I don't need agentic stuff or a lot of context (it sucks for agentic stuff). You can probably run the smaller versions of these, or run them with less context.
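Context length drives VRAM use almost as much as the weights do, which is why "full 256k context" is worth calling out. A rough sketch of the standard KV-cache size formula (the layer/head numbers below are made up for illustration, not any real model's config):

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """Approximate KV cache size in GiB: 2 (K and V) * layers * KV heads
    * head dim * context length * bytes per element (2 for fp16)."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1024**3

# Hypothetical small model: 32 layers, 4 KV heads (GQA), head dim 128,
# at a 32k context in fp16.
print(kv_cache_gib(32, 4, 128, 32768))  # 2.0 GiB
```

Grouped-query attention (few KV heads) and KV-cache quantization are what make very long contexts fit next to the weights on a single consumer GPU.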

[-] panda_abyss@lemmy.ca 2 points 1 month ago

Definitely give Gemma4 26ba4b a try

It’s MoE, so you should be able to get the same offload, and a4b can be plenty fast.

It has decent world knowledge for the size, and from what I can tell it's okay at small-scale coding in common languages like Python.
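The "a4b" part of the name usually means ~4B active parameters per token out of the total. A toy sketch of why that makes an MoE fast at decode (the parameter counts here are read off the model name, not confirmed specs):

```python
# MoE decode speed is dominated by the *active* parameters per token,
# not the total parameter count: a 26B-total / 4B-active model touches
# roughly as many weights per token as a 4B dense model would.
total_b, active_b = 26, 4
print(total_b / active_b)  # 6.5x fewer weights read per token than dense 26B
```

The catch is that all 26B parameters still have to sit somewhere (VRAM or RAM), so memory needs stay high even though per-token compute is small.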

[-] p4rzivalrp2@piefed.social 1 points 1 month ago

I've been using gemma4:26b, and it's pretty good, although a bit slow even on a 3090, and I don't know how the smaller versions compare.

this post was submitted on 11 Apr 2026
21 points (86.2% liked)

LocalLLaMA
