AI chatbots could be making you stupider
(www.bbc.com)
I recommend Qwen3.6, either the 27B dense or the 35B MoE model. Both are outstanding as local models.
What hardware are you using?
I am using Qwen3.5 9B, and it is barely working.
I have a Radeon RX 7800 XT.
Qwen3.5-9B is blazingly fast on it. However, while it's impressive for its size, it has its limitations: complex tasks with several steps are too much for it.
So now I run the 3.6-35B model with llama.cpp. It's too big for my VRAM, so I had to split it: everything that doesn't fit on the graphics card runs in normal RAM. That slows everything down, but with the right flags I get a bit over 20 tokens/s.
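For anyone wanting to try the same split, a sketch of what that invocation can look like with llama.cpp's CLI. The model filename, layer count, and thread count here are illustrative guesses, not the exact values from my setup; tune `-ngl` until VRAM is as full as it gets without OOM-ing.

```shell
# -ngl: number of layers offloaded to the GPU; whatever doesn't fit stays in RAM
# -c:   context size in tokens
# -t:   CPU threads used for the layers left in RAM
./llama-cli -m ./Qwen3.6-35B-Q4_K_M.gguf -ngl 28 -c 8192 -t 8 -p "Hello"
```

Lowering `-ngl` by a few layers is the usual fix if the card runs out of memory mid-load.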
If you have problems with speed and you're using Ollama, I would replace it with something faster like llama.cpp.
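Switching doesn't mean giving up the HTTP workflow: llama.cpp ships a server binary with an OpenAI-compatible API, so clients that talked to an Ollama endpoint can be pointed at it instead. A minimal sketch (model path and port are placeholders):

```shell
# Serve a local GGUF model over an OpenAI-compatible HTTP API
./llama-server -m ./model.gguf -ngl 28 --port 8080
```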