I have a Radeon RX 7800 XT.
Qwen 3.5-9b is blazingly fast on it. However, while it's impressive for its size, it has its limitations. Complex tasks with several steps are too much for it.
So now I run the 3.6-35B model with llama.cpp. It's too big for my VRAM, so I had to split it: everything that doesn't fit on the graphics card runs in normal system RAM. That slows things down, but with the right flags I still get a bit over 20 tokens/s.
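Roughly like this (not my exact command; the model path and numbers are placeholders you'd tune for your own setup):

```
# -ngl sets how many layers are offloaded to the GPU; the rest
# stay in system RAM. Raise it until you run out of VRAM, then back off.
./llama-cli \
  -m ./models/your-model-q4_k_m.gguf \
  -ngl 40 \
  -c 8192 \
  -t 8
```

The `-ngl` value is the knob that matters most for the VRAM/RAM split; `-c` is the context size and `-t` the CPU thread count for the layers left in RAM.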
If you have problems with speed and you're using ollama, I'd replace it with something faster like llama.cpp.
Sure! How much experience do you have with LLMs?