This is like a perfect model for a Strix Halo mini PC.
Man, I really want one of those Framework Desktops now...
Haven't heard of this one before now. It will be interesting to see how it actually performs. I didn't see what license the models will be released under; hopefully it's a more permissive one like Apache. Their marketing should try cooking up a catchy name that's easy to remember. They seem to be a native Western-language company, so hopefully it also doesn't have too many random Chinese characters like Qwen does sometimes.
I've never really gotten into MoE models; people say you can get great performance gains with a clever partial-offloading strategy between the various experts. Maybe one of these days!
Yes, with llama.cpp it's easy to put just the experts on the CPU. Since only some of the experts are activated per token, moving those GB to RAM slows things down far less than offloading parts of the model that run on every pass, and those always-used parts get to stay on the GPU. I was able to get Llama 4 Scout running at around 15 T/s with 96 GB RAM and 24 GB VRAM at a large context. The whole GGUF was about 80 GB.
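For anyone wanting to try this, the usual approach is llama.cpp's tensor-override flag, which matches tensor names with a regex and pins them to a backend. A rough sketch (the model path, context size, and regex here are placeholders; flag support varies by build, so check `llama-server --help`):

```shell
# Offload everything to GPU (-ngl 99), then override the MoE expert
# FFN tensors back onto the CPU so only the dense/shared weights and
# attention stay in VRAM. The regex matches the per-expert FFN tensors.
llama-server \
  -m ./Llama-4-Scout-Q4_K_M.gguf \
  -ngl 99 \
  -ot ".ffn_.*_exps.*=CPU" \
  -c 32768
```

Newer builds also have a convenience flag along the lines of `--n-cpu-moe N` that keeps the expert tensors of the first N layers on the CPU without writing a regex, if your version supports it.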
Also, they actually are a Chinese company. I'm pretty sure it's the company that makes RedNote (the Chinese TikTok), and that's why they had access to so much non-synthetic data. I tried the demo on Hugging Face and never got any Chinese characters.
I also really enjoyed its prose. I think this will be a winner for creative writing.
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text-prediction algorithms, i.e. no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.