submitted 4 days ago by mothasa@x69.org to c/technology@beehaw.org

Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200's 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
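(A quick sanity check on the headline figures, using only the two numbers quoted above:)

```python
# Sanity check on the post's headline numbers (both figures quoted above).
hc1_tps = 17_000   # Taalas HC1, Llama 3.1 8B, tokens/sec
h200_tps = 233     # Nvidia H200, tokens/sec

print(hc1_tps / h200_tps)  # ~72.96, matching the "73x faster" claim
```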

[-] dieICEdie@lemmy.org 7 points 4 days ago

This would be great if you could have a machine that let you swap chips… and they only charged < 50 USD for each chip.

[-] BarbecueCowboy@lemmy.dbzer0.com 4 points 4 days ago

Would be great, but it feels unlikely; most of the gains they're making rely on the lack of versatility.

[-] boonhet@sopuli.xyz 2 points 4 days ago

Can't be that cheap, unfortunately, if they maxed out the die area. Though it is an older node, so maybe not as expensive as flagship GPU chips and shit.

[-] tetrislife@leminal.space 1 points 4 days ago
[-] dieICEdie@lemmy.org 2 points 4 days ago

That’s all technology though, sadly.

This one feels shorter-lived than the average chip, tho.

With the hardwiring and all.

[-] MagicShel@lemmy.zip 3 points 4 days ago

The thing that differentiates ChatGPT and Claude is likely more the RAG pipeline that backs them and feeds them context. The models really aren't getting better; we're just getting better at using them to break tasks down into units so small that AI can figure them out. I'd bet a GPT 5 model or a Claude Opus 4.6 model would last 5, maybe 10 years before you really start to notice its capabilities falling behind. I'll bet you could use GPT 4o for 5-10 years and it would be fine.
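(For illustration, a minimal sketch of the kind of RAG loop described above; every name here is a hypothetical placeholder, not any vendor's real API:)

```python
# Minimal RAG sketch: the base model stays fixed, while the pipeline
# around it retrieves context and scopes the task down. All names here
# are hypothetical placeholders, not a real vendor API.

def search_index(question: str, top_k: int = 3) -> list[str]:
    """Stand-in retriever; a real pipeline would query a vector store."""
    return ["doc snippet 1", "doc snippet 2", "doc snippet 3"][:top_k]

def call_model(prompt: str) -> str:
    """Stand-in for the fixed base model (e.g. one hardwired into a chip)."""
    return f"(model output for a {len(prompt)}-char prompt)"

def answer_with_rag(question: str) -> str:
    # 1. Retrieve the few documents most relevant to the question.
    context = "\n\n".join(search_index(question))
    # 2. Scope the task into one small, well-defined unit by stuffing
    #    the retrieved context into the prompt.
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    # 3. Generation itself never changes; only steps 1-2 improve over time.
    return call_model(prompt)

print(answer_with_rag("What differentiates ChatGPT from Claude?"))
```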

[-] dieICEdie@lemmy.org 1 points 4 days ago

But if they could make it so the chip is the only thing that goes obsolete, it could be recycled pretty easily, or resold.

Then it would stop being 73 times faster than NVIDIA.

[-] dieICEdie@lemmy.org 2 points 4 days ago

If you add levels of indirection, extra transistors and such, it would be surprising if it managed to maintain the same level of performance, especially since this design seems to rely on hardwiring to achieve its speed...

[-] dieICEdie@lemmy.org 1 points 4 days ago

Pretty sure the advantage is the AI directly on the chip.

[-] FurryMemesAccount@lemmy.blahaj.zone 1 points 4 days ago* (last edited 4 days ago)

Now it's your proposal's turn not to make any sense. This is an article about a chip with a hardwired model being super fast.

Of course the hardwiring is inflexible, and much, much faster.

[-] dieICEdie@lemmy.org 1 points 4 days ago

I just think you want to argue
