AI/ML research has long been notorious for choosing bullshit benchmarks that make your approach look good, and then nobody ever uses it because it's not actually that good in practice.
It's totally possible that there will be legitimate NLP use cases where this approach makes sense, but that is almost entirely separate from the current LLM craze. Also, transformer-based LLMs pretty much entirely supplanted recurrent networks as early as 2018 in basically every NLP task, and this approach swaps self-attention back out for a recurrent-style token mixer. So even if the semiconductor industry massively reoriented toward producing chips that support "MatMul-free" models like this one just to get the energy reduction, the model outputs would be even more garbage than they already are.
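To make concrete what "MatMul-free" is getting at: the core trick in this line of work (BitNet and this paper included) is constraining weights to {-1, 0, +1}, so a matrix multiply degenerates into additions and subtractions. A rough numpy sketch of just that piece; the function name and shapes are illustrative, not taken from the paper:

```python
import numpy as np

def ternary_matmul_free(x, W_ternary):
    """'MatMul-free' linear layer: with weights restricted to {-1, 0, +1},
    every multiply collapses to an add, a subtract, or a skip."""
    out = np.zeros((x.shape[0], W_ternary.shape[1]), dtype=x.dtype)
    for j in range(W_ternary.shape[1]):
        plus = x[:, W_ternary[:, j] == 1].sum(axis=1)    # add where weight is +1
        minus = x[:, W_ternary[:, j] == -1].sum(axis=1)  # subtract where weight is -1
        out[:, j] = plus - minus                         # zero weights are skipped
    return out

# Sanity check against an ordinary matmul with the same ternary weights
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype(np.float32)
W = rng.integers(-1, 2, size=(8, 3)).astype(np.float32)
assert np.allclose(ternary_matmul_free(x, W), x @ W, atol=1e-5)
```

The hardware argument is that accumulation is much cheaper than floating-point multiply-accumulate, but you only cash that in on hardware built for it, which is exactly the reorientation being described above.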
Sure, that's why I said other people will benchmark it as well at some point and we'll know definitively. Based on my reading, the idea here is to combine both approaches as an optimization technique. Using GPT as a hammer for every problem has been the hype phase. Now, people are starting to realize that other approaches have value too, and it's likely that combining different approaches will in fact produce interesting results.
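For what it's worth, "combining approaches" could look as simple as mixing block types. Below is a purely hypothetical PyTorch sketch (none of these class names come from the paper) that pairs a full-precision attention sub-layer with a ternary-weight MLP sub-layer; a real version would need a straight-through estimator to train the quantized weights:

```python
import torch
import torch.nn as nn

class TernaryLinear(nn.Module):
    """Linear layer whose weights are quantized to {-1, 0, +1} on the forward
    pass -- a stand-in for cheap 'MatMul-free' style layers (no STE, so this
    is illustrative rather than trainable as written)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim_out, dim_in) * 0.02)

    def forward(self, x):
        # Scale by the mean magnitude, round, then take the sign -> {-1, 0, +1}
        w_ternary = torch.sign(torch.round(self.weight / self.weight.abs().mean()))
        return nn.functional.linear(x, w_ternary)

class HybridBlock(nn.Module):
    """Full-precision attention followed by a ternary-weight MLP."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(TernaryLinear(dim, 4 * dim), nn.GELU(),
                                 TernaryLinear(4 * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        return self.norm2(x + self.mlp(x))

# Toy usage: batch of 2 sequences, length 16, model width 64
x = torch.randn(2, 16, 64)
print(HybridBlock(64)(x).shape)  # torch.Size([2, 16, 64])
```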