this post was submitted on 11 Jul 2023
86 points (93.9% liked)
Technology
And I disagree with it too. It's not about how good the models are in technical terms; the corporate juggernauts are only just ahead of OSS on that front. The moat is server capacity and the money to acquire it.
An average internet user will not install the Vicunas and the Pygmalions and the LLaMAs of the LLM space. Why?
For one, the field is too complicated to get into; more importantly, a lot of people simply can't run these models.
Even the lowest-complexity models require a PC with a graphics card carrying a fairly beefy amount of VRAM (6 GB at the bare minimum), and the ones that can go toe-to-toe with ChatGPT are barely runnable on even the most monstrous of cards. No one is gonna shell out 1500 bucks for a 4090 just so they can run Vicuna-30B.
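As a rough sanity check on those numbers: the VRAM a model needs is approximately its parameter count times the bits per weight, plus some headroom for activations and the KV cache. A back-of-envelope sketch (the 1.2x overhead factor is an assumption, not a measured value):

```python
def vram_estimate_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights (params * bits / 8) plus an assumed
    ~20% overhead for activations and the KV cache."""
    return params_billion * bits_per_weight / 8 * overhead

# A 30B model quantized to 4 bits needs roughly 18 GB of VRAM --
# out of reach for nearly every consumer card short of a 4090.
print(vram_estimate_gb(30, 4))  # 18.0

# A 7B model at 4 bits squeaks in at about 4.2 GB, which is why the
# small models are the only ones most people can actually run.
print(round(vram_estimate_gb(7, 4), 1))  # 4.2
```

Even generously quantized, the models that actually compete with ChatGPT blow well past what ordinary hardware has.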
They are gonna use online, free-to-use, no-BS, no-technical-jargon LLM services. All the major companies know that.
ChatGPT and services like it have set the expectation: just type it in, get an amazing response in seconds, no matter where you are.
OSS can't beat that, at least not right now. And until it can, the 99% will be in Silicon Valley's jaws.
Have you looked into AIHorde?
It's clearly harder to use than the commercial alternatives, but at first glance it doesn't seem too bad.
It looks about as complicated as setting up any of the other volunteer compute projects (like SETI@home).
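For the curious, submitting a job to the horde looks something like the following. This is a hypothetical sketch only: the endpoint path, payload fields, and anonymous API key are assumptions based on the horde's public docs, so check https://aihorde.net/api before relying on any of it.

```python
import json
import urllib.request

API_BASE = "https://aihorde.net/api/v2"  # assumed base URL
ANON_KEY = "0000000000"  # anonymous key; registered keys get better queue priority

def build_request(prompt: str, max_length: int = 200) -> dict:
    """Assemble an async text-generation payload (field names are assumptions)."""
    return {
        "prompt": prompt,
        "params": {"max_length": max_length},
    }

def submit(prompt: str) -> str:
    """Submit a job to volunteer workers and return its id for polling
    (untested sketch; the real API may differ)."""
    req = urllib.request.Request(
        f"{API_BASE}/generate/text/async",
        data=json.dumps(build_request(prompt)).encode(),
        headers={"apikey": ANON_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]
```

The appeal is the same as SETI@home's: the heavy lifting happens on volunteers' GPUs, so the person asking doesn't need a beefy card at all.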
I didn't know about it, but it looks really neat. Gonna give it a spin to help me summarize documentation.