this post was submitted on 25 May 2025
249 points (97.3% liked)
Technology
That's awesome! Thank you!
I absolutely do. The potential showstopper for me right now is that I don't have a discrete GPU, which makes complex LLMs hard to run. Since I can't push the processing off the CPU, I'm looking at around 2-5 seconds per token; it's rough. But I like your workflow a lot, and I'm going to try to get something similar going on my incredibly old hardware to see whether CPU-only processing is actually feasible (though I'm not super hopeful).
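To put "it's rough" in perspective, a quick back-of-the-envelope calculation (assuming a hypothetical 150-token reply, which is on the short side for an LLM response):

```python
# Rough estimate: at 2-5 seconds per token on CPU-only hardware,
# how long would a modest 150-token reply take?
tokens = 150
low = 2 * tokens   # best case, in seconds
high = 5 * tokens  # worst case, in seconds
print(f"{low / 60:.0f} to {high / 60:.1f} minutes per reply")
```

So even a short answer lands somewhere between 5 and 12.5 minutes, which is why heavily quantized small models are usually the only realistic option on old CPU-only hardware.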
And yes, I'm also aware of the hallucinations and such that come with the technology. But honestly, for this non-critical use case, I don't really care.
I only recently discovered that my installation of Whisper was completely unaware that I had a GPU and was running entirely on my CPU. So even if you can't get a good LLM running locally, you might still be able to get everything turned into text transcripts for eventual future processing. :)
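That failure mode is easy to check for. Here's a minimal sketch (assuming the usual PyTorch backend that openai-whisper uses) for confirming which device Whisper would actually run on:

```python
# Minimal sketch: check whether PyTorch (Whisper's backend) can see a GPU.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    # torch isn't installed at all, so Whisper wouldn't run either
    device = "cpu"

print(f"Whisper would run on: {device}")

# If this prints "cpu" on a machine that has an NVIDIA GPU, the CPU-only
# build of torch is probably installed. You can also pin the device
# explicitly when loading a model, e.g.:
#   model = whisper.load_model("base", device="cuda")
```

If the CUDA check fails on a GPU machine, reinstalling torch with the CUDA-enabled wheel is usually the fix.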
Nicceeeee! Thank you!