Lots of places do a better job providing DRM-free or DRM-lite ebooks (Chicago Press only ties your name to the files, so you'd have to doxx yourself to share them, but you can share them), but the sheer library of self-published books on Amazon is hard to match.
There's an author I've become good friends with whom I pay (in coffee) for his books, because I disagree with giving Amazon a cent. But as he noted, Amazon is just where the masses still are, and that momentum is hard to break.
For simple productivity like Copilot or text gen like ChatGPT? It absolutely is doable on a local GPU.
Source: I do it.
Sure, I can't run automated simulations to find new drugs or do protein sequencing or whatever. But it helps me code. It helps me digest software manuals. That's honestly all I want.
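If anyone wants to try it, here's a minimal sketch of the kind of setup I mean, using llama-cpp-python with a quantized GGUF model. The model path and settings here are just examples, not a recommendation; swap in whatever fits your VRAM:

    # pip install llama-cpp-python (built with CUDA support for GPU offload)
    from llama_cpp import Llama

    # Example model file (hypothetical path); any quantized GGUF that fits in 12 GB works.
    llm = Llama(
        model_path="./models/coder-7b-instruct-q4_k_m.gguf",
        n_gpu_layers=-1,  # offload all layers to the GPU
        n_ctx=8192,       # context window; raise it if you have VRAM to spare
    )

    # The kind of thing I actually use it for: digesting manuals.
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Explain what `rsync -avz --delete` does."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])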
Also, aren't massive compute projects like the @home projects (Folding@home and the like) a good thing?
Local LLMs run fine on a five-year-old GPU, a 3060 with 12 GB of VRAM; I'm getting performance on par with cloud-hosted models. I'm only upgrading to a 5060 Ti because I want to play with image gen.
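For the image gen side, the local equivalent is about this simple. A sketch using the diffusers library; the model id is just an example, and fp16 weights are what keep it inside consumer VRAM:

    # pip install diffusers transformers accelerate torch
    import torch
    from diffusers import StableDiffusionPipeline

    # fp16 roughly halves VRAM use vs fp32, which is why this fits on a 12-16 GB card.
    pipe = StableDiffusionPipeline.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5",  # example model; others work too
        torch_dtype=torch.float16,
    )
    pipe.to("cuda")

    image = pipe("a cozy reading nook stacked with DRM-free ebooks").images[0]
    image.save("out.png")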