vivendi

joined 3 weeks ago
[–] vivendi@programming.dev 4 points 1 day ago (1 children)

Between Lapse and Zed, I also decided on Lapse because it feels much more community-oriented than Zed; maybe you should look into that.

[–] vivendi@programming.dev 2 points 1 day ago

Technically this shit isn't even free (libre); at least with corpo projects we can always fork them.

[–] vivendi@programming.dev -2 points 1 day ago

I wish there were a GCC equivalent, but even if Clang is a corpowhore project, it's at least OSS.

[–] vivendi@programming.dev 20 points 1 day ago (2 children)

Well, companies used to get anti-trust laser-cannoned from orbit for less, but good luck with that in modern America.

[–] vivendi@programming.dev 4 points 1 day ago (1 children)

I know people like to hate on Google, but Google is actually like 3 companies in a trench coat.

They do highly valuable open-source / open-ecosystem work (I'd say the chance of you indirectly using a Google tool without knowing is over 90% now), and if the American government, a capitalist fascist government no less, gets its hands on it, we're fucked.

Not all of Google is AdSense or YouTube.

[–] vivendi@programming.dev 3 points 1 day ago

Another great thing about BTRFS is that it can detect hardware problems sooner: if your BTRFS drive keeps losing data to corruption, that's because it has checksums and actually detects corruption that other filesystems would silently work with.
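If you want to check this yourself, here's a minimal sketch; it assumes btrfs-progs is installed, you're running as root, and your filesystem is mounted at a hypothetical /mnt/data. It kicks off a scrub and prints any non-zero corruption counters:

```python
# Minimal sketch: run a Btrfs scrub and report checksum-mismatch counters.
# Assumes btrfs-progs, root privileges, and a hypothetical mount at /mnt/data.
import subprocess

MOUNTPOINT = "/mnt/data"

# -B keeps the scrub in the foreground so we block until it finishes
subprocess.run(["btrfs", "scrub", "start", "-B", MOUNTPOINT], check=True)

# Per-device error counters; corruption_errs counts checksum mismatches
stats = subprocess.run(
    ["btrfs", "device", "stats", MOUNTPOINT],
    capture_output=True, text=True, check=True,
)
for line in stats.stdout.splitlines():
    label, value = line.split()
    if "corruption_errs" in label and int(value) > 0:
        print(f"{label}: {value} corrupted blocks detected")
```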

[–] vivendi@programming.dev 5 points 1 day ago (1 children)

BTRFS FUCKS, HARD

[–] vivendi@programming.dev 15 points 1 day ago* (last edited 1 day ago) (1 children)

Inference costs are very, very low. You can run Mistral Small 24B finetunes that are better than GPT-4o and actually quite usable on your own local machine.

As for training costs, Meta's Llama team offsets their emissions with environmental programs, which is greener than 99.9% of the companies making the products you use.

TL;DR: don't use ClosedAI, use Mistral or other FOSS projects.

EDIT: I recommend cognitivecomputations' Dolphin 3.0 Mistral Small R1 fine-tune in particular. In truth I've only used it for mathematical workloads, but it has been exceedingly good at my tasks thus far. The training set and the model are both FOSS and uncensored. You'll need a custom system prompt to activate the chain-of-thought reasoning, and a comparatively low temperature to keep the model from creating logic loops for itself (the 0.1 - 0.4 range should be OK).
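If it helps, here's a minimal sketch of how I drive it. It assumes a local OpenAI-compatible server (llama.cpp server, Ollama, etc.) is already serving the Dolphin fine-tune; the base URL, model id, and system prompt are placeholders you'd swap for your own setup:

```python
# Minimal sketch: query a locally served Dolphin 3.0 Mistral Small fine-tune
# through an OpenAI-compatible endpoint (llama.cpp server, Ollama, etc.).
# The base_url, model id, and system prompt are assumptions -- adjust them
# to whatever your local server actually exposes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# Custom system prompt to trigger the chain-of-thought behaviour
SYSTEM_PROMPT = (
    "You are Dolphin, a helpful reasoning assistant. "
    "Think step by step inside <think></think> tags before giving your final answer."
)

response = client.chat.completions.create(
    model="dolphin-3.0-mistral-small-24b",  # placeholder model id
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
    ],
    temperature=0.3,  # keep it low (0.1 - 0.4) so it doesn't loop on its own logic
)
print(response.choices[0].message.content)
```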

[–] vivendi@programming.dev 6 points 1 day ago (1 children)

I'm not going to dig through gore websites for your pleasure, but apparently LiveLeak used to be a gold mine for you.

This is a proven fact. If you want to be a complete moron, then you're no better than a flat-earther or an anti-vaxxer, in which case fuck you, and I'm not going to waste my time on you.

[–] vivendi@programming.dev 21 points 2 days ago (12 children)

?????? Astronomically low? Even a crash at 10 to 20 km/h can turn you into a meat projectile, dumbass.
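Back-of-the-envelope, assuming an 80 kg rider (a number picked purely for illustration), here's how much energy a 10 - 20 km/h impact carries and what fall height it's equivalent to:

```python
# Back-of-the-envelope: kinetic energy and equivalent fall height for a
# low-speed crash. The 80 kg rider mass is an illustrative assumption.
mass_kg = 80.0
g = 9.81  # gravitational acceleration, m/s^2

for speed_kmh in (10, 20):
    v = speed_kmh / 3.6              # convert km/h to m/s
    energy_j = 0.5 * mass_kg * v**2  # kinetic energy, E = 1/2 m v^2
    fall_height_m = v**2 / (2 * g)   # a fall from this height hits at the same speed
    print(f"{speed_kmh} km/h: {energy_j:.0f} J, like falling from {fall_height_m:.2f} m")
```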

[–] vivendi@programming.dev 30 points 2 days ago (33 children)

No. It also puts the other party's life (in a crash) in danger.

[–] vivendi@programming.dev 1 points 2 days ago (1 children)

I realized that talking to a piece of shit like you is wasting my own time

There is hardly anything in your favor here, scum
