Or, to be even more crude, him as a caricature of Muhammad.
(Please don't ban me mods, we live in a world where this could actually happen on national TV).
Sort of?
A comprehensive look at voter turnout from 2000 onwards reveals that the average turnout rate for primary elections is 27% of registered voters, compared to 60.5% for general elections. In other words, fewer than half of the voters who cast a ballot in the general election participate in primaries.
https://goodparty.org/blog/article/primary-vs-general-election
All sorts of problems have solutions. I see this a lot in the tech space: saving a video, blocking ads, whatever.
…But generally, people don’t use them. Or know about them.
US primaries feel similar, where voters technically have the ability to choose candidates but, statistically, they don’t.
Attention is finite. Many don't know about primaries. To me, giving people the choice doesn't matter if it's obscure and inaccessible by design.
Oh actually that's a great card for LLM serving!
Build the llama.cpp server from source; it has better support for Pascal cards than anything else:
https://github.com/ggml-org/llama.cpp/blob/master/docs/multimodal.md
Gemma 3 is a hair too big (like 17-18GB), so I'd start with InternVL3 14B at Q5_K_XL: https://huggingface.co/unsloth/InternVL3-14B-Instruct-GGUF
Or Mistral Small 24B at IQ4_XS for more 'text' intelligence than vision: https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF
I'm a bit 'behind' on the vision model scene, so I can look around more if they don't feel sufficient, or walk you through setting up the llama.cpp server. Basically it provides an endpoint which you can hit with the same API as ChatGPT.
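To give you an idea, here's a rough sketch of what talking to it looks like (the model filename, port, and prompt are just placeholders for whatever you download and configure):

```python
# Quick test of a llama.cpp server via its OpenAI-compatible endpoint.
# Assumes you've built llama.cpp and started the server with something like:
#   ./llama-server -m InternVL3-14B-Instruct-Q5_K_XL.gguf --port 8080
# (filename is illustrative; point -m at whatever GGUF you downloaded)
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local",  # the server hosts one model, so the name barely matters
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 128,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```

Anything that already speaks the OpenAI API (chat frontends, editor plugins, scripts) can just be pointed at that URL.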
Is this an ADHD meme?
I'm afraid it might be, 'cause I have a trail of 'one giant playlists' and songs on repeat.
I know. I am on Lemmy, heh.
I feel a strong Streisand Effect here.
Democratic boogeypeople who aren't literally Hillary Clinton are exactly what the party needs. They wanna obsess over him? Well, Trump seems to have forgotten "there's no such thing as bad attention."
Not in niche games. Rimworld and Stellaris (for instance) are dramatically faster on Windows, which is why I keep a partition around. I'm talking 40%-ish better simulation speeds than Linux native (Proton still takes a hit too, though a much smaller one).
Minecraft and Starsector, on the other hand, freaking love Linux. They’re dramatically faster.
These are kinda extreme scenarios, but the point is that AAA benchmarks don't necessarily generalize to the whole spectrum of games and hardware, especially once you start looking at simulation-heavy ones.
Thanks.
I tracked down morgthorak (since their post was deleted).
(Offensive Language Warning) https://mk.gabe.rocks/@[email protected]
I figured, "Is Morg that bad? Maybe Linus is overreacting, or there's missing context?" Ohhh boy. I can't even quote their posts without violating Lemmy's rules, but it's openly white supremacist and crudely homophobic.
But it is kinda... morbidly fascinating to peer into that sort of community. I haven't heard some of those slurs since high school.
After some sleuthing, it appears a former follower was mad about some incidents at the shelter and started a subreddit against it:
SFW, but click at your own risk.
https://web.archive.org/web/20250307232100/https://www.reddit.com/r/saveafoxsnark/
The last web archive backup is old, but my guess is the subreddit snowballed (due to the Reddit engagement algo loving hatesubs, of course) and got us here.
So... yeah.
Rest in peace.
TBH you should fold this into localllama? Or open source AI?
I have very mixed (mostly bad) feelings on ollama. In a nutshell, they're kinda Twitter attention-grabbers who give zero credit (or contributions) to the underlying framework (llama.cpp). And that's just the tip of the iceberg: they've made lots of controversial moves, and it seems like they're headed for commercial enshittification.
They're... slimy.
They like to pretend they're the only way to run local LLMs and blot out any other discussion, which is why I feel kinda bad about a dedicated ollama community.
It's also a highly suboptimal way for most people to run LLMs, especially if you're willing to tweak.
I would always recommend Kobold.cpp, TabbyAPI, ik_llama.cpp, Aphrodite, LM Studio, the llama.cpp server, SGLang, the AMD Lemonade Server, or any number of other backends over it. Literally anything but ollama.
...TL;DR I don't like the idea of focusing on ollama at the expense of other backends. Running LLMs locally should be the community's focus, not ollama specifically.
You can still use the IGP, which might be faster in some cases.