this post was submitted on 09 Aug 2024
68 points (92.5% liked)

Linux

top 23 comments
[–] [email protected] 19 points 1 month ago (2 children)

Also recommended (I use the Flatpak version): GPT4All

And no, this has nothing to do with ChatGPT. It can download different AI models from HuggingFace and run them on CPU or GPU.
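For anyone curious, GPT4All also ships Python bindings that do the same thing as the app: download a GGUF model from HuggingFace on first use and run it on CPU or GPU. A minimal sketch; the model filename is an assumption on my part, so check the in-app model list for the exact name:

```python
def pick_device(prefer_gpu: bool) -> str:
    """Return a device string the GPT4All bindings accept:
    "gpu" requests Vulkan/Kompute acceleration, "cpu" forces CPU."""
    return "gpu" if prefer_gpu else "cpu"


def main() -> None:
    # Imported here so pick_device() stays usable without the package;
    # install with: pip install gpt4all
    from gpt4all import GPT4All

    # Hypothetical filename -- GPT4All downloads it from HuggingFace
    # on first use if it isn't already in the local models directory.
    model = GPT4All("Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf",
                    device=pick_device(prefer_gpu=True))
    with model.chat_session():
        print(model.generate("Hello!", max_tokens=64))

# Call main() yourself; it needs the gpt4all package and a model download.
```
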

[–] [email protected] 11 points 1 month ago (2 children)

I actually found GPT4All while looking into Kompute (Vulkan compute), and it made me question why anyone would bother with ROCm or OpenCL at all.

[–] [email protected] 5 points 1 month ago

I run models like Stable Diffusion and Llama with ROCm, but models like Real-ESRGAN for upscaling or RIFE for interpolation with Tencent's Vulkan thingy (forgot what it's called), and that's far easier. It would be cool if LLMs and the like could just be run with Vulkan too.

[–] [email protected] 2 points 1 month ago (1 children)

I need OpenCL for non-AI stuff, so that Darktable (an image-editing program) can use my GPU, which is much faster. But for AI? No idea how they compare, as I haven't used it for that purpose. ROCm itself is also troublesome...

Do you have the new Llama 3.1 8B Instruct 128k model? It's quite slow on my GPU (I have a weak beginner-class GPU with 8 GB, but plan to upgrade), to the point it's almost as slow as my CPU. I've read complaints in the GitHub tracker from others too and wonder if it's an issue with AMD cards. BTW, the previous model, Llama 3 8B Instruct, is miles faster.

[–] [email protected] 3 points 1 month ago (2 children)

I have a fairly substantial 16 GB AMD GPU, and when I load Llama 3.1 8B Instruct 128k (Q4_0), it gives me about 12 tokens per second. That's fast enough for me, but only 50% faster than CPU (which I test by loading mlabonne's abliterated Q4_K_M version, which runs on CPU in GPT4All, though I have no idea whether that's actually a fair performance comparison).

Then I load Nous Hermes 2 Mistral 7B DPO (also Q4_0) and it blazes through at 50+ tokens per second. So I don't really know what's going on there. Performance seems to vary a lot from model to model, but I don't know enough to speculate why. I can't even try Gemma 2 models; GPT4All just crashes with them. I should probably test Alpaca to see if these perform any differently there...
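For anyone wanting to compare numbers like these themselves, tokens per second is just generated tokens divided by wall-clock time. A small backend-agnostic sketch (the callables are placeholders for whatever generate function and tokenizer your tool exposes):

```python
import time


def tokens_per_second(n_tokens: int, seconds: float) -> float:
    """Throughput in tokens per second; guards against zero elapsed time."""
    return n_tokens / seconds if seconds > 0 else 0.0


def timed_generate(generate_fn, prompt: str, count_tokens_fn):
    """Run any generate() callable and report its throughput.

    generate_fn and count_tokens_fn stand in for whatever backend you
    use (GPT4All bindings, an ollama client, etc.)."""
    start = time.perf_counter()
    text = generate_fn(prompt)
    elapsed = time.perf_counter() - start
    return text, tokens_per_second(count_tokens_fn(text), elapsed)
```

Note that apps usually report only the generation phase, so numbers measured this way (which include prompt processing) can come out lower than what the UI shows.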

[–] [email protected] 2 points 1 month ago (1 children)

Wow, it got worse for me, maybe through the last update? Could this be related to the application? Now I get 12 t/s on my CPU, and switching to GPU it's only 1.5 t/s. Something is fishy. With Nous Hermes 2 Mistral 7B DPO at Q4 I get 33 t/s (I believe it was up to 44 before).

Now I'm curious whether this happens with a different application too, but I have nothing other than GPT4All installed.

[–] [email protected] 2 points 1 month ago (1 children)

Unfortunately I can't even test Llama 3.1 in Alpaca because it refuses to download, showing some error message with the important bits cut off.

That said, Alpaca's download interface seems much more robust, letting me select a model and then any version of it for download, rather than apparently picking whatever version it thinks I should use. That's an improvement for sure. In GPT4All I basically have to download the model manually if I want one that's not the default, and when I do, there's a decent chance it doesn't run on GPU.

However, GPT4All allows me to plainly see how I can edit the system prompt and many other parameters the model is run with, and even configure multiple sets of parameters for the same model. That allows me to effectively pre-configure a model in much more creative ways, such as programming it to be a specific character with a specific background and mindset. I can get the Mistral model from earlier to act like anything from a very curt and emotionally neutral virtual intelligence named Jarvis to a grumpy fantasy monster whose behavior is transcribed by a narrator. GPT4All can even present an API endpoint to localhost for other programs to use.
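That localhost endpoint speaks the OpenAI chat-completions format, as I understand it, with 4891 as the default port (check the app's settings). A sketch of how another program could talk to it; the model name here is a placeholder for whatever you have loaded:

```python
import json
from urllib import request


def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


def main() -> None:
    # Needs GPT4All running with its API server enabled; 4891 is
    # its default port as far as I know.
    req = build_chat_request("http://localhost:4891",
                             "Nous Hermes 2 Mistral DPO", "Hello!")
    with request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])

# Call main() yourself once the server is up.
```
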

Alpaca seems to have some degree of model customization, but I can't tell how it compares, probably because I'm not familiar with ollama, and I don't feel like tinkering with it since it doesn't want to use my GPU. The one thing I can see that's better is its ability to use multiple models at the same time; right now GPT4All unloads one model before it loads another.
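Since Alpaca sits on top of ollama, the multiple-models behavior presumably comes from ollama itself: its HTTP API (port 11434 by default) has a `keep_alive` field that keeps a model resident instead of evicting it when another one is queried. A sketch against that API; the model names are just examples:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # ollama's default port


def build_generate_payload(model: str, prompt: str,
                           keep_alive: str = "10m") -> bytes:
    """Payload for ollama's /api/generate endpoint.

    keep_alive keeps the model loaded for that long, so querying a
    second model doesn't necessarily evict the first."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": keep_alive,
    }).encode()


def main() -> None:
    # Needs a running ollama server with these models pulled.
    for name in ("llama3.1:8b", "mistral:7b"):
        req = request.Request(f"{OLLAMA_URL}/api/generate",
                              data=build_generate_payload(name, "Hello!"),
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            print(name, json.load(resp)["response"][:80])

# Call main() yourself once ollama is running.
```
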

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

That's quite unfortunate. ~~Alpaca needs to support the new 3.1 128k models explicitly; GPT4All wasn't compatible with them before an update either. There was a bug in some library they were using that needed a patch. So maybe that's why you can't use the new Llama 3.1 in Alpaca.~~ (Edit: Never mind. On their webpage they advertise 3.1 as working, so probably a wrong guess on my part.)

Actually, that sounds very useful, and I missed that option to select from a set of related models. One thing GPT4All can also do is analyze text files and then use the data to answer questions about them. It will also output the exact lines of the file related to the answer. I've only experimented a little with this, but it sounds useful too. The team is also experimenting with a web search feature, but I have no idea how that would work with a local model, if ever.
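To illustrate the "exact lines" part: the real feature is a proper retrieval pipeline, but conceptually it boils down to finding the lines of the file most relevant to the question and handing them to the model as citations. A deliberately naive keyword version of that idea (my own sketch, not how GPT4All actually does it):

```python
def matching_lines(text: str, query: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that share a word with the query.

    A naive stand-in for retrieval-based citation: the real thing
    ranks chunks by embedding similarity rather than exact words."""
    words = {w.lower() for w in query.split()}
    hits = []
    for number, line in enumerate(text.splitlines(), start=1):
        if words & {w.lower() for w in line.split()}:
            hits.append((number, line))
    return hits
```

The returned line numbers are what would let a frontend show "this answer came from lines 1 and 3 of your file".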

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

Hi, I just wanted to let you know that I managed to get the Gemma 2 model to work (it didn't work previously either).

These are the new Gemma 2 ones. I wasn't 100% sure at first, so I looked them up in the Gemma models list: https://ai.google.dev/gemma/docs/get_started ~~and the only 9B variants are the new Gemma 2 versions~~ (Edit: I misread. There are Gemma 1 versions with 9B too, so never mind this comment.). If this works on my low-end GPU, it should work on yours too.

[–] [email protected] 4 points 1 month ago (2 children)

Bookmarking it to check this out after work

... I should really go through these bookmarks one day

[–] [email protected] 3 points 1 month ago

My bookmarks are competing with my unplayed steam library.

[–] [email protected] 3 points 1 month ago (1 children)

I have a separate "ToDo" bookmark folder with temporary content that I want to look at in the near future. And for things I'm looking into right now, the pages are already open as browser tabs and loaded every time I start the browser (but in an unloaded state until I click them).

... I also should really go through these bookmarks and tabs one day.^^

[–] [email protected] 2 points 1 month ago (1 children)

I bookmarked it in Lemmy, available through both my PC browser and my mobile app. But I'm not sure if I can make bookmark folders/groups there.

[–] [email protected] 2 points 1 month ago (1 children)

Oh right. I never used Lemmy's bookmarking and was thinking of browser bookmarks (Firefox). I never thought about that.

[–] [email protected] 2 points 1 month ago

It's nice to have any device with access to my Lemmy account also have access to my bookmarks.

... So I can ignore them on all those devices simultaneously 😅

[–] [email protected] 2 points 1 month ago

I've used LM Studio for a while now. It's pretty good!

[–] [email protected] 2 points 1 month ago (1 children)

While downloading models, the progress bar sometimes decreases, like going from 11% back to 10%. Weird.

[–] [email protected] 2 points 1 month ago

Literally the average experience working with ML tools. All of it feels so hacked together and barely functional.

[–] [email protected] 2 points 1 month ago (1 children)

Looks great, will try later today

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

Let me know how it goes. I haven't mustered up the courage to try it on my computer yet, but I definitely will.

[–] [email protected] 3 points 1 month ago (1 children)

I did try it with a very small model. It's quick, and you can download 20+ models from the list.

[–] [email protected] 1 points 1 month ago

Nice. I'm definitely giving this a shot.