this post was submitted on 09 Sep 2024

Technology

[–] [email protected] 1 points 1 week ago (1 children)

6650 XT. Honestly, no idea. When I run `make LLAMA_HIPBLAS=1 GPU_TARGETS=gfx1032 -j$(nproc)` for koboldcpp in the Fedora distrobox, it throws a bunch of

```
fatal error: 'hip/hip_fp16.h' file not found
   36 | #include <hip/hip_fp16.h>
```

errors, and koboldcpp does not offer an option to use Vulkan.
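That error usually means the compiler can't find the HIP development headers inside the container. As a rough diagnostic sketch (the search paths below are assumptions based on a typical ROCm layout; adjust for your distro's packaging), this checks where `hip_fp16.h` actually lives:

```python
from pathlib import Path

def find_header(name, search_dirs):
    """Return every path under search_dirs whose filename matches `name`."""
    hits = []
    for root in search_dirs:
        root = Path(root)
        if root.is_dir():
            hits.extend(sorted(root.rglob(name)))
    return hits

# Typical locations for a ROCm install (assumption -- varies by distro):
candidates = ["/opt/rocm/include", "/usr/include"]
for path in find_header("hip_fp16.h", candidates):
    print(path)
# If nothing prints, the HIP dev headers aren't installed in the container.
```

If the header turns up somewhere non-standard, pointing the build at it (e.g. via `CFLAGS`/`CXXFLAGS` include paths) may be enough; if it's missing entirely, the container needs the HIP development package installed.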

[–] [email protected] 1 points 1 week ago (1 children)

Ah, strange. I don't suppose you specifically need a Fedora container? If not, I've been using this Ubuntu-based distrobox container recipe for anything that requires ROCm, and it has worked flawlessly for me.

If that still doesn't work (I haven't actually tried out koboldcpp yet), and you're willing to try something other than koboldcpp, then I'd recommend the text-generation-webui project, which supports a wide array of model types, including the GGUF format that koboldcpp uses. Then if you really want to get deep into it, you can even pair it with SillyTavern (it's purely a frontend for a bunch of different LLM backends; text-generation-webui is one of the supported ones)!
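Since SillyTavern just talks to whatever backend over an HTTP API, the wiring is essentially an OpenAI-style chat request regardless of which backend you pick. A minimal sketch of the payload such a backend expects (field names follow the OpenAI chat-completions convention; the model name here is a placeholder, not something from this thread):

```python
import json

def build_chat_request(model, user_message, max_tokens=200):
    """Assemble an OpenAI-compatible /v1/chat/completions request body."""
    return {
        "model": model,  # placeholder; local backends often ignore this field
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
        "stream": False,
    }

payload = build_chat_request("local-model", "Hello!")
print(json.dumps(payload, indent=2))
```

A frontend like SillyTavern builds exactly this kind of body for you; the only things you configure are the backend's base URL and, sometimes, the model name.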

[–] [email protected] 1 points 1 week ago (1 children)

I don't think so, it's just what I'm more familiar with, and I usually try to avoid apt's PPA hell as much as I can. But maybe I should try some others, as I couldn't get Mullvad to run yet either. :/

I tried text-generation-webui for quite a while before I switched to koboldcpp on my previous distro, and I could not get it to run without crashing whenever I tried to generate anything. SillyTavern is my standard frontend, so any text-gen backend should inherently be compatible with it anyway.

[–] [email protected] 1 points 1 week ago

Hmm, gotcha. I just tried out a fresh copy of text-generation-webui, and it seems like the latest version is borked with ROCm (I get a `CUDA error: invalid device function`).

My next recommendation would then be LM Studio, which to my knowledge can still expose an OpenAI-compatible API endpoint for use in SillyTavern. I've used it in the past, and I didn't even need to run it within distrobox (I have all of the ROCm stuff installed locally, but I generally run most of the AI stuff in distrobox since it tends to require an older version of Python than Arch currently ships). They also seem to have recently started supporting running GGUF models via Vulkan, which I assume probably doesn't require the ROCm stuff to be installed at all.

Might be worth a shot. I just downloaded the latest version (the UI has definitely changed a bit since I last used it), grabbed a copy of the Gemma model, and ran it, and it seemed to work without issue directly on the host.

The advanced configuration settings no longer seem to mention GPU acceleration directly like they used to, but I can see it utilizing GPU resources in nvtop, and the speed it was generating at (83 tokens a second in my screenshot) couldn't possibly have come from the CPU, so it seems to be fine on my side.
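For reference, generation speed is just tokens emitted divided by wall-clock time, so a figure like that 83 tokens/s is easy to sanity-check yourself (the 415-token run below is an invented example, not a number from this thread):

```python
def tokens_per_second(token_count, elapsed_seconds):
    """Throughput of a generation run in tokens per second."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return token_count / elapsed_seconds

# e.g. 415 tokens generated over 5 seconds:
print(tokens_per_second(415, 5))  # -> 83.0
```

Anything in the tens of tokens per second on a mid-size model is a strong hint the GPU is doing the work, since CPU-only generation is typically far slower.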