Ollama lets you download and run large language models (LLMs) on your device.
Install Ollama on Arch Linux
- Check whether your device has an AMD GPU, NVIDIA GPU, or no GPU. A GPU is recommended but not required.
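One way to check, assuming the pciutils package (which provides lspci) is installed:
lspci | grep -iE 'vga|3d|display' # lists graphics devices; look for AMD or NVIDIA in the output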
- Open Console, type exactly one of the following commands, and press Enter. You may be asked for your password; the terminal will not display it as you type.
sudo pacman -S ollama-rocm # for AMD GPU
sudo pacman -S ollama-cuda # for NVIDIA GPU
sudo pacman -S ollama # for no GPU (for CPU)
- Enable the Ollama service (it runs on-device in the background) so it starts with your device, and start it now.
sudo systemctl enable --now ollama
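To confirm the service is running, you can check its status:
systemctl status ollama # should report "active (running)"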
Test Ollama alone
- Open localhost:11434 in a web browser and you should see "Ollama is running". This shows Ollama is installed and its service is running.
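The same check works from a console, assuming curl is installed:
curl localhost:11434 # should print "Ollama is running"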
- Run ollama run deepseek-r1 in one console and ollama ps in another, to download and run the DeepSeek R1 model while checking whether Ollama is using your (slower) CPU or (faster) GPU.
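The PROCESSOR column of the ollama ps output shows where the model is loaded. Example output (the ID, size, and time shown here are illustrative; yours will differ):
NAME                  ID              SIZE      PROCESSOR    UNTIL
deepseek-r1:latest    0a8c26691023    5.5 GB    100% GPU     4 minutes from now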
AMD GPU issue fix
https://lemmy.world/post/27088416
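If the issue is Ollama falling back to the CPU because ROCm does not recognize your AMD GPU, a commonly suggested workaround is to override the GPU architecture version the service sees. The 10.3.0 value below is only an example and must match your GPU; see the linked post for details:
sudo systemctl edit ollama # add the two lines below in the editor that opens, then save and exit
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
sudo systemctl restart ollama # restart so the override takes effect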
Use with Open WebUI
See this guide: https://lemmy.world/post/28493612
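One common setup, assuming you use Docker and Ollama runs on the same machine, is the container from the Open WebUI README:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Open WebUI is then available at localhost:3000 in a web browser.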