poo_22@lemmygrad.ml 2 points 1 year ago

According to this page, running the full model takes about 1.4 TB of memory, or about 16 A100 GPUs. That's still prohibitively expensive for an individual enthusiast, but yes, you can run a simplified model locally with ollama. It still probably needs a GPU with a lot of memory.
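As a rough sanity check on that figure, here's a back-of-the-envelope sketch (a sketch, assuming fp16 weights at 2 bytes per parameter and DeepSeek-R1's 671B total parameter count; KV cache and activation memory add more on top):

```python
# Rough memory estimate for hosting LLM weights.
# Assumes fp16 (2 bytes per parameter); ignores KV cache and
# activation overhead, which add to the total.

def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# DeepSeek-R1 has 671B parameters; at fp16 the weights alone are
# ~1.34 TB, consistent with the ~1.4 TB figure above.
full_model = weight_memory_gb(671)   # ~1342 GB
per_a100_gb = 80                     # assuming the A100 80GB variant
print(f"full model: ~{full_model:.0f} GB "
      f"(~{full_model / per_a100_gb:.0f}x A100 80GB)")
```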

yogthos@lemmygrad.ml 2 points 1 year ago

I got deepseek-r1:14b-qwen-distill-fp16 running locally with 32 GB of RAM and a GPU, but yeah, you do need a fairly beefy machine to run even medium-sized models.
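For scale, the same arithmetic applied to the 14B distill (again a sketch, assuming fp16 at 2 bytes per parameter) shows why it just squeezes onto a 32 GB machine:

```python
# 14e9 params * 2 bytes (fp16) is ~28 GB of weights, which fits
# in 32 GB of RAM plus GPU offload, but with little headroom.
params = 14e9
bytes_per_param = 2  # fp16
print(f"14b-qwen-distill-fp16: ~{params * bytes_per_param / 1e9:.0f} GB")
```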
