Which AI model to run locally on Android
(lemm.ee)
It very much depends on your phone's hardware: RAM determines how large a model you can load, and the CPU determines how fast you get replies. I've successfully run 4B models on my phone with 8 GB of RAM, but since the usual server-and-client setup needs full internet access due to the lack of granular network permissions on Android (even all-in-one setups need open ports to connect to themselves), I prefer a proper home server. With a cheap graphics card, that is vastly faster and more capable.
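If you go the home-server route, the client side can be trivial. Here's a minimal sketch assuming the server runs Ollama (its HTTP API listens on port 11434); the host address and model tag are placeholders for whatever you actually run:

```python
# Minimal client for a home LLM server, assuming Ollama's HTTP API
# (POST /api/generate). The host IP and model tag are placeholders.
import json
import urllib.request

SERVER = "http://192.168.1.50:11434"  # hypothetical home-server address
MODEL = "gemma:2b"                    # any model you've pulled on the server

def ask(prompt: str) -> str:
    payload = json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # wait for the full reply instead of streaming tokens
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{SERVER}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("Why does RAM limit local model size?"))
```

Run that from any app or terminal on the phone (e.g. Termux) and the heavy lifting stays on the server.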
I was honestly impressed with the speed and accuracy I was getting from DeepSeek, Llama, and Gemma on my 1660 Ti.
It was $100 used, and responses came back in seconds.
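If you want to put a number on "seconds," a rough timing check against the same kind of Ollama-style endpoint works; again, the host and model tag here are placeholders, not my actual setup:

```python
# Rough latency check against an Ollama-style /api/generate endpoint;
# the server address and model tag are placeholders.
import json
import time
import urllib.request

payload = json.dumps({
    "model": "llama3:8b",  # hypothetical model tag
    "prompt": "Summarize why GPUs speed up LLM inference.",
    "stream": False,
}).encode("utf-8")
req = urllib.request.Request(
    "http://192.168.1.50:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

start = time.monotonic()
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())["response"]
print(f"Got {len(reply)} chars in {time.monotonic() - start:.1f}s")
```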