this post was submitted on 04 Jun 2025

Privacy

If you were running an LLM locally on Android through llama.cpp for use as a private personal assistant, what model would you use?
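
For context, this is roughly the kind of setup I'm picturing, built in Termux (a sketch only; the model filename is just a placeholder, hence the question):

```sh
# Inside Termux on the phone: build llama.cpp and run a small quantized model.
pkg install git cmake clang make
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j
# Conversation mode with a GGUF model copied onto the phone
# (the filename below is a placeholder, not a recommendation).
./build/bin/llama-cli -m ~/models/some-small-model.Q4_K_M.gguf -cnv
```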

Thanks in advance for any recommendations.

all 35 comments
[–] [email protected] 9 points 2 days ago (2 children)

Running an LLM on a phone will absolutely destroy your battery life. It is also imperative that you understand that the comfort of AI is bought with the killing of innocents (through expediting climate catastrophe and the exploitation of the planet and the poorest people on it).

I think using AI to experiment on a home server which already exists wouldn't be problematic IN A VACUUM, but you would still be normalizing the use of tech that is morally corrupt.

[–] [email protected] 8 points 2 days ago (2 children)

Not trying to instigate a fight, but if you make this argument, I hope you're also vegan.

[–] [email protected] 3 points 1 day ago

I mean, ideally yes, everyone would go vegan, but that's a far bigger lifestyle change than just continuing to not use AI like we've done for decades.

[–] [email protected] -1 points 2 days ago (1 children)

Oh yeah. So I can't ask for one thing and not do another. Classic bad-faith argument. Good try.

[–] [email protected] 6 points 2 days ago (2 children)

I'm not even trying to argue against you, I'm arguing for veganism. The same arguments you used for why the use of AI is bad can be used for why not being vegan is bad. If anything, the production of animal products has a far bigger impact.

[–] [email protected] 2 points 2 days ago

In that case I very much suggest a different approach.

"If that is your take, you will love veganism."

Btw, I don't eat animal products. That's pretty recent, too.

[–] [email protected] 1 points 2 days ago

I am a fan of LLMs and what they can do, and as such have a server specifically for running AI models. However, I've been reading "Atlas of AI" by Kate Crawford and you're right. So much of the data that they're trained on is inherently harmful or was taken without consent. Even with the more ethical datasets, it's probably not great, considering the sheer quantity of data needed to make even a simple LLM.

I still like using it for simple code generation (this is just a hobby to me, so vibe coding isn't a problem in my scenario) and corporate tone policing. And I tell people nonstop that it's worthless outside of these use cases, and maybe as a search engine, but I recommend Wikipedia as a better starting point almost every time.

[–] [email protected] 0 points 1 day ago

I don't recommend it. I ran local AI on my phone before (iPhone, but same difference), and just asking it stuff makes it warm to the touch. Battery also takes a hit.

It also messes up multitasking, since it uses up most of the memory, which kills background apps. Phones weren't designed for this.

The best way is to host it on an actual dedicated machine that can be accessed remotely.
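
Something along these lines, for instance (rough sketch; the model path, hostname and port are placeholders):

```sh
# On the dedicated machine: llama.cpp's built-in HTTP server exposes an
# OpenAI-compatible API (model path below is a placeholder).
./build/bin/llama-server -m /models/your-model.gguf --host 0.0.0.0 --port 8080

# From the phone, over VPN or LAN:
curl http://your-server:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```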

[–] [email protected] 6 points 2 days ago (1 children)

It very much depends on your phone hardware: RAM affects how big the models can be, and CPU affects how fast you'll get replies. I've successfully run 4B models on my 8GB RAM phone, but since it's the usual server-and-client setup, which needs full internet access due to the lack of granular permissions on Android (even all-in-one setups need open ports to connect to themselves), I prefer a proper home server. With a cheap graphics card, it's indescribably faster and more capable.
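
For anyone wondering, the on-phone server-and-client split looks roughly like this (a sketch; the model filename is a placeholder, and the all-in-one apps just do the same thing internally):

```sh
# In Termux: the server binds a local port, and the chat app/UI connects to it
# over loopback, which is why even on-device setups end up opening ports.
./build/bin/llama-server -m ~/models/some-4b-model.Q4_K_M.gguf \
    --host 127.0.0.1 --port 8080
# Then point the client app (or a browser) at http://127.0.0.1:8080
```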

[–] [email protected] 2 points 2 days ago

I was honestly impressed with the speed and accuracy I was getting with DeepSeek, Llama, and Gemma on my 1660 Ti.

It was $100 used, and responses came back in seconds.

[–] throwawayacc0430 4 points 2 days ago

Not sure if a mobile device has that kind of processing power lol

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago)

Maid + a VPN to Ollama on your own computer.

Use an Onion service with client authorisation to avoid needing a domain or static IP.
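
Roughly what that looks like on the server side (a sketch, assuming Ollama on its default port 11434 and a standard Tor setup; paths will vary):

```sh
# torrc on the machine running Ollama: publish it as a v3 onion service.
cat >> /etc/tor/torrc <<'EOF'
HiddenServiceDir /var/lib/tor/ollama
HiddenServicePort 11434 127.0.0.1:11434
EOF

# Client authorisation: drop the phone's x25519 public key into
# /var/lib/tor/ollama/authorized_clients/phone.auth as a line like
#   descriptor:x25519:<client public key>
# then restart Tor and read the .onion address to point Maid at:
systemctl restart tor
cat /var/lib/tor/ollama/hostname
```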