downloading? the clear issue is running a billion mini geminis nobody asked for
The energy spent running it is going to be even more negligible than the bandwidth.
Everyone is freaking out about energy use of AI data centers, not the energy use of ISPs. It's the energy used to run AI that's the issue.
It's a local model. It uses a fraction of the power a cloud AI query uses, and cloud AI queries already use much less power than you obviously think they do (it is AI training -- specifically training frontier models -- that burns power like crazy).
Whether cloud or local, it takes CPU/GPU use. That's what takes power. It's not magically less because it's on a personal PC rather than a data center.
Yes it is. Small models like this are on the order of 100x more efficient than the big models backing ChatGPT or Gemini proper.
Press X to doubt.
But in any case allow me to amend my statement:
Whether cloud or local, it takes CPU/GPU use. It's not magically ~~less~~ free because it's on a personal PC rather than a data center.
That's still what takes power. This is AI use that's not needed. And multiplied across hundreds of millions of devices, it's a shit ton of energy.
No it's not. You clearly have zero perspective on energy consumption.
The power draw on a phone with an NPU (where Gemini Nano is mostly used) is comparable to watching a video on your phone, maybe a couple of watts. On devices without NPUs (e.g. PCs) it will be more, but not dramatically so. The power use of this is absolutely zilch in the grand scheme of things.
To be extremely generous, let's say the average power draw is 50 watts, and that the model generates on average 10 tok/s, and that the average user has it generate 500 tokens per day (about 400 words). That's 50 seconds of 50 watts for every user, and let's say this is done by a billion users. This is a very generous estimate: in reality the average power draw is lower, the average tokens generated is likely lower (the intended use is generating short snippets like, say, email titles based on the email's content), and this definitely won't be used by a billion people.
WolframAlpha tells us that this takes 694 MWh of energy, and helpfully mentions that this is 74% the fuel energy of an Airbus A330-300, and indeed this energy use is roughly in the ballpark of one transatlantic flight. There are about 500 transatlantic flights every day. Two offshore wind turbines will generate this much energy on a windy day.
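If anyone wants to check the arithmetic, here it is as a few lines of Python. Every input is one of the deliberately generous assumptions above (50 W, 10 tok/s, 500 tokens per day, a billion users), not a measurement:

```python
# Napkin math for the local-model scenario above; all inputs are the
# generous assumptions from the comment, not measured figures.
power_draw_w = 50            # assumed average power draw while generating
tokens_per_second = 10       # assumed generation speed
tokens_per_day = 500         # assumed tokens generated per user per day
users = 1_000_000_000        # assumed number of daily users

seconds_per_user = tokens_per_day / tokens_per_second   # 50 s
wh_per_user = power_draw_w * seconds_per_user / 3600    # ~0.69 Wh
total_mwh = wh_per_user * users / 1e6                   # ~694 MWh

print(f"{wh_per_user:.2f} Wh per user per day")   # 0.69 Wh
print(f"{total_mwh:.0f} MWh per day in total")    # 694 MWh
```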
In all likelihood an order of magnitude more energy is spent every day watching short form videos. I'm not going to do the napkin math on that though.
edit: in reality, local models like this will likely reduce net power consumption as fewer API calls are made to cloud LLMs, which are both less power efficient and have overhead from the whole internet thing.
Like the other guy said, it is magically more efficient because it's magically significantly smaller. This model is likely a few billion parameters, and frontier models are in the 1-3+ trillion parameter range.
Yeah, people's mobile phones that run this model might die slightly faster, but playing a mobile game or doing any other hardware-intensive process will kill your battery faster. It's no different.
Do the math please. Go on, I'll wait. I want you to see your own process and why you're wrong.
If it is not immediately obvious to you how negligible the cost is going to be, you have no clue how little compute small models like this require. Apply a bit of common sense: this is a model designed to run locally on smartphones. If it used a lot of power, the phone would run out of battery.
It's hard (if not impossible) to find power usage figures for Gemini Nano, because they're going to depend on the efficiency of the device it's running on. If it's on a phone (where most Chrome installs are), that phone likely has an NPU, in which case the power draw will be negligible. If it has to run on the CPU, it'll be more.
So let's instead assume every user will be using a model comparable to ChatGPT, for which we do have reasonable estimates. According to this estimate, 500 output tokens would use about 0.3 Wh of energy. 500 output tokens is about 400 words, which is probably more than the average user will be using Gemini Nano for (it is intended for small tasks), but let's assume that as the average daily use. 1 billion users times 0.3 Wh is 300 MWh. Fuck all on a global scale, about 0.0015% of the world's energy production (20 TWh per day).
Keep in mind that figure is for the full ChatGPT, which runs on 1500-watt GPUs. Gemini Nano runs on chips that draw more like 1.5 W, and on devices that physically cannot draw more than 15 W. It's thus reasonable to estimate that it is on the order of 100x more efficient.
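Same back-of-the-envelope calculation in Python, for anyone who wants to poke at it. The 0.3 Wh per 500 output tokens comes from the estimate cited above; the billion users and the 20 TWh/day world figure are the same assumptions as before:

```python
# Back-of-the-envelope for the "everyone uses a ChatGPT-class model" case;
# inputs are the cited estimate plus the assumed figures from the comment.
wh_per_500_tokens = 0.3      # cited estimate for 500 output tokens
users = 1_000_000_000        # assumed daily users
world_twh_per_day = 20       # world energy production figure used above

total_mwh = wh_per_500_tokens * users / 1e6       # 300 MWh
share = total_mwh / (world_twh_per_day * 1e6)     # fraction of daily production

print(f"{total_mwh:.0f} MWh per day")             # 300 MWh
print(f"{share:.4%} of daily energy production")  # 0.0015%
```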
Their estimate of energy use was only based on FLOPs, but I'd assume for real-world energy usage the KV cache would be very impactful, if not eventually dominant. It's probably also a bit unfair of them to ignore the Internet traffic and likely all the extra network traffic behind the load balancer.
Not a fan of their analysis, but I wonder if it's potentially close to accurate for this deployment? I can't imagine they're having large contexts and ballooning caches on a model meant for a phone.
They talk about this in the appendix where they go over the (estimated) effects of large amounts of input tokens (up to 100k). This isn't really relevant for Gemini Nano because it only has a max 32k context window, and the deployment in Chrome probably caps it at far less than that.
I'm inclined to believe the main analysis is reasonably accurate. The numbers are similar to what I get on my local machine with local models. Granted, I tested with smaller models (a 7B-parameter Mistral in this case) on weaker hardware (an AMD 6700XT), but on a quick test I get about 50 tok/s locally at 180 W power use, which is about 0.5 Wh for 500 tokens. AMD GPUs suck for AI, so I think it's plausible that dedicated compute hardware would get basically the same energy efficiency on a frontier model.
Gemini Nano on a phone NPU is obviously going to be far more efficient -- by all accounts it gets the same or better tok/s than I'm getting, at something like 1/50th the TDP.
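Rough numbers, since I only have my own benchmark to go on. The GPU figures are what I measured above; the NPU column just applies that 1/50th-the-TDP ballpark, so treat it as a guess rather than a Gemini Nano measurement:

```python
# My local 7B benchmark vs. a rough phone-NPU guess (the 1/50 TDP ratio
# is a ballpark assumption, not a measured Gemini Nano figure).
tokens = 500
tok_per_s = 50               # measured locally on a 6700XT with a 7B model
gpu_w = 180                  # measured GPU power draw while generating
npu_w = gpu_w / 50           # assumed ~1/50th the TDP for a phone NPU

seconds = tokens / tok_per_s            # 10 s
gpu_wh = gpu_w * seconds / 3600         # ~0.5 Wh
npu_wh = npu_w * seconds / 3600         # ~0.01 Wh

print(f"GPU: {gpu_wh:.2f} Wh, NPU (guess): {npu_wh:.3f} Wh for {tokens} tokens")
```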