this post was submitted on 30 Jul 2024
Fuck AI
I find that hard to believe. Search engines just regurgitate what is in a database. LLMs have to do calculations to create the sentences they produce. That takes more energy.
Believe what you will. I'm not an authority on the topic, but as a researcher in an adjacent field I have a pretty good idea. I also self-host Ollama and SearXNG (a metasearch engine, to be clear, not a first-party search engine), so I have some anecdotal data points of my own.
Training even a teeny tiny LLM or ML model can run a typical gaming desktop at 100% for days. Sending a query to a pretrained model, by contrast, hardly even shows up in htop unless the model is gigantic, and even the gigantic models only spike the CPU for a few seconds, until the response is complete. SearXNG, again anecdotally, spikes my PC about as much as Mistral does in Ollama.
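To put rough numbers on that gap, here's a back-of-envelope sketch. It uses the common ~6·N·D FLOPs rule of thumb for training a dense transformer with N parameters on D tokens, and ~2·N FLOPs per generated token at inference. The model size, token counts, and answer length below are illustrative assumptions, not measurements of any particular model:

```python
# Back-of-envelope comparison of training vs. per-query inference compute.
# All constants here are illustrative assumptions, not measurements.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard ~6*N*D estimate for dense transformer training."""
    return 6.0 * n_params * n_tokens

def inference_flops(n_params: float, tokens_generated: float) -> float:
    """~2*N FLOPs per token generated at inference."""
    return 2.0 * n_params * tokens_generated

# Hypothetical 7B-parameter model trained on 1 trillion tokens
# (roughly Mistral-7B scale; exact figures are assumed).
N = 7e9
D = 1e12

train = training_flops(N, D)          # total training compute
one_query = inference_flops(N, 500)   # one 500-token answer

# How many queries cost as much compute as the training run?
queries_per_training_run = train / one_query

print(f"Training: {train:.2e} FLOPs")
print(f"One 500-token query: {one_query:.2e} FLOPs")
print(f"Queries per training run: {queries_per_training_run:.2e}")
```

On these assumed numbers, one training run costs as much compute as several billion 500-token queries, which lines up with the observation that a single query barely registers next to training.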
I would encourage you to read explanations like the one linked below. I'm not just blowing smoke, and I'm not dismissing the very real problem of massive training costs (in money, energy, and water) that you're pointing out.
https://www.baeldung.com/cs/chatgpt-large-language-models-power-consumption