this post was submitted on 23 Nov 2024
552 points (95.8% liked)
Technology
I've read the comments here, and all my small brain takes away is that this huge, unnecessary power consumption happens because we're using big online models for simple tasks.
So, could the on-device NPUs we're getting in flagship phones solve this, since most of those simple tasks could run offline, on-device?
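For a sense of what "offline, on-device" looks like today, here's a minimal sketch using llama-cpp-python with a small quantized model. The model file is a placeholder, and note this library runs on CPU/GPU; flagship-phone NPUs generally need vendor SDKs instead.

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). The GGUF file below is a
# hypothetical placeholder; any small quantized model works.
from llama_cpp import Llama

llm = Llama(model_path="./tinyllama-1.1b-q4.gguf", n_ctx=2048)
out = llm("Summarize: the meeting moved to 3pm on Friday.", max_tokens=64)
print(out["choices"][0]["text"])
```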
I’ve run an LLM on my desktop GPU and gotten decent results, albeit not nearly as good as what ChatGPT will get you.
Probably used less than 0.1 Wh per response.
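If anyone wants to sanity-check a figure like that on their own machine, here's a rough sketch that samples GPU power with NVIDIA's pynvml bindings while a response is generated. `generate_response()` is a hypothetical stand-in for whatever local model call you run, and this ignores CPU/system draw.

```python
# Rough GPU energy-per-response estimate (pip install nvidia-ml-py).
# generate_response() is a hypothetical stand-in for your model call.
import time
import threading
import pynvml

def energy_wh(workload, interval_s=0.1):
    """Sample GPU power while workload() runs; return estimated Wh."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    samples, done = [], threading.Event()

    def sampler():
        while not done.is_set():
            # nvmlDeviceGetPowerUsage returns milliwatts
            samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
            time.sleep(interval_s)

    t = threading.Thread(target=sampler)
    start = time.time()
    t.start()
    workload()
    done.set()
    t.join()
    elapsed = time.time() - start
    pynvml.nvmlShutdown()
    avg_watts = sum(samples) / max(len(samples), 1)
    return avg_watts * elapsed / 3600.0  # watt-seconds -> watt-hours

# e.g. energy_wh(lambda: generate_response("some prompt"))
# A 5 s generation averaging ~70 W works out to roughly 0.1 Wh.
```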
Is that for inference only, or does it include training?
Inference only. I'm looking into doing some fine-tuning; training from scratch is another story.
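For anyone reading along: "fine-tuning" on a single desktop GPU usually means something like LoRA adapter training rather than updating all the weights. A minimal sketch with Hugging Face's peft library, assuming the transformers stack; the base model name is just a placeholder:

```python
# Minimal LoRA fine-tuning setup with Hugging Face peft + transformers
# (pip install transformers peft). The base model name is a placeholder.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically <1% of the base model's weights
# From here, train `model` with any standard loop or transformers.Trainer.
```

Because only the small adapter matrices get gradients, the memory and energy cost is a fraction of full training, which is what makes it feasible at home.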