this post was submitted on 08 Feb 2025
25 points (96.3% liked)

Asklemmy

My office computer has a Ryzen 7 5700, an RX 580X, and 32 GB of RAM. Running Ollama with DeepSeek-V2 or Llama 3 is much slower than ChatGPT in the browser. The same goes for my newer, more powerful home computer.

What kind of hardware do you need to run local models with responsiveness comparable to ChatGPT? How much does it cost? And assuming such hardware is commercially available, where do you find it?

[–] [email protected] 8 points 3 days ago (last edited 3 days ago)

Install LocalAI and ensure it’s using acceleration. It’s one of the best solutions we have at the moment.

Are you sure you're not running these small models on the CPU with no acceleration? I'm running these small models quite quickly, with nearly instant responses, on an NVIDIA Titan Xp in a gaming rig I built around 2017.
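
One quick way to check is to ask Ollama itself where the model is loaded. Here's a minimal Python sketch; it assumes the `ollama` CLI is on your PATH and that your version of `ollama ps` prints a PROCESSOR column (e.g. "100% GPU" or "100% CPU"), as recent releases do:

```python
import subprocess

# Shell out to the ollama CLI and inspect where loaded models are running.
# Assumes the ollama daemon is up and at least one model is loaded.
result = subprocess.run(
    ["ollama", "ps"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)

# The PROCESSOR column shows the GPU/CPU split per loaded model.
if "CPU" in result.stdout:
    print("Warning: at least one model appears to be running on CPU.")
```

If anything shows up as CPU there, fixing the acceleration setup will do far more for responsiveness than new hardware.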

[–] [email protected] 5 points 3 days ago

AMD is quite awful in this regard. Right now, with my RX 6650 XT using Vulkan acceleration, I get the same speed as running on my Ryzen 5 7600.
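
To put numbers on a comparison like that, you can measure tokens per second through Ollama's local HTTP API. A rough sketch, assuming the default endpoint at http://localhost:11434 and a model tag you actually have pulled; `eval_count` (tokens generated) and `eval_duration` (nanoseconds) are fields Ollama documents in the final `/api/generate` response:

```python
import json
import urllib.request

def tokens_per_second(model: str, prompt: str) -> float:
    # One non-streaming generation against a local Ollama server.
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # eval_duration is in nanoseconds; convert to tokens/second.
    return body["eval_count"] / body["eval_duration"] * 1e9

if __name__ == "__main__":
    # "llama3" is just an example tag; substitute a model you have pulled.
    print(f"{tokens_per_second('llama3', 'Why is the sky blue?'):.1f} tok/s")
```

Run it with GPU acceleration working and again with it disabled (or on another box): if both runs land in the same low single digits of tokens per second, inference is almost certainly falling back to the CPU.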