AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather
(www.businessinsider.com)
Uhh what? You can totally run LLMs locally.
Inference, yes. Training, no. Derived models don’t count.
I have Llama 2 running on localhost. You need a fairly powerful GPU, but it can totally be done.
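For anyone who wants to try it, here's a rough sketch using Hugging Face transformers (the model ID assumes you've been granted access to the gated meta-llama repo and are logged in via `huggingface-cli login`; the 7B model in fp16 wants roughly 14 GB of VRAM, and `device_map="auto"` needs the accelerate package installed):

```python
# Sketch: Llama 2 7B chat on a local GPU via Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated repo, access required
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory vs fp32
    device_map="auto",          # places layers on the GPU if one is visible
)

prompt = "Explain in one sentence why local LLM inference is feasible."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```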
I’ve run one of the smaller models on my i7-3770 with no GPU acceleration. It is painfully slow, but not unusably so.
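Quantized models through llama-cpp-python are what make CPU-only inference practical. A minimal sketch, where the GGUF file path is just a placeholder for whatever quantized model you download:

```python
# Sketch: CPU-only inference with llama-cpp-python on a 4-bit quantized model.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=2048,      # context window
    n_gpu_layers=0,  # force pure CPU inference
    n_threads=4,     # match your physical core count (an i7-3770 has 4)
)

out = llm("Q: Can LLMs run on a plain CPU? A:", max_tokens=64, stop=["\n"])
print(out["choices"][0]["text"])
```

Expect single-digit tokens per second on a CPU that old, which matches the "painfully slow but usable" experience.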
To get the same level of quality as something like ChatGPT?