this post was submitted on 31 Oct 2023
2 points (100.0% liked)

AMD


For all things AMD; come talk about Ryzen, Radeon, Threadripper, EPYC, rumors, reviews, news and more.

[email protected] 1 point 10 months ago

As a reference for those looking to train an LLM locally:

It took me hours to fine-tune even a small (by today's standards) BERT model on an RTX 4090; I can't imagine doing anything on chips like those referenced in the article, not even inference.
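
For anyone who wants a concrete starting point, something like the sketch below is the usual shape of that kind of local BERT fine-tune with the Hugging Face stack (Transformers + Datasets assumed; the SST-2 dataset, batch size, and epoch count are just placeholders, not necessarily what was run here):

```python
# Minimal BERT fine-tune sketch using Hugging Face Transformers + Datasets.
# Model, dataset, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"  # small by today's standards
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# SST-2 sentiment classification as a stand-in task.
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-finetune",
    per_device_train_batch_size=32,  # fits comfortably in 24 GB of VRAM
    num_train_epochs=3,
    fp16=True,                       # mixed precision to speed things up
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```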

I wouldn't do any training on anything less than a 7800/7900 XTX, and that's assuming you can get them to work.
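
On the "get them to work" part: with a ROCm build of PyTorch, the GPU is exposed through the regular CUDA API, so a quick sanity check looks something like this (assuming PyTorch's ROCm wheels are installed):

```python
import torch

# On ROCm builds, torch.cuda.* maps to the HIP device, so this should
# report True and print the card's name if the stack is set up correctly.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. an RX 7900 XTX
```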