this post was submitted on 31 Oct 2023

AMD


For all things AMD; come talk about Ryzen, Radeon, Threadripper, EPYC, rumors, reviews, news and more.

[–] [email protected] 1 points 1 year ago

As a reference for those looking to train an LLM locally:

It took me hours to fine-tune a small (by today's standards) BERT model on an RTX 4090; I can't imagine doing anything on chips like the ones referenced in the article, even inference.
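For context, fine-tuning of the kind mentioned above boils down to a standard supervised training loop over a pretrained encoder plus a task head. A minimal sketch of that loop is below, assuming PyTorch; a toy randomly initialized encoder and random data stand in for an actual pretrained BERT checkpoint and dataset (which is where the hours and the VRAM actually go).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a BERT-style encoder. Real fine-tuning would load a
# pretrained checkpoint instead of random weights.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
classifier = nn.Linear(64, 2)  # binary classification head on top

opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

# Random "token embeddings" and labels stand in for a real dataset.
x = torch.randn(32, 16, 64)   # (batch, seq_len, hidden)
y = torch.randint(0, 2, (32,))

losses = []
for step in range(50):
    opt.zero_grad()
    hidden = encoder(x)                # contextualized token representations
    logits = classifier(hidden[:, 0])  # first-token pooling, like BERT's [CLS]
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

On a real model the same loop runs over billions of parameters and a full tokenized dataset, which is why even a 4090 takes hours and the chips in the article aren't realistic for it.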

I wouldn't attempt any training with anything less than a 7800/7900 XTX, and that's assuming you can get them to work at all.