this post was submitted on 29 Nov 2023
1 point (66.7% liked)
Hardware
A place for quality hardware news, reviews, and intelligent discussion.
founded 11 months ago
It wouldn’t. Training the neural nets behind LLMs is all about brute force, and it’s only been possible in the last few years to train these models without spending billions. Even going back to 2010, I think it’d be largely infeasible.
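To give a sense of the brute force involved, here's a rough back-of-the-envelope sketch using the common ~6·N·D FLOPs rule of thumb for dense transformer training (N parameters, D tokens). The model size, token count, GPU throughput, and dollar rate below are all illustrative assumptions, not figures from this thread:

```python
# Back-of-the-envelope training cost via the ~6*N*D FLOPs rule of thumb
# for dense transformers (N = parameter count, D = training tokens).
# Every number below is an illustrative assumption.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

def training_cost_usd(total_flops: float,
                      flops_per_gpu_per_s: float = 150e12,  # assumed sustained GPU throughput
                      usd_per_gpu_hour: float = 2.0) -> float:  # assumed cloud rental price
    """Convert total FLOPs into an approximate dollar cost."""
    gpu_hours = total_flops / flops_per_gpu_per_s / 3600.0
    return gpu_hours * usd_per_gpu_hour

# A hypothetical 70B-parameter model trained on 1.4T tokens:
flops = training_flops(70e9, 1.4e12)
print(f"{flops:.2e} FLOPs, roughly ${training_cost_usd(flops):,.0f}")
```

Even with these fairly generous throughput and pricing assumptions, a single run lands in the millions of dollars, and 2010-era hardware was orders of magnitude slower per dollar, which is the point about infeasibility above.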
The good news is if we fast forward even just a few years, training will become relatively cheap compared to today.
They didn't ask if it could be done without spending billions, or whether it would be feasible (i.e., practical), just whether it would be possible.