this post was submitted on 22 Sep 2023
6 points (100.0% liked)
LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
you are viewing a single comment's thread
view the rest of the comments
Definitely very interesting, but figuring out which layers to skip is a relatively difficult problem.
I really wish they'd shown an example of the optimal layers to skip for the 13B model. As the paper notes, using the wrong combination of skipped layers can make things worse overall, so it's not just about how many layers you skip, but which ones as well.
It would also be interesting to see whether there are common patterns in which layers are most skippable. It's probably architecture-specific, but it would be pretty useful if you could compute the optimal skip pattern for, say, a 3B model and then translate it to a 30B with reasonable results.
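For illustration, one naive way to explore that combinatorial space is a greedy sweep: keep skipping whichever layer hurts the draft's acceptance rate the least, until quality drops below a threshold. This is just a sketch, not the paper's actual search method; `acceptance_rate` is a hypothetical callback that would run the skip-configured draft against the full model on some held-out prompts and report how often draft tokens get accepted.

```python
from typing import Callable, FrozenSet

def greedy_skip_search(
    n_layers: int,
    acceptance_rate: Callable[[FrozenSet[int]], float],
    min_acceptance: float = 0.7,
) -> FrozenSet[int]:
    """Greedily grow the set of skipped layers, always adding the layer
    whose removal hurts draft acceptance the least, stopping once no
    layer can be skipped without dropping below min_acceptance."""
    skipped: FrozenSet[int] = frozenset()
    while True:
        candidates = [
            (acceptance_rate(skipped | {i}), i)
            for i in range(n_layers)
            if i not in skipped
        ]
        if not candidates:
            return skipped
        best_rate, best_layer = max(candidates)
        if best_rate < min_acceptance:
            return skipped
        skipped = skipped | {best_layer}

# Toy stand-in: pretend early layers matter more than late ones.
toy = lambda s: 1.0 - sum(0.15 if i < 4 else 0.03 for i in s)
print(sorted(greedy_skip_search(8, toy)))  # -> [3, 4, 5, 6, 7]
```

Greedy won't necessarily find the global optimum, and it costs O(n²) evaluations, but it's a cheap baseline to compare any fancier search against.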
The good news is that if you do it wrong, then much like with regular speculative generation, you will still get exactly the result the full model would have produced, so there's no loss in quality, only in speed.
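To see why it's lossless, here's a bare-bones sketch of greedy speculative decoding. `draft_next` (the cheap pass with layers skipped) and `full_next` (the full model) are hypothetical single-token greedy predictors, and a real implementation would verify all drafted positions in one batched forward pass rather than one call per token:

```python
from typing import Callable, List

def speculative_generate(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],
    full_next: Callable[[List[int]], int],
    n_tokens: int,
    draft_len: int = 4,
) -> List[int]:
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        # 1. Draft a few tokens cheaply (e.g. with layers skipped).
        draft: List[int] = []
        for _ in range(draft_len):
            draft.append(draft_next(out + draft))
        # 2. Verify: every token actually kept is the full model's own
        #    greedy choice, so the output is identical to running the
        #    full model alone. A rejected draft token only wastes the
        #    drafting compute for the rest of this round.
        for tok in draft:
            correct = full_next(out)
            out.append(correct)
            if tok != correct:
                break
    return out[: len(prompt) + n_tokens]
```

A bad skip configuration just lowers how often `tok == correct`, i.e. fewer accepted tokens per round, never a different output.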
It's definitely a good point, though: finding the optimal configuration is the difference between a slowdown or minimal speedup and a potentially huge one.
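To put rough numbers on that, here's the standard back-of-envelope estimate for speculative decoding, assuming each of the `k` draft tokens is accepted independently with probability `a` and a draft step costs `c` of a full-model step (both numbers made up for illustration):

```python
def expected_speedup(a: float, c: float, k: int = 4) -> float:
    """Expected tokens per round divided by cost per round. The
    verification pass always contributes one token (the correction at
    the first mismatch, or a bonus token if everything was accepted)."""
    tokens_per_round = sum(a**i for i in range(k + 1))  # (1-a^(k+1))/(1-a)
    cost_per_round = k * c + 1.0  # k draft steps + one full verification pass
    return tokens_per_round / cost_per_round

print(expected_speedup(a=0.8, c=0.3))  # good skip config: ~1.5x faster
print(expected_speedup(a=0.4, c=0.3))  # bad skip config:  ~0.75x, a slowdown
```

Same draft cost in both cases; only the acceptance rate changed, and it flips the result from a speedup to an outright slowdown.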