Or just … put a calculator in the middle of the LLM and let it learn how to use it. Why are you jumping through so many hoops to help a language model learn an incredibly inefficient way to inaccurately do math in its head? The whole point of machines is to do things in ways that are better than what we do in our heads.
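The "put a calculator in the middle" idea is essentially tool use: the model emits a marker for the computation it wants, and the harness substitutes the exact answer. A minimal sketch of that loop, assuming a hypothetical `CALC(...)` convention for tool calls (the marker name and the helper functions here are illustrative, not from any specific system):

```python
import ast
import operator
import re

# Sketch: instead of having the model do arithmetic "in its head",
# the harness detects CALC(...) spans in the model's output and
# replaces each one with the exactly computed value.

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression via the AST (never eval())."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

def fill_tool_calls(model_output: str) -> str:
    """Replace every CALC(...) span with the computed result."""
    return re.sub(r"CALC\(([^)]*)\)",
                  lambda m: str(safe_eval(m.group(1))), model_output)

print(fill_tool_calls("The total is CALC(23 * 47) units."))
# The total is 1081 units.
```

The model only has to learn *when* to call the tool and *what* expression to pass it, which is a much easier learning problem than memorizing multi-digit arithmetic.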
this post was submitted on 18 Oct 2023
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
This is really interesting, and I suspect we'll be seeing a lot more papers like this in the next few years. There's a lot of room for these sorts of tokenization optimizations right now.
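To illustrate the kind of tokenization issue these papers target (a toy example, not the method from any particular paper): a BPE-style tokenizer tends to split the same number into different-sized chunks depending on context, while a one-token-per-digit scheme gives arithmetic a consistent place-value alignment. The two tokenizers below are crude stand-ins for those behaviors:

```python
import re

# Toy illustration: inconsistent multi-character number chunks
# vs. one token per digit.

def chunk_tokenize(text: str) -> list[str]:
    # crude stand-in for BPE-like behavior: greedily grab
    # up to 3 digits per token, one token per other symbol
    return re.findall(r"\d{1,3}|\S", text)

def digit_tokenize(text: str) -> list[str]:
    # one token per digit; other symbols unchanged
    return re.findall(r"\d|\S", text)

print(chunk_tokenize("1234+567"))  # ['123', '4', '+', '567']
print(digit_tokenize("1234+567"))  # ['1', '2', '3', '4', '+', '5', '6', '7']
```

In the chunked version, "1234" splits as `123|4` while "567" stays whole, so the model never sees digits in stable positions; the per-digit version makes column-wise addition learnable.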