While I agree that LLMs may eventually reach human-level performance at most tasks (some architectural changes will be necessary, but the core approach seems sound), it's wrong to say they're modeled after the human brain. We have barely any idea how brains work, since they're enormously complex; artificial neural networks are instead built from the ground up. AI draws on centuries' worth of math, but given our current mathematical knowledge, the core code isn't all that complicated. Human brains aren't like that: they can't be summed up in a few lines of code, because DNA is a huge mess that contains far more than just "learning", with plenty of inactive or redundant bits and pieces. We're building LLMs with knowledge of how languages work, not how brains work.
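To show what I mean by "a few lines of code", here's a rough sketch (assuming NumPy; the network size, seed, and learning rate are arbitrary choices, and a different seed can occasionally get stuck in a local minimum) of a complete artificial neural network learning XOR:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A 2 -> 4 -> 1 network: weights and biases, randomly initialised.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule on the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The whole "brain" here is a couple of matrix multiplications, a squashing function, and gradient descent; nothing in it resembles a genome or a biological neuron beyond the name.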
Transformers are not built with our knowledge of language. That's a gross approximation -- it would honestly be more accurate to say they're modelled after the human brain than that they're built with our understanding of language. A big problem is precisely that the connection between AI and language is poorly understood: we can't even interpret what the axes of a word2vec embedding space represent.
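As a concrete illustration, here's a sketch (assuming gensim and its pretrained-model downloader; the exact similarity scores will vary):

```python
import gensim.downloader as api

# Pretrained Google News word2vec vectors (large download on first use).
wv = api.load("word2vec-google-news-300")

# The *geometry* of the space encodes relations nobody explicitly
# programmed in:
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# -> roughly [('queen', 0.71)]

# But the individual axes are opaque: these are just five unlabeled floats,
# and no single dimension corresponds to a nameable linguistic property.
print(wv["king"][:5])
```

The analogy arithmetic works, yet nobody can say what dimension 42 "means" -- that's the poorly understood connection.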
I'm not talking about knowing how humans perceive or learn languages, I'm talking about language structure. Perhaps it's wrong to call it "how languages work".
That's what I meant, yes. They aren't built on any particular field of linguistics.
Different neural network types excel at different tasks. Image-recognition networks were invented way before LLMs, and not only because of processing-power limits: the earlier architectures simply didn't work for language. New architectures don't appear out of thin air; they're created with a rough idea of what the network needs in order to do a certain task (e.g. NLP) better. Even tokenization isn't blind codepoint splitting but is based on an analysis of languages. But yes, natural languages aren't "parsed" for neural networks; they don't even have a formal grammar.
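For instance, here's a quick sketch (assuming the tiktoken library; the exact splits depend on which vocabulary you load) showing that a modern byte-pair-encoding tokenizer reflects learned corpus statistics rather than blind codepoint splitting:

```python
import tiktoken

# GPT-4-style byte-pair-encoding vocabulary.
enc = tiktoken.get_encoding("cl100k_base")

for word in ["the", "tokenization", "unbelievable"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{word!r} -> {pieces}")

# Common words typically map to a single token, while rarer words split
# into statistically frequent subword chunks -- a vocabulary learned from
# text, not a character-by-character split.
```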