That is the thing: they are not "only going to get better", because training has hit a wall and the compute spent will have to be reduced, since they currently lose money on every request.
The difference is that real revolutions like that generally use technology that actually works.
The problem isn't the dependencies; the problem is dependencies written by people who put only the minimum effort into covering their own use case because their manager was breathing down their neck.
To be fair, for the Gish gallop style of bad-faith argument that religious people like to use, LLMs are probably a good match. If all you want is a high number of arguments, it is probably easy to produce those with an LLM. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It is not as if they ever cared whether their arguments were any good.
But Wikipedia is basically correct 99% of the time on basic facts if you look at non-controversial topics where nobody has an incentive to manipulate it. LLMs, meanwhile, are lucky if 20% of what they produce has any relationship to reality. Not just complex facts either: I wouldn't be surprised if an LLM got the number of hands a human being has wrong.
Or for travelling: there is already a phone app that translates signs, but it would be so much more useful to have that happen live.
Most countries use street signs that do not require translation; that is more of a US thing.
The one they use on gemini.google.com (which is 2.5 right now but was awful in earlier versions too).
Obviously you are meant to place trebuchets along the route and be shot from one to the next. With exact timing you could even use another trebuchet to slow you down each time.
I just tried it again today on a few practical problems; it not only completely misunderstood every single one at first, it then gave me a completely hallucinated answer to each. Sorry, but the only thing shocking about it is how stupid it is despite Google's vast resources. Not that stupid/smart really apply to statistical analysis of language.
No, it is mediocre at best compared to other models, but LLMs in general have very limited usefulness.
Peace isn't something that can be managed in its own department. Peace is the (unstable) result of a lot of other things being very carefully managed.
I would be very surprised if even 30% of their code lines had been touched by anyone since AI coding assistants became a thing.