taladar

joined 2 years ago
[–] taladar 16 points 2 weeks ago (2 children)

I would be very surprised if 30% of their code lines had even been touched at all by anyone since AI coding assistants became a thing.

[–] taladar 1 points 2 weeks ago (5 children)

That is the thing: they are not "only going to get better", because training has hit a wall, and the compute used will have to be reduced since they are currently losing money on every request.

[–] taladar 18 points 2 weeks ago (3 children)

The difference is that the actual revolutions like that generally use technology that actually works.

[–] taladar 14 points 2 weeks ago

The problem isn't the dependencies; the problem is dependencies written by people who only put the minimum effort into writing the dependency for their own use case because their manager was breathing down their neck.

[–] taladar 9 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

To be fair, LLMs are probably a good match for the Gish gallop style of bad-faith argument that religious people like to use. If all you want is a high number of arguments, it is probably easy to produce those with an LLM. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It is not as if they ever cared whether their arguments were any good anyway.

[–] taladar 0 points 3 weeks ago (7 children)

But Wikipedia is basically correct 99% of the time on basic facts if you look at non-controversial topics where nobody has an incentive to manipulate it. LLMs, meanwhile, are lucky if 20% of what they say has any relationship to reality. Not just on complex facts either; I wouldn't be surprised if an LLM got wrong how many hands a human being has.

[–] taladar 2 points 3 weeks ago (1 children)

> Or for travelling: there already is a phone app to translate signs but it would be so much more to have that live

Most countries use street signs that do not require translations, that is more of a US thing.

[–] taladar 2 points 3 weeks ago (2 children)

The one they use on gemini.google.com (which is 2.5 right now but was awful in earlier versions too).

[–] taladar 17 points 3 weeks ago

Obviously you are meant to place trebuchets along the route and be shot from one to the next. With exact timing you could even use another trebuchet to slow you down each time.

[–] taladar 6 points 3 weeks ago (4 children)

After trying it again a few times today on a few practical problems, every single one of which it not only completely misunderstood at first but then gave a completely hallucinated answer to, I am sorry, but the only thing shocking about it is how stupid it is despite Google's vast resources. Not that stupid/smart really applies to statistical analysis of language.

[–] taladar 6 points 3 weeks ago (9 children)

No, it is mediocre at best compared to other models, but LLMs in general have very minimal usefulness.

[–] taladar 2 points 3 weeks ago

Peace isn't something that can be managed in its own department. Peace is the (unstable) result of a lot of other things being very carefully managed.
