this post was submitted on 05 Sep 2024
1259 points (99.3% liked)

Science Memes

[–] [email protected] 11 points 3 months ago (2 children)

I was just making a clever reference

[–] [email protected] 3 points 3 months ago (2 children)

The problem is that some people like me won't get that reference and instead think AIs are universally bad. A lot of people already think this way, and it's hard to know who believes what.

[–] [email protected] 0 points 3 months ago

Clearly, based on your responses, you don't think AI/LLMs are universally bad. And anyone who is that easily swayed by what is essentially a clever shitpost likely also thinks the earth is flat and birds aren't real.

You know. Morons.

[–] [email protected] -2 points 3 months ago (1 children)

The problem is that people selling LLMs keep calling them AI, and people keep buying their bullshit.

AI isn't necessarily bad. LLMs are.

[–] [email protected] 6 points 3 months ago* (last edited 3 months ago) (1 children)

LLMs have legitimate uses today even if they are currently somewhat limited. In the future they will have more legitimate and illegitimate uses. The capabilities of current LLMs are often oversold though, which leads to a lot of this resentment.

Edit: also LLMs very much are AI (specifically ANI) and ML. It's literally a form of deep learning. It's not AGI, but nobody with half a brain ever claimed it was.

[–] [email protected] 2 points 3 months ago (1 children)

LLMs have legitimate uses today

No, they don't. The only thing they can be somewhat reliable for is autocomplete, and the slight improvement in quality doesn't compensate for the massive increase in costs.

In the future they will have more legitimate and illegitimate uses

No. Thanks to LLM peddlers being excessively greedy and saturating the internet with LLM-generated garbage, newly trained models will be poisoned and will only get worse with every iteration.

The capabilities of current LLMs are often oversold

LLMs have only one capability: to produce the most statistically likely token after a given chain of tokens, according to their model.

Future LLMs will still only have this capability, but since their models will have been trained on LLM-generated garbage, their results will quickly diverge from anything even remotely intelligible.
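For what it's worth, the "predict the most likely next token" mechanism can be illustrated with a toy sketch. This is a minimal bigram counter in Python, purely illustrative: real LLMs use learned neural representations over huge vocabularies, not raw counts, but the greedy "pick the most likely successor" step is the same idea.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny corpus,
# then always emit the statistically most likely next token (greedy decoding).
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    # Pick the single highest-count successor of `token`.
    return follows[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" twice, beating "mat" and "fish"
```

Real models soften this with sampling temperature instead of always taking the top token, which is why their output isn't deterministic, but the underlying capability is exactly this successor prediction.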

[–] [email protected] 6 points 3 months ago* (last edited 3 months ago)

This is false. Anyone who has used these tools for long enough can tell you this is false.

LLMs have been used to write computer code, craft malware, and even semi-independently hack systems with the support of other pieces of software. They can even grade students' work and give feedback, though it's unclear how accurate this is. As someone who actually researches the use of both LLMs and other forms of AI, I can tell you that you are severely underestimating their current capabilities, never mind what they can do in the future.

I also don't know where you came to the conclusion that hardware performance is always an issue, given that LLM model size varies immensely, as do the performance requirements. There are LLMs that can run, and run well, on an average laptop or even a smartphone. It honestly makes me think you have never heard of LLaMa models, including TinyLLaMa and similar projects.

Future LLMs will still only have this capability, but since their models will have been trained on LLM generated garbage their results will quickly diverge from anything even remotely intelligible.

You can filter internet-sourced training data down to websites archived before LLMs were even invented as a concept. This is trivial to do for some data sets. Some data sets used for this kind of training were already created without any LLM output (think about how the first LLM was trained).
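The date-based filtering described above amounts to a one-line predicate over archived documents. A minimal sketch in Python, with a hypothetical mini-corpus and an illustrative cutoff date (both are assumptions, not real data):

```python
from datetime import date

# Hypothetical corpus: (snapshot_date, text) pairs, e.g. from a web archive.
documents = [
    (date(2012, 3, 1), "an old forum post"),
    (date(2019, 7, 15), "a pre-LLM blog article"),
    (date(2023, 11, 2), "a page that may contain LLM output"),
]

# Keep only material archived before the cutoff, so no LLM-era text
# can leak into the training set. Cutoff chosen here for illustration only.
CUTOFF = date(2020, 1, 1)
clean = [text for when, text in documents if when < CUTOFF]

print(clean)  # only the two pre-cutoff documents survive
```

The hard part in practice is trusting the snapshot dates, not the filtering itself, which is why archive-based corpora are attractive: the archive timestamp is fixed at crawl time.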

Sources:

[–] [email protected] 3 points 3 months ago

I appreciate it <3