this post was submitted on 16 Oct 2024
643 points (96.9% liked)

Science Memes

[–] [email protected] 5 points 1 month ago

> not once did I mention ChatGPT or LLMs. why do aibros always use them as an argument? I think it's because you all know how shit they are and call it out so you can disarm anyone trying to use it as proof of how shit AI is.

You were talking about generative AI. Of that category, only text and image generation are mature enough to produce passable output (music generation sounds bad, video generation is existentially horrifying, and code generation or Photoshop autofill are just subsets of text and image generation). I don't think LLMs or image gen are shit. LLMs in particular are easy to mischaracterize and therefore misuse, but they do have their uses. And image gen is legitimately useful.

Also, I wouldn't characterize myself as an "AI bro". I've tested text and image generation maybe half a dozen times each, but I tend to avoid them by default. The exception is Google's AI search, which can be legitimately useful for quickly summarizing concepts that are fundamental to some people but foreign to me, and which I can then go verify later. I've been following AI news closely, but I don't have much of a stake in this myself. If it helps my credibility, I never thought NFTs were a good idea. I think that's a good baseline for "are your tech opinions based on hype or reality", because literally every reasonable person agrees that they were stupid.

> everything you mentioned is ML and algorithm interpretation, not AI. fuzzy data is processed by ML. fuzzy inputs, ML.

ML is a type of AI, but clearly you have a different definition; what do you mean when you say "AI"?

> AI stores data similarly to a neural network, but that does not mean it "thinks like a human".

That was poorly worded on my part. I know that it doesn't actually "think". My point was that it can approach tasks which require heuristic rather than exact algorithms, a kind of data processing that used to be exclusive to humans. I hope that's a clearer statement.

> if nobody can provide peer reviewed articles, that means they don't exist

"won't" =/= "can't", but fine, if you specify what you're looking for I'm willing to do your job for you and find articles on this. However, if you waste my time by making me search for stuff and then ignore it, you're going on my shared blocklist. What exactly are you looking for? I will try my best to find it, I assure you.

> if they existed, just pop it into your little LLM and have it spit the articles out.

Again, I feel like you're using "AI" to mean "human-level intelligence", which is incorrect. Anyway, you know that if I asked an LLM to do this it would generate fake citations. I'm not arguing against that; LLMs don't possess knowledge and do not know what truth is. That's not why they're useful.

> AI is a marketing joke like "the cloud" was 20 years ago.

I think these tools are a bit more useful than the cloud was, but the comparison isn't entirely inaccurate.