cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

[–] [email protected] 1 points 1 year ago (4 children)

A rather categorical statement, given that you didn't say anything with regard to how you yourself think.

Maybe wait until we actually know more about what's going on under the hood, both in LLMs and in the human brain, before stating with such confident finality that there are absolutely no similarities.

If it turns out that LLMs aren't thinking but are still producing the same sort of interaction that humans are capable of, perhaps that says more about humans than it does about LLMs.

[–] [email protected] 11 points 1 year ago

*sees a plastic bag being blown by the wind*

Holy shit, that bag must be alive.

[–] [email protected] 4 points 1 year ago (1 children)

They produce this kind of output because they break down one mostly logical system (language) into another (numbers). The irregularities of language get compensated for by the vast number of sources.

We don't need to know more about anything. If I tell you "hey, don't think of an apple", your brain will conceptualize an apple and then go from there. LLMs don't know "concepts". They spit out numbers just as mindlessly as your Casio calculator watch.
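To make that "language into numbers" point concrete, here's a toy sketch in Python. The vocabulary and encoder are invented for illustration; real tokenizers use learned subword units, not whole words:

```python
# Toy illustration only: the model never sees words, just integer IDs.
vocab = {"don't": 0, "think": 1, "of": 2, "an": 3, "apple": 4}

def encode(text: str) -> list[int]:
    """Map each known word to its integer ID."""
    return [vocab[word] for word in text.lower().split()]

print(encode("don't think of an apple"))  # [0, 1, 2, 3, 4]
```

Everything downstream of this step is arithmetic on those numbers.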

[–] [email protected] 5 points 1 year ago

I would argue that what's going on is that they are compressing information. And it just so happens that the most compact way to represent a generative system (like mathematical relations) is to model its generative structure. For instance, it's much more efficient to represent addition by figuring out how to add two numbers than by memorizing all possible combinations of numbers and their sums. So implicit in compression is the need to discover generalizations.

But the network has limited capacity and limited "looping power", and it doesn't really know what a number is, so it has to figure all this out by example and, as a result, will often arrive at approximate versions of these generalizations. Thus it will often appear to be intelligent until it encounters something that doesn't quite fit whatever approximation it came up with, and will suddenly get something wrong that seems outside the pattern you thought it understood, because it's hard to predict what it has captured at a deep level and what it only grasps at the surface.
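To make the addition example concrete, here's a minimal sketch (purely illustrative) of the difference between memorizing input/output pairs and learning the rule that generates them:

```python
import itertools

# Memorization: store the sum of every pair of numbers in 0..99.
# Storage grows quadratically with the range covered.
lookup = {(a, b): a + b for a, b in itertools.product(range(100), repeat=2)}

# Generalization: one small rule covers every pair, including unseen ones.
def add(a: int, b: int) -> int:
    return a + b

print(len(lookup))                # 10000 entries for a tiny range
print(lookup[(3, 4)], add(3, 4))  # 7 7
print(add(1000, 2000))            # 3000; the table has no such entry
# lookup[(1000, 2000)] would raise KeyError: outside the "training" range
```

The rule is vastly smaller than the table and works on inputs it has never "seen", which is exactly the kind of generalization that compression pushes a network toward.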

In other words, I think it is "kind of" thinking, if thinking can be considered a kind of computation. But it doesn't always capture concepts completely, because it's not quite good enough at generalizing what it has learned; it's just good enough to appear really smart within a certain distribution of inputs.

Which, in a way, isn't so different from us, but is maybe not the same as how we learn and naturally integrate information.

[–] [email protected] 0 points 1 year ago (2 children)

I've been making the same or similar arguments you are here in a lot of places. I use LLMs every day for my job, and it's quite clear that beyond a certain scale, there's definitely more going on than "fancy autocomplete."

I'm not sure what's up with people hating on AI all of a sudden, but there seem to be quite a few who are confidently giving out incorrect information. I find it most amusing when they do that while bashing LLMs for confidently giving out wrong information.

[–] [email protected] 2 points 1 year ago (1 children)

Can you give examples of that?

[–] [email protected] 2 points 1 year ago

The one I like to give is tool use. I can present the LLM with a problem, give it a number of tools it can use to solve the problem, and it's pretty good at picking the right one and using it. Here's an older writeup that mentions a lot of other emergent capabilities: https://www.jasonwei.net/blog/emergence
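In case it's useful, the pattern looks roughly like the sketch below. The tool names and the JSON format the model is asked to emit are made up for illustration; this isn't any particular vendor's API:

```python
import json

# Hypothetical tools the model is told about in its prompt.
def calculator(expression: str) -> str:
    # Restricted eval for simple arithmetic only (no builtins exposed).
    return str(eval(expression, {"__builtins__": {}}, {}))

def word_count(text: str) -> str:
    return str(len(text.split()))

TOOLS = {"calculator": calculator, "word_count": word_count}

def dispatch(model_output: str) -> str:
    """Parse the model's tool request and run the matching tool.

    The model is prompted to reply with JSON like:
    {"tool": "calculator", "args": {"expression": "12 * 7"}}
    """
    request = json.loads(model_output)
    return TOOLS[request["tool"]](**request["args"])

# Simulated model output choosing the calculator tool:
print(dispatch('{"tool": "calculator", "args": {"expression": "12 * 7"}}'))  # 84
```

The interesting part is that the model reliably picks a sensible tool and fills in well-formed arguments for problems it hasn't seen verbatim, which is hard to square with "fancy autocomplete".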

[–] [email protected] 0 points 1 year ago (1 children)

I suspect it's rooted in defensive reactions. People are worried about their jobs, and after being raised to believe that human thought is special and unique, they're worried that that "specialness" and "uniqueness" might be threatened. So they form very strong opinions that these things are nothing to worry about.

I'm not really sure what to do other than just keep pointing out what information we do have about this stuff. It works, so in the end it'll be used regardless of hurt feelings. It would be better if we get ready for that sooner rather than later, though, and denial is going to delay that.

[–] [email protected] 2 points 1 year ago (1 children)

Yeah, I think that's a big part of it. I also wonder if people are getting tired of the hype and of seeing every company advertise AI-enabled products (which I can sort of get, because a lot of them are just dumb and obvious cash grabs).

At this point, it's pretty clear to me that there's going to be a shift in how the world works over the next 2 to 5 years, and people will have a choice of whether to embrace it or get left behind. I've estimated that for some programming tasks, I'm about 7 to 10x faster when using Copilot and ChatGPT4. I don't see how someone who isn't using AI could compete with that. And before anyone asks, I don't think the error rate in the code is any higher.

[–] [email protected] 1 points 1 year ago

I had some training at work a few weeks ago that stated 80% of all jobs on the planet are going to be changed by AI in the next 10 years. Some of those jobs are already rapidly changing, and others will take some time to spin up the support structures required for AI integration, but the majority of people on the planet are going to be impacted by something most people don't even know exists yet. AI is the biggest shake-up to industry in human history: bigger than the wheel, bigger than the production line, bigger than the dot-com boom. The world is about to completely change forever, and like you said, pretending that AI is stupid isn't going to stop those changes, or even slow them. They're coming. Learn to use AI or get left behind.

[–] [email protected] -2 points 1 year ago (2 children)

The engineers of GPT-4 themselves have stated that it is beginning to show signs of general intelligence. I put a lot more value in their opinion on the subject than in that of a person on the Internet who doesn't work in the field of artificial intelligence.

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago)

That wasn't the engineers of GPT-4; it was Microsoft, who have been fanning the hype pretty heavily to recoup their investment and push their own Bing integration, and who opened their "study" with:

“We acknowledge that this approach is somewhat subjective and informal, and that it may not satisfy the rigorous standards of scientific evaluation.”

An actual AI researcher, Maarten Sap, on this statement:

The ‘Sparks of A.G.I.’ is an example of some of these big companies co-opting the research paper format into P.R. pitches. They literally acknowledge in their paper’s introduction that their approach is subjective and informal and may not satisfy the rigorous standards of scientific evaluation.

[–] [email protected] 7 points 1 year ago

It's PR by Microsoft. Considering these kinds of comments, I'm beginning to doubt the intelligence of many humans rather than that of ChatGPT.