this post was submitted on 21 Jul 2023
420 points (95.5% liked)

Technology

[–] [email protected] -2 points 1 year ago (10 children)

Stop making a language model do math? We already have calculators.

[–] ThreeHalflings 6 points 1 year ago (1 children)

Do you think maybe it's a simple and interesting way of discussing changes in the inner workings of the model, and that maybe people know that we already have calculators?

[–] [email protected] 7 points 1 year ago (3 children)

I think it's a lazy way of doing it. OpenAI has clearly stated that math isn't something they are even trying to make it good at. It's like testing how fast Usain Bolt is by having him bake a cake.

If ChatGPT is getting worse at math, it might just be a side effect of them making it better at reading comprehension or something else they want it to be good at; there is no way to know.

Measure something it is supposed to be good at.

[–] ThreeHalflings 3 points 1 year ago* (last edited 1 year ago) (1 children)

All the things it's supposed to be good at are judged completely subjectively.

That's why, unless you have a panel of experts in your back pocket, you need something with a yes-or-no answer to have an interesting discussion.

If people were discussing ChatGPT's code-writing ability, you'd complain that it wasn't designed to do that either. The problem is that it was designed to transform inputs to relatively believable outputs, representative of its training set. Great. That's not super useful. Its actual utility comes from its emergent behaviours.

Lemme know when you make a post detailing the opinions of some university "transform inputs to outputs" professors. Until then, we'll continue to discuss its behaviour in observable, verifiable and useful areas.

[–] [email protected] 2 points 1 year ago (1 children)

We have people who assign numerical values to people's ability to read and write every day. They are English teachers. They test all kinds of things like vocabulary, reading comprehension and grammar, and in the end they assign grades to those skills. I don't even need tiny professors in my pocket; they are just out there being teachers to children of all ages.

One of the tasks I gave ChatGPT was to name and describe 10 dwarven characters. Their names have to be adjectives, like Grumpy, but the description cannot be based on him being grumpy. He has to be something other than grumpy.

ChatGPT wrote 5 dwarves that followed the instructions and then defaulted to describing each dwarf based on their name: Sneezy was sickly, Yawny was lazy, and so on. That gives it a score of 5/10 on the task I gave it.

There is a tapestry of clever tests you can give it with language in focus to test the ability of a natural language model without giving it a bunch of numbers.
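A test like the dwarf task above can even be scored semi-automatically. Here is a minimal sketch of one way to do it; the dwarf entries and the hand-written synonym list are invented for illustration, not real ChatGPT output, and a real grader would need a much richer notion of "description restates the name" than simple word matching:

```python
# Rough sketch of scoring the dwarf-naming task described above.
# Assumes the model's output has already been parsed into
# (name, description) pairs; everything below is hypothetical.

def score_dwarves(dwarves):
    """Count dwarves whose description does NOT just restate their name.

    A crude proxy: a dwarf fails if its adjective name, or an obvious
    synonym we list by hand, appears in its description.
    """
    synonyms = {
        "sneezy": {"sick", "sickly", "ill"},
        "yawny": {"lazy", "sleepy", "tired"},
    }
    passed = 0
    for name, description in dwarves:
        desc = description.lower()
        banned = {name.lower()} | synonyms.get(name.lower(), set())
        if not any(word in desc for word in banned):
            passed += 1
    return passed

dwarves = [
    ("Grumpy", "A master gem-cutter who sings while he works."),  # ok
    ("Sneezy", "A sickly dwarf who is rarely seen outdoors."),    # restates name
    ("Yawny", "A lazy dwarf who naps at his forge."),             # restates name
]
print(score_dwarves(dwarves), "/", len(dwarves))  # prints "1 / 3"
```

The point is only that a language-focused task can still yield a numeric score, which is what makes it usable for the kind of before/after comparison being discussed.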

[–] ThreeHalflings 2 points 1 year ago (1 children)

OK, you go get a panel of highschool English teachers together and see how useful their opinions are. Lemme know when your post is up, I'll be interested then.

[–] [email protected] 2 points 1 year ago

Sorry, I thought we were having a discussion when we were supposed to just be smug cunts. I will correct my behaviour in the future.

[–] [email protected] 3 points 1 year ago

Nah, asking it to do math is perfect. People are looking for emergent qualities and things it can do that they never expected it to be able to do. The fact that it could do somewhat successful math before despite not being a calculator was fascinating, and the fact that it can't now is interesting.

Let the devs worry about how good it is at what it is supposed to do. I want to hear about stuff like this.

[–] [email protected] 1 points 1 year ago (1 children)

Has it gotten better at other stuff? Are you posing a possible scenario or asserting a fact? I'd be curious about specific measurements if the latter.

[–] [email protected] 0 points 1 year ago (1 children)

Possible scenario. We can't know about the internal motivations of OpenAI unless they tell us, and I haven't seen any statements from them beyond the fact that they don't care if it's bad at math.

[–] [email protected] 1 points 1 year ago

Would you personally believe a company if it told you what its internal motivations were? For me, I guess it would depend on the company, but I struggle to think of one I would trust in this regard. That's especially true of tech companies, which often operate unprofitably for long stretches of time on the assumption that they'll be able to make massive profits in the future.
