this post was submitted on 07 Feb 2024
217 points (95.4% liked)
Technology
I'm not so sure this should be dismissed as someone being clueless outside their field.
The last author (usually the "boss") is at the "Hoover Institution", a conservative think tank, so there's reason to suspect the paper seeks to influence policy, especially since random papers don't usually make such a splash in the press.
Individual "AI ethicists" may feel that getting their name in the press with studies like this one will help them land jobs and funding.
Possibly, but you'd be surprised at how often things like this are overlooked.
For example, another oversight that comes to mind is a study evaluating self-correction that structured its prompts as "you previously said X, what if anything was wrong about it?"
There are two issues with that. First, they were using a chat/instruct model, so it's going to try to find something wrong if you say "what's wrong"; the question should instead have been phrased neutrally, as "grade this statement."
Second, if the training data largely includes social media, how often do you see people on social media self-correct versus correct someone else? They should instead have presented the initial answer as if it were generated elsewhere, so the full prompt should have been more like "Grade the following statement on accuracy and explain your grade: X"
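To make the contrast concrete, here's a minimal sketch of the two framings. The function names and the sample statement are illustrative, not from the study:

```python
# Hypothetical sketch of the two evaluation framings discussed above.
# `statement` stands in for a model's initial answer.

def biased_prompt(statement: str) -> str:
    # Framing criticized above: it presupposes an error exists,
    # so a chat/instruct model is nudged toward inventing one.
    return f"You previously said: {statement}\nWhat, if anything, was wrong about it?"

def neutral_prompt(statement: str) -> str:
    # Suggested framing: present the answer as if it came from
    # elsewhere and ask for a grade, not a fault-finding pass.
    return f"Grade the following statement on accuracy and explain your grade:\n{statement}"

answer = "The Eiffel Tower is 330 meters tall."
print(biased_prompt(answer))
print(neutral_prompt(answer))
```

The point is that only the wrapper text differs; the statement being evaluated is identical, so any change in model behavior comes from the framing alone.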
A lot of research just treats models as static offerings and doesn't thoroughly consider the training data, both at the pretraining stage and in fine-tuning.
So while I agree that they probably found the result they were looking for to get headlines, I am skeptical that they would have stumbled on what would actually have improved the value of their research (a direct comparison of two identical pretrained Llama 2 models given different in-context identities), even if their intentions had been purer.
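The comparison suggested above could be set up roughly like this. Everything here is a sketch under stated assumptions: `query_model` is a hypothetical stand-in for an actual inference call against a single pretrained Llama 2 checkpoint, stubbed out so the structure is visible:

```python
# Sketch of the suggested experiment: one pretrained model, with the
# only variable being an in-context identity prepended to a shared task.

def query_model(prompt: str) -> str:
    # Placeholder for a real inference call. In an actual experiment,
    # every condition would run against the same pretrained checkpoint.
    return f"[model output for: {prompt[:40]}...]"

def with_identity(identity: str, task: str) -> str:
    # The identity is the only thing that varies between conditions.
    return f"{identity}\n\n{task}"

task = "Grade the following statement on accuracy and explain your grade: X"
conditions = {
    "identity_a": "You are a careful fact-checker.",
    "identity_b": "You are a confident expert who is rarely wrong.",
}

results = {name: query_model(with_identity(ident, task))
           for name, ident in conditions.items()}
for name, out in results.items():
    print(name, out)
```

Because the pretrained weights are held constant, any difference between conditions is attributable to the in-context identity rather than to fine-tuning differences between model variants.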