this post was submitted on 08 Nov 2023

Science


[–] Aurenkin 3 points 1 year ago (1 children)

Yeah, that makes sense. I'm still very sceptical, though, because as your example illustrates, it's perfectly valid for a human to answer "mustard" as well, plus there's an element of randomness injected into the model's output. Maybe it's doable, but I'm unconvinced you can meaningfully distinguish between human-written and AI-written text. Unless you make a detector that looks for "As a large language model...", then maybe it can detect ChatGPT specifically.
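The phrase-matching detector described above could be sketched like this. It's a toy example only: the phrase list is an assumption for illustration, and a trivial check like this obviously can't catch paraphrased or edited output.

```python
# Toy "phrase detector": flag text containing boilerplate phrases that
# ChatGPT-style assistants tend to emit. Illustrative only, not a real
# detection method.
TELLTALE_PHRASES = [
    "as a large language model",
    "as an ai language model",
]

def looks_like_chatgpt(text: str) -> bool:
    """Return True if the text contains any telltale phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

print(looks_like_chatgpt("As a large language model, I cannot..."))  # True
print(looks_like_chatgpt("I put mustard on my sandwich."))           # False
```

Of course, anyone cheating on homework would strip those phrases out first, which is exactly why this only "detects" the laziest copy-paste cases.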

[–] [email protected] 2 points 1 year ago (1 children)

Agreed, even a perfectly trained clone of ChatGPT wouldn't get that high a hit rate, although I do think the larger the article being compared, the better its chances of making an accurate prediction. The thing is, we soon won't actually be able to tell the difference as computers get smarter. Seems like right now the only practical application is for kids to cheat on their homework, but what happens when it gets smart enough to write actual research papers with unique proofs?

[–] Hanabie 1 points 1 year ago* (last edited 1 year ago)

If it writes research papers, that research still has to come from somewhere. Even if the whole study were performed by the AI itself, how would that delegitimise the research? Science isn't art; it's irrelevant who the performing agent is (as long as it's not stolen).