this post was submitted on 16 May 2024
366 points (97.9% liked)


New development policy: code generated by a large language model or similar technology (e.g. ChatGPT, GitHub Copilot) is presumed to be tainted (i.e. of unclear copyright, not fitting NetBSD's licensing goals) and cannot be committed to NetBSD.

https://www.NetBSD.org/developers/commit-guidelines.html

[–] [email protected] 51 points 3 months ago (1 children)

Lots of stupid people asking "how would they know?"

That's not the fucking point. The point is that if they catch you they can block future commits and review your past commits for poor quality code. They're setting a quality standard, and establishing consequences for violating it.

If your AI-generated code isn't setting off red flags, you're probably fine. But if something stupid slips through and the maintainers believe it to be the result of generative AI, they will remove your code from the codebase and you from the project.

It's like laws against weapons. If you have a concealed gun on your person and enter a public school, chances are that nobody will know and you'll get away with it over and over again. But if anyone ever notices, you're going to jail, you're getting permanently trespassed from school grounds, and you're probably not going to be allowed to own guns for a while.

And, it's a message to everyone else quietly breaking the rules that they have something to lose if they don't stop.

[–] [email protected] 8 points 3 months ago (1 children)

Lots of stupid people asking "how would they know?"

That's not the fucking point.

Okay, easy there, Chief. We were just trying to figure out how it worked. Sorry.

[–] [email protected] 11 points 3 months ago* (last edited 3 months ago)

It was a fair question, but this is just going to turn out like universities failing or expelling people over alleged AI content in papers.

They can't prove it. They try to use AI-detection tools to prove it, but those same tools will say a thesis paper from a decade ago is AI generated. Pretty sure I saw a story of a professor who accused a student based on one of those tools, only to have one of his own past papers fail the same tool.

Short of an admission of guilt, it's a witch hunt.