this post was submitted on 17 Oct 2024

Futurology

[–] [email protected] 6 points 3 months ago (2 children)

I disagree.

Given the error rate of so-called "AI", all 5 million of those documents must still be checked by a human for accuracy.

It'd be far more efficient to simply pay people to do the work in the first place than to pay for "AI" to do the work and also pay people to check it.

[–] [email protected] 6 points 3 months ago (1 children)

Humans will have to verify, of course. "AI" is just a really fast sorter, and that's fine when it's paired with human annotators. They could have saved a bunch of money by just grepping the fucking text for keywords, though.
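A minimal sketch of the keyword-grep approach the comment describes. The keyword list and the `flag_deed` helper are purely illustrative assumptions; a real effort would source its phrase list from historians and legal researchers of restrictive covenants:

```python
import re

# Illustrative phrase list only -- a real project would compile this
# from documented historical covenant language.
KEYWORDS = ["caucasian", "white race", "negro", "african descent"]

# One case-insensitive alternation over all phrases.
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def flag_deed(text: str) -> bool:
    """Return True if the deed text contains any flagged phrase."""
    return PATTERN.search(text) is not None

deeds = [
    "This property shall not be sold to any person of African descent.",
    "Grantor conveys the parcel with all appurtenances thereto.",
]

# Keep only the deeds that match, for human review.
flagged = [d for d in deeds if flag_deed(d)]
```

The point being made in the thread maps directly onto this: the scan only shortlists candidates, and humans still review everything in `flagged`.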

[–] [email protected] 3 points 3 months ago (1 children)

Surely highlighting 5 million out of 24 million is more efficient than checking them all?

[–] [email protected] 3 points 3 months ago (2 children)

If you don't care about false negatives, maybe.

[–] [email protected] 5 points 3 months ago (1 children)

There are only so many historical synonyms for Black people; racist language should be searchable with few false negatives.

[–] [email protected] 1 points 3 months ago

No "AI" required~

[–] [email protected] 2 points 3 months ago (2 children)

false negatives

I don't get your logic here either. A false negative would have zero implications for anyone. It would have no legal standing or relevance.

[–] [email protected] 3 points 3 months ago

A false negative would have zero implications for anyone. It would have no legal standing or relevance.

I don't understand. In what way does allowing a racist deed covenant to stand unchallenged have zero implications or relevance?

If it did, then what would be the point of rooting them out in the first place?

[–] [email protected] 2 points 3 months ago

A false negative would, as I'm understanding the goal here, be a case where the AI missed an existing problem.

It wouldn't change the current state, though, so it wouldn't actively hurt anything, and it's plenty likely a human checker would have overlooked those misses and more.