this post was submitted on 14 Feb 2024
210 points (99.5% liked)

Technology

57453 readers
5333 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 1 year ago
MODERATORS
top 17 comments
[–] [email protected] 51 points 6 months ago (2 children)

Here's a wild idea: make them publish the exact criteria and formulae used to determine coverage. Their decisions should be verifiable and reproducible.

This isn't rocket science.

[–] [email protected] 24 points 6 months ago (2 children)
[–] [email protected] 16 points 6 months ago

They will add someone whose job it is to click OK on every decision the AI makes. Therefore the AI isn't making a decision; the human always clicking OK is.

[–] Reverendender 9 points 6 months ago

I'm sure it was a stern warning.

[–] [email protected] 14 points 6 months ago

AI will deny the care, and the denial will be rubber-stamped by a doctor who graduated last in his class, this being the only job he can get: acting as a traitor for the insurance companies.

[–] [email protected] 13 points 6 months ago

Oh, that's some serious finger wagging, sure to make them think twice.

[–] [email protected] 9 points 6 months ago

Yeah, sure, ok. We pinky promise not to use AI to generate leads that are then printed out on paper and put in front of a doctor's assistant's autopen for signatures denying insurance or coverage.

There is absolutely ZERO way to practically enforce this. An AI team can act like a black box, ingesting data and outputting hard copies that cannot be traced back to them. There is no way this will not happen.

"We'll audit the company!" -> they'll send the data to an offshore shell company that doesn't follow the law, then the recommendations will be sent back.

Prove that legislation can stop this, just try.

[–] [email protected] 7 points 6 months ago (1 children)

I am not from the US but it baffles me how someone can be cut off from health care in a supposed first world country.

[–] [email protected] 6 points 6 months ago

Because greed.

[–] [email protected] 3 points 6 months ago

Cruel AND unusual??

[–] [email protected] 3 points 6 months ago (1 children)

well what were they using before

[–] [email protected] 7 points 6 months ago (1 children)
[–] [email protected] 1 points 6 months ago

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=tCJcrIpgrr0

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] [email protected] 3 points 6 months ago* (last edited 6 months ago) (1 children)

Who needs "AI" when the simple algorithm they already use works perfectly well?

while True:
    deny_coverage = True
[–] [email protected] 2 points 6 months ago

I hate that you are absolutely right.

Medical directors do not see any patient records or put their medical judgment to use, said former company employees familiar with the system. Instead, a computer does the work. A Cigna algorithm flags mismatches between diagnoses and what the company considers acceptable tests and procedures for those ailments. Company doctors then sign off on the denials in batches, according to interviews with former employees who spoke on condition of anonymity.

“We literally click and submit,” one former Cigna doctor said. “It takes all of 10 seconds to do 50 at a time.”
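The process the former employees describe, an algorithm flagging diagnosis/procedure mismatches and doctors signing off on denials in batches, can be sketched roughly as follows. This is a minimal illustration of that kind of mismatch check, not Cigna's actual system; all data, rules, and names here are hypothetical.

```python
# Hypothetical sketch of the batch-denial flow described above.
# The "acceptable" table and the claims are made-up examples.
acceptable = {
    "sinusitis": {"nasal endoscopy"},
    "back pain": {"x-ray", "physical therapy"},
}

claims = [
    {"id": 1, "diagnosis": "sinusitis", "procedure": "ct scan"},
    {"id": 2, "diagnosis": "back pain", "procedure": "x-ray"},
]

# Flag any claim whose procedure isn't on the approved list for its
# diagnosis; flagged claims would then go to a doctor for batch sign-off.
flagged = [
    c for c in claims
    if c["procedure"] not in acceptable.get(c["diagnosis"], set())
]

print([c["id"] for c in flagged])
```

Note that nothing in this loop ever looks at the patient's record, which is exactly the complaint: the only human step left is the click.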