this post was submitted on 14 Dec 2023
273 points (99.3% liked)


Humana also using AI tool with 90% error rate to deny care, lawsuit claims: The AI model, nH Predict, is the focus of another lawsuit against UnitedHealth.

all 22 comments
[–] [email protected] 69 points 10 months ago* (last edited 10 months ago) (1 children)

Then it's not an error rate.

It's a "fuck humans for profit" rate

[–] [email protected] 13 points 10 months ago* (last edited 10 months ago)

Which makes it prostitution, and Republicans definitely don't like that (officially, anyway), right?

I'm sure a bipartisan agreement against this is coming right up.

[–] [email protected] 33 points 10 months ago (1 children)

Did they really need an AI tool? I worked in healthcare for years before this stuff came out, and back then they didn't need AI to blanket-deny 90% of claims without reading them. United Healthcare was/is even worse.

[–] [email protected] 18 points 10 months ago (1 children)

The difference is that now they don't have to pay people to do the denying.

[–] [email protected] 23 points 10 months ago* (last edited 10 months ago) (1 children)

Hopefully they use an AI lawyer to fight the massive pending class-action suit.

Edit: AiDvocate

[–] [email protected] 8 points 10 months ago (1 children)

AI Judge: DEATH BY SNU SNU!

[–] [email protected] 7 points 10 months ago* (last edited 10 months ago) (1 children)

I'd submit to that sentence

Rough but fair 💦

[–] [email protected] 7 points 10 months ago (1 children)

Perhaps a light spanking would be in order?

[–] [email protected] 4 points 10 months ago* (last edited 10 months ago) (1 children)

That the best u got?! I expect scarz

[–] [email protected] 4 points 10 months ago (1 children)
[–] [email protected] 1 points 10 months ago* (last edited 10 months ago)

Yoo. Hoo.

I've heard some guys actually like it when chicks step on their balls with their high heels, or in this case roll over them.

Edit: "track" marks lol

[–] [email protected] 14 points 10 months ago

For-profit healthcare is an oxymoron.

[–] [email protected] 11 points 10 months ago

So it’s working as intended for Humana?

[–] [email protected] 10 points 10 months ago* (last edited 10 months ago)
[–] [email protected] 7 points 10 months ago (1 children)

As long as it saves the company money, the error rate is not a concern.

[–] [email protected] 4 points 10 months ago

It’s a goal.

[–] [email protected] 4 points 10 months ago (2 children)

Is there proof of this? My mom works for Humana doing LTC, and neither she nor the medical directors she forwards cases to use this tool.

[–] [email protected] 7 points 10 months ago

It's probably on claim submission.

My company operates as an LTC pharmacy. We pay for every claim submission, whether it rejects or succeeds.

I was on the phone the other day with my pharmacy (Optum) and they did a "test" claim, which was free for them. I know Optum owns the pharmacy, the insurance, and the PBM, so either they're abusing their vertical integration or they have an "AI" to test claims.

[–] [email protected] 4 points 10 months ago

Proof is what the lawsuit is for.

[–] [email protected] 2 points 10 months ago

This is the best summary I could come up with:


Humana, one of the nation's largest health insurance providers, is allegedly using an artificial intelligence model with a 90 percent error rate to override doctors' medical judgment and wrongfully deny care to elderly people on the company's Medicare Advantage plans.

The lawsuit, filed in the US District Court in western Kentucky, is led by two people who had a Humana Medicare Advantage Plan policy and said they were wrongfully denied needed and covered care, harming their health and finances.

It is the second lawsuit aimed at an insurer's use of the AI tool nH Predict, which was developed by NaviHealth to forecast how long patients will need care after a medical injury, illness, or event.

In November, the estates of two deceased individuals brought a suit against UnitedHealth—the largest health insurance company in the US—for also allegedly using nH Predict to wrongfully deny care.

Humana did not respond to Ars' request for comment by the time this story initially published, but a spokesperson has since provided a statement, emphasizing that there is a "human in the loop" whenever AI tools are used.

In both cases, the plaintiffs claim that the insurers use the flawed model to pinpoint the exact date to blindly and illegally cut off payments for post-acute care that is covered under Medicare plans—such as stays in skilled nursing facilities and inpatient rehabilitation centers.


The original article contains 1,016 words, the summary contains 225 words. Saved 78%. I'm a bot and I'm open source!