The problem is that ML is very, very good at identifying medically related issues.
I worked on systems that identified drug/bug mismatches and other triggers that put patient health at risk. Our algorithms were proven to save lives, including in case studies of pregnant mothers. It worked really well.
The key is that it supplied notifications to a clinician. It did not make decisions. And it was not an LLM.
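To make the distinction concrete, here is a minimal sketch of what "notify, don't decide" looks like in practice. This is not the actual system; the rule table, names, and function are hypothetical, and a real implementation would be far more involved.

```python
# Hypothetical sketch of a rule-based drug/bug mismatch check that only
# surfaces an alert for a clinician to review. It never changes the order.
from dataclasses import dataclass

# Hypothetical rule table: organism -> drugs known to be ineffective against it.
KNOWN_MISMATCHES = {
    "MRSA": {"cefazolin", "nafcillin"},            # MRSA resists these beta-lactams
    "Pseudomonas aeruginosa": {"ceftriaxone"},     # ceftriaxone lacks coverage
}

@dataclass
class Alert:
    patient_id: str
    message: str

def check_drug_bug_mismatch(patient_id: str, organism: str, ordered_drug: str) -> Alert | None:
    """Return an Alert for a clinician to review, or None. Makes no decision."""
    if ordered_drug.lower() in KNOWN_MISMATCHES.get(organism, set()):
        return Alert(
            patient_id=patient_id,
            message=f"{ordered_drug} may not cover {organism}; please review the order.",
        )
    return None

# Usage: the alert lands in a clinician's worklist; the order itself is untouched.
alert = check_drug_bug_mismatch("pt-123", "MRSA", "Cefazolin")
if alert:
    print(alert.message)  # stand-in for a paging/inbox notification
```

The whole point is that the function's only output is a message for a human; there is no code path that approves, denies, or modifies treatment.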
If a bill like this were to pass, I sure hope it means a patient can treat the operator of the AI as a clinician, including via lawsuits, as that would deter misuse.
Edit: The more I think about this, the more I see it going down the road of health insurers denying coverage based on an AI and backing it up with this law instead of staffing reviewing clinicians. That would create a gray area for any lawsuit, since the AI wouldn't be the patient's doctor, but a "qualified reviewer."
I hate that I thought of that, because it means others have, too.
Edit 2: The sponsor's bill proposal history... Ugh. https://www.congress.gov/member/david-schweikert/S001183