The Use of Artificial Intelligence in Health Insurance: Flaws and the Future
The revolution of artificial intelligence in healthcare is already underway. Artificial intelligence and machine learning models promise to save time and money, savings that can translate into saved lives and better care. But how do we ensure the ethical and equitable use of this technology, especially when it comes to people's health? That is the question at the heart of three recent class action lawsuits against Cigna, UnitedHealth, and Humana: health insurance companies being sued for using artificial intelligence models to wrongly deny insurance claims. While the use of artificial intelligence in health insurance can improve efficiency, a human should remain the final decision maker, especially when people's livelihoods and lives are at stake.
In the cases of Barrows et al. v. Humana, Inc.; Estate of Gene B. Lokken et al. v. UnitedHealth Group Inc. et al.; and Kisting-Leung et al. v. Cigna Corporation et al., the insurance companies used artificial intelligence algorithms to make coverage decisions without the proper input of a healthcare provider. This led to the wrongful denial of many coverage claims, leaving patients unable to afford critical treatment and harming their health. For example, in Kisting-Leung et al. v. Cigna Corporation et al., Cigna used its PXDX system, which enables Cigna doctors to automatically reject payment for treatments that do not match certain preset criteria, in batches of hundreds or thousands at a time. [1] This evades the legally required individual physician review. According to internal documents analyzed by ProPublica, a nonprofit newsroom that investigates abuses of power, Cigna denied over 300,000 requests over two months in 2022, spending an average of 1.2 seconds reviewing each request. [2] In Estate of Gene B. Lokken et al. v. UnitedHealth Group Inc. et al. and Barrows et al. v. Humana, Inc., UnitedHealth and Humana used the "nH Predict" model to determine Medicare Advantage patients' post-acute care coverage, a model with an error rate of 90 percent. [3] The companies' reliance on these flawed artificial intelligence models was emboldened by the fact that only 0.2 percent of policyholders appeal their denied claims, leaving the rest to pay out-of-pocket or forgo their needed healthcare. [4]
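To make the mechanics of such a system concrete, the sketch below shows how a rules-based batch review might work in principle: each claim's procedure code is checked against a preset list of acceptable diagnosis codes, and every non-matching claim is flagged for denial with no record-level human review. This is a hypothetical illustration based only on the public description of PXDX; the names, data structures, and criteria are invented for the example and are not drawn from Cigna's actual system.

```python
# Hypothetical sketch of a rules-based batch claim review, modeled loosely on
# public descriptions of procedure-to-diagnosis matching. All names, codes,
# and criteria are invented for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    procedure_code: str
    diagnosis_code: str

# Preset criteria: for each procedure, the diagnosis codes that "match."
# A real system would hold thousands of such rules.
APPROVED_DIAGNOSES = {
    "76536": {"R10.9", "K80.20"},  # e.g., an ultrasound procedure
}

def batch_review(claims: list[Claim]) -> tuple[list[Claim], list[Claim]]:
    """Split a batch of claims into approvals and denials by rule matching.

    Note what is absent: no patient record is opened, and no clinician
    weighs in on any individual claim.
    """
    approved, denied = [], []
    for claim in claims:
        allowed = APPROVED_DIAGNOSES.get(claim.procedure_code, set())
        (approved if claim.diagnosis_code in allowed else denied).append(claim)
    return approved, denied

claims = [
    Claim("A-001", "76536", "R10.9"),  # diagnosis matches -> approved
    Claim("A-002", "76536", "E11.9"),  # no match -> denied in bulk
]
approved, denied = batch_review(claims)
print(f"approved={len(approved)}, denied={len(denied)}")
```

At the scale ProPublica reports (hundreds of thousands of claims in two months), a loop like this is what makes an average "review" time of 1.2 seconds arithmetically possible: the time reflects bulk sign-off, not individual reading.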
These class action lawsuits highlight two fatal flaws in the use of artificial intelligence in healthcare: a lack of transparency and a lack of medical provider input in coverage decisions. Using artificial intelligence to automate health insurance claims can greatly improve the efficiency with which insurance companies process claims, giving them a strong incentive to use these tools. In doing so, however, they fail to account for the human lives at stake in their decisions. Not only is there an ethical duty to thoroughly review individual claims, but in Cigna's case there is also a legal one. California insurance regulations (Cal. Code Regs. tit. 10, § 2695.7) stipulate that Cigna's medical directors must examine patient records, review coverage policies, and use their expertise to decide whether to approve or deny claims in a "thorough," "fair," and "objective" manner, so as to avoid unfair denials. [5] By letting artificial intelligence make that decision for them, they fail to fulfill their statutory obligation to review individual claims in said "thorough," "fair," and "objective" manner. In UnitedHealth's case, the medical professionals meant to review claims were not only ignored but actively discouraged from dissenting. UnitedHealth limited its employees' ability to disagree with the nH Predict model's decisions by setting targets to keep stays at skilled nursing facilities within 1 percent of the days projected by the model. According to a STAT News investigation, missing this target opened up the possibility of discipline or termination, even if the patient required more care. [6]
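As a purely arithmetic illustration of how tight that target is, the sketch below checks whether a stay falls within 1 percent of the model's projection, reading the target as applying to an individual stay for simplicity; the names and numbers are invented, based only on the STAT description of the adherence goal, and do not reflect UnitedHealth's actual metric.

```python
# Hypothetical illustration of a "within 1 percent of projection" target,
# based only on the public description of UnitedHealth's adherence goal.
# Names and numbers are invented for the example.

TARGET_TOLERANCE = 0.01  # stays must land within 1% of the projected days

def within_target(projected_days: float, actual_days: float) -> bool:
    """Return True if the actual stay is within 1% of the model projection."""
    return abs(actual_days - projected_days) <= TARGET_TOLERANCE * projected_days

projected = 20.0                      # model projects a 20-day stay
print(TARGET_TOLERANCE * projected)   # 0.2 days -> a tolerance under 5 hours
print(within_target(projected, 20))   # True: matches the projection
print(within_target(projected, 21))   # False: one extra day misses the target
```

A tolerance that small leaves case managers essentially no room to extend care beyond what the model projects, which is the dynamic the STAT investigation describes.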
If artificial intelligence were implemented ethically, it could improve the healthcare system and public health outcomes. Artificial intelligence has been shown to outperform healthcare providers in a variety of areas and can help address deficits in healthcare. Artificial intelligence tools have outperformed radiologists in speed and accuracy at identifying malignancies in breast imaging. [7] Additionally, a recent study found that an online chatbot provided higher quality and more empathetic responses to patient questions than physicians did. [8] Artificial intelligence can also assist with "a patient's health literacy, communication skills, or cultural barriers between provider and patient." [9] While this technology is promising, it is important to remember that, unlike in the cases of Cigna, Humana, and UnitedHealth, artificial intelligence should be used to assist in clinical decision-making rather than to replace human judgment.
How do we ensure the ethical use of artificial intelligence in health insurance? Trust. To build an ethical environment, not only in the health insurance industry but in healthcare as a whole, we need to cultivate trust between insurers and the patients they cover. From a legal standpoint, trust also derives from overcoming asymmetric information. In health insurance, the insured gives the insurer personal information to support a claim decision but has no view into how the insurer makes that decision. There is an expectation of good faith on the part of the insured to provide accurate information, and an expectation of integrity on the part of the insurer to make a fair and thorough decision after reviewing that information. The Organisation for Economic Co-operation and Development, a key intergovernmental organization in shaping the global economic agenda, developed the value-based principles of "(i) inclusive growth, sustainable development, and well-being; (ii) human-centered values and fairness; (iii) transparency and explainability (i.e., that processes are evident and interpretable); (iv) robustness, security and safety; and (v) accountability" for the responsible stewardship of trustworthy artificial intelligence. [10] With these principles, we can ensure trustworthy artificial intelligence and better, more ethical outcomes for the people who depend on it.
Edited by Ava Betanco-Born
Endnotes
[1] Kisting-Leung et al. v. Cigna Corporation et al., No. 2:23-CV-01477, (E.D. Cal., Jul 24, 2023), https://litigationtracker.law.georgetown.edu/wp-content/uploads/2023/08/Kisting-Leung_20230724_COMPLAINT.pdf
[2] Rucker, Patrick, Maya Miller, and David Armstrong. 2023. “How Cigna Saves Millions by Having Its Doctors Reject Claims without Reading Them.” ProPublica. March 25, 2023. https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims.
[3] Estate of Gene B. Lokken et al. v. UnitedHealth Group Inc. et al., No. 23-CV-03514, (D. Minn., Nov 14, 2023), https://litigationtracker.law.georgetown.edu/wp-content/uploads/2023/11/Estate-of-Gene-B.-Lokken-et-al_20231114_COMPLAINT.pdf
[4] Barrows et al. v. Humana, Inc., No. 3:23-CV-00654, (W.D. Ky., Dec 12, 2023), https://litigationtracker.law.georgetown.edu/wp-content/uploads/2023/12/Barrows_2023.12.12_COMPLAINT.pdf
[5] Estate of Gene B. Lokken et al. v. UnitedHealth Group Inc. et al., No. 23-CV-03514, (D. Minn., Nov 14, 2023), https://litigationtracker.law.georgetown.edu/wp-content/uploads/2023/11/Estate-of-Gene-B.-Lokken-et-al_20231114_COMPLAINT.pdf
[6] Ross, Casey, and Bob Herman. 2023. “UnitedHealth Pushed Employees to Follow an Algorithm to Cut off Medicare Patients’ Rehab Care.” STAT. November 14, 2023. https://www.statnews.com/2023/11/14/unitedhealth-algorithm-medicare-advantage-investigation/.
[7] McKinney, Scott Mayer, Marcin Sieniek, Varun Godbole, Jonathan Godwin, Natasha Antropova, Hutan Ashrafian, Trevor Back, et al. 2020. “International Evaluation of an AI System for Breast Cancer Screening.” Nature 577 (7788): 89–94. https://doi.org/10.1038/s41586-019-1799-6.
[8] Ayers, John W., Adam Poliak, Mark Dredze, Eric C. Leas, Zechariah Zhu, Jessica B. Kelley, Dennis J. Faix, et al. 2023. “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum.” JAMA Internal Medicine 183 (6): 589–96. https://doi.org/10.1001/jamainternmed.2023.1838.
[9] MacIntyre, Michael R., Richie C. Cockerill, Omar F. Mirza, and Jacob M. Appel. 2023. “Ethical Considerations for the Use of Artificial Intelligence in Medical Decision-Making Capacity Assessments.” Psychiatry Research 328 (September): 115466. https://www.sciencedirect.com/science/article/pii/S016517812300416X.
[10] Ho, Calvin W L, Joseph Ali, and Karel Caals. 2020. “Ensuring Trustworthy Use of Artificial Intelligence and Big Data Analytics in Health Insurance.” Bulletin of the World Health Organization 98 (4): 263–69. https://pmc.ncbi.nlm.nih.gov/articles/PMC7133481/.