The definition of bias within artificial intelligence (AI) is pretty straightforward: PwC defines it as “AI that makes decisions that are systematically unfair to certain groups of people.”
Research has highlighted the profound impact of bias embedded in AI algorithms. A recent study by scientists at the University of Deusto in Spain reinforced the idea that AI “inherits” bias from humans. Even more alarming, the researchers' data demonstrated the reverse: humans also absorb bias from AI. Sadly, the effects of that bias can remain long after people stop using the algorithm.
While the problem of algorithmic bias is becoming widespread as more people incorporate tools like ChatGPT into their daily lives, nowhere is its impact felt more acutely than in the health care industry. Providers, plans, and other health care organizations have been using machine learning algorithms for years, and unfortunately, we’ve already seen harm from bias embedded in AI-driven systems and tools. For example:
As more research and stories like these emerge, health equity experts, news media, and others are pressuring technology companies to better understand how machine learning algorithms promote negative stereotypes and perpetuate discrimination.
Biases in the data used to train algorithms can lead to models that perform well for specific populations but fail to capture the complexity and needs of diverse and marginalized groups. This, in turn, can result in delayed or insufficient care that can significantly impact health outcomes.
Algorithmic bias is equally concerning when models are used for predictive analytics for patient health outcomes, risk assessments, and resource allocation. If an algorithm is trained predominantly on data from one demographic, it may not accurately predict the needs or outcomes of another; when its results are inappropriately generalized, the recommendations it produces for underrepresented populations can be less effective or even harmful. This is problematic enough for care delivery, but when biased algorithms inform broader planning or policy decisions, the consequences could be catastrophic.
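One common safeguard against this failure mode is to evaluate a model separately for each demographic subgroup rather than only in aggregate. The sketch below is a minimal illustration of that idea in Python; the dataset, column names (age_group, readmitted, predicted), and metric are all hypothetical, not drawn from any specific health plan's workflow. A large gap in recall between subgroups is a signal that the model may not generalize to underrepresented populations.

```python
# Sketch: compare model performance across demographic subgroups.
# Column names ("age_group", "readmitted", "predicted") are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score

def recall_by_group(df, group_col, y_true_col, y_pred_col):
    """Recall computed per subgroup; large gaps flag potential bias."""
    return df.groupby(group_col).apply(
        lambda g: recall_score(g[y_true_col], g[y_pred_col], zero_division=0)
    )

# Hypothetical evaluation set: true outcomes, model predictions, subgroup label.
eval_df = pd.DataFrame({
    "age_group": ["18-39", "18-39", "65+", "65+", "65+"],
    "readmitted": [1, 0, 1, 1, 0],
    "predicted":  [1, 0, 0, 1, 0],
})
print(recall_by_group(eval_df, "age_group", "readmitted", "predicted"))
```

In practice this kind of disaggregated evaluation would run on much larger held-out data and across multiple metrics, but the structure is the same: never let an aggregate score hide a subgroup the model serves poorly.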
If left unchecked, bias within AI health care systems and tools can lead to a multitude of adverse impacts:
“Unchecked AI bias can perpetuate and deepen inequities in health care, leading to a cascade of negative consequences for individuals and the health care system. Addressing these issues proactively is critical to ensure that health care AI fulfills its potential to improve care and make it more accessible and effective for everyone.” —Anjoli Punjabi, MOBE Director of Program Health Outcomes and Health Equity
AI is only as good as the data it learns from, and historically, health care data has been plagued by inequities based on how the data is collected, categorized, and disseminated. As AI becomes further entrenched in health care systems, these biases can become automated and invisible. AI is expected to play a larger role in diagnostic processes, treatment recommendations, and patient interactions through digital health platforms. If the AI's foundational data, categorization norms, and labeling systems are not rigorously evaluated and continuously updated for bias, the consequences could range from inequitable resource allocation to disparities in disease management and outcome predictions. Equally important, processes must be in place to prevent bias from occurring within health care systems. Careful training and quality assurance standards can go a long way toward reducing the risk of algorithmic bias.
Fortunately, many organizations are assisting with bias prevention, detection, and correction.
In 2022, the National Committee for Quality Assurance (NCQA) updated its Health Equity Strategy initiatives to include a Health Plan Accreditation requirement that directly addresses potential bias in AI by mandating that health plans identify and mitigate potential biases in the segmentation and stratification of populations. Under the new NCQA Population Health Management standards (PHM2), health plans are instructed to employ rigorous validation and testing protocols for their AI systems, ensuring the data is representative and the algorithms are transparent and fair.
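As a rough illustration of what a representativeness check might look like, the sketch below compares hypothetical subgroup proportions in a training sample against a reference population and flags any group that falls well below its expected share. The group names, percentages, and threshold are illustrative assumptions, not figures from NCQA guidance.

```python
# Sketch: flag subgroups that are under-represented in training data
# relative to a reference population. All figures are hypothetical.
reference_population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
training_sample      = {"group_a": 0.78, "group_b": 0.17, "group_c": 0.05}

THRESHOLD = 0.8  # flag groups present at less than 80% of their expected share

for group, expected in reference_population.items():
    observed = training_sample.get(group, 0.0)
    ratio = observed / expected
    status = "OK" if ratio >= THRESHOLD else "UNDER-REPRESENTED"
    print(f"{group}: expected {expected:.0%}, observed {observed:.0%} -> {status}")
```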
Public health experts, technology providers, data analytics professionals, and others continue to work to mitigate AI bias in health care. This includes reviewing every stage of how algorithms are developed, from data entry and cleaning to model selection to implementation and dissemination of results. Better training initiatives can also be highly effective in helping technology and health care professionals spot algorithmic bias before it becomes embedded in AI tools and systems.
In 2023, MOBE formed a committee to identify and address algorithmic bias. This diverse group of data analysts, health equity experts, executives, and other MOBE professionals continues to discuss ways to detect, address, correct, and prevent AI bias as an organization and in our industry. While significant measures already exist, the MOBE team continuously pursues ways to mitigate potential AI bias, including:
There is much work to be done as we learn better, more effective ways to reduce and remove bias from AI systems now and in the future. MOBE is committed to being a leader in the effort to provide fair, equitable health care for everyone.
“It has always been our philosophy at MOBE to do everything possible to prevent new bias within our algorithms. Our models exclude features like race, ethnicity, language, and ZIP codes. We’re also smart about not perpetuating bias that might already exist in a dataset by avoiding training our models on any features we know have bias embedded within them.” —Travis Hoyt, Chief Information and Analytics Officer
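A minimal sketch of the kind of safeguard Hoyt describes might look like the following; the column names and exclusion list are hypothetical, but the idea is simply to drop sensitive attributes and known proxies such as ZIP code from the feature set before a model is ever trained.

```python
# Sketch: drop sensitive attributes and known proxies before model training.
# Column names and the exclusion list are hypothetical.
import pandas as pd

EXCLUDED_FEATURES = ["race", "ethnicity", "language", "zip_code"]

def prepare_features(df):
    """Return a copy of the data with sensitive columns removed."""
    return df.drop(columns=[c for c in EXCLUDED_FEATURES if c in df.columns])

claims = pd.DataFrame({
    "age": [34, 67],
    "chronic_conditions": [1, 3],
    "zip_code": ["55101", "55418"],
    "language": ["en", "es"],
})
X = prepare_features(claims)   # keeps only age and chronic_conditions
print(list(X.columns))
```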