The COVID-19 pandemic revealed disturbing data about health inequity. In 2020, the National Institutes of Health (NIH) published a report stating that Black Americans died from COVID-19 at higher rates than White Americans, even though they make up a smaller percentage of the population. According to the NIH, these disparities stemmed from limited access to care, inadequacies in public policy and a disproportionate burden of comorbidities, including cardiovascular disease, diabetes and lung disease.
The NIH further stated that between 47.5 million and 51.6 million Americans cannot afford to go to a doctor. There is a high likelihood that people in historically underserved communities will use a generative transformer, especially one embedded in a search engine without their knowledge, to ask for medical advice. It is not inconceivable that individuals would go to a popular search engine with an embedded AI agent and query, “My dad can’t afford the heart medication that was prescribed to him anymore. What is available over the counter that may work instead?”
According to researchers at Long Island University, ChatGPT answered medication-related questions inaccurately or incompletely about 75% of the time, and according to CNN, the chatbot sometimes furnished dangerous advice, such as approving the combination of two medications that could cause serious adverse reactions.
Because generative transformers do not understand meaning and can produce erroneous outputs, historically underserved communities that use this technology in place of professional help may be harmed at far greater rates than others.
How can we proactively invest in AI for more equitable and trustworthy outcomes?
With today’s new generative AI products, trust, security and regulatory issues remain top concerns for government healthcare officials and C-suite leaders representing biopharmaceutical companies, health systems, medical device manufacturers and other organizations. Using generative AI requires AI governance, including conversations around appropriate use cases and guardrails around safety and trust (see the US Blueprint for an AI Bill of Rights, the EU AI Act and the White House AI Executive Order).
Curating AI responsibly is a sociotechnical challenge that requires a holistic approach. There are many elements required to earn people’s trust, including making sure that your AI model is accurate, auditable, explainable, fair and protective of people’s data privacy. Institutional innovation can also play a role.
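To make properties like “accurate” and “fair” concrete, here is a minimal, hypothetical sketch in Python of two pre-deployment checks a governance process might run: overall accuracy and a demographic parity gap across patient groups. The function names, toy data and metric choice are illustrative assumptions, not a prescribed method.

```python
# Illustrative sketch only: two of the trust checks named above.
# All names, data and thresholds here are hypothetical.
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of predictions that match the ground truth."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups.
    A gap near 0 suggests the model treats groups similarly; a large gap
    is a signal to investigate before deployment."""
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Toy example: predictions for patients from two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(f"accuracy: {accuracy(y_true, y_pred):.2f}")
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
```

A real audit would go further, using richer fairness metrics, held-out clinical data and human review, but even a simple check like this makes the abstract goal of fairness measurable.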
Institutional innovation: A historical note
Institutional change is often preceded by a cataclysmic event. Consider the evolution of the US Food and Drug Administration, whose primary role is to make sure that food, drugs and cosmetics are safe for public use. While this regulatory body’s roots can be traced back to 1848, monitoring drugs for safety was not a direct concern until 1937—the year of the Elixir Sulfanilamide disaster.
Created by a respected Tennessee pharmaceutical firm, Elixir Sulfanilamide was a liquid medication touted as a dramatic cure for strep throat. As was common at the time, the drug was not tested for toxicity before it went to market. This turned out to be a deadly mistake: the elixir contained diethylene glycol, a toxic chemical used in antifreeze. Over 100 people died from taking the poisonous elixir, which led to the 1938 Federal Food, Drug, and Cosmetic Act requiring drugs to be labeled with adequate directions for safe use. This major milestone in FDA history made sure that physicians and their patients could fully trust the strength, quality and safety of medications—an assurance we take for granted today.
Similarly, institutional innovation is required to ensure equitable outcomes from AI.
5 key steps to make sure generative AI supports the communities that it serves
The use of generative AI in the healthcare and life sciences (HCLS) field calls for the same kind of institutional innovation that the Elixir Sulfanilamide disaster prompted at the FDA. The following recommendations can help make sure that all AI solutions achieve more equitable and just outcomes for vulnerable populations:
We believe that we can and must learn from the FDA’s example and institutionally innovate our approach to transforming operations with AI. The journey to earning people’s trust starts with making systemic changes that make sure AI better reflects the communities it serves.
Learn how to weave responsible AI governance into the fabric of your business