Delivering responsible AI in the healthcare and life sciences industry

January 3, 2024

The COVID-19 pandemic revealed disturbing data about health inequity. In 2020, the National Institutes of Health (NIH) published a report stating that Black Americans died from COVID-19 at higher rates than White Americans, even though they make up a smaller percentage of the population. According to the NIH, these disparities were due to limited access to care, inadequacies in public policy and a disproportionate burden of comorbidities, including cardiovascular disease, diabetes and lung diseases.

The NIH further stated that between 47.5 million and 51.6 million Americans cannot afford to go to a doctor. There is a high likelihood that historically underserved communities may use a generative transformer, especially one embedded in a search engine without the user’s knowledge, to ask for medical advice. It is not inconceivable that individuals would go to a popular search engine with an embedded AI agent and query, “My dad can’t afford the heart medication that was prescribed to him anymore. What is available over the counter that may work instead?”

According to researchers at Long Island University, ChatGPT answered drug-related questions inaccurately or incompletely about 75% of the time, and according to CNN, the chatbot sometimes furnished dangerous advice, such as approving the combination of two medications that could cause serious adverse reactions.

Given that generative transformers do not understand meaning and can produce erroneous outputs, historically underserved communities that use this technology in place of professional help may be harmed at far greater rates than others.

How can we proactively invest in AI for more equitable and trustworthy outcomes?

With today’s new generative AI products, trust, security and regulatory issues remain top concerns for government healthcare officials and C-suite leaders representing biopharmaceutical companies, health systems, medical device manufacturers and other organizations. Using generative AI requires AI governance, including conversations around appropriate use cases and guardrails around safety and trust (see the US Blueprint for an AI Bill of Rights, the EU AI Act and the White House AI Executive Order).

Curating AI responsibly is a sociotechnical challenge that requires a holistic approach. There are many elements required to earn people’s trust, including making sure that your AI model is accurate, auditable, explainable, fair and protective of people’s data privacy. And institutional innovation can help.

Institutional innovation: A historical note

Institutional change is often preceded by a cataclysmic event. Consider the evolution of the US Food and Drug Administration, whose primary role is to make sure that food, drugs and cosmetics are safe for public use. While this regulatory body’s roots can be traced back to 1848, monitoring drugs for safety was not a direct concern until 1937—the year of the Elixir Sulfanilamide disaster.

Created by a respected Tennessee pharmaceutical firm, Elixir Sulfanilamide was a liquid medication touted as a cure for strep throat. As was common at the time, the drug was not tested for toxicity before it went to market. This proved a deadly mistake, as the elixir contained diethylene glycol, a toxic chemical used in antifreeze. More than 100 people died from taking the poisonous elixir, which led to the passage of the 1938 Food, Drug and Cosmetic Act, requiring that drugs be labeled with adequate directions for safe use. This major milestone in FDA history made sure that physicians and their patients could fully trust the strength, quality and safety of medications, an assurance we take for granted today.

Similarly, institutional innovation is required to ensure equitable outcomes from AI.

5 key steps to make sure generative AI supports the communities that it serves

The use of generative AI in the healthcare and life sciences (HCLS) field calls for the same kind of institutional innovation that the Elixir Sulfanilamide disaster forced on the FDA. The following recommendations can help make sure that AI solutions achieve more equitable and just outcomes for vulnerable populations:

1. Operationalize principles for trust and transparency. Fairness, explainability and transparency are big words, but what do they mean in terms of functional and non-functional requirements for your AI models? You can tell the world that your AI models are fair, but you must make sure that you train and audit your AI model to serve the most historically underserved populations (a minimal auditing sketch follows this list). To earn the trust of the communities it serves, AI must have proven, repeatable, explained and trusted outputs that perform better than a human.
2. Appoint individuals to be accountable for equitable outcomes from the use of AI in your organization, then give them the power and resources to do the hard work. Verify that these domain experts have a fully funded mandate, because without accountability there is no trust. Someone must have the power, mindset and resources to do the work necessary for governance.
3. Empower domain experts to curate and maintain trusted sources of data that are used to train models. These trusted sources can ground the content of products that use large language models (LLMs), so that the models provide variations on language for answers that come directly from a trusted source, such as an ontology or semantic search (see the grounding sketch after this list).
4. Mandate that outputs be auditable and explainable. For example, some organizations are investing in generative AI that offers medical advice to patients or doctors. To encourage institutional change and protect all populations, these HCLS organizations should be subject to audits to ensure accountability and quality control. Outputs of these high-risk models should offer test-retest reliability (sketched below), be 100% accurate, and detail their data sources along with supporting evidence.
5. Require transparency. As HCLS organizations integrate generative AI into patient care (for example, automated patient intake when checking into a US hospital, or helping a patient understand what would happen during a clinical trial), they should inform patients that a generative AI model is in use. Organizations should also offer patients interpretable metadata detailing the accountability and accuracy of the model, the source of its training data and its audit results (see the metadata sketch after this list). The metadata should also show how a user can opt out of using that model and get the same service elsewhere. And as organizations use and reuse synthetically generated text in a healthcare environment, people should be told which data has been synthetically generated and which has not.
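
To make step 1 concrete, here is a minimal sketch of how "fairness" can become a testable requirement: audit a model's accuracy per demographic group and fail the audit if any group falls below a bar. The model outputs, group labels and 90% threshold below are illustrative assumptions, not anything prescribed by this article.

```python
# Minimal sketch: subgroup accuracy as a functional fairness requirement.
# Predictions, labels, groups and the threshold are hypothetical examples.
from collections import defaultdict

def subgroup_accuracy(predictions, labels, groups):
    """Return accuracy per demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit data: model outputs, ground truth, patient group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
truth  = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

THRESHOLD = 0.90  # assumed non-functional requirement: no group below 90%
for group, acc in subgroup_accuracy(preds, truth, groups).items():
    print(f"group {group}: accuracy={acc:.2f} "
          f"[{'PASS' if acc >= THRESHOLD else 'FAIL'}]")
```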
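
One way to read the "content grounding" of step 3 concretely: retrieve the answer from a curated source first, and let the language model only rephrase it. The sketch below stands in for semantic search with simple keyword overlap against a hypothetical curated FAQ; the TRUSTED_FAQ entries and the decision to stub out the LLM call are assumptions for illustration.

```python
# Minimal sketch of grounding: answers come from a curated source; an LLM
# (stubbed out here) would only rephrase the retrieved text, never invent facts.
import re

# Hypothetical curated Q&A entries maintained by domain experts.
TRUSTED_FAQ = {
    "can i substitute an over-the-counter drug for prescribed heart medication":
        "Do not substitute medications without consulting your prescriber or pharmacist.",
    "what should i do if i cannot afford my medication":
        "Ask your prescriber about generic alternatives and patient assistance programs.",
}

def tokenize(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(question):
    """Return the best-matching trusted answer by keyword overlap, or None."""
    q_words = tokenize(question)
    best, best_score = None, 0
    for key, answer in TRUSTED_FAQ.items():
        score = len(q_words & tokenize(key))
        if score > best_score:
            best, best_score = answer, score
    return best

def answer(question):
    grounded = retrieve(question)
    if grounded is None:
        return "No trusted source covers this question; please consult a clinician."
    # A production system might ask an LLM to rephrase `grounded` for
    # readability, but the substance stays pinned to the retrieved text.
    return grounded

print(answer("My dad can't afford his heart medication. "
             "What is available over the counter that may work instead?"))
```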
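
The test-retest reliability of step 4 can be checked mechanically: send the same prompt repeatedly and measure how often the outputs agree. In this sketch, query_model is a hypothetical stand-in for any real model API; a high-risk clinical model should score 100% agreement.

```python
# Minimal sketch: test-retest reliability as agreement across repeated runs.
# query_model is a hypothetical stand-in for a real model API call.
from collections import Counter

def query_model(prompt, run):
    # Stub: a deterministic model returns the same text on every run.
    return "Consult your prescriber before changing any heart medication."

def test_retest(prompt, runs=10):
    """Fraction of runs matching the most common output (1.0 = fully reliable)."""
    outputs = [query_model(prompt, i) for i in range(runs)]
    most_common_count = Counter(outputs).most_common(1)[0][1]
    return most_common_count / runs

score = test_retest("Is it safe to combine drug X with drug Y?")
print(f"test-retest agreement: {score:.0%}")
```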
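
Finally, the interpretable metadata of step 5 could be published alongside each model as a machine-readable card. The fields below mirror the items the article lists (accountability, accuracy, training data sources, audit results, opt-out path, synthetic data use); the schema itself and every value in it are illustrative assumptions, not a standard.

```python
# Minimal sketch: a machine-readable transparency card mirroring step 5.
# The schema and all field values are illustrative assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class TransparencyCard:
    model_name: str
    accountable_owner: str      # who answers for equitable outcomes
    reported_accuracy: float    # from the most recent audit
    training_data_sources: list # trusted sources used to train the model
    last_audit_result: str
    opt_out_instructions: str   # how to get the same service without the model
    uses_synthetic_data: bool

# Hypothetical values for an assumed patient-intake model.
card = TransparencyCard(
    model_name="patient-intake-assistant",
    accountable_owner="Chief AI Ethics Officer",
    reported_accuracy=0.97,
    training_data_sources=["curated clinical FAQ", "de-identified intake forms"],
    last_audit_result="passed external audit, January 2024",
    opt_out_instructions="Ask front-desk staff for human-led intake.",
    uses_synthetic_data=False,
)
print(json.dumps(asdict(card), indent=2))
```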

We believe that we can and must learn from the FDA to institutionally innovate our approach to transforming our operations with AI. The journey to earning people’s trust starts with making systemic changes that make sure AI better reflects the communities it serves.

Learn how to weave responsible AI governance into the fabric of your business

Gautham Nagabhushana, Partner, Data & Technology Transformation – Healthcare, Public Markets, IBM

Global Leader for Trustworthy AI, IBM Consulting


