Overview
Pre-Module Survey
Module
Recommendations
Cases
Post-Module Survey
References
About Module

Instructions
For any questions about the content or to report technical problems, contact Benjamin Collins at AlgorithmicBiasModule@gmail.com.

  • Left Arrow: move to the previous slide
  • Right Arrow: move to the next slide
  • Play/Pause: toggle between playing and pausing the audio on a slide
  • Autoplay: continues to the next slide automatically
  • Information: view these instructions again
  • Slide Counter: shows your progress through the module by the number of slides completed
  • Navigator: opens a menu to skip forward or back to a section
  • Show Transcript: opens a caption with the transcript for the current slide
  • References: opens a caption with the references for the current slide
Objectives

In this module you will learn to...

(1) Recognize different forms of algorithmic bias in the use of artificial intelligence for health care

(2) Deconstruct real-life examples of algorithmic bias

(3) Appreciate the importance of recognizing and addressing algorithmic bias in health care

Agenda

Core Terminology

Main Concepts

Examples

Summary and Recommendations

Cases

What is Artificial Intelligence?


[Diagram: Computer Science, Digital Algorithms, Complex Decisions]
What is Machine Learning?


[Diagram: machine learning finds patterns of data, turning inputs into outputs and predictions]
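To make the diagram concrete, here is a minimal sketch of what "finding patterns in data" can look like in code. The data, the single made-up feature, and the choice of scikit-learn's LogisticRegression are illustrative assumptions, not part of the module.

```python
# A minimal, illustrative sketch: a model learns a pattern from input/output
# pairs, then makes predictions on new inputs. The numbers are made up.
from sklearn.linear_model import LogisticRegression

X_train = [[0.2], [0.4], [0.6], [0.8], [1.0], [1.2]]  # inputs (one made-up feature)
y_train = [0, 0, 0, 1, 1, 1]                          # outputs (e.g., 0 = no disease, 1 = disease)

model = LogisticRegression()
model.fit(X_train, y_train)            # "learning": find the pattern in the data

print(model.predict([[0.3], [1.1]]))   # "prediction": apply the pattern to new inputs
# Expected to print something like: [0 1]
```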
What is an Algorithm?
[Animation: input instructions are followed to give the output produced; the slide steps through Version 1 through Version 4 of the instructions]
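As a concrete illustration (ours, not the module's), an algorithm is simply a fixed set of instructions that turns an input into an output. The body mass index calculation below is a hypothetical stand-in for whatever instructions a particular version of an algorithm encodes.

```python
# A minimal sketch of an "algorithm": input instructions that produce an output.
def body_mass_index(weight_kg: float, height_m: float) -> float:
    """Follow a fixed set of instructions on the inputs."""
    return weight_kg / (height_m ** 2)

print(round(body_mass_index(70.0, 1.75), 1))  # output produced: 22.9
```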
What is Bias?

[Diagram: data points marked X scattered around the Truth, illustrating two forms of bias: Social Bias and Statistical Bias]
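The difference between random noise and statistical bias can be shown with a short simulation. This sketch is illustrative only; the "true" value and the size of the offset are made-up numbers.

```python
# Illustrative sketch of statistical bias: measurements that are systematically
# shifted away from the truth, not just noisy around it.
import random

random.seed(0)
truth = 98.0  # the value we are trying to measure (made up)

noisy   = [truth + random.gauss(0, 1) for _ in range(10_000)]      # unbiased: scatter around truth
shifted = [truth + 2 + random.gauss(0, 1) for _ in range(10_000)]  # biased: systematic +2 offset

print(round(sum(noisy) / len(noisy), 1))      # ~98.0: averaging recovers the truth
print(round(sum(shifted) / len(shifted), 1))  # ~100.0: stays off target no matter how much data is collected
```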
What is Algorithmic Bias?

[Diagram: Data Input, Methods, Human Judgment]
Concept #1 - Biased Data
Garbage in, garbage out
[Diagram: Biased Data In → Biased Data Out]
Preexisting bias is captured in the data.
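A short, hypothetical sketch of "garbage in, garbage out": if the historical decisions in the training data treated two groups differently at the same level of need, a model trained on those decisions learns to do the same. The data, features, and model below are all made up for illustration.

```python
# Illustrative only: a model trained on biased historical decisions reproduces the bias.
from sklearn.linear_model import LogisticRegression

# Each row: [symptom_severity, group]; historically, group 1 was referred
# less often than group 0 at the same severity (made-up data).
X = [[3, 0], [5, 0], [7, 0], [9, 0],
     [3, 1], [5, 1], [7, 1], [9, 1]]
y = [0, 1, 1, 1,   # group 0: referred from severity 5 upward
     0, 0, 0, 1]   # group 1: referred only at severity 9

model = LogisticRegression().fit(X, y)

# Same severity, different group: the learned model scores group 1 lower.
print(model.predict_proba([[7, 0], [7, 1]])[:, 1])
```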
More on Coding Bias into AI Systems


  • Data collection forms
  • Decisions about which variables to include
  • Access to care
  • Cost of care
  • Rates of (mis)diagnosis
Avoiding biased data
  • Diverse
  • Accessible
  • Standardized
  • New
  • Accurate
  • Complete
Concept #2 - Transparency and Explainability

"Any sufficiently advanced technology is indistinguishable from magic." -Arthur C. Clarke

The Black Box
Expanding Transparency and Explainability
  • Hindered by commercialization
  • There is a burden on clinicians to learn more
  • (Mis)trust in AI
  • Ability to audit AI systems
  • AI systems should provide a level of certainty (see the sketch below)
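As a hypothetical illustration of reporting certainty: many classification models can return a probability alongside the prediction, which is one simple way an AI system can tell the user how certain it is. The data and model here are made up, not drawn from any system discussed in the module.

```python
# Illustrative only: an output that reports a level of certainty, not a bare yes/no.
from sklearn.linear_model import LogisticRegression

X_train = [[0.2], [0.4], [0.6], [0.8], [1.0], [1.2]]  # made-up feature values
y_train = [0, 0, 0, 1, 1, 1]
model = LogisticRegression().fit(X_train, y_train)

new_case = [[0.95]]
prediction = int(model.predict(new_case)[0])
certainty = float(model.predict_proba(new_case)[0].max())
print(f"prediction: {prediction}, certainty: {certainty:.0%}")
# A clinician can weigh a 55% answer very differently from a 99% answer.
```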

Medium-Level of Trust

At a medium level of trust, you may have some idea of how well the system works for a test population, but you may not know the characteristics of that population, so it is unclear whether you should apply the system to your individual patient. At this point, you would want to seek out more information about the system and how it was trained. A system that does not report its level of certainty to the user is a reason to hold off on having a high level of trust in it. Additionally, even if a system initially performs well, not being able to audit the system should suggest not having a high level of trust, as system performance may not remain high over time.

High-Level of Trust

To have a high level of trust in an AI system, you should have confidence in how well it functions generally and for your patient specifically. The output should show you the system's level of certainty, and experts should be able to audit the system over time to ensure that it continues to function well. It must also be recognized that a high level of trust does not always match system performance. Automation bias is a type of bias in which people implicitly trust an automated system even when there is no real evidence that the system is trustworthy.

Low-Level of Trust

A low level of trust can imply either a system that you know very little about or a system that performs poorly. For a system you know little about, a low level of trust can be a reasonable starting point when adopting it, and trust can grow as you learn more about the system. If the system is performing poorly, it should not be used. However, because of a lack of transparency, it can sometimes be difficult to know when a system is performing poorly. The inability to audit a system is a reason to hold a lower level of trust in it, especially if there was no evidence that the system was performing well in the first place.
Concept #3 - Regulation of AI in Health Care
Limits to the regulation of AI systems
  • The FDA recognizes AI systems as medical devices
  • Legal response lags behind technology development
  • FDA has not directly addressed concerns about algorithmic bias
  • Uncertainty about liability in the use of AI
  • Industry can exert significant influence over AI regulation
Concept #4 - Risk of harm from algorithmic bias
[Diagram: Potential Harm across the stages Planning → Data Collection → Programming → Training → Validation → Deployment]
Racism and Colorism
  • Racial minorities are underrepresented in most studies

  • Developers are not representative of minority populations
Avoiding Harm
  • All new clinical tools should be evaluated for safety
  • Bioethicists should be involved in the design of AI systems
  • Patients expect their clinicians to ensure systems are safe
Concept #5 - Transformation of health care
Datafication and Digitalization of Health Care
Paradigm Shift
750 quadrillion bytes of data daily
Melanoma Diagnosis
Data Collection with Pulse Oximeter