Artificial intelligence is full of technological and economic promise, but, just like its creators, AI isn’t free from subconscious discrimination. As AI becomes more commonplace in the medical field, questions of whether racial bias will be mitigated or expanded are omnipresent. The answer will depend on how much effort is put into making AI more equitable. Join Lee Hawkins and Drs. Maia Hightower and Ivor Horn as they delve into this new frontier.
“Algorithmic bias is part of our history. It is part of the history of medicine, part of the history of the United States, and part of the history of our world, for many reasons.” – Dr. Maia Hightower
“And when I think about technology, it’s all about how I’m giving people more information, more access, so that when they walk through the doors of a health care system, they have the tools to say: I know this, I understand this, this is my question for you, and this is what I expect of this health care system for me and for my family.” – Ivor Horn, M.D., MPH
Read the transcript:
Lee Hawkins: Welcome to Mayo Clinic’s Rise for Equity podcast. My name is Lee Hawkins and I’m your host. I want to welcome our guests today, Dr. Maia Hightower, who is chief digital technology officer at University of Chicago Medicine, and Dr. Ivor Horn, Google’s director of health equity. Our episode today focuses on disparities in artificial intelligence and ways to make it more equitable, especially for people from marginalized communities.
Both of you have had careers that have spanned academia, technology, medicine and beyond. And you’ve both been focused on making AI more equitable, and on equitable opportunities and access in our health care system as a whole, so people can reach their fullest potential in terms of health and wellness. So right at the outset, I want you to help me define the key areas where the disparities in AI are most pronounced.
Maia Hightower: So as far as what artificial intelligence is, it’s actually just complex math, used in a variety of different ways to mimic human behavior. In the case of health care, we actually only use a subset of AI tools, and usually it is prediction: predicting health outcomes. There are other areas that use AI as well. Other than prediction, typically it’s natural language processing, or image recognition, similar to facial recognition, but instead it’s skin or X-ray images where we’re trying to predict whether something is wrong. So I think that’s a broad definition. But within health care itself, the idea is that if we can scale equitable, fair, ethical AI, there is an opportunity to be a more efficient, equitable health care system.
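To make the “prediction” idea concrete, here is a minimal sketch of a health outcome prediction model in Python, assuming scikit-learn is available. Every feature, value and outcome below is synthetic and hypothetical, not drawn from any real clinical system.

```python
# A minimal sketch of the kind of prediction model Dr. Hightower describes:
# a classifier trained on tabular clinical features to predict an outcome.
# All feature names, data, and coefficients here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical tabular data: age, systolic blood pressure, prior admissions.
X = np.column_stack([
    rng.normal(60, 12, 1000),   # age
    rng.normal(130, 15, 1000),  # systolic blood pressure
    rng.poisson(1, 1000),       # prior admissions
])
# Hypothetical outcome: 1 = adverse event within a year.
logits = 0.03 * (X[:, 0] - 60) + 0.02 * (X[:, 1] - 130) + 0.5 * X[:, 2] - 1.0
y = (rng.random(1000) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model outputs a predicted probability of the outcome for each patient.
risk = model.predict_proba(X_test)[:, 1]
print(f"mean predicted risk: {risk.mean():.2f}")
```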
Ivor Horn: The way that I think about it: take X-rays, for example. When X-rays are digitized, they have pixels, and your naked eye, a radiologist reading an image, has the limitation of what we can see with the naked eye. But when you look at it with artificial intelligence, which breaks the image down to those pixels, you can get another level of granularity to help with the prediction models. However, it is really important that we don’t incorporate the biases that exist from the decision making that has happened historically as part of that process.
Maia Hightower: AI, and especially machine learning products, is developed from real-world data. And so, who is represented in our data? Data is like stories, and if you think about the stories that are overrepresented in our cultural narrative, they very much reflect power and privilege. It’s exactly the same with the real-world data that we train AI on. That real-world data underrepresents some of our most marginalized populations. Their stories are not as well captured in the zeros and ones that convert a lived experience into data.
So that’s one area where disparities in AI arise: just who is represented in the data. The second component is who is represented in the machine learning community that decides which problems are addressed and which methods to use during the machine learning process. There are actually judgment calls throughout the machine learning process, and when you have only, say, one predominant perspective, the questions asked and those decision points are different than they would be for patients or people who may not be represented in the data science community. So within machine learning itself, there are so many different judgment calls where bias can be introduced and either mitigated or expanded.

And then lastly, it’s how the algorithm or product is implemented, because once you implement a machine learning product, especially in health care, where a lot of our AI products are prediction models, the institution and the individual have to decide what to do with that information. Let’s say you have a model that predicts no-shows. You can do many things with that prediction. If you know somebody is at high risk for a no-show, what health systems often do is double book. So a person who may have many social determinants of health working against them, say child care issues or transportation issues, is more likely to score as high risk for a no-show, more likely to be double booked, and then more likely to have a poor experience, because if they’ve gone through all the trouble of getting to the appointment on time, they have now been set up for a poorer experience. Alternatively, a health system can choose, if somebody is at high risk for a no-show, to figure out why, to understand the root cause and try to alleviate that barrier to access. Perhaps that’s providing transportation and saying we’re going to send an Uber to pick up that patient, or figuring out how to have child care on premises. That’s a very human decision. So even once an algorithm has been implemented, how a health system decides which intervention to use can be biased as well. There are a lot of different places along this spectrum that create bias, and then it can snowball.
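Here is a sketch of the decision point Dr. Hightower describes: the same no-show risk score can drive a punitive policy or a supportive one. The threshold, the score and the interventions below are all hypothetical.

```python
# A sketch of the post-prediction decision: the same no-show risk score can
# be consumed by very different policies. All values here are hypothetical.
NO_SHOW_THRESHOLD = 0.7  # hypothetical cutoff for "high risk"

def punitive_policy(no_show_risk: float) -> str:
    """Double-book high-risk slots; patients who do show up get a worse visit."""
    return "double-book slot" if no_show_risk >= NO_SHOW_THRESHOLD else "book normally"

def supportive_policy(no_show_risk: float, barriers: list[str]) -> str:
    """Treat high risk as a signal to address root causes instead."""
    if no_show_risk < NO_SHOW_THRESHOLD:
        return "book normally"
    actions = []
    if "transportation" in barriers:
        actions.append("arrange a ride")
    if "child care" in barriers:
        actions.append("offer on-site child care")
    return "; ".join(actions) or "call to ask about barriers"

risk = 0.82  # hypothetical model output for one patient
print(punitive_policy(risk))                        # -> double-book slot
print(supportive_policy(risk, ["transportation"]))  # -> arrange a ride
```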
Lee Hawkins: Okay. So algorithmic bias is obviously part of the history, right?
Maia Hightower: Oh, it is part of our history. It is part of the history of medicine, part of the history of the United States, part of the history of our world, and for many reasons. In health care, before there were machine learning algorithms, there were plain statistical regression models that were typically based on a very limited population of, say, clinical trial data from trials in the fifties and sixties.
Take the Framingham Study that so many of our cardiac algorithms are based on. Framingham, Massachusetts, in the 1950s and ’60s does not reflect the United States today. It didn’t even reflect the United States in the fifties and sixties. And so that history of algorithmic bias has been with us throughout the history of medicine.
Lee Hawkins: Dr. Horn, what keeps you up at night in terms of the alarming disparities?
Ivor Horn: Oh, what keeps me up at night? I want to go back to something that Dr. Hightower said, because it is really important for people to have this context when we talk about artificial intelligence and machine learning and algorithms: there are historical factors that these build on. It’s, well, what are we going to learn from? There is data that these data scientists use to train those models, and that data set is very limited. The information they are using to create this intelligence is very limited in the content that it has. And it may come from a source that has no context for where it’s being applied, to your point: now we’re applying this algorithm, but this algorithm was, for example, developed in an ICU, and now they’re using it in a primary care setting. Understanding that is really important.

There are lots of things that keep me up at night. But in this context, it’s the fact that we are beginning to build these tools into our workflows without not just the individuals who are coming to get care understanding how this impacts the decision making happening in their care, but also the providers who are actually using these in their practice workflows, who don’t necessarily have the transparency to say: this algorithm applies, and I should use it in this way, but not in that way. So we need to find ways, when we do apply algorithms to decision making and workflows in health care, to create transparency, whether it’s transparency in model cards or transparency in information, so that people can use them in the appropriate way and most effectively. For me, it’s about understanding the decision-making process that’s going into place. What keeps me up at night is the information that we’re getting, but also people’s understanding of how to use that information.
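Here is a minimal sketch of the model card idea Dr. Horn mentions: structured documentation that travels with a model so a deploying site can check where it applies. The fields and example values are illustrative, not any standard schema.

```python
# A minimal, illustrative sketch of a "model card": documentation that
# travels with a model. The fields and values below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_setting: str          # where the model was validated
    training_population: str       # who is represented in the training data
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    name="sepsis-risk-v2",         # hypothetical model name
    intended_setting="adult ICU",
    training_population="single academic center, 2015-2020",
    known_limitations=[
        "not validated in primary care or pediatrics",
        "underrepresents rural and uninsured patients",
    ],
)

# A deploying clinic can check fit before wiring the model into a workflow.
if "primary care" not in card.intended_setting:
    print(f"{card.name}: not validated for this setting; review before use")
```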
Lee Hawkins: Let’s drill down into the root causes. The data: real-world data versus real-world bias. Where does that factor in?
Maia Hightower: It’s a huge factor, because the real-world bias is in the real-world data. I like to think about the data set, because the data is supposed to be a reflection of the real world, but it’s actually just a small component. My colleague here described how our data sets are quite limited: they’re trying to be a reflection, but they’re not quite a reflection of the real world. We’re trying to create this future state where there really is a digital twin, a replica of the human experience, but we are so not there yet. It’s worse than a Coke-bottle view of the world. And so the real-world bias is integral to the algorithmic bias and to the bias in the data set.
Lee Hawkins: But the real-world experience of the scientists is there as well. I want to talk to you about that, because there’s a real lack of diversity. In many of the rooms you’re in, you’re probably one of the few people like you. How does that play out?
Maia Hightower: There’s a huge disconnect, especially between data scientists and the clinical world. In health care, you’ll have data scientists who don’t have a good understanding of health care, and on top of that may not have a good understanding of social determinants of health and how those are reflected in the data set. I’ll give an example: Obermeyer et al., which is a very commonly cited example of algorithmic bias at scale. At this point, it’s estimated that 80 million people were affected by this racist, biased algorithm, and the root cause was a simple labeling error. In other words, the data scientists did not realize that cost does not equal risk. We know that black Americans in the United States, for an equal level of illness, will have a lot less spent on their health care. And yet the algorithm, which was perfectly designed to predict cost, was used to predict the risk of a poor outcome, and then from that risk to distribute care, case management in this particular case. The rate of case management for black patients was, at best, half that for white patients. On a national scale, the scope was just mind-blowing.

I was a chief population health officer at the time. And so to know that an algorithm actually did harm to my patients: it was real harm, but I couldn’t actually measure it. I didn’t have the tools to monitor it. I didn’t even have the tools to detect it, not at the level of the 100,000 lives I had under management at that time. It took 80 million people for scientists at Berkeley and Chicago to detect it through a national data set. It was quite striking.
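Here is a toy illustration of that labeling error: ranking patients by cost when the intent is to rank by health need. The data is entirely synthetic; in it, both groups are equally ill, but one group has less spent on its care at equal illness, mirroring the disparity Dr. Hightower describes.

```python
# A toy illustration of the Obermeyer et al. labeling error: training on
# cost as a proxy for health need. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
illness = rng.normal(5, 2, n)      # true health need, same for both groups
# At equal illness, less is spent on group B's care.
cost = illness * np.where(group == 1, 0.6, 1.0) + rng.normal(0, 0.5, n)

# Rank patients by cost (a "perfect" cost prediction) and refer the top 10%
# to case management, as the deployed algorithm effectively did.
referred = cost >= np.quantile(cost, 0.90)

for g, label in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{label}: mean illness {illness[mask].mean():.2f}, "
          f"referral rate {referred[mask].mean():.1%}")
# Equal illness, but group B is referred far less often: cost != risk.
```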
Ivor Horn: And what’s really important, unfortunately, is that we now have examples like that. I work with literal geniuses every day who are really passionate and care about what they’re doing. But they have no context. Oftentimes they have no health care background, so they don’t even have a health care context to start with. And then they don’t have the context to put the data in a real-world understanding of diverse populations, because for the most part, to get to where they are, most of them have had a really privileged experience. So what we do is we actually have health equity experts within my team who are embedded with those teams, even during the time we’ve been virtual, to really help them get context, to understand how to interpret the information that is in front of them.
My ideal world is to put them on the front lines, put them with providers and put them with communities, to help them begin to understand how to interpret the information that’s in front of them. Because they do have data from electronic health records, or from another data set, but they also need to understand the limitations of that data and the importance of the decisions they make along the process. When they have a data set: understanding its limitations so they can put it in context when they’re doing labeling; understanding that there are decisions made in that labeling process; who’s a part of that labeling process; and who’s a part of shaping the framework for how they decide that labeling process. Do those people have context and lived experience?

So that is part of the work our team does: to say, wait a minute, why did you do that? What’s in that data set? Let’s actually interrogate and look at the content of that data set. Is it representative of the populations you’re actually planning to use this on? If it is, great; if it is not, we need to let the end user know how they should think about what’s in this data set.
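Here is a sketch of that data set interrogation: comparing who is in the training data against the population a model will serve. The column name, counts and reference shares below are hypothetical, and the check assumes pandas.

```python
# A sketch of interrogating a data set for representativeness: compare the
# training data's demographics against the deployment population.
# The column, counts, and reference shares here are all hypothetical.
import pandas as pd

train = pd.DataFrame({"race_ethnicity": ["White"] * 700 + ["Black"] * 120 +
                      ["Hispanic"] * 100 + ["Asian"] * 80})

# Hypothetical demographics of the clinic where the model will be deployed.
target_population = {"White": 0.45, "Black": 0.30, "Hispanic": 0.18, "Asian": 0.07}

train_share = train["race_ethnicity"].value_counts(normalize=True)
report = pd.DataFrame({"training_data": train_share,
                       "deployment_population": pd.Series(target_population)})
report["gap"] = report["training_data"] - report["deployment_population"]
print(report.round(2))
# Large gaps flag groups the end user should be warned about before use.
```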
Lee Hawkins: So where are the glimmers of hope and opportunity for addressing AI bias?
Maia Hightower: One other hat that I wear: I’m also the co-founder and CEO of Equality AI. What we’re trying to do is build methods into the machine learning process to detect bias, reduce bias and model for fairness, and to make what you’ve described at Google accessible to data scientists everywhere, so that if they want to do the right thing, the tools are easy to use.
One thing Obermeyer et al. did show in their paper is that with awareness and the right techniques, you can actually reduce the bias that is in real-world data sets as well as decrease the risk of labeling error. The challenge is scaling it, so that data scientists everywhere are detecting bias, reducing bias and modeling for fairness: every algorithm, every time.
And right now, that isn’t the case. Machine learning operations still involve a lot of judgment calls by individual data scientists who may not be aware these techniques exist. So I think those methods are a real glimmer of hope. It’s like on your iPhone: you take a less-than-perfect picture, you go to the edit function and click the little magic wand, and all of a sudden it looks better. And if you go to the edit function again, you can see 20 different controls, and maybe you think: I want to make the brightness a little better, or the balance better. Now, however much you’ve corrected that photo, it’s not as good as if you’d taken a good picture in the first place, but it looks a lot better. That’s what we’re trying to do with machine learning modeling: give data scientists the same kinds of methods to balance performance and fairness, so they can make a model a little bit better, a little bit less biased, a little bit more fair.
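Here is a minimal sketch of the “detect bias” step: computing simple group-level fairness metrics for a model’s decisions. The metric definitions (demographic parity and equal opportunity gaps) are standard; the data below is hypothetical.

```python
# A minimal sketch of detecting bias: simple group-level fairness metrics
# computed over a model's decisions. The data here is hypothetical.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Difference in positive-decision rates across groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equal_opportunity_gap(decisions, groups, outcomes) -> float:
    """Difference in true-positive rates: among people who truly needed the
    service, how often did each group receive it?"""
    tprs = [decisions[(groups == g) & (outcomes == 1)].mean()
            for g in np.unique(groups)]
    return max(tprs) - min(tprs)

rng = np.random.default_rng(2)
groups = rng.integers(0, 2, 1000)
outcomes = rng.integers(0, 2, 1000)   # 1 = truly high need (hypothetical)
# Hypothetical biased decisions: group 0 approved twice as often as group 1.
decisions = (rng.random(1000) < np.where(groups == 0, 0.3, 0.15)).astype(int)

print(f"demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
print(f"equal opportunity gap:  {equal_opportunity_gap(decisions, groups, outcomes):.2f}")
```

Running checks like these for every algorithm, every time, is the kind of scaling Dr. Hightower describes.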
Ivor Horn: I would add a few things. I’ve been in academia, I’ve been in medicine, and now I work for a large technology company, and what I remind people is that as technology moves further into health care, there are more people at the table with you in the practice of medicine than you realize. Everyone doesn’t have to be an engineer or a data scientist or a physician, though we do have engineers, data scientists, physicians, nurses and nutritionists on our teams, all helping to build our products and to think about our models. There are also program managers and product managers who help coordinate, organize and keep us in order. What matters is that everyone at the table feels like they belong and has a voice, and that they’re given space to use that voice, because you may not be an engineer, but you may have the lived experience that the team needs to hear and understand about that product, so that the people you care about can be seen in the products you’re helping to develop.

A great example is the work we did on Pixel and Image Equity. It really came from people in the room saying: my family is not seen by this camera. Let’s build a better camera, and let’s bring the community into the conversation so that we’re adding that lived experience as we’re building. So that’s one of the really important things we work on. It’s not just the engineers heads down; they also get to hear a voice from the community, whether it’s someone on their team or us giving them exposure to other people with lived experience as part of the process.
Lee Hawkins: A lot of our conversations today with people in other sectors of the health care industry have boiled down to recruitment and retention and getting the workforce more diversified. And one of the things you’ve said recently is that you encourage people to tap into their story and their identity to drive change. Can you expound on how that fits in?
Maia Hightower: I’ll share my own story. Sharing my personal story, but also my ancestral story, provides, like I said, identity, but it also provides context, wisdom and perspective. My mom is Chinese American, second, third generation; I grew up in California. My dad is black and grew up in Arkansas, and they met in the San Francisco Bay Area. The matriarch of our family is my grandmother, Miss Annie Hightower. She’s 102 years old, from Little Rock, Arkansas, and she provides that historical perspective and wisdom. One of the lessons she’s taught us is: they can take away your things, they can take away your life, but they can never take away your education. That’s been a lesson passed down in my family for generations, this commitment to education, and it’s why perhaps I have way too many degrees.

But I also think of my 102-year-old grandmother when I’m thinking about product design. In health care we think about a true north patient that we’re designing for, and when you don’t have diversity of representation in the room, that true north may not look like my grandmother or my father. So that is something I tap into all the time, as far as my own personal journey, whether it’s education or accomplishments. It really is our stories. Our stories provide us with our identity, our identity provides us with purpose, and our purpose drives our behaviors. During those tough times when you feel like you’re swimming upstream and everybody else is swimming downstream, whether it’s because you’re black or white or Asian, or because of gender or sexual orientation or gender identity, there’s always a moment where one feels that, and that’s where tapping into those stories is so important. That perseverance in the face of marginalization allows us to keep moving forward.
Lee Hawkins: And I should mention you have an MBA from the Wharton School of Business, right?
Maia Hightower: I do, I guess.
Lee Hawkins: I think you just decided to get that.
Maia Hightower: Yes, exactly. So I’m med-peds, internal medicine and pediatrics trained, so birth to death. The MPH, that’d be individual to population. But I didn’t think that was broad enough, right? So I had to add a greater understanding of finance and the health care marketplace, and hence the MBA from Wharton.
Lee Hawkins: And how seriously is the business community taking AI disparity?
Maia Hightower: Well, I think the business community at this point has acknowledged that it exists, and that is the first stage of change: acknowledgment that we have a problem. So there has been movement in that direction, acknowledging that the problem exists, as well as declarative statements that we are working on addressing it. I would say we haven’t yet gone beyond that to the point where we can say there’s been a measurable difference when it comes to health equity and health outcomes. In other parts of industry there has been some improvement, but in others, like HR, and definitely when it comes to surveillance and some other areas of AI, it seems it may even have gotten worse. So I’ll leave that to my colleagues.
Lee Hawkins: You know, your biography as well, I mean, it’s one that I think is important for people to know about. So I want you to talk about your journey to where you are today. One of the things that really stood out to me was you talking about your own family’s experience with health care, where you stated that your mother made sure your father was dressed to the nines when he would go to the doctor, because there was a concern about being treated equitably at the doctor’s office.
Ivor Horn: Absolutely. I grew up in Mississippi. Both of my parents were from Mississippi, born and raised. I love your story about education, because it was very much imprinted on my mom: education is the key. She was the first one in her family to graduate from college, and she pushed for us all to get a college degree. Growing up in Mississippi, my dad had a traumatic brain injury when I was nine, and as a result he developed a seizure disorder, so we were in and out of hospitals all the time. My mom came from that experience and from that culture of, if you were to be respected... And we didn’t really dress up as a family; it wasn’t a thing. There was no Sunday best for church; we went to Catholic church and we just wore jeans. But when we went to the hospital, my mother knew that it was really important. More than anything, she wanted people to understand that my dad was loved, respected and had value in his community, and the best way she could represent that was how he presented and how she presented herself. My dad was a really soft-spoken man, so he wasn’t going to speak up for himself, but she wanted people to see, within his stature and within his attire and within his presence, that he was important in his community and valued.

For me, the recognition of that is why I went into medicine. At nine and ten I saw how my family was treated, how my mother, an educator who had educated many of the people in the community, was disrespected, and how we were disregarded in that setting. And I said to myself, I never want anyone to have the experience my family had going into health care, and I want them to be empowered to feel that they can speak up for themselves.
I literally went into medicine to transform the way people behave, the way physicians behave, in health care. But I realized really quickly, because I went into academic medicine to educate physicians on cultural humility, that it’s not really about changing the provider’s behavior. It’s really about giving the people who walk in the door the tools to ask for what they deserve from us as providers.
And so that has been my focus. And when I think about technology, it’s all about how I’m giving people more information, more access, so that when they walk through the doors of a health care system, they have the tools to say: I know this, I understand this, this is my question for you, and this is what I expect of this health care system for me and for my family.
So to your point of who’s your north star walking in that door: my north star is that experience of watching, as a nine-year-old, how my parents were treated, and wanting something different for other people who look like my family when they go into a health care system.
Lee Hawkins: And it is important to know that you are a child of a child of Jim Crow segregation, and so are you as well, standing before us right now. What did you do after high school to lead up to this?
Ivor Horn: Oh, well, after high school I went to a historically black college. I went to Spelman. There was something about going to a school full of black women of excellence, with legacy. Even though after I left Spelman I was in many, many rooms where I was the only, I knew there were many people out there who were amazing; I have I-don’t-know-how-many physicians in my class. I knew that there were people like me, so I knew that I was never alone, even when I was the only one in a room, and I could stand in my presence and stand in my power in that space.

I have a colleague who does recruitment, and people say to her: I can’t find this person, I can’t find this talent; there are no black women, there are no black men. It’s like looking for a needle in a haystack. And she says: oh no, you’re looking in the wrong haystack. There are tons; let me show you. You’re just looking in the wrong haystack. What is really important is that there are people from marginalized, underrepresented communities with lived experience who are going to empower your team to do better work, to do excellent work. You just have to want to find them.
Lee Hawkins: And when the desire is there, the business outcome is there, right? Just like any other business problem: if the objectives are not met, then heads roll. Companies pay millions of dollars to hire consultants to solve problems, and if the problems are not solved, people are shown the door. Why can’t that be true with AI?
Ivor Horn: We know the data shows that more diverse teams have better outcomes; businesses are more profitable when they have more diverse teams sitting around the table. And it’s really important not just to have the team, but also to give the team space to speak, and the confidence to have their voice and be their full selves when they come into the room.
Lee Hawkins: Dr. Hightower, Dr. Horn, thank you very much. This has really been enlightening and very powerful. And I hope to speak with you again some time. It’s been a pleasure. Thank you so much.
Ivor Horn: Thank you.
Lee Hawkins: And for the Rise for Equity podcast, I’m Lee Hawkins. We’ll see you next time.