Achieving AI’s Promise & Transformative Power to Shape Health Care’s Future

Join us as Laura Adams, Special Advisor at the National Academy of Medicine, guides us into the world of AI in health care. Discover the transformative power of AI and the ethical considerations surrounding its applications, including ChatGPT. Gain valuable insights into the opportunities and challenges AI presents to health care professionals, envisioning a future where AI revolutionizes person-centered care.

Achieving AI’s Promise & Transformative Power to Shape Health Care’s Future
Featured Speaker:
Laura Adams

Laura Adams is Special Advisor at the National Academy of Medicine (NAM), where she provides leadership for the Digital Health and Evidence Mobilization portfolios of the Leadership Consortium. She has expertise in digital health, health care innovation, and human-centered care. As Catalyst at X4 Health, Laura leads the national strategic partnerships for the 3rd Conversation (3C) project (https://www.3rdconversation.org/), helping to reweave humanity into the fabric of healthcare and healing. She serves on the Board of Translational Medicine Accelerator (TMA), a Boston-based precision medicine company focusing on patients with recalcitrant and rare diseases. She leads TMA’s initiative in Barcelona, Spain.

Transcription:
Achieving AI’s Promise & Transformative Power to Shape Health Care’s Future

Intro: The following SHSMD Podcast is a production of DoctorPodcasting.com.


Bill Klaproth (Host): On this edition of the SHSMD Podcast, we're going to do a deep dive into the world of AI with Laura Adams. She is the closing keynote speaker at the 2023 SHSMD Connections Annual Conference in Chicago. She is going to discuss the promise of AI and its transformative power to shape healthcare's future. She's also going to be talking about the pitfalls and the drawbacks of AI as well.


And speaking of AI, have you heard the one about why AI went on a diet? Because it had too many bytes of information. Okay. Sorry about that. Laura will not tell any bad AI jokes, but it is going to be an enlightening closing keynote. So, let's hear more about it coming up right now.


This is the SHSMD Podcast, rapid insights for healthcare strategy professionals in planning, business development, marketing, communications and public relations. I'm your host, Bill Klaproth.


In this episode, we talk with Laura Adams, Special Advisor at the National Academy of Medicine and named to Becker's Most Powerful Women in Health IT. She is going to discuss this topic at length: Achieving AI's Promise and Transformative Power to Shape Healthcare's Future. At this year's SHSMD Connections 2023, she will be the closing keynote speaker. You can register at shsmd.org. That's S-H-S-M-D.org/education/annualconference. Laura, welcome to the SHSMD Podcast.


Laura Adams: Hey, Bill. It's great to be here. Thank you so much for having me.


Host: Of course. Looking forward to talking with you now and hearing your keynote at SHSMD Connections 2023. So Laura, can you give us the CliffsNotes version of your keynote?


Laura Adams: Yeah, I'd be happy to. I think people attending this session are going to want to know about the idea of the hope, the hype, the promise and the peril. We really can't turn 10 degrees today and not encounter some discussion of AI, whether it's on the television or social media or in conversations with friends. And depending on where you sit with that conversation, it can either be exhilarating or it can be sort of terrifying.


And I think that as we begin to look at AI's advances in healthcare, it's without a doubt going to become one of the operational backbones of healthcare. So, it'll be important for attendees at this conference to start to get a feel for what we really think it can do for us, what is the hope for us, what's being overhyped right now, and what feels like it's maybe a pipe dream. I think we also want to look at the idea of what is the real promise here. I mean, what can we count on that's a little more grounded than the hope? And then, what are the real perils that we've got to be paying attention to? And actually, all of us putting our shoulder to the wheel to make sure that AI doesn't break bad.


Host: Yeah. You said a couple of things that really piqued my interest. One, you said it's exhilarating or terrifying, which I think is true no matter what side you're on. And then, you said it has the potential to become the operational backbone of healthcare. So, that's really exciting and can be terrifying to think about at the same time. So, this is a podcast for healthcare marketers and strategists and communications professionals. So, will there be specific applications of AI for healthcare strategists and marketing professionals and communication people?


Laura Adams: There definitely will be. And I can't wait to share these with the participants and the audience. Because what it holds in store for those in those particular areas and specialties is things like language translation and the creation of culturally appropriate messaging, which we've struggled with so much in healthcare, to be able to communicate with the many audiences in our communities, our patients, and even our staff; and helping them improve their writing skills. And I mean, this won't be a heavy lift. There are going to be AI applications that make it very easy for us to write brilliantly in many ways. I think it's going to save a ton of time by generating text, content, emails, you name it. Content generation is perhaps where we're going to see one of the biggest boons.


And I can't wait to see what AI is going to do with scenario planning, really helping us to learn from the past, gather insights, and stay on top of trends. I think it's going to have all of those benefits. It'll also, in many ways, help monitor brand reputation, which is critically important. It'll make it much easier to monitor things like media impressions and online conversations, identifying the sentiment around a brand or an issue that we've got to follow.


And then, I think it'll be good at providing insights into how marketing campaigns are actually going, beyond some of the traditional and, I think, kind of hidebound methods that we have now that don't really give us that feeling of being close to those we're attempting to reach with our messaging.


Host: So, you said this has the potential to be the operational backbone, and what you just mentioned are all positive attributes and benefits of this. When you say the operational backbone, is it going to take over healthcare, do you think?


Laura Adams: I think that we like to think of this, I prefer to think of this, as augmented intelligence, because there are things that human beings apply in healthcare probably to a degree that no other sector or industry does. Things like having to be completely conscious, often, as we're interacting with the people that we serve. That means being in the here and now. No artificial intelligence system can replicate human consciousness. We also have to use our intuition, particularly as it relates to planning and our ability to anticipate; no augmented intelligence is going to be able to do that. There's a component of feeling and human connection, a sort of judgment with regard to context, the fact that we have to shift between situations because we never know. AI may not be able to anticipate that we'll get a COVID-19 epidemic, for example. I don't think it's going to be capable, say, in the next 10 years, of anticipating short- and long-term goals. So, I won't go any further than 10 years out in saying what I think it can and cannot do, but I think it will permeate every aspect, such that, as the saying goes, humans will not be replaced by AI; instead, they will be replaced by humans using AI. So, it feels to me like leaning into this is not only strategic, it's very smart, and it will allow a person to become much more valuable to the people that they serve and to their organizations.


Host: So, leaning into this for individuals will be very smart. And that is part of the exhilarating aspect of AI. So on the flip side though, as you said before, this also can be terrifying. Where is the potential for serious harm when it comes to AI?


Laura Adams: There's a lot of it, to be blunt. I think one of the biggest challenges we have with AI is its capacity to spread disinformation and misinformation through things like voice cloning and deepfakes. We've seen situations now where grandma gets a call from a grandson saying, "I'm in peril and I need money," and that is a voice-cloned fake of the grandson. It is his voice. And so, while we are using voice AI, for the first time, as something like a biomarker for mental health disease, if you don't want someone using AI on your voice, which they can record right off your cell phone, and diagnosing your mental illness with it, this is a problem. So, it looks to me like the truth is in trouble. And that's what I'm most concerned about with AI, that it has that capacity.


The other aspect of it that we should all absolutely be aware of is its potential to promote and spread bias. We are well aware that, in the history of digital advances, they almost always widen the digital divide. They don't often close that divide. We know that AI sounds futuristic, like it's the coming decade, but it's actually trained on data from the past, and those data are replete with bias, because we've collected data from certain populations, the well-resourced, well-represented populations, and not so much from those that are under-resourced. So, think about the fact that the earlier skin cancer detection applications of AI were wonderful if you have light skin, because they were trained and built to identify cancers, but primarily on those with light skin. If you have dark skin, you are out of luck with that particular application. And that's just one of many, many examples where we've begun to understand that our training data is perpetuating bias. And I have enormous concerns about halting that in its tracks.


Host: Yeah. That is kind of scary, especially when you say the truth is in trouble, that one little phrase really wraps it up, and the potential for this to spread bias. So, thank you for talking about the ethical considerations. For our SHSMD members listening to this, who are primarily marketers, strategists, communication professionals, what type of ethical considerations will they need to be aware of?


Laura Adams: I think they're going to need to be aware of how easy it is to violate privacy with this. It used to be that you had to have special access to AI; you had to be working directly with an algorithm. Now, we use AI in our everyday lives. Google, Netflix, Alexa, opening your phone with your face ID, social media, that's all AI, and it all raises issues of privacy. And I think the concern around privacy here is about how much you give to these tools and techniques, because now they're democratized; anybody with access to the internet can get ChatGPT or any of the others. It's in everyone's hands. So, people might upload things like their own symptoms, not knowing that that's out there on the internet. That's like posting it for everyone to see. We've seen so far that these things, chatbots and the like, help us by gathering more and more and more information from us. And our experience with all of this has been that convenience overwhelms privacy. I don't see that trend reversing, so we have to be careful, because sometimes we're exposing patient privacy. We already know that some clinicians, if they've got a vexing problem with the patient in front of them, will just put information into ChatGPT, not knowing that that's actually a HIPAA breach because it's now on the open internet. So, I think that's one thing to be concerned about.


Probably, the other thing that's on everyone's mind is, "I'm going to lose my job to AI." We know that the World Health Organization, actually, it was the World Economic Forum, did a 2020 report where they said that AI is going to replace about 85 million jobs by 2025. Eighty-five million jobs. They also said that 97 million jobs will be created by that same year thanks to AI. The catch is, most people assume, "Oh, that's great. Those 85 million people can get the new jobs." No, actually not. You may or may not be one of the people that gets a new AI job. So again, I really feel like that idea of learning how to incorporate this into your day-to-day work and day-to-day life, and understanding the roles and responsibilities, is really critical for professionals in strategy, marketing, and communications.


Host: Yeah. So when you say, "We must lean into this and embrace this," it's true. Either get on board or you could be left behind. Talking about the pitfalls of ChatGPT and AI, Laura, what is the path, then, toward responsible implementation of AI?


Laura Adams: Bill, fortunately, there are a number of efforts going on all over the country and internationally to make sure that we reap all the benefits of this, but also make sure that we protect people. This is almost like the time when copper wire was being invented, and we didn't even really know what we were going to be able to power with that copper wire, a microwave or a light bulb and so forth. We just have to make sure that we start insulating the copper wire so people don't get electrocuted in the process.


And I think that's what we're trying to do with an initiative that I lead at the National Academy of Medicine called the Artificial Intelligence Code of Conduct. We've brought together people from all sectors, Microsoft and Google and Mayo and Memorial Sloan Kettering and Philips, patient representatives, ethicists, anthropologists, academics, and researchers, to say, "What are the guardrails here that won't tamp down and suffocate innovation, but at the same time make sure that we don't continue to widen the ethical divides that we have in this country, racial divides and those sorts of things? How do we make sure that we don't just allow AI to go into well-resourced organizations, while the inner-city Chicago hospital, the Mississippi community health center, or the rural hospital in Nebraska won't be prepared or capable?" So, these organizations have come together.


Ours is one of the largest initiatives, and we are working with a variety of coalitions, the federal government, and others. The National Academy of Medicine was actually formed to advise the federal government. So, we're quite excited about the conversations going on nationally and internationally to say, "No, this is serious." Because we understand, I think, at a base level, that if we don't put the guardrails in place, people will step in and throw sand in the gears of the advancement of AI, and they should. They should stop it if there can't be examples of responsible behavior.


So, what we want to do is begin to weave this into things like federal regulation. We want to weave it into things like hospital accreditation. We want to weave it into medical education and the preparation of those in the healing professions and those that are going to be dealing with healthcare in any way, shape, or form, and help patients understand how to be co-creators and partners in all of this.


So, there are huge initiatives going on. I'm heartened and hopeful by the enthusiasm and the commitment that I see people demonstrating for what I think is the long-term, and it's going to need to be long-term. We're going to have to be vigilant about this for as long as we're using AI.


Host: So, what might these guardrails look like? Anyone can type anything in right now and ask it to do anything. What would a guardrail look like?


Laura Adams: A guardrail might be something that says we want to know if something has been synthesized, made synthetic or faked, that there is some stamp, some way to identify that this was faked, that this was not coming out of facts or out of solid sources. So, we want something like that. We also want to put in mechanisms for vigilance that say, "All right, we don't exactly know how these algorithms work when we unleash them in the healthcare setting, because they're not always predictable." They are not like a molecule, like a drug. They're not like a device where, once you design it, it stays as it is. Algorithms learn, and we call this having emergent properties. So, we want to see very strong guardrails for vigilance that say we must track the outcomes, specifically the outcomes related to equity. Are we seeing this behaving equitably out there? And if not, what about putting in guardrails that require these things to be certified before they ever get out there? So, in some way, shape, or form, providing some assurance, particularly for those under-resourced organizations again, to be able to tell them it's safe to go in the water with this algorithm.


So, those are some examples of what we might be doing with this, too. And also empowering patients to understand, "What algorithms were used in my healthcare, and where can I go to get information about the performance of this algorithm?" Those are some examples of what we might be making available through our efforts.


Host: So, there might be some kind of a flag that says, "These results may not be from facts" or something, just something to alert the end user that we're not sure about these results, something like that.


Laura Adams: Right. Thinking about a transparent conversation with those that are engaged in the use of AI, so that this isn't just a black box. I think that's what really causes significant and legitimate concern for all of us, that this is a black box. And to a certain extent, there are things that we cannot explain about it.


So, for example, retinal fundus scans, where they look at the back of your eye, can very accurately predict sex assigned at birth. We don't know how it does it. We also know that some of the tools we use for pregnant mothers, in anticipating whether or not they will have postpartum complications, can also tell us the age of the person, not the chronologic age, but their actual biologic age. We don't know how it does this. So, we're not going to be able to explain everything about it. But to the extent that we can flag and explain the things that are germane to people being able to trust this, whenever that's possible, let's do it.


Host: Yeah, absolutely. This is such an important topic, the topic du jour as far as I'm concerned, because it really is, as you say, permeating our whole world, our whole lives, everything we do. So, this is going to be such an important keynote, and we're so glad that you're going to be there to deliver it. Just a question out of curiosity: you were named to Becker's Most Powerful Women in Health IT. How does that work? How does someone get named one of the most powerful women in healthcare IT?


Laura Adams: I'm still trying to figure that out myself. I think how that happened was that I happen to be someone who senses big trends and things coming, and I do lean into those. I make sure that I'm a constant learner. So, I like to immerse myself in environments where everybody learns and everybody teaches. I surround myself with people who are smarter than I am whenever possible. And there's also this sense that I know the future arrives whether we prepare for it or not.


And so, one of the things that I've always tried to do is make myself as prepared as possible for the future, which often puts me at the forefront of things, like the AI work right now. We don't have any governance experts in AI in healthcare because we haven't ever needed to govern it. So, here I am, sort of a governance expert, when we don't really have any of those. And I think that's how it works: sometimes you achieve a designation that maybe will be appropriate in 10 years, but somehow it's arrived now.


Host: Wow. A governance expert. I love how you put that. So, I think it's great that you're going to be with us at SHSMD sharing your expertise with us. So when we talk about your closing keynote at SHSMD Connections 2023, Achieving AI's Promise and Transformative Power to Shape Healthcare's Future, give us any final thoughts on that.


Laura Adams: Yeah. What I hope is that the participants leave with a sense of possibility, a sense of how much they can extend their own creativity and capability and achievements, because it's all there for the taking. I hope they begin to reframe their thinking; recognize that this is a disruption that isn't coming, it's already here; and think about how they can take advantage of this revolution, how they can use it in their private lives as well as their work lives. And I think, to the extent that we can consider how this will free us up for even richer and better and deeper human connection, then we really have something here.


So, I'm hoping that they will leave with a sense of empowerment, possibility, some awe and wonder about the future, and also considering themselves guardians, joining forces to make sure that we keep this tracking in the right way so that everyone, again, reaps the benefits and we protect people from the peril.


Host: This is going to be such a great keynote. I know that all of us listening to this podcast right now can't wait. And when you say AI may help us develop deeper, better human connection, I'm all in on that. If it can help us do that as well, I'm on the AI train. I think it's going to be great. Laura, thank you so much for your time. We really appreciate it.


Laura Adams: Absolute pleasure, Bill. Thank you so much for having me.


Host: And once again, that is Laura Adams. She is your closing keynote speaker at this year's 2023 SHSMD Connections Annual Conference in Chicago. She will be speaking on Tuesday, September 12th. Go ahead and get yourself registered right now. Just go to shsmd.org. That's S-H-S-M-D.org/education/annualconference. So, sign up now. It's such a great event. The networking, the keynote speeches, the panels, it really is a cool event and we hope to see you there.


And if you found this podcast helpful, and how could you not, please make sure you share it on all of your social channels. And please hit the subscribe or follow button to get every episode. This has been a production of DoctorPodcasting. I'm Bill Klaproth. See you!