Discover how AI is transforming healthcare. Our healthcare experts will discuss real-world applications of AI in clinical workflows, diagnostics, patient care and more, including the challenges and compliance concerns.
AI Uses in Healthcare Settings
George Beauregard, DO | Steve Melinosky | Renee Broadbent, MBA, CCSFP, CHC
George Beauregard, DO joined SoNE HEALTH in January 2023 as Chief Population Health Officer, where he is responsible for leading our population health programs, performance improvement, clinical integration, and health equity in SoNE's unwavering pursuit of maximizing value in our health system.
Steve Melinosky is the Chief Compliance and Privacy Officer at SoNE HEALTH.
Renee Broadbent is Chief Information Officer and Information Security Officer at Southern New England Healthcare (SoNE HEALTH). She is a senior-level healthcare executive with an extensive background in strategic planning, information technology, digital strategy, value-based care, and data security. Renee has held the roles of Chief Information Officer and Chief Information Security Officer in both hospital health systems and Managed Care Organizations (MCOs).
Lisa Farren (Host): Hello everyone. Welcome back to Crushing Healthcare, where we explore diverse perspectives regarding the state of healthcare today, and gutsy visions for a more affordable, accessible, equitable, and sustainable healthcare model. My name is Lisa Farren, and I'm your host today. Welcome to our second episode in a series of podcasts that focus on artificial intelligence and, specifically, its use in healthcare. AI touches us all, and we know its tentacles are spreading further each day, including into healthcare.
So today's episode is part two, as I mentioned, of a three-part series, and returning to us we have three experts who will each bring their unique perspective on AI and its use in a healthcare setting. Welcome back, George Beauregard, DO, Renee Broadbent, and Steve Melinosky. Our guests all hail from Southern New England Healthcare, also commonly known as SoNE Health. SoNE Health is a physician-owned and physician-led, clinically integrated network. It is a leader in value-based care, providing resources, support, and advocacy aimed at helping independent providers remain sustainable and thrive in today's healthcare ecosystem.
George Beauregard, DO is SoNE's Chief Population Health Officer. Dr. Beauregard leads SoNE's extensive population health programs, its performance improvement, clinical integration and health equity. His clinical experience in internal medicine spanned over 30 years in the Boston market. We also have Renee Broadbent, who is the Chief Information Officer and Information Security Officer at SoNE Health.
Renee is a senior-level healthcare executive with an extensive background in information technology, digital strategy, and data security. And we have Steve Melinosky serving as the Chief Compliance and Privacy Officer at SoNE Health. Steve is a seasoned professional in compliance. He is certified in healthcare compliance and certified in healthcare privacy compliance. So welcome back, our experts. Thanks for joining us.
In our previous discussion, we started with a high-level discussion, if you will, of what AI is, some examples, and considerations for its use in healthcare. So today we're going to dive even deeper to understand how AI is transforming healthcare.
As a refresher for those who may have missed the first episode, and really for all of us, can you remind us of some of the different types of AI, particularly those most likely to be used in a healthcare setting?
Renee Broadbent, MBA, CCSFP, CHC: Yeah, thanks Lisa. So we talked about AGI, which is artificial general intelligence; that's the strong AI, and that's things like self-driving cars, et cetera. We also talked about artificial super intelligence, which is ASI, and those are things like HAL 9000, you know, that surpasses human intelligence.
And then we also talked about ANI, which is artificial narrow intelligence, also known as weak AI. It's designed to perform specific tasks within pre-defined boundaries. So when you talk about weak AI, you're talking about things like Siri and Alexa, or Google search engines, or chatbots and things like that.
Certainly, the AI that would be most applicable to healthcare, because of the data and all that, would be large language models, which fall under the ASI provisions. We're most familiar with ANI because we most likely use chatbots, or we have Alexa, or we use Siri.
So as we talk about healthcare today, we want to talk about some of those more sophisticated types of AI that can actually look at data, do some prediction, and look at things like radiology and other types of imaging.
Host: Thanks, Renee, for that refresher. It's a good starting point and a reminder. So let's talk now about AI and its use in healthcare, how it's changing, or should we say actually transforming, healthcare. Dr. Beauregard, I'm going to look to you. As a physician, what are you seeing? What are the most common and widespread uses of AI in healthcare today?
George Beauregard, DO: There are several areas where AI is currently being applied in the delivery of healthcare, and there are others where there's significant effort toward advancing its use. At least for the moment, the first is image interpretation: AI can help improve the diagnostic accuracy of medical images, like X-rays, CAT scans, MRI images, and PET scans, to provide comprehensive diagnostic information. It's also being used in pathology, looking at and interpreting slides that would normally take a human pathologist a lot longer, and there's evidence that the AI is doing it with a slightly higher degree of accuracy.
There's certainly application in the field of dermatology where AI can look at a variety of skin lesions and based on their characteristics, you know, come up with what that skin lesion is likely to represent. Those are just some of the areas where AI is currently being used. Another very important area is clinical decision support.
For the modern-day physician, the volume of medical knowledge is growing exponentially, making it almost impossible for clinicians to keep up with the most current evidence as it is produced across the thousands of medical journals published on an annual basis. And because of that overwhelming amount of information and the changes that accompany it, diagnostic errors are, you know, not uncommon.
I mean, as much as we'd like there to be a zero rate of diagnostic errors, that's sadly not the case. Evidence-based medical information combined with generative AI, conversational search and retrieval augmented generation, so-called RAG, can assist in clinical decision making at the point of care.
So a physician could have a very complex case in front of him or her. He or she could dictate the characteristics of the particular clinical scenario, and, through searching a worldwide database of current information, the AI might actually come up with a diagnosis that the physician may or may not be able to come up with.
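To make the RAG idea concrete, here is a minimal sketch of the pattern Dr. Beauregard describes: retrieve the most relevant passages from an evidence base, then hand them to a generative model to draft an answer. The guideline snippets, the query, and the generate_answer() stub are hypothetical illustrations, not any specific clinical product; a real system would search a curated, continuously updated literature index and call a validated language model.

```python
# Minimal RAG sketch: TF-IDF retrieval over a toy evidence base,
# with the generative step stubbed out.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical evidence base standing in for a worldwide literature index.
documents = [
    "First-line therapy for uncomplicated hypertension includes thiazide diuretics.",
    "Community-acquired pneumonia in healthy adults is often treated with amoxicillin.",
    "Type 2 diabetes management typically begins with metformin and lifestyle changes.",
]

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Rank corpus passages by TF-IDF cosine similarity to the query."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(corpus + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

def generate_answer(query: str, passages: list) -> str:
    """Placeholder for the generative step: a real system would send the
    query plus the retrieved passages to a language model."""
    return "Draft suggestion (for clinician review), grounded in: " + " ".join(passages)

query = "What is the initial treatment for newly diagnosed type 2 diabetes?"
print(generate_answer(query, retrieve(query, documents)))
```

The key design point is the grounding step: the model is asked to answer from retrieved, current evidence rather than from whatever it memorized during training, which is what makes the approach useful as the medical literature changes.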
Another area where it's helping is back-office tasks: self-service online tools that can streamline and reduce tasks such as scheduling, cancellation requests, medication refill requests, et cetera. All those things take a lot of human effort and time and can be simplified and, frankly, quickened by the use of artificial intelligence.
Note completion is another area. Doctors currently spend innumerable hours on what are considered below-license tasks, like creating clinical notes, whether by handwriting, typing on a keyboard, or dictating, then having to proofread and make edits, et cetera. And AI can easily facilitate a lot of those tasks.
There are currently available software solutions that use what's known as ambient listening: as a conversation occurs in a room between a provider and a patient, generative AI can create a draft clinical note during the encounter. With consent, the software summarizes the conversation in real time.
Once completed, the clinician can review, edit, and approve the note. And it's been demonstrated in several large health systems in the United States that using that approach saves considerable time and effort for physicians where they perhaps could see other patients or spend more time with patients.
All of the above also results in higher physician satisfaction and productivity.
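For those curious about the mechanics, here is a minimal sketch of that consent-then-review workflow, with hypothetical names throughout. The summarize() function stands in for the speech-to-text and generative summarization steps a real ambient-listening product performs; the point of the sketch is the ordering: no note without consent, and nothing filed until a clinician approves.

```python
# Sketch of an ambient-listening documentation workflow:
# consent gate -> draft note -> clinician review and approval.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DraftNote:
    text: str
    approved: bool = False          # nothing enters the chart until True
    edits: list = field(default_factory=list)

def summarize(transcript: str) -> str:
    """Stand-in for the generative summarization step; a real tool
    would call a language model here."""
    lines = [l.strip() for l in transcript.splitlines() if l.strip()]
    return "Draft encounter note:\n" + "\n".join(f"- {l}" for l in lines)

def document_encounter(transcript: str, patient_consented: bool) -> Optional[DraftNote]:
    if not patient_consented:       # the consent gate comes first
        return None
    return DraftNote(text=summarize(transcript))

note = document_encounter("Patient reports two weeks of cough.\nNo fever.", patient_consented=True)
if note:
    note.edits.append("Clinician corrected duration to ten days.")
    note.approved = True            # explicit sign-off before filing
    print(note.text)
```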
Host: That's incredibly interesting, as a nonclinical person, to hear about the ways it's being used, and it really sounds like there are many different ways it's helping our providers save time and, as you said, improve satisfaction, which is wonderful. It's really acting as an assistant to their work, which is very cool.
So I'm sure there are many who are singing the praises of AI; however, I'm guessing there are probably naysayers as well. What would you say are the biggest benefits of the use of AI, and what are the biggest drawbacks?
George Beauregard, DO: I mean, to me, Lisa, the single biggest benefit, given my comment earlier about medical literature growing at a pace that your average physician cannot keep up with on a daily basis, is the ability to quickly access a large amount of comprehensive, reliable, and updated information that can provide clinical decision support, whether it's for diagnostic reasoning, you know, what is wrong with this patient?

Or, here's what's wrong, what is the most current treatment available for this particular person based on their characteristics, at the point of care or soon thereafter. So to me, that's probably where the biggest benefit is going to be realized. Now, on the other hand, physicians may not possess the skills and experience that would enable them to recognize when the information being provided by AI lacks the context of the individual patient. There's a lot more to a clinical situation than single statements about the characteristics of an illness, its history, a description of heart sounds, et cetera.

There are lots of other elements that go into consideration when physicians go through their diagnostic reasoning exercise. So, the AI can be inaccurate, and, as Steve pointed out in the previous podcast, there can be biases that get put into the outputs, or hallucinations, where it basically makes up an answer. So those are things that are concerning.
Host: That's good information. So it sounds like what you're saying is it's like two sides of the same coin. On the one hand, AI allowing physicians and providers to access volumes of information is great; however, they may not necessarily be able to tell when AI is hallucinating or biased. So that's interesting. There are caveats to the integration of AI in healthcare is what I'm hearing. So with that, Steve, back to you. You always keep us grounded in reality here. Can you help us with this? As an expert in compliance and privacy, what concerns you most about the increased use of AI in healthcare?
Steve Melinosky: Yeah, and I hate to keep being the gloom and doom of artificial intelligence, but you mentioned patient privacy, and I think that is an absolutely critical aspect of healthcare compliance. And a lot of what we're seeing with AI usage for the everyday employee is large language models.
Now, artificial intelligence on paper knows what protected health information, PHI, is, but it doesn't understand the intricacies of it, and it doesn't have sort of an internal risk assessment for what to do with PHI once it gets it. When we look at the machine learning aspect of artificial intelligence, machine learning takes in massive amounts of data and spits out streamlined information, but it doesn't yet have an understanding of the sensitivity of what is and is not considered protected health information. So right now, for example, if you put PHI into a large language model like ChatGPT or Gemini or Grok, that large language model is going to store and process, and even risk exposing, the PHI you put in there. And to take it a step further, it also does that with any information you put in there.

So, even when you do a Google search, Google sort of keeps that information about what you searched for. All the large language models keep that information somewhere. So there's always that risk that you are inadvertently exposing that PHI. If any of these AI companies ever have a data breach, all that information is going to be out there.
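One practical safeguard that follows from Steve's point is scrubbing obvious identifiers before any text crosses the boundary to an external model. Here is a minimal sketch; the regex patterns and the send_to_llm() placeholder are illustrative assumptions only. Pattern matching catches only a few structured identifiers (SSNs, phone numbers, dates, MRN-style numbers); real de-identification requires a vetted tool and an appropriate business associate agreement, not a regex pass.

```python
# Sketch: redact structured identifiers before text leaves the organization.
import re

REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),
]

def redact(text: str) -> str:
    """Replace a few structured identifiers with placeholder tokens."""
    for pattern, token in REDACTION_PATTERNS:
        text = pattern.sub(token, text)
    return text

def send_to_llm(prompt: str) -> None:
    # Hypothetical boundary: only redacted text should ever cross it.
    print("Outbound prompt:", prompt)

raw = "Pt seen 3/14/2024, MRN 448291, callback 555-867-5309, SSN 123-45-6789."
send_to_llm(redact(raw))
```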
We do see, however, some state laws emerging that are going to impose stricter requirements for the use of PHI in artificial intelligence. Right now there's a big focus on artificial intelligence as it relates to PHI in wearable devices and remote patient monitoring.
But unfortunately, a lot of organizations aren't caught up. They're not providing cybersecurity training on the use of artificial intelligence. So AI is evolving quickly, but we are not catching up to it. The other concern that I have about the increasing use of AI is that AI does not address its own algorithmic bias.
It's being increasingly used for diagnostics, treatment planning, and insurance coverage, and these biased algorithms can actually exacerbate health disparities. And from a regulatory perspective, my concern with regulation on AI as it stands today is that, like I said, AI is growing too fast.
It's getting exponentially better every single day, and the Federal Government, and often state governments, don't work that fast, right? It takes a while for a bill to become a law; we've all seen Schoolhouse Rock. By the time a bill is passed to protect people from the known dangers of artificial intelligence, there's new artificial intelligence being developed that isn't covered by that regulation. So now we're caught in this cycle of catching up to one AI threat, and then we have to respond to 10 new AI threats, and now we have to wait for legislation on those. So the big concern I have is that if we want to regulate artificial intelligence, we need a system that can react to artificial intelligence as it grows. And we're just not there yet.
Host: That's an interesting picture you paint of being caught in this unending cycle of catch-up between regulation around AI and the pace with which it continues to learn, evolve, and expand. So are we always going to be behind, if you will? Are we never going to get there?

It poses the question: what can we do as patients and consumers of healthcare, as we and our loved ones go about our healthcare journeys? Should we be concerned about the use of AI, and are there questions we should ask our providers when we go to an appointment, as far as what actions we can take? George, as a physician, do you have any advice you can share on this?
George Beauregard, DO: Sure, Lisa. I think that for patients, if Netflix gets its "Hey, we recommend this series or that movie" wrong, you know, there's no real loss there. Right? Or Amazon telling you to buy something; if it's wrong, okay, no harm done. But in healthcare, bad things could happen. So I think patients should be concerned about the use of AI in healthcare by their physician.
And I think some very simple questions for them to ask are: Okay, doc, do you use AI to help you diagnose and treat your patients? If so, how long have you been using it? And how do you ensure its quality, that it's been validated, and that it isn't producing biases or hallucinations or some of the other adverse effects you heard about previously?
And if you don't use it, why not? How do you keep up with all the information that physicians and healthcare professionals need to keep up with on a regular basis to provide the high quality care that people need? So those are just three questions that they could easily ask their primary care provider.
Host: Three very good questions, and three takeaways I think everyone listening can use. I know I will definitely do that. So, when you talk a bit about how information is used and how the data is quality-checked, it raises some questions about what regulatory requirements are in place. I know, Steve, you spoke to that to some degree.

But in addition to the questions George mentioned for patients to ask their providers, are there any other questions you can think of that they should be addressing?
Steve Melinosky: Yeah, like George said, you know, asking the doctor what they do with your personal data; and your doctor should have a notice of privacy practices that sort of indirectly addresses that. Right now there's no legislation that really says you have to tell a patient when their PHI is used in artificial intelligence. I mean, federally we have HIPAA, and that doesn't directly address the disclosure of the use of AI. It does note that AI tools that process PHI have to comply with the privacy and security rules. But again, as I mentioned before, HIPAA, like other federal regulations, has not caught up with the use of artificial intelligence.
And if a doctor is using artificial intelligence in clinical decision making, there is an indirect requirement for transparency; it's more of an ethical standard right now. And the standard guidelines are that you can use artificial intelligence for clinical decision making as long as there's somebody at the end of it, a qualified medical provider, approving it.
Now we see this with insurance companies right now, utilizing artificial intelligence to make decisions regarding coverage or denials and things like that. And there is some legislation sort of bubbling up that's looking to crack down on this, because, again, they want to make sure there is a human at the end of the line making these clinical decisions, someone who has reviewed the artificial intelligence's decision and agrees or disagrees with it. But it's such a prevalent issue right now, especially with some of these large insurance providers. We had mentioned, I believe in the last podcast, the use of ambient listening in the doctor's office.
And that's where you and the provider speak with each other and there's an artificial intelligence basically acting as a scribe. And a lot of places have policies that, if they're implementing this, you should tell the patient you're implementing it.

But if you're at your doctor's and they're not taking notes as they're talking to you, or even checking the computer or anything like that, you might want to ask if they're using artificial intelligence or ambient listening. Some will ask you if it's okay to use it. Some will say, oh yeah, you know, we do, but I'm not going to use it, what have you.

But a lot of places don't have policies on that just yet. It's very new. So it's important for a patient to be empowered to ask their provider: Is artificial intelligence involved in my care? What does it do? What are its limitations? I think those are the things that, as a patient, you can ask your provider.

But again, there are no real rules, no regulations, to say that they have to tell you.
Host: Well, I sound like a broken record. It's all very exciting, but a little unnerving as well. You've definitely laid the groundwork for us, and I appreciate that. We're going to dive in deeper in our next episode on AI, where we'll talk more about the concerns, the challenges, the regulatory hurdles, all sorts of caveats of AI.
So thank you, Steve, particularly, for keeping us grounded on this and laying the groundwork for that. We will soon be back for part three, which I think will be a nice way to round out this AI discussion. So for now, I just want to thank all of our guests. Thanks again for sharing your expertise, and thank our listeners for joining us.
Remember, we all have a role to play in healthcare transformation, so join us in Crushing Healthcare.