From data security to regulatory challenges, we dive into the critical compliance and privacy concerns surrounding the use of AI in healthcare settings.
The Pitfalls & Perils of Artificial Intelligence
George Beauregard, DO | Steve Melinosky | Renee Broadbent, MBA, CCSFP, CHC
George Beauregard, DO joined SoNE HEALTH in January 2023 as Chief Population Health Officer where he is responsible for leading our population health programs, performance improvement, clinical integration, health equity and in SoNE’s unwavering pursuit to maximize value in our health system.
Learn more about George Beauregard, DO
Steve Melinosky is the Chief Compliance and Privacy Officer.
Renee Broadbent is Chief Information Officer and Information Security Officer at Southern New England Healthcare (SoNE HEALTH). She is a senior-level healthcare executive with extensive background in strategic planning, information technology, digital strategy, value-based care and data security. Renee has held the role of Chief Information Officer and Chief Information Security Officer in both hospital health systems and Managed Care Organizations (MCO).
The Pitfalls & Perils of Artificial Intelligence
Lisa Farren (Host): Hello everyone and welcome to Crushing Healthcare, where we explore diverse perspectives regarding the state of healthcare today, and gutsy visions for a more affordable, accessible, equitable, and sustainable healthcare model. I am your host, Lisa Farren. Welcome back everyone. Today we continue our conversation to learn more about artificial intelligence.
Particularly its growing use in the healthcare setting. We've discussed a lot about AI in our previous episodes with our three experts who are here once again on Crushing Healthcare. I'm thrilled to have them back. Welcome back, Dr. George Beauregard, Renee Broadbent, and Steve Melinosky.
All of our guests are with Southern New England Healthcare, commonly known as SoNE Health. SoNE Health is a physician owned and physician led, clinically integrated network. SoNE is a leader in value-based healthcare. It provides resources, support, and advocacy to help independent providers remain sustainable and to thrive in today's healthcare ecosystem.
George Beauregard DO, is SoNE's Chief Population Health Officer. Dr. Beauregard leads SoNE's extensive population health programs, performance improvement, clinical integration, and health equity. His clinical experience in internal medicine spanned over 20 years in the Boston market. Renee Broadbent is the Chief Information Officer and Information Security Officer at SoNE Health.
Renee is a senior level healthcare executive with extensive background in information technology, digital strategy, and data security. And Steve Melinosky serves as the Chief Compliance and Privacy Officer at SoNE Health. He's a seasoned professional in compliance. Steve is certified in healthcare compliance and certified in healthcare privacy compliance, and welcome back everybody.
So, all right. We have covered a variety of topics related to artificial intelligence, what it is, the types. We've covered, how it's being used in healthcare. We've addressed some questions that we can ask our healthcare providers about how it's being used in our care. And we've even touched on some of the caveats, the things to be aware of, the challenges and the possible dangers of AI, particularly in healthcare.
So Steve, as our compliance expert, you've shared some caveats of the use of AI in healthcare, but let's walk through the weeds a bit more about the pitfalls and the perils of AI use in healthcare. Open-ended question. The floor is yours. Take it away.
Steve Melinosky: I mean, how much time do I have. There's a lot going on from a compliance, privacy, and regulatory perspective with artificial intelligence. In the last podcast, I did talk a little bit about patient privacy. I think that still is one of the biggest concerns I have. Again, these large language models like ChatGPT, Grok, Gemini, they're available to everybody.
I don't even think you need an account to use most of these, but that means that people can just sort of throw PHI, protected health information in there, and think it that it's safe, but it's not. Right. It goes in there, it's not secure. And then there's always the chance that that ends up in the wrong hands.
The other concern I have, especially with patient privacy, is doctors can use it to help with patient diagnosis. Office staff can use it to write letters or appointment reminders. Anyone can ask a large language model for advice on how to deal with a patient. And as soon as you add that identifying information in there, their name, a diagnosis, there's a potential privacy concern, right? There's not a breach yet, but there's always a concern that there will be. Large language models are not generally not secured or encrypted to the extent that a healthcare organization would like them.
Many of them are actually having a disclaimer on them now that say, we're not a doctor. But we are going to use your input to make the models better. So it uses whatever data you put in there and it, makes the models better with that, which means it's using PHI if you put that in there.
And again, you have to consider that patients don't automatically consent to having their PHI used in artificial intelligence system. So, it's an unproven, and newer technology as far as the security of patient information. What I also mentioned last time as a concern is, the regulatory, framework of addressing PHI is constantly evolving. There's a lot of difficulty catching up with it. I know, Connecticut for example, is putting forward a draft of an artificial intelligence bill that addresses concerns with data privacy. They have another bill about children's data privacy. They're creating an advisory board, an AI task force.
And they're also putting forth regulation to prohibit dissemination of what they call synthetic images, right? So you can't go to ChatGPT and say show me a picture of Arnold Schwarzenegger dressed as Santa, right. So that something that they're going to be cracking down on. And in the federal government just passed a bill as well a couple months ago, banning images of people that had been altered by artificial intelligence without their consent.
Regarding patient privacy, HIPAA is catching up a little bit with artificial intelligence. It has a proposed 2025 security rule that does address artificial intelligence a little bit, but, again, as I mentioned, the landscape is evolving so quickly that by the time this security rule gets pushed through, there's going to be 10 other issues. The other concern with the regulatory field is the, food and drug administrator, the FDA, they're seeing cases where unapproved artificial intelligence is being put into medical devices. And that potentially breaches FDA standards. So there's rapid innovation that outpaces the regulatory framework and entities like the FDA, they have strict standards on deploying artificial intelligence, in these medical devices without clearance. And this leads to recalls, fines, legal liability, and obviously the worst, is the increased risk of patient harm. On that same note, the other pitfall or parallel of artificial intelligence is clinical outcomes.
When you put data into an AI or, a machine learning model, it can create inaccurate or biased output. So it can misdiagnose rare conditions if it's not trained completely, or if it has unrepresentative data sets. And these types of errors compromise clinical quality and, and they can lead to adverse patient outcomes, delayed treatments.
And as I said earlier, with insurances, they could lead to denials of claims. And like I said in our first podcast, even with artificial intelligence, the phrase garbage in, garbage out rings true. So if you train a system using bias or inaccurate data, you're going to get bias and inaccurate responses.
AI is far from perfect. At the last podcast, I believe we talked about AI hallucinations, where for some reason or another, AI makes this odd determination, seemingly out of nowhere, and then it gives terribly, consistently terrible output based on that. So it gives bad information and it says, okay, this is bad information I that I just gave you is true.
So there's a recent study that showed an artificial intelligence, it was used to total up a calcium score, in a patient, and it failed in basic addition. Four numbers that it had to add and it couldn't get it right. The numbers were there. The numbers were right, but for some reason it said two plus two equals five, and it took that as gospel and it would not change its mind.
So now you have to fix the artificial intelligence, which is just as hard as training in the first place. But that leads me to my next point, is that there has to be a system of checks and balances. There has to be human oversight with artificial intelligence. It's ironic actually, that the most valuable aspect in using artificial intelligence,
is a human. So any artificial intelligence that's being used should have that final check by a human to ensure that the output is appropriate. So I mean, just imagine if your doctor used ChatGPT to come up with a treatment plan for you. And they didn't check it, right? They just put in your diagnosis.
They printed it out and said, here you go. If they missed something like an medication allergy, or they misinterpreted the data, like they failed to identify a cancer cell among a healthy cell, your life is at stake. They must be human at the end of the day, responsible for those decisions, accountable for those medical decisions.
So I can certainly picture in the near future, you know, people blaming artificial intelligence, whereas they should have been the one to check the information. When we look at the broader picture of the use of artificial intelligence in healthcare, in the delivery of health, but the healthcare industry, there's going to be effects on the job market.
When we talk about AI and machine learning, a lot of people think ChatGPT, and they stop there, but it's so much more than that. I think Renee in the first podcast we had went over some of the different types of it, but we're seeing AI that can make synthetic images, they can generate movies, whole movies that look realistic using just word prompts.
We see self-driving cars. We see artificial intelligence used to coordinate hospital throughput. We see it analyzing and manipulating the stock market. It can recognize patterns, it could analyze anomalies, it conducts research. Like George said, sometimes that research isn't so great. It can predict patient outcomes, it can design presentations, it can take notes and provide summaries.
There are thousands of jobs that can be right now augmented by the, using artificial intelligence, and it's the people who use artificial intelligence to their advantage right now that are going to succeed. But if you're not using artificial intelligence to some extent to augment your current role, you might be putting your job at risk of obsolescence.
A lot of jobs can be replaced by artificial intelligence. It could be devastating to our economy if we don't balance that job market and doing so, we would have to sort of emphasize roles requiring artificial intelligence expertise. So it's a dawn of a new era and you have to be on board with being able to use artificial intelligence.
And the last thing I want to touch on are the ethical considerations of using artificial intelligence and machine learning in healthcare. So, AI systems may prioritize efficiency over patient dignity because if you ask an AI model what dignity is, it'll give you a definition. But AI can't feel dignity.
It doesn't have what we consider emotional needs. So it can automate a triage, but it's not taking into consideration the emotional needs of the patient. We train it to be efficient, so we shouldn't be surprised if it doesn't stray from that directive. If we feed PHI into the machine, we can't be surprised if it uses that PHI and potentially discloses it.
More commonly we're seeing research being conducted almost completely using artificial intelligence and the author's taking credit for it. Universities are struggling to keep up with it. Some have gone back to those old blue books where you have to like literally write out, on the spot what your thoughts are instead of typing out, something where AI can be used.
So universities have created AI detectors to review papers. So that prompted AI engineers to create an AI detector avoidance system. So now we have a new issue where we create a circle of never ending avoidance and detection systems for artificial intelligence. So research being conducted to these days, I would say from 2021 to present, you might not want to take it directly at face value without looking at all of its sources on there, because a lot of it could be created by AI and some of that might just be made up on the spot.
Host: That's a lot, Steve.
Steve Melinosky: Yeah.
Host: So breakneck speed challenges, and numerous challenges. And just as I think it was in the last episode, was this cycle of always trying to keep up and we're just always going to be behind. We're always going to be trying to get ahead and it sounds like AI might win the race, but, I guess we shall see.
It's a concern for everyone. One thing I want to kind of touch on is, I think when we watch the news, one thing that, alarming thing that is frequently in the news are data breaches. And, the idea that data can get in the wrong hands of bad players and bad actors, and the result is long and messy.
It's stressful and it's anxiety provoking. And healthcare in particular, that's a really touchy subject. Because you know, that's so personal. Our healthcare information is, very, very personal. So, Renee, I'm going to look to you as the information security officer, do you have any thoughts on AI, specifically around the protecting the data and securing patient privacy?
Renee Broadbent, MBA, CCSFP, CHC: Yeah. Thanks Lisa. Yes. So healthcare, as most people know, is a prime target, for data breaches. In fact, among industries, it is the leader. That is not a place you want to be in first place for data breaches. But because it is the richness of the data, and there's probably everybody has experienced some form of a data breach or some notification that their data has been compromised, and it's certainly on my mind constantly, based on the role and the things that we do. Right. And so just to kind of talk a little bit about what it means for AI, some recent statistics are starting to indicate that the prevalences of data breaches are increasing as the use of AI starts to increase as well, right? So what we're seeing is a higher incidence of AI related breaches. Last year alone, 77% of the businesses experienced a breach involving their AI systems. As we've talked about, they're not encrypted. There's a lot of things that haven't been put in place to protect data as we would in a normal fashion.
When I say normal, sort of like the old fashioned way, right? And so don't have those because it's evolving so rapidly. We're struggling to keep up. And that highlights the significant threat that this poses to most organizations. There's an expanding attack surface, right?
The rapid adoption of AI, broadens the attack surface for cyber criminals, creating more potential entry points for breaches, right? So the, space that they can enter and go in gets bigger and bigger and bigger as it evolves. Data is the target. So AI systems process and rely on huge amounts of data, which often includes sensitive information, especially in healthcare, making it a prime tag, target for attacks.
And, we talk about it now, we're constantly thwarting off attacks because that data is so rich. AI's black box problem. This is the complexity of many AI models, makes it challenging to identify vulnerabilities and track data flow hindering the efforts to detect and prevent security breaches, right?
So it's hard enough. There's lots of systems that we have in place that are constantly, looking and, trying to find if there's, we have vulnerability so we can proactively prevent them from happening, right? AI makes this so much more complicated to actually hunt and find those types of things.
So, and then conversely, the cyber criminals are using AI to develop sophisticated attacks such as adversarial attacks that manipulate AI models to leak sensitive data, right? So not only is it vulnerable, but now cybersecurity criminals are using it to their advantage. There's also specific attack vectors.
So common AI attack vectors identified include prompt injection where they put it in sight, training, data poisoning. Model extraction and supply chain compromises, right? So they find different ways to put it in there. And financial and reputational consequences, right? So everybody knows that, a data breach causes a huge problem for an organization.
It can often cost them a lot of money in ransomware. It can often cost a lot of money to resolve the data breach and support the people who have been attacked by it. And it damages the company's reputation. Especially if there's AI involved, because of all the other things that we've been talking about, you know, the ability to put the documentation in place and the signatures and the regulations and all that.
So it's a risky area. And so while it's important to note that AI poses these security risks, it can play a significant role enhancing the efforts. So there are AI powered tools out there now that achieve up to 70% better mail rail detection rates compared to traditional methods, right?
So traditional methods are a little bit slower. So this is actually improving in that area, and critical it can reduce the incident response time by 96%. That's huge because a lot of times, an incident occurs and it's several hours before you're aware of it. In that several hours, even an hour of time, the amount of damage that can be done or data that can be stolen is unbelievable.
So the fact that we could detect it quicker is huge. The machine learning algorithms have led to a 60% decrease in false positives and fraud detection and organizations that are utilizing AI and automation extensively in prevention have realized significant cost savings. Right. So, you have a team of cybersecurity people, right? We have all these tools that we deploy. We're constantly hunting and looking and preventing. If we can use AI to its benefit, we might be able to find things quicker and reduce our overall spend for the organization. So this is definitely something that we keep an eye on.
There are trace tools that are out there that we're considering using that really will help in this area. So, while it does open up the opportunity, there's also opportunity to use it to our advantage to detect and prevent data breaches.
Host: Yeah, so yeah, thank you for ending on the positive with that, because I'll
Renee Broadbent, MBA, CCSFP, CHC: Yeah,
Host: I'll take some reassurance in that hearing AI can also be a helpful toolin helping with data breaches. But yeah, there's a lot there and, and a lot to be really concerned about. Dr. Beauregard, through your lens as a physician, how concerned are you about the use of AI in medicine, particularly with clinical quality and accuracy in diagnosing disease or pathology review, those sorts of things.
George Beauregard, DO: Well, Lisa, I'm cautiously optimistic about the potential value that it's going to provide in terms of certain of its capabilities and improving the accuracy of interpreting images and pathology specimens, et cetera. But this is really all in its infancy. So I think we need, more time needs to go by before we can make more informed judgements and decisions about the real value or risk to patient care that the application of AI, will eventually result in.
Host: How about your experience with patients? Have you seen any patient questions or concerns asking about AI use in their healthcare?
George Beauregard, DO: Well, I think at the root of it all, the physician patient relationship is the time honored bedrock of medicine, so I don't really see that being extinguished. Generally, patients view their doctor as trusted, sympathetic confidants and experts with whom they feel a warm rapport. So how they'll perceive their doctor's use of technology, I think will evolve over time.
I think it's reasonable to expect that patients may not trust AI in the same manner that they trust their physician. There are some providers in SoNE that are currently using AI, but they're primarily using it for clinical documentation and administrative tasks. A few others are using it for clinical decision support.
I mentioned previously about the large volume of information that the doctors are expected to absorb and process and put into clinical practice. That's almost impossible to keep up with, and a few patients have asked if our doctors think AI will replace them, and their resounding answer has been no.
Host: Great. Good answer. I like that. So I'm curious, in terms of the different clinical uses, are there any statistics you can share, Dr. Beauregard, on the accuracy of AI for those doctors who are using it in the clinical setting?
George Beauregard, DO: There are some areas where early statistics are available and AI in clinical settings has shown varying degrees, varying levels of accuracy, with some models reaching high levels of precision with regard to specific diagnostic tests, while others still lag behind expert clinicians.
Overall, AI diagnostic accuracy is reported to range from 52% to 96% with some models actually surpassing human performance in certain areas. Now, areas where AI shows high accuracy, as we've discussed in previous podcasts, you know, medical imaging. AI algorithms can analyze medical images, like CAT scans and x-rays with remarkable accuracy and speed, often identifying very subtle patterns and anomaly that can be missed by the human eye.
Heart disease classifications is another area where studies have shown impressive accuracy rates and classifying heart disease using machine learning with some models achieving up to 93% accuracy. Similarly, COVID-19 diagnosing COVID-19. There's been a accuracy rate of around 95%, and I think that there are some other illnesses and diseases like acute appendicitis, gallstones and even certain mental health conditions where the application of AI can be helpful.
I think there are some limitations where varying accuracy across different symptom complexes that present, not all patients with the same illness present the same way. So all those variants and nuances, have to be taken into consideration. So some AI models perform better in certain symptom categories than others.
Example I in urinary problems can be diagnosed with high accuracy, with skin and mouth problems probably less accurate. And again, we've talked about the issue, the concern about bias and that if there's a bias that is put into the training of the model, the bias is going to come out in the output as well.
It could be a societal bias, a personal bias, and that can lead to inaccurate or discriminatory diagnoses.
Host: Like all things, there's pros and cons. It is very exciting. But there's definitely things that we have to be aware of. So that's good to hear. I kind of now want to loop back around to you, Steve. What do you see coming on the horizon in terms of future legal oversight around the use of AI? I'm sure that must be in the works.
Steve Melinosky: Yeah. Like I said earlier, we really need to catch up, with the regulatory field needs to catch up with what AI is capable of. We need to anticipate the next steps of AI use in healthcare, and we need to look farther ahead than we are. So, the oversight rules we have are just being made and they're rudimentary.
They're acknowledging that AI exists and that it could be an issue. But again, the legislative process is slow and AI growth is exponential. When we look at this from a regulatory perspective, in my opinion, we can't just look at artificial intelligence as it is today. We have to look farther into the future, 10, 20, even 50 years into the future.
The immediate needs that we have, they may include the creation of AI risk insurance, as sort of like a, another cybersecurity insurance. But if you use AI and something goes wrong, you might want to have something like that. You could pass precautionary laws, better to have than to not, and need them, right?
And we will probably very soon have regulations on the use of AI as it pertains to human life. So, again, like I said. AI should not be the sole decision maker for insurance denials or any sort of claim. There should be a clinical human decision maker at the end of the chain saying yes or no. I, take accountability for this.
When we look at specifically in America, we've addressed major industry concerns by forming regulatory agencies, sort of as a process that we have. When we have the FDA for food and drugs, and some people are, are saying we should anticipate a new federal agency to certify modern audit AI system.
And that would be especially useful in high risk areas such as healthcare. We also need to, this is going to get a little heavy. We need to consider the question of personhood with AI. So think about, Elon Musk's company, Neurolink. They're helping people, people with severe mental or physical disabilities function with the use of a chip in their brain, and that chip uses artificial intelligence.
So, w e have some people already whose brains are merged with artificial intelligence and it brings up the old thesis, a ship problem. How do you determine where the AI begins and the human ends? Can you determine that or are they one and the same? We have a person with a neural link chip in there, if they're owning property, does that mean artificial intelligence owns that property?
Is artificial intelligence giving legal consent? Is artificial intelligence voting. There's no way of us to distinguish where the human ends and the AI begins. We have to determine who's responsible for the AI's actions, what containment protocols do we need? How do we limit AI? Can we limit AI?
And some people are saying, you know, we might reach the singularity where AI outpaces human intelligence by 2026. Others are saying it's already happened. We just don't realize it yet. We have to determine if we can limit AI and how do we do that. If we say with regulations, it might not be that simple because if we have a super intelligence, beyond that of human capabilities, I don't think they're going to follow our laws.
We'll have to see though. That's to be determined.
Host: Wow, there. So interesting. The whole personhood in AI, that to me sounds like a whole other podcast. Like wow. And AI risk insurance. There's a term I've certainly never heard. It kind of made me giggle as a marketing person. I, kind of pictured like ads about bundling AI insurance with your home and auto now.
That was what came to mind when I heard that, but,so we've covered a lot, but in some ways we really only scratched the surface about AI. Just can go on and on. We could talk forever, but to wrap it up, I kind of want your personal opinions.
Where do you see AI going in healthcare? Just off the top of your head, Renee, let's start with you. What are your thoughts?
Renee Broadbent, MBA, CCSFP, CHC: Let's see. I think that we'll move a little bit more towards sophisticated and personalized in preventative care models. And how do we get there? Data analysis. As we've talked about, throughout the several podcasts, the ability of AI to look at a lot of data at the same time and come up with some really interesting conclusions, I think is going to be important.
And how we use that data to treat people and treat diseases and things. I hope to see things like virtual health assistants that can, maybe reach people that can't necessarily get to a physician. Maybe in drug discovery. You know, better ways to find drugs to cure things, and precision medicine and hopefully optimizing healthcare workflows, which in my mind we're in desperate need for.
Anybody that can worked in healthcare for a while can relate to the fact that the workflows in healthcares are often onerous and difficult, and perhaps AI is an opportunity for us to see some optimization in that area and make it better for all patients and physicians.
Host: Okay, great. Dr. Beauregard, what are your thoughts?
George Beauregard, DO: Well, Lisa, I think although it's very promising, the application of AI in clinical care is really still in its infancy. I think it's going to advance pretty quickly, but it's not that advanced yet, so it'll take some time to analyze, judge and to trust its clinical utility and its true value to people. It's nearly impossible for today's physicians to keep current with the medical literature.
So I think having the most current and accurate information at your disposal, at the point of care immediately available, is going to be a real value add as long as the information is accurate and reliable.
Host: Okay. How about you, Steve?
Steve Melinosky: I, think AI, especially in healthcare, it's going to be so helpful, but it's also going to be so harmful, right? It's, the more uses we get out of artificial intelligence, the more problems we're going to find with it. So throughout these three podcasts, I, been sort of providing a cautionary tale.
But I will say is it entirely possible to be both excited and terrified about the unknown variables of artificial intelligence? And that's where I am.
Host: I agree with you. I've said that. It's exciting and scary all at the same time, but interesting perspectives. I guess we just have to wait and see. The future will unfold and we'll see where it goes from here. But I think we all will agree that AI isn't going anywhere. It's part of our lives.
It's only going to be become more and more part of our lives in all aspects healthcare included. So really educational and fun topic. I appreciate you all sharing your knowledge. Thanks Renee, Dr. Beauregard and Steve. And thank you everyone for joining us. And as always, I like to end with the reminder that we all have a role to play in healthcare transformation.
So please join us in Crushing Healthcare.