AI in Ophthalmology: Improving Detection, Workflow and Patient Care

In this episode of Better Edge, a Northwestern Medicine Ophthalmology panel including Paul Bryar, MD; Rukhsana G. Mirza, MD; and moderator Angelo P. Tanna, MD, discusses practical applications of artificial intelligence in eye care. Topics include point‑of‑care diabetic retinopathy screening with immediate results, oculomics and multimodal imaging, and collaborations using AI to identify biomarkers such as retinal ischemic perivascular lesions (RIPLs). The panel also reviews early experience with large language model triage and MyChart responses, EMR‑based risk scoring that incorporates social drivers of health, and workflow optimization.

Featured Speakers:
Angelo Tanna, MD | Paul Bryar, MD | Rukhsana Mirza, MD

Angelo P. Tanna, M.D. is Vice Chairman and Professor of Ophthalmology and Director of the Glaucoma Service at the Northwestern University Feinberg School of Medicine in Chicago, Illinois, where he has served on the faculty since 1999.  




Paul Bryar, MD is a Professor of Ophthalmology and Pathology at Northwestern Medicine. 




Rukhsana Mirza, MD is the Vice Chair of Faculty Affairs in the Department of Ophthalmology and the Ryan-Pusateri Professor of Ophthalmology. 



Transcription:

Melanie Cole, MS (Host): Artificial intelligence is reshaping the future of eye care, from early disease detection to personalized treatment plans. Welcome to Better Edge, a Northwestern Medicine Podcast for physicians.


I'm Melanie Cole. And we have a panel for you today with three leading experts from Northwestern Medicine to explore AI integration into ophthalmology practice and research. Joining me in this panel are Dr. Angelo Tanna. He's the Vice Chair of Ophthalmology, Director of Glaucoma, and a Professor of Ophthalmology at Northwestern Medicine. Dr. Tanna will be moderating today's discussion. Joining Dr. Tanna is Dr. Paul Bryar, he's the Vice Chair of Clinical Operations, a Professor of Ophthalmology and a Professor of Pathology at Northwestern Medicine; and Dr. Rukhsana Mirza, she is the Vice Chair of Faculty Affairs, the Ryan‑Pusateri Professor of Ophthalmology, and a Professor of Medical Education at Northwestern Medicine. Dr. Tanna, I turn it over to you.


Angelo Tanna, MD (Moderator): Thank you, Melanie. Let's start with you, Paul. We, in the field, have limited actual practical applications that are FDA approved and that we can use right now applying artificial intelligence technology to patient care. Tell us about what we're doing at Northwestern in that area.


Paul Bryar, MD: Yeah. I mean, you're right when you say that to use things clinically, in most circumstances, they have to be FDA-approved devices, so to speak, if we're going to use artificial intelligence with that. So, the one area that we have at Northwestern available right now for clinical use is screening for diabetic retinopathy. And the way that's done is, you know, taking what we all know: a photograph of the retina. And typically, it would be interpreted by one of us to see if there's, you know, diabetic retinopathy present or not. And then, we would return a grading result.


Now, we have a device where the AI can actually interpret the images at what we call the point of care and give an immediate result to the patient right there. So if that is done in the primary care office, they can get the appointment. If there's diabetic retinopathy or diabetic eye disease, they can make that appointment before they leave rather than having to wait two days for a result and then having, you know, to go through the scheduling after the fact. So, it helps us get instant results to the ordering providers and get the patients scheduled that need to be scheduled.


Angelo Tanna, MD (Moderator): So, where do we have those devices right now?


Paul Bryar, MD: So, we have several diabetic, you know, screening cameras. One of them is actually in our clinic. So, patients will visit their primary care and then the order will come in. They'll come to our department and get that done. We're deploying a new camera right now to the labs. So often these patients, when they get their blood draws, their blood glucose, the people who draw the blood can actually take those pictures there. So, those are the main areas that we have that, and we're looking to expand that out to all of Northwestern and the various regions. Some of those cameras might be in the endocrinology office, some might be in their central labs. But the goal is to have this widespread to match, you know, the patient flow in each individual region so we can maximize the number of people screened.


Angelo Tanna, MD (Moderator): That's great. The idea of having the camera in the area where blood work is drawn is fantastic, especially if the technicians there are able to get the pictures. So, what's the uptake been?


Paul Bryar, MD: So, you know, adoption of this, like anything, is you have to get the word out, right? And so, we piloted it with several practices, and just trying to figure out, you know... We don't want to give the primary care providers a new workflow. We want to find out what they do in their ordinary practice. How can we make this accessible to them and their patients with minimal change in their workflow? Because like us, they're very busy in clinic. And if it's a multi-step process, it's not going to be done. But the adoption, once providers order this and they get a result in a timely manner, they find that that's a great tool for them. Rather than just repeatedly telling a patient, "You're overdue for your eye exam. Go schedule with ophthalmology," they can get a picture right there. So, it's definitely growing, with more widespread adoption by the clinicians who use it.


Angelo Tanna, MD (Moderator): That's great. And how about at some of our federally-funded health clinics that we support? Do we have implementation of these cameras at those locations?


Paul Bryar, MD: We do. Yeah. So, we've partnered with the, you know, community health centers, you know, federally-qualified health centers here in Chicago. And we used some of our big data here at Northwestern to find out exactly where in the city to deploy these cameras, you know, looking at where is the highest incidence of diabetes? Where is the highest incidence of potential blinding eye disease, right? And we could go down to the ZIP code level and say, "If we're going to deploy this camera, we should deploy it in this four-block-square area there." You know, so we partnered with several clinics on the South, West, and Near West Sides of the city. So, we have three cameras right now actively screening patients with diabetes. We've photographed, you know, over a thousand patients already. And given that, in that population, a little less than half of them will require referral for some sort of potential eye disease, there have been a number of patients in whom we've been able to detect disease. And as we all know here at this table, you know, 90% of vision loss from diabetes is preventable if we can detect it early enough.


Angelo Tanna, MD (Moderator): Rukhsana, as somebody who provides medical retina care, are you finding that these patients are coming your way? Is this increasing your volume to unmanageable levels, or what's going on? So, there are all these people that we need to take care of who have undiagnosed disease, and this can open the window and allow us to detect these patients at a much earlier stage. How does it affect your clinical practice?


Rukhsana Mirza, MD: Absolutely. So, you know, technology takes a while to kick in. And I think one of the things that we do very well at Northwestern is we're on the leading edge. And we're on the ground floor and we've been doing telemedicine screening for quite some time, manually grading them at first, and then even employing some internal methods for trying to boost gradability of the images and eliminate sometimes some of the pitfalls of these processes and technologies, or sometimes it doesn't work. So, working through those early processes, we've been doing it for some time.


We also have, for a long time, been available to the community. So, I haven't seen a marked change, because we get these patients in, and it's not always flagged how they got in. But I think that the volumes have never been higher. So, I would not doubt that this is in part due to this new technology that we are working through.


Angelo Tanna, MD (Moderator): Thank you. Well, so all of us do research using artificial intelligence, and we have a big interest both in using image analysis and generative artificial intelligence to help facilitate glaucoma clinical care. Rukhsana, why don't you tell us about the research you're doing and how it's going to change the field?


Rukhsana Mirza, MD: Well, I think it is incredibly exciting, and ophthalmology is ripe for this in part due to the fact that we have multimodal imaging. So, what's multimodal imaging? We have so many different ways of looking at different parts of the eye that are noninvasive. The eye, it's fascinating in the sense that it is the only place in the body where you can directly visualize vessels, as we know. This is incredibly exciting. And embryologically, we know that the retina is a part of the brain. So in essence, we're able to actually look into the brain and the circulation of the brain. So, this has opened up just a huge body of research called oculomics, where we look at retinal manifestations of systemic disease, not only looking at how that might impact our vision, but how our retinal microcirculation might give us an idea about the cardiovascular state or the neurovascular state. There's been quite prominent research regarding Alzheimer's, showing that even cognitive state can show a decrease in flow in the deep capillary plexus. We have seen in cardiovascular disease that a color photo can estimate ejection fraction. Now, one of the things that we are doing at Northwestern is collaborating with other departments. We have collaborations with vascular surgery, neurology, even heart transplant, looking at ways that maybe non-invasive imaging can inform us about the state of other systems.


Harnessing AI is essential in that, because actually looking at all of these modes of imaging and finding patterns within them is where AI can really help us harness our systemic information and find biomarkers. Now, what are biomarkers? Biomarkers are areas within a tissue that might inform systemic disease. So, one of the things that we looked at very carefully are retinal ischemic perivascular lesions, also known as RIPLs. These little areas of lack of oxygenation, which we call infarcts, are undetectable by patients. They're not coming in with any visual complaints. But we find them in their imaging. And, you know, at first we were like, "Oh, we don't know what the significance of that is." But there's been big research related to that from elsewhere that has shown that these little microinfarcts are related to systemic and cardiovascular disease.


So, we've worked with our computer science colleagues on the Evanston campus to try to automate finding these, because they can be very, very labor-intensive to go through and count. And we've had some really interesting early success. We've also had early success with our computer science colleagues, bringing algorithms from other entities into ophthalmology and using our data. One is called machine teaching. And machine teaching is where we actually interact with the artificial intelligence and don't require as much data as is needed in other modes of learning. But we interact with the artificial intelligence to try to point out the areas that we are most interested in. So, this has been fascinating from so many different angles, not only harnessing new technology to potentially find almost a needle in a haystack. But also, working with teams and developing new technology.
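Machine teaching as described here is interactive: the clinician points the model at the regions of interest rather than labeling everything. A closely related, simpler idea from active learning is uncertainty sampling, where the human is asked to label the candidates the model is least sure about. A minimal sketch of that idea (candidate lesion names and model scores below are invented for illustration):

```python
# Uncertainty sampling: surface the candidate RIPL the model is least
# certain about, so the clinician's labeling effort goes where it helps most.
# (Candidate names and scores are hypothetical.)

def most_uncertain(scored):
    """Return the (name, score) pair whose score is closest to 0.5,
    i.e. where the model is least decisive."""
    return min(scored, key=lambda item: abs(item[1] - 0.5))

candidates = [("lesion A", 0.95), ("lesion B", 0.52), ("lesion C", 0.10)]
name, score = most_uncertain(candidates)
print(name)  # the clinician is asked to label this candidate first
```

Each clinician label then updates the model, and the loop repeats, which is how such schemes get by with far less labeled data than conventional supervised training.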


Angelo Tanna, MD (Moderator): You were talking earlier, when we were preparing for this today about education and the use of AI in education. So, tell us about that. You oversee medical student education in our department, and I know it's a passion of yours.


Rukhsana Mirza, MD: It is. Well, you know, one thing that I learned being in this field for years now is that we not only learn from our mentors, but sometimes our students become our mentors. And the new generation of ophthalmologists and medical students and doctors, they have skill sets that are very different than ours. We work with our wisdom about systemic problems and retinal pathology, they come with a different perspective, and that mentorship and collaboration come together. And it's not only just working with medical students, but with computer science students. This is a bridge and a language. And we can't do this in isolation. It is really a team science. So, bringing that passion about ophthalmology to students is what I bring, and they bring their different ways of thinking, and we come up with new ways to do things. So, it's really exciting.


Angelo Tanna, MD (Moderator): Fantastic. So, you know, in glaucoma, my field, we've done some research looking at the use of vision transformers with a technology called DINO, which stands for self-distillation with no labels. It's a fascinating thing that we've discovered. By providing a training process that reviews OCT tomograms, OCT of the macula, to our vision transformer approach, we were able to use that AI model to evaluate patients that we have who have glaucoma, and we were able to predict the individuals likely to have rapid visual field progression with a reasonable AUC of around 0.85. So, you know, it's interesting that there's so much information in the OCT that we don't look at, and using a vision transformer, using AI, we can actually harness that information in a different way. It may not always be explainable, which is an important feature of AI in terms of clinical acceptance. But in terms of the discovery process, I think that we may be able to use AI in meaningful ways, as we did in that study. Paul, go ahead. You were going to say something?
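For readers less familiar with the metric: an AUC of about 0.85 means the model scores a randomly chosen rapid progressor above a randomly chosen non-progressor about 85% of the time. A small self-contained sketch of how AUC is computed from model scores (labels and scores are invented, not data from the study):

```python
# AUC via the Mann-Whitney interpretation: the fraction of
# (progressor, non-progressor) pairs the model ranks correctly,
# counting ties as half. All data below are hypothetical.

def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]  # rapid progressors
    neg = [s for y, s in zip(labels, scores) if y == 0]  # stable patients
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0, 0]                   # 1 = rapid progression
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1, 0.05]  # model risk scores
print(round(auc(labels, scores), 3))  # 0.933 for this toy example
```

A score of 0.5 would mean the model ranks pairs no better than chance, which is why AUC is a natural summary for a screening or progression-prediction model.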


Paul Bryar, MD: Yeah. So, I look at it in a slightly different way, and I think that's good; we all have our own ways of approaching this, you know, using AI for analyzing an image, or thousands or hundreds of thousands of images, to figure out what's really there, what we can predict from that, and how we can get instant results and predictions from that.


I think where a lot of the potential power for AI is, is coupling that with all sorts of other data, clinical data, right? Such as, you know, what medications is the patient on? What have their calcium or their magnesium been like? We really don't look at that on each individual patient on each visit. But with AI, you know, looking at various factors, you know, what is their ethnicity, even what is their ZIP code, right? You know, all of those things will change each person's unique individual risk for progression of a certain disease. Or, you know, is this patient more likely to have a quicker progression? Or is this patient more likely to be stable? So, coupling images with all of this data that we have today in our electronic health record.


Rukhsana Mirza, MD: It's the ultimate personalized medicine, right?


Paul Bryar, MD: Yeah, exactly. And no one clinician can look at all that in a 15-minute encounter with a patient, right? You know, so having that done almost instantaneously, and presented to the physician while the patient is there. I think that's where another great potential AI application is.


Angelo Tanna, MD (Moderator): Yeah. And for clarity, when you talk about ZIP code, of course, we're really talking about capturing social determinants of health. For example, exposure to pollution, such as fine particulate matter (PM2.5). It also captures information about income in a particular region, at least average income. And that, of course, can influence things like nutrition, which may influence disease processes in a very important way.


So, yeah, I think that's really powerful. You know, I've looked at some studies that have incorporated EMR data that have arrived at very, very high sensitivity and specificity levels for the detection of glaucoma, for example, or for predicting some future glaucoma-related event, sort of unbelievably high.


Paul Bryar, MD: Yeah.


Angelo Tanna, MD (Moderator): And I think one of the dangers with using the EMR is that information can bleed into the data set that isn't really desirable to have in the model. So, I guess that's more just a research warning: you know, if you include intraocular pressure data, for example, when you're trying to use the EMR data in a multimodal way to try to enhance the detection of glaucoma, and a patient had a pressure of 24 and then suddenly they have a pressure of 12 on a different day, maybe the machine has just captured the fact that the patient was started on treatment, and that could lead to a falsely high sensitivity and specificity assessment. I've seen papers where I think that's a problem.
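One simple guard against the leakage described here is to censor measurements taken after treatment begins before building model features. A minimal sketch, assuming a hypothetical record format of (date, IOP) tuples:

```python
# Drop intraocular pressure readings taken on or after the treatment start
# date, so a detection model cannot simply learn "pressure fell, therefore
# the patient was treated, therefore glaucoma." The record format is invented.
from datetime import date

def pretreatment_iop(readings, treatment_start):
    """Keep only (date, iop) readings from before treatment began."""
    if treatment_start is None:  # untreated patient: keep everything
        return list(readings)
    return [(d, iop) for d, iop in readings if d < treatment_start]

readings = [(date(2023, 1, 5), 24), (date(2023, 3, 2), 12)]
print(pretreatment_iop(readings, treatment_start=date(2023, 2, 1)))
# only the pre-treatment reading of 24 survives
```

In the scenario above, the post-treatment pressure of 12 never reaches the model, so it must detect disease from pretreatment signal rather than from the footprint of the treatment itself.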


Rukhsana Mirza, MD: Well, I think you bring up critical issues about a clean data set and a very curated and mindful process of looking at all this. And I think this is the importance of the clinician in all of this, right? Because we are constantly mining data with our own thoughts, right? And is this relevant? Is this not relevant? And I think that that critical thinking and that critical questioning is vital.


You also brought up this idea of, you know, screening patients, patient burden, patients coming into clinic. And I'm really excited about another research study that we're a part of at Northwestern, through the DRCR Retina Network, the Diabetic Retinopathy Clinical Research Network. It's a national study of retina specialists, and this is looking at home OCT monitoring. So now, for those of you who don't know, OCT is probably the most common imaging that we do in ophthalmology. It gives a cross-sectional image of the retina; it shows us the layers of the retina.


So, we use this for macular degeneration at every visit almost to see how our treatment is working, how we might treat an individual. There is now an FDA-approved device that will home monitor patients with OCT and it uses an AI algorithm to detect levels of fluctuation. We're going to learn so much from the study about what is actually happening on a daily basis with these patients and when is it critical to treat and maybe some fluid is tolerable or maybe it isn't. So, we treat with the best knowledge that we have, but the ability to access new information and data about our patients is just exploding. So, it's really exciting.


Angelo Tanna, MD (Moderator): So, you're talking about Protocol AO in which patients will be randomized to either have home monitoring of their macular degeneration, looking for subretinal fluid using an AI-driven OCT machine that patients will have at home versus standard of care.


Rukhsana Mirza, MD: Right. So, both groups; this is for exudative macular degeneration, or wet macular degeneration. Patients will be randomized either to standard of care, which is our treat-and-extend, our own form of personalized medicine where we see the patient in office and do this imaging, looking at subretinal fluid, intraretinal fluid, hemorrhages; versus being treated while also being monitored at home, where they're guiding their own imaging every single day and we are getting alerts. We have human oversight over that data as well. But it's giving us that information that perhaps some patients can go much longer in between injections.


Paul Bryar, MD: And you could find clinical events that patients don't notice but happen on OCT, and perhaps intervene.


Rukhsana Mirza, MD: Correct. And perhaps we end up with fewer injections with exceptional vision, or maybe we'll learn otherwise. So, it's really exciting. That's a very clinical application of AI in a device.


Angelo Tanna, MD (Moderator): So, all of us have done research looking at generative AI and managing patient communication and queries that come in from patients. So, the two of you worked on a very interesting project looking at use of ChatGPT, I believe, for triage.


Paul Bryar, MD: Yeah. So in my head as a practice director, you know, incoming calls; patients call every day with questions. Like, that's something that is a burden. We have to get the right person to answer the right question to find out: is this an acute issue that we need to have them come in for today or tomorrow, or can it wait? And we get those calls every day, you know, dozens maybe; many, many calls every day. So, I'll tell you what we did, and then Dr. Mirza can tell you what our findings were. So, we came up with scenarios for each of the specialties: glaucoma, retina, you know, oculoplastics. You know, what would be the typical calls that would come in, a patient scenario? "I have a red eye and my right pupil is bigger than my left," right? Or headaches and eye pain, right? That's something where we would want to see that patient the day of, whereas if somebody has got itchy, burny, scratchy eyes, we'll tell them to use tears and let us know in a week or two how it is.


So, we posed those questions to ChatGPT and various iterations of ChatGPT to find out, well, how good is that compared to a group of people who would answer them in person? So, we went to attending physicians, we went to some residents, we went to, you know, triage people, like, how can we compare ChatGPT--


Angelo Tanna, MD (Moderator): The triage people are trained technicians, right?


Paul Bryar, MD: Yeah, exactly. To triage, you know, to see how the model did.


Rukhsana Mirza, MD: Yeah. So, I think it was a really interesting thing because, you know, as ophthalmologists, we know the words people say that require urgent evaluation. So, we wanted to see, especially since this has been done in other fields, how large language models would respond to these questions.


So, we actually posed the questions three different times to the model. And it's interesting, people may not be aware, but you might get a different answer each time you put it in. So, you know, as we get these new technologies, ChatGPT, Bard, all these things, they seem amazingly helpful, but we have to continue to look at them critically. The good news is that, overall, we did find that this technology was very helpful, and it did screen, and it was able to provide answers and triage, meaning, "You should come in in one day. We think this is the diagnosis." It was better at triaging, saying, "Come in at this point," than actually knowing the diagnosis. So, the diagnosis may not be correct, but the triage, meaning come in or don't come in, or come in in a month, or this is routine, that was better.
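The evaluation described, posing each scenario repeatedly and grading the triage urgency against expert judgment, can be sketched roughly as follows (the scenarios, urgency labels, and model answers below are all hypothetical, not the study's data):

```python
# Grade repeated LLM triage answers: how often were the three runs
# self-consistent, and how often did the majority-vote urgency match
# the expert consensus? All scenarios and labels below are invented.
from collections import Counter

def majority(runs):
    """Most common triage label across a scenario's repeated runs."""
    return Counter(runs).most_common(1)[0][0]

# scenario -> (expert consensus urgency, model answers over three runs)
graded = {
    "red eye, unequal pupils":  ("same-day", ["same-day", "same-day", "same-day"]),
    "flashes, floaters, shade": ("same-day", ["same-day", "within-a-week", "same-day"]),
    "itchy, scratchy eyes":     ("routine",  ["routine", "routine", "routine"]),
}

consistent = sum(len(set(runs)) == 1 for _, runs in graded.values())
correct = sum(majority(runs) == truth for truth, runs in graded.values())
print(f"run-to-run consistent: {consistent}/{len(graded)}")
print(f"majority-vote triage correct: {correct}/{len(graded)}")
```

Separating run-to-run consistency from accuracy captures the point made here: a model can triage well on average while still giving a different answer each time you ask.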


Angelo Tanna, MD (Moderator): So, what proportion of the queries that you submitted resulted in what you or the graders considered to be an accurate response, an appropriate response?


Rukhsana Mirza, MD: The majority were appropriate, but there was a minority, a very small minority, of unacceptable answers, which we always have to screen for. Because of those unacceptable answers, all of this artificial intelligence needs human oversight at this time.


Paul Bryar, MD: Yeah. And, you know, many calls are currently answered by a technician or front office staff. You know, they're not always going to have you answering the phone all the time and giving, you know, the answer with that experience. So, we're comparing, you know, that type of thing. But we found, you know, over 85% or so had appropriate answers. It did come up with a list of diagnoses that could be possible, right? And we put some questions in there like, "Hey, if this patient calls with a shade over their vision, flashes and floaters, that's a retinal detachment unless we prove otherwise; they need to come in today." Questions like that to find out if it would trigger an immediate referral. That's where it was good. But there were some, you know, hallucinations, so to speak. You know, like if you ask ChatGPT, "Does United Airlines fly to Tacoma?" it says, "Yes, there's a nonstop flight." And then, you ask it again, "Are you really sure?" It's like, "Oh, yeah, they don't. You have to connect through," uh, you know? So, we can't have that clinically, you know? But these are evolving models, and we even put it into version three and version four, and the learning of those models is really exponential.


Angelo Tanna, MD (Moderator): Absolutely.


Rukhsana Mirza, MD: And actually interacting with it and training it. It shows a lot of potential even in, you know, we talked about education and training of staff and giving a script for what would be an appropriate response to a certain query. So, I think that the technology can be really harnessed, and it can really be a positive force with the right oversight.


Angelo Tanna, MD (Moderator): So, we did another study in which we took about 300 real questions that came in through MyChart. And, you know, all ophthalmologists know that with the volume of questions coming in, it's very difficult to process this. Technicians are typically answering these questions, and they have to do it in the midst of a busy day of supporting us as technicians. And so, there can be delays in responses to questions that come in this way. So, it'd be very nice to know that there might be another approach, like an AI-driven approach. And so, we looked at 300 real questions that came in to physicians who were retina specialists, glaucoma specialists, and cornea specialists. And we used ChatGPT 4.0 to generate the responses. And then, we had three graders review the responses for accuracy and completeness. And we found that three quarters were complete and accurate, which is not bad. But, you know, similar to your study, the results varied among specialties, but somewhere between 5% and 10% of the responses were judged to be unacceptable.


So, that's the problem here, is we're not quite ready for prime time. But at the same time, as you mentioned, Rukhsana, you know, and Paul, you too, that there are these exponential gains in the quality of these models. And so, with ChatGPT 5.0, it may be a completely different story where we may scale a barrier and get this to the stage where it might be implementable.


Rukhsana Mirza, MD: What's interesting is, like, with the newer models, there's also the ability to put in images. So if you combine an image with a symptom or a question, what we know is you add to the model and you get even more information out.


Angelo Tanna, MD (Moderator): Absolutely. And whether we use it or not, patients are definitely using it.


Rukhsana Mirza, MD: Correct.


Angelo Tanna, MD (Moderator): And that's probably, you know, for the most part, I actually think it's a good thing. I think it gets the patient often closer to a correct answer.


Rukhsana Mirza, MD: But I would put that caveat out there. There are big mistakes that can happen.


Angelo Tanna, MD (Moderator): For sure.


Rukhsana Mirza, MD: So, we still need really good oversight. But I do think that an increasing trust in the process and an engagement with the process is part of the development and part of our advancement.


Angelo Tanna, MD (Moderator): Yeah. There was an article in The New York Times just within the past week, in which there was an assessment of ChatGPT's responses, or generative artificial intelligence in broad terms, to patient questions, and how patients feel about them. And what was really interesting to me is that patients find the communication with these large language models to be more empathetic than what physicians are able to provide in some instances. It's very interesting.


Paul Bryar, MD: The model isn't busy and distracted.


Rukhsana Mirza, MD: Paul, I think it's the directness, right? The short response versus an opportunity to provide a longer, complete response. But I think they've done studies like that in psychiatry as well, where they've shown that empathy can be integrated into the model. So, really exciting.


Angelo Tanna, MD (Moderator): In the U.K., there was a very interesting study that used an artificial intelligence tool called Dora, which was designed to communicate with patients about four weeks after cataract surgery to determine if they needed to come in for further evaluation. So, it's interesting, you know, we see patients on post-op day one and post-op week one and so on. And that's not the National Health Service approach. You know, the approach there is, if you're healthy and just had cataract surgery, meaning no eye problems, no glaucoma, for example, those patients are not seen back until about four to six weeks, and by an optometrist, mainly for glasses at that time.


Paul Bryar, MD: Yeah. I mean, I think we're basically describing the fact that we really don't know where we're going to be using it the most and where we're going to find the most usefulness of AI. It's going to be a part of our lives and our practice, you know, very soon. But we actually don't know what it is yet. We have some idea of where we think it should go and where it can go, but I think we'll be surprised; you know, there'll be applications in everyday use that we still don't really grasp yet. I think that's a great thing.


Rukhsana Mirza, MD: I think one of our hopes is that we can spend some of that time reconnecting with our patients and really focusing on, you know, the issues that they have and connections with them, taking off some of these other tasks. Certainly, AI has started doing scribing of office visits, and many other things where small bits of AI are used every day. Like, we write an email, and it suggests the next word sometimes. But I think our hope is that we utilize this as a tool to really access the patient as a whole, access all the data that's given in front of us, and to really be able to reconnect and reengage with really tough problems sometimes.


Paul Bryar, MD: Where I also like to look at where it's going is looking at not just the patient but populations, right? So, we in our system alone here, you know, have a tremendous number of patients, right? And using AI to say, okay, of all the patients with diabetes that haven't had an eye exam, who's at the highest risk? How can we proactively get in contact with them at their next, you know, visit with anybody and, you know, encourage them to get in for screening or treatment of a disease? So, these are patients that haven't shown up yet, right? How can we go reach them? The high-risk patients, the high risk for vision loss, you know, or for X, Y, Z. I mean, looking at that, having that in the background, proactively searching for people to get them in our office. I mean, that's where I'd like to see it.


Rukhsana Mirza, MD: Well, I think if you really took that a step further, ultimately, we will be able to do undilated photographs with iPhones, you know, and really reach people where they are. I think that is not too far in the future to obtain an image of the eye.


Angelo Tanna, MD (Moderator): I think it's now. And it can be done in Africa using an iPhone to look for glaucoma.


Rukhsana Mirza, MD: Right. And so, then coming back to research, what is the research? Research to create data sets, to create that knowledge that is infused in what that picture means, right? What does that picture mean? How can we harness all the data and say, this is what we think is going on with this patient? That's really exciting. And also, to create models where all people are represented so that the model itself is accurate. And I think that there's a lot of work to be done, but the possibilities are just really amazing.


Paul Bryar, MD: It's not too far in the future when part of your yearly checkup with your primary care provider will include a photo of the eye. Right now, we think of it as a fancy tabletop non-mydriatic camera, but it's going to be a handheld device or an iPhone, you know? I think that's quickly going to become part of the standard workup.


Angelo Tanna, MD (Moderator): Yeah, that's a good point. The catch is that the sensitivity and specificity have to be sufficiently good to accomplish all of our goals.
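[Editor's note: as a concrete reference for the two metrics mentioned here, both come straight from a screening test's confusion matrix. A minimal sketch, with counts invented for a hypothetical smartphone-based screen:]

```python
def sensitivity(true_pos, false_neg):
    """Fraction of eyes with disease that the screen correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of healthy eyes that the screen correctly clears."""
    return true_neg / (true_neg + false_pos)

# Invented counts: 100 diseased and 200 healthy eyes screened.
sens = specs = None
sens = sensitivity(true_pos=90, false_neg=10)   # 0.9
spec = specificity(true_neg=180, false_pos=20)  # 0.9
```

At population scale the specificity side dominates: when disease prevalence is low, even a small false-positive rate can generate more needless referrals than true cases caught.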


Rukhsana Mirza, MD: And getting back to the gist of the research.


Angelo Tanna, MD (Moderator): Yes. There's a lot of evidence that AI works best as a collaborative approach with a human clinician. So I think that will be another strong area, especially in glaucoma, where there's a lot of disagreement about the diagnosis early on. And then there are certain cases of anomalous optic discs in patients with high myopia where experts often don't agree. So there's not a real consensus on the definition of glaucoma in all cases. There are the black-and-white cases, of course, and AI is great at differentiating those, but so are the normative databases we already use with our OCT that don't rely on artificial intelligence.


Paul, you're doing research in the use of artificial intelligence to generate risk scores. Tell us more about that.


Paul Bryar, MD: Yeah. So, we use a big data repository called SOURCE. It's about 25 centers, soon to be 50 centers--


Angelo Tanna, MD (Moderator): That use Epic.


Paul Bryar, MD: Academic medical centers that use the same electronic health record. We clear the data of all patient-identifying information and send it to the repository. It's a lot cleaner than other big data sources because it comes from the same electronic record, so the data format is pretty similar. But you can couple it with outside things such as income, like we talked about. And you have all the lab values, all the medications, all the disease diagnoses, plus home ZIP codes and things like that.


So, what we are doing is this: because glaucoma, as we learned in residency, hits certain populations earlier, more severely, and more prevalently, is that all genetics, or what else is going on there, if anything? You alluded to it earlier: African American populations definitely have higher rates of glaucoma, but they also tend to live in areas with higher air pollution, like you said, PM2.5. So we're looking at all these various factors to figure out whether we can bring those things in. Because when I see a patient, I'm not looking at their ZIP code, right? I'm just looking at them as a patient, at their vision, their pressure, their eye exam. So how can we get that data to me, or to you, when you're seeing that patient? The end goal is to take all the things we're not looking at in the clinical visit, but that are there somewhere in the EHR, and give the provider a risk estimate or a red flag at the point of care. Say you're deciding whether to add a second medication and you're on the borderline: you have a low-risk patient and a high-risk patient, everything else the same except that risk score. You might add the second medicine for the high-risk patient and maybe not for the low-risk patient. That definitely has to be validated and proven. But that's how I think it will augment our ability at the point of care.
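[Editor's note: the point-of-care red flag described above could be sketched as a simple logistic-style score over EHR and social-drivers features. Every feature name, weight, and threshold here is invented for illustration; none of it is drawn from the SOURCE work itself.]

```python
import math

def glaucoma_risk_score(features, weights, bias=-4.0):
    """Combine weighted EHR features into a probability-like score in (0, 1)."""
    z = bias + sum(w * features.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

def red_flag(score, threshold=0.5):
    """Surface a flag to the provider only when the score crosses a threshold."""
    return score >= threshold

# Invented weights: intraocular pressure, age, family history, and a
# neighborhood air-pollution (PM2.5) index derived from the home ZIP code.
WEIGHTS = {"iop_mmHg": 0.12, "age_decades": 0.3,
           "family_history": 1.0, "pm25_index": 0.4}

high_risk = {"iop_mmHg": 24, "age_decades": 7, "family_history": 1, "pm25_index": 2}
low_risk = {"iop_mmHg": 16, "age_decades": 4, "family_history": 0, "pm25_index": 0}
```

With these invented numbers, the high-risk patient crosses the flag threshold while the low-risk patient does not; a real system would learn the weights from outcomes data and, as noted above, would need validation before clinical use.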


Rukhsana Mirza, MD: I think one thing we haven't discussed that's possible in the age of AI is wearable devices, or devices in the home. We talked about OCT, but there are other devices a patient might wear, and we could harness that data about what's going on outside of the clinic visit, right?


Paul Bryar, MD: Yeah. Cardiology uses that all the time, right? Home heart monitors, pulse rates.


Rukhsana Mirza, MD: And glycemic variability.


Paul Bryar, MD: Yeah. So, plugging into that is definitely possible. And I can't let this go by without thinking of how we get through our days, too. I'd like to latch onto this: how can we deploy resources in the hospital or in our clinic during a busy day? We have 500 patients coming through our clinic, right? How can we say, "This area is getting overwhelmed, we need to shunt more resources there," or, "Your pictures are running behind, let's get more retinal imagers on your side of the clinic"? How can we make the patient experience better? We're looking at AI to monitor all these things, because we know exactly when somebody checks in, when their eyes are dilated, when they're waiting for their pictures, when they're ready to see me or see you. We could have something monitoring all those timestamps constantly and redeploying resources to make everyone's lives easier and get people through the clinic quicker.
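[Editor's note: the timestamp monitoring described here could be sketched as a simple rule over clinic events. The station names, event shape, and the 20-minute threshold are all invented for illustration:]

```python
from datetime import datetime, timedelta

def bottlenecks(events, now, threshold=timedelta(minutes=20)):
    """Return stations where any patient has waited past the threshold.

    events: list of (patient_id, station, arrived_at) tuples taken from
    check-in / dilation / imaging timestamps like those in the EHR.
    """
    flagged = set()
    for _pid, station, arrived in events:
        if now - arrived > threshold:
            flagged.add(station)
    return flagged

now = datetime(2024, 1, 1, 10, 0)
events = [
    ("p1", "imaging", datetime(2024, 1, 1, 9, 30)),   # waiting 30 minutes
    ("p2", "imaging", datetime(2024, 1, 1, 9, 55)),   # waiting 5 minutes
    ("p3", "dilation", datetime(2024, 1, 1, 9, 50)),  # waiting 10 minutes
]
# bottlenecks(events, now) flags "imaging": send more retinal imagers there.
```

A production version would presumably run continuously against the live timestamp feed and trigger staffing alerts rather than return a set, but the core logic is this simple comparison.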


Angelo Tanna, MD (Moderator): Yeah. As the population ages, we all get busier and busier. We're already there, and it's going to continue. Given the need for efficient delivery of healthcare, I think AI will be a major driver in that direction. We can hope for AI driving discovery in medicine, and it will drive efficiency, and I think accuracy and quality, too.


Paul Bryar, MD: We'll get there eventually.


Angelo Tanna, MD (Moderator): Yeah. It's been a great discussion.


Paul Bryar, MD: Yeah, it was fun. Thank you.


Rukhsana Mirza, MD: The future is exciting. The future is now.


Angelo Tanna, MD (Moderator): Thank you.


Melanie Cole, MS (Host): Thank you all so much for joining us today for such a lively discussion on this exciting topic. Thank you again. And to refer your patient or for more information, please visit our website at breakthroughsforphysicians.nm.org/ophthalmology to get connected with one of our providers.


That concludes this episode of Better Edge, a Northwestern Medicine Podcast for physicians. I'm Melanie Cole.