On Mental Health Services Research: Can Big Data Improve Mental Healthcare?

In this episode, Dr. Daniel Knoepflmacher talks with Dr. Jyoti Pathak about the critical role of mental health services research and the innovative use of technology in this field. Dr. Pathak describes how researchers use machine learning and artificial intelligence to identify and understand the factors contributing to disparities in mental healthcare delivery. He shares compelling examples from his work on postpartum depression and the stigma surrounding addiction, demonstrating how advanced tools uncover complex structural problems within our system. Tune in to learn how harnessing technology can lead to targeted solutions for improving access and equity in mental healthcare.

On Mental Health Services Research: Can Big Data Improve Mental Healthcare?
Featured Speaker:
Jyotishman Pathak, PhD

Jyotishman Pathak, Ph.D. is Chief of the Division of Health Informatics, the Frances and John L. Loeb Professor of Medical Informatics and Vice Chair for Entrepreneurship in the Department of Population Health Sciences at Weill Cornell Medicine. His research focuses on analyzing electronic health records, insurance claims and social determinants of health data to study mental health services utilization and treatment outcomes in depression, substance use and suicide. Dr. Pathak is also the founder of Iris OB Health Inc., a mental health startup company spun out of Weill Cornell Medicine to develop AI technology for early detection of postpartum depression and anxiety.

Transcription:
On Mental Health Services Research: Can Big Data Improve Mental Healthcare?

Dr. Daniel Knoepflmacher (Host): Hello and welcome to On the Mind, the official podcast of the Weill Cornell Medicine Department of Psychiatry. I'm your host, Dr. Daniel Knoepflmacher. In each episode, I speak with experts in various aspects of psychiatry, psychotherapy, research, and other important topics on the mind.


As many of you listening probably know, we are facing a major problem in psychiatry and mental health care in the United States. Put simply, many who need our help are not getting the care we can provide. While a shortage of healthcare providers contributes to this problem, it is only one of many factors. With a system as large and complex as ours on a national scale, gaining a nuanced understanding of disparities in mental health care is a massive undertaking in itself. So, how do we find the right solutions without fully understanding the problems? Fortunately, there are talented researchers who make it their mission to take on these challenges, working in the field of mental health services research.


Using AI and other machine learning tools, they're able to analyze massive amounts of data to clarify the key factors contributing to widespread problems in our mental health care system. And that's just the first step. These researchers are also using data and technology to create tools for improving diagnosis and treatment interventions in Psychiatry that make care more accessible, equitable, and personalized.


Today, we'll hear from a leader in the field of mental health services research who will help us understand how he investigates mental health disparities and helps craft solutions to address the needs of many who remain neglected in our country. I'm happy to welcome our guest, Dr. Jyoti Pathak. Dr. Pathak is the Frances and John L. Loeb Professor of Medical Informatics and Chief of the Division of Health Informatics here at Weill Cornell Medicine. Jyoti, thank you so much for joining us today.


Dr. Jyotishman Pathak: Thank you for the kind invitation, Daniel. It's a real pleasure to be here.


Dr. Daniel Knoepflmacher: Well, I'm really happy to speak to you. And I'm going to begin the way we always do on this podcast: by asking you about your story. How did you develop an interest in medical informatics and then move towards a focus on mental health services research specifically?


Dr. Jyotishman Pathak: Thank you for that question. As you alluded to, I am a mental health services researcher, but my foundational training and background is in Computer Science and Biomedical Informatics. Coming from India, I arrived in this country in 2002 and started graduate school at Iowa State University, where I was studying Computer Science, primarily focusing on developing approaches and methods for distributed data mining and knowledge discovery. That's a fancy way of saying how we can look at very large data sets and identify certain patterns and look for certain trends. My work at that time was very theoretical in nature, so we were proving theorems, we were creating lots of scientific formulas. But somewhere in me, I always had the desire to see how these approaches could be applied in the real world.


Five years later, at the time of my graduation, I was very fortunate to have two dramatically different job opportunities. One was to take a research position at IBM Research right here in New York. The second was to join as a fellow in biomedical informatics at the Mayo Clinic. Now, coming from India, I had always heard about the world-famous Mayo Clinic, but had never been to Rochester, Minnesota. So, it was an interesting dichotomy in many ways. But at the end of the day, I ended up taking the Mayo opportunity, and told my wife at the time that we might try this for a year and then head to the East Coast or West Coast.


Well, we ended up staying there for over eight years, and it was actually during my time at Mayo that I was introduced to the field of Psychiatry, Clinical Psychiatry and Mental Health Services Research. Now, this was around 2007, and Mayo was making huge investments as an institution to build a massive biobank and biorepository, recruiting participants both in Minnesota and across neighboring states to collect biospecimens and to link such data with electronic medical records. And I had the honor and privilege to work with the Chair of Biomedical Informatics at that time, Chris Chute, who is now a faculty member at Hopkins, and the Chair of Clinical Pharmacology, Dick Weinshilboum, who is still at Mayo Clinic, who were very interested in studying the pharmacogenomics of antidepressants. To be quite candid, this is a phrase that I had never heard before, coming from an engineering school with a computational background. But what I learned very quickly from Dr. Weinshilboum was that prescribing medications in general, and certainly antidepressants, involves a huge degree of trial and error. It is very challenging for a clinician to make treatment decisions, and more often than not, they simply hope that these treatments work. So, his lab was trying to make this overall process "more precise."


And with the establishment of this biorepository at Mayo Clinic, there was this unique opportunity to look at multi-modal data: patient medical records, genomics data, self-report survey instruments, insurance claims data sets. And the goal here was to study treatment outcomes for commonly prescribed antidepressants such as SSRIs. So, in a way, I was like a kid in a candy store, right? I immediately jumped at what was perhaps a once-in-a-lifetime opportunity for me to apply some of the foundational skills that I had learned in data mining and machine learning. And, hopefully, it could lead towards applications in predicting treatment response, in trying to understand drug side effects, and ultimately in impacting patient care.


So, that's how, Daniel, my journey started on this path. And just to wrap up, as I started digging deeper, I also very quickly learned that genetics and neurobiology, of course, do matter. But when it comes to mental health services and mental health treatment in general, there are several other aspects that need to be considered. Many patients remain underdiagnosed and hence undertreated. Timely access to mental health services, as you mentioned in your opening remarks, remains a huge barrier, even in places like New York City, where we do have a large concentration of mental health services providers. And I think it's becoming increasingly clear that social determinants of health, where you live, where you work, your exercise habits, your diet, all of these play an important role in disease risk and treatment response.


So over the last decade or so, my work and my lab's work has sort of gravitated in that direction in terms of how we can look at large data sets to answer some of these questions broadly in mental health services research.


Dr. Daniel Knoepflmacher: Wow. So, we're very fortunate that you didn't end up at IBM. I think you were very clear in describing this path. But as someone who's not a computer scientist or a medical informatics specialist, I want to just make sure that we define some of the terms for our audience as we continue our discussion. So, specifically, what would be your entry-level definition of medical informatics?


Dr. Jyotishman Pathak: Yeah. So, I consider medical informatics a highly interdisciplinary field that intersects with information and computer science, cognitive and social sciences, mathematical and statistical sciences, engineering, and the biological and physical sciences, with a focus on how we can use and leverage large biomedical data sets and develop new technologies, all geared towards improving human health.


So, just to give some examples, the design and implementation of patient electronic medical records, which I'm sure you use almost every single day, is a big thrust in our field of Medical and Biomedical Informatics. The development of machine learning models that can predict the risk of developing a certain disease is another big element of the field.


Dr. Daniel Knoepflmacher: And am I correct in thinking that mental health services research is a subset of medical informatics? It's a specialization focused on mental health care within informatics?


Dr. Jyotishman Pathak: I would regard mental health services research as a field aimed at studying the effectiveness and, broadly, the efficiency of mental health services, right? So for example, questions around how we facilitate broader engagement of our patients with mental health care and mental health treatments. In other words, treatment adherence is a big topic of research in mental health services. What are some of the barriers and, vice versa, facilitators of treatment access? How can we develop interventions to address some of these barriers, and what disparities exist? That's a big area of research in mental health services.


And I think there is also a huge intersection in this field with studying the impact of existing policies, at the local, state, and federal level, as well as the development of new policies, whether within CMS, Medicare, or Medicaid, and how they impact the services that are being offered across different health systems. So, my work personally, and my lab's work, really tries to marry both of these fields: how can we use technology and large data sets to answer some of these questions in the field of mental health services research?


Dr. Daniel Knoepflmacher: Well, let's talk about some of that technology. So in business, in medicine, in technology itself, you know, in companies that are focused on tech and in research, as you've described, there's a lot of talk about big data, which I imagine relates to the data mining that you were talking about. So, can you describe what is the power of big data?


Dr. Jyotishman Pathak: Good question, Daniel. At a very high level, big data is big, right? But it is characterized by what I would call three major features, and these are volume, variety, and velocity, the three V's. So, what do we mean by that? Volume, of course, implies large amounts of data. And we see a lot of that in different aspects of our life. Variety refers to the form the data takes: it could be text, it could be an image, it could be audio, it could be video. So, multi-modal data sets are what variety relates to. And then, velocity implies the speed at which the data is generated. Just to give you a very small data point, by some estimates, on average, more than 130,000 photos are uploaded to Facebook every minute, 130,000 photos every minute. So, that's what I would call very big and high-velocity data.


Now, of course, our interest is more in the field of medicine and healthcare. So when it comes to big data for medicine, it essentially relates to information about diseases, treatments, and patient outcomes that is collected and analyzed during clinical care. It could very well be electronic medical record data. It could be imaging data sets, or omics, epigenomics, and transcriptomics data sets. There is this whole new world of the Internet of Things, where wearable and sensor data is being collected in a very passive way. Social media is increasingly playing a big role, broadly in healthcare, but certainly in mental health care. So, data from Twitter and Facebook and other resources becomes very relevant. And for all of these data sets, as you can imagine, the volume is very large, and they are growing exponentially. So, our group and many other groups across the country and the world are trying to apply AI and machine learning methods to see what we can learn from these data sets. Can we use them for predicting disease? Can we use them for early diagnoses? Can we use such data for understanding patterns of treatment adherence or non-adherence? These are the types of questions that people are working on.


Dr. Daniel Knoepflmacher: AI is on everybody's minds right now. And you just described how you're using AI and machine learning in your work. Can you help us just for clarification, AI and machine learning, what's the distinction there?


Dr. Jyotishman Pathak: You know, these terms are frequently used interchangeably. But if you ask a traditional computer scientist, there are differences, right? So again, from my point of view, I regard AI as an umbrella term, a broad field of technology that is, in essence, trying to mimic human intelligence in machines, right? So, how can we take natural human intelligence and replicate it within machines?


Machine learning is a subfield of AI, right? So, it's a subdiscipline within the field of artificial intelligence that is much more focused on how machines and computers can learn and improve from experience. You could learn from existing data sets and develop models that improve over a period of time on their own, so there could be a self-learning process. But at the end of the day, the focus here is how computers can learn and improve from prior experience.


And all of these methods use logical reasoning. There is a lot of mathematical modeling that is applied to perform certain tasks. And I think it's fair to say that in this day and age, we are surrounded, unbeknownst to us, by these technologies. I'm sure you have a Siri or an Alexa at your home or on your desk. If you drive a Tesla, there is a lot of AI technology behind its driver-assistance features. You watch movies on Netflix or Amazon; there is AI behind that in terms of recommendations of, you know, movies and whatnot. And so, we are again surrounded by these technologies. Our work is focused on how some of these tools and algorithms could be used for studying health services questions in depression, suicide, and substance use disorders. Those are the areas that our lab focuses on.


Dr. Daniel Knoepflmacher: So, let me dig a little deeper into that last point you made. So, you have these powerful tools that you're harnessing, and you're looking at depression, suicide, and other mental health-related issues. What are the major research questions you're investigating?


Dr. Jyotishman Pathak: Yeah. So, our lab primarily focuses on three types of questions. Let me walk you through them. One area of investigation that we are very much interested in is what we broadly call symptom detection. So, for example, when you think about depression as a condition, it's a highly heterogeneous disorder, which is often characterized by many symptoms: a persistent feeling of sadness, lack of energy, loss of appetite, social withdrawal. These are all symptoms that a patient might be experiencing. And typically, when they go and see a doctor, whether a psychiatrist or any other clinician, as I'm sure you can relate to, Daniel, most of this information is recorded in an electronic clinical encounter note. We call that kind of information natural language. As the patient is describing their feelings and their symptoms, such information is getting documented in the note as natural language.


While it is very easy for humans to comprehend such information, it's not that easy for machines to actually interpret and understand. So, a body of work in our lab is focused on developing machine learning methods that we call natural language processing. You process the natural language to extract meaningful information, which could then be presented to clinicians for appropriate clinical decision-making. So, that's one body of work that we are actively working on.


Dr. Daniel Knoepflmacher: Can I just stop you there and ask for a little clarification? So, an example of that would be combing through notes written by doctors and flagging that there might be depression. You know, when you talk about symptom detection, how would this operate?


Dr. Jyotishman Pathak: Exactly. Yeah. So, you could imagine a system which is processing notes as they're getting generated, or maybe, right before a patient encounter, the system is processing historical notes for that particular patient. And it could detect some of these symptoms that might have been documented in prior visits or prior encounters. And that information could be presented to a clinician at the time of that encounter. And as a psychiatrist, you are probably more familiar with some of these elements, but other practicing physicians and clinicians may or may not be fully aware of some of these symptoms. And so, some of the tools that we are developing are essentially improving how this information can be surfaced to the clinician.
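
[Editor's note: As a toy illustration of the symptom-detection idea described here, the Python sketch below flags symptom mentions in note text with simple pattern matching and a crude negation check. The symptom lexicon, patterns, and negation rule are all hypothetical simplifications; a production NLP pipeline would rely on curated clinical vocabularies (e.g., UMLS concepts) and far more robust linguistic processing.]

    import re

    # Hypothetical symptom lexicon; a real system would map mentions to a
    # curated clinical vocabulary rather than hand-written patterns.
    SYMPTOM_PATTERNS = {
        "depressed mood": r"\b(depressed|persistent sadness|feeling down)\b",
        "anergia": r"\b(lack of energy|fatigue|low energy)\b",
        "appetite loss": r"\b(loss of appetite|poor appetite)\b",
        "social withdrawal": r"\b(social withdrawal|isolating)\b",
    }
    NEGATION_CUES = r"\b(no|denies|without|negative for)\b"

    def detect_symptoms(note_text: str) -> dict:
        """Flag symptom mentions in a note, skipping simple negations."""
        found = {}
        for symptom, pattern in SYMPTOM_PATTERNS.items():
            for match in re.finditer(pattern, note_text, re.IGNORECASE):
                # Skip mentions preceded by a negation cue just before them.
                window = note_text[max(0, match.start() - 40):match.start()]
                if re.search(NEGATION_CUES, window, re.IGNORECASE):
                    continue
                found[symptom] = match.group(0)
        return found

    note = "Patient reports persistent sadness and loss of appetite. Denies fatigue."
    print(detect_symptoms(note))
    # {'depressed mood': 'persistent sadness', 'appetite loss': 'loss of appetite'}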


Dr. Daniel Knoepflmacher: Thank you for clarifying that. And continue. There were other problems, questions that you're addressing, yeah?


Dr. Jyotishman Pathak: Yeah. So, the second set of problems that we are working on I'd broadly like to label as pattern mining or clustering, right? So again, when we think in terms of depression, some of the most commonly used treatments include psychotherapy or antidepressants. And we all know that different patients respond differently to such treatments. Some patients have side effects, some don't; some patients tolerate the treatments very well, some don't. And as I mentioned at the very beginning, a lot of this is trial and error, right? So, our goal here is to see whether we can tease out some of this heterogeneity by developing computational algorithms that can analyze hundreds of thousands of medical records to essentially derive clusters of patients who might experience a certain type of side effect versus those who may not, or who might tolerate certain dosing regimens versus those who may not. So, can we essentially look at subphenotypes or subgroups of patients for different treatments that could be prescribed? That's one area that we are very, very actively working on. And again, as you can imagine, this information could be very beneficial for a clinician during their encounter.
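
[Editor's note: A minimal sketch of the clustering idea, using scikit-learn's KMeans on synthetic stand-in features. The feature set, algorithm, and cluster count are illustrative assumptions, not the lab's actual methodology.]

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for per-patient features derived from medical
    # records: e.g., dose, weeks on therapy, symptom-score change, and a
    # count of documented side effects (all fabricated here).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))

    # Standardize so no single feature dominates the distance metric,
    # then partition patients into k candidate subgroups.
    X_scaled = StandardScaler().fit_transform(X)
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

    # Each cluster is a candidate subphenotype: patients grouped together
    # look similar in tolerability/response feature space.
    print(np.bincount(kmeans.labels_))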


And then, the last set of problems, where we have a number of studies underway, is broadly the application of AI and machine learning methods to derive disease risk, right? So here also we are looking at historical data, whether it's medical record data sets, insurance claims data sets, or patient self-report survey data sets, to predict the risk of developing depression or to predict a future suicide attempt in individuals. And we'd like to think that some of this information can assist clinicians in making appropriate decisions, maybe thinking about preventative interventions that could potentially be applied in a given context.
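
[Editor's note: A hedged sketch of the kind of risk model described: a classifier trained on historical features to predict a later diagnosis. Everything here is synthetic; the feature construction, model family, and evaluation in the actual studies may differ entirely.]

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for features drawn from EHRs, claims, and
    # self-report surveys; y marks a hypothetical later depression diagnosis.
    X, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.9], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Discrimination (AUROC) is the usual headline metric for risk models.
    probs = model.predict_proba(X_test)[:, 1]
    print(f"AUROC: {roc_auc_score(y_test, probs):.2f}")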


Dr. Daniel Knoepflmacher: I want to touch on some of the specific projects that you've been doing. I know we had a prior episode on Reproductive Psychiatry with Lauren Osborne and Alison Hermann, two prominent reproductive psychiatrists. Here, you've done some research on postpartum depression, and that's something which relates to a key focus, of course, in Reproductive Psychiatry and women's mental health. So, can you tell us about the work that you did in this area?


Dr. Jyotishman Pathak: Absolutely. It's been a real pleasure to work with Drs. Osborne and Hermann. And I actually really enjoyed the episode you did with them. So, as was discussed even during that episode, by some estimates, one in seven women in the US suffers from postpartum depression. The numbers are much higher in Asia and Africa, close to one in five. And the latest data from the CDC suggest that mental and substance use disorders are the number one cause of maternal mortality in the US. It's no longer cardiovascular causes, but rather mental health and substance use disorders.


Now, as it turns out, approximately 50% of women remain undiagnosed and hence untreated. In fact, there's a recent study that was published in which huge racial and ethnic disparities were observed in who ends up receiving treatment. This particular study showed that among perinatal individuals from minority backgrounds who screened positive, only half were referred for mental health treatment, compared to non-Hispanic white individuals, right? So already, very few women are getting diagnosed and, even after diagnosis, very few are getting any form of treatment.


So, our goal in this series of studies, currently underway, is to see if we can improve diagnosis of postpartum depression and, hopefully, treatment, so that the right woman can receive the right treatment at the right time. Our first body of work, led by my colleague Yiye Zhang, who is also a faculty member in Informatics here, was focused on developing a machine learning model that can predict the risk of postpartum depression using electronic medical record data. In this study, we used de-identified clinical data from patients across several academic medical centers in and around New York City. We developed and experimented with many different machine learning algorithms, and the one that performed the best was able to predict the risk of developing postpartum depression with almost 90% accuracy.


And what was even more interesting was that the model performance stayed consistent throughout pregnancy. In other words, if I am an OB-GYN and I have this information about a patient who might develop postpartum depression as early as their first trimester, I have a significant window and an opportunity to consider preventative interventions, lifestyle interventions, things like sleep hygiene, exercise, and diet, which could ultimately lead towards preventing or reducing the risk of developing postpartum depression. So, that's how we got started on this entire journey.
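
[Editor's note: One way to check the trimester-stability property mentioned above is to compute the model's discrimination separately in each trimester cohort. The sketch below uses fabricated scores purely to show the mechanics; it does not reproduce the study's data or results.]

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Fabricated arrays: y_true is the eventual PPD outcome, y_score the
    # model's predicted risk, and trimester records when the features
    # were observed (1, 2, or 3).
    rng = np.random.default_rng(1)
    n = 3000
    y_true = rng.integers(0, 2, size=n)
    y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=n), 0, 1)
    trimester = rng.integers(1, 4, size=n)

    # Stability check: discrimination should hold within each trimester
    # cohort, not just in the pooled data.
    for t in (1, 2, 3):
        mask = trimester == t
        print(f"Trimester {t}: AUROC = {roc_auc_score(y_true[mask], y_score[mask]):.2f}")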


Now, of course, one of the things that we and many others are working on is how we eventually see these AI models being used in real-world clinical care. Because we can geek out about analyzing large data sets and coming up with the most accurate model, but if, at the end of the day, they're not used by our clinical colleagues, if they're not benefiting patient care, I would argue that we have not done our job. So in this regard, my colleague Yiye Zhang is again leading an effort, in conjunction with New York-Presbyterian Hospital, to implement the models that we have created as a clinical decision support tool within Epic, which is our electronic health record system, so that it can inform our OB-GYN providers during prenatal visits about patients who might be at risk for developing postpartum depression, right?


So again, our expectation is that once this information is available, it could facilitate some sort of shared decision-making between the patient and their provider, and maybe caregivers and other family members. But of course, it also introduces important ethical and privacy issues. There are questions about how we should present this information and when this information should be presented. Pregnancy is a highly vulnerable phase in a woman's life. What support services are in place so that the right interventions could be implemented? These are all clinical workflow and clinical process questions that we are trying to understand as we pilot these algorithms in real-world environments, in this case across outpatient OB clinics.


Dr. Daniel Knoepflmacher: Wow. So many pieces to that. You're talking about the overall population and the under-identification of postpartum depression, and the disparities, because I imagine this screening cuts across all groups coming in for care, so it helps to address the disparities. But also, since we don't have enough mental health providers, this helps empower the primary providers in OB to ask these questions, to address these issues, and then, when needed, get the support of the right mental health care provider. So, that sounds like a massive change and a beautiful illustration of the work you do. I'm wondering, just briefly, are there any other pieces along pregnancy or Reproductive Psychiatry that you've also been working on?


Dr. Jyotishman Pathak: Thank you for asking, Daniel. We have another complementary set of efforts that we are working on, in this case also with Alison, but also with Jonathan Avery, who is an addiction psychiatrist here at Weill Cornell. And as I understand, he has been a guest on this podcast as well. Our work with John in that case is trying to understand stigma, particularly in birthing people who might have a prior history of substance use, right? So, if I may indulge you: as we know, stigma is also a huge barrier to seeking mental health care. And I regard stigma as a social construct, right, which is often communicated through language. The words we choose, the tone of our language, all of that matters and could potentially induce stigma.


The first question that we are actually trying to address here is how stigma manifests during patient care via the use of stigmatizing language in clinical documentation. In other words, do we see intentional, or maybe unintentional, use of such language in the patient's medical record during a clinical encounter, which could lead towards stigma? Both in our own experience and in other studies, you'd be surprised that this is unfortunately very common. We have seen use of such language, which personally I consider very offensive and unprofessional, terms like "the individual is a junkie and needs to get clean," documented in the encounter note. Or phrases like "a drunk was brought to the emergency room again," or "a dirty user came today asking for pills." Again, language which certainly, in my mind, is not professional and is highly offensive.


And now, with the 21st Century Cures Act and the whole OpenNotes initiative, our patients have easier and wider access to their own medical records. This means that individuals can actually see what was written and documented during a clinical encounter. And previous research has shown that the use of such stigmatizing language in clinical documents negatively impacts the therapeutic relationship between a patient and a provider. There is a lack of trust, there is misjudgment, and that's all detrimental in nature.


So, our research with John is multidimensional. We are, first of all, trying to understand and learn from different stakeholders, patients, caregivers, family members, and clinicians, about how they may have experienced stigma during clinical care, and how stigma might have impacted their trust and the confidentiality of the relationship that they have with their provider. We are also trying to understand how AI and machine learning could be used to detect such language automatically from the patient's medical record. Think of it as when you are typing something in Microsoft Word and it flags a spelling error, right? Can we detect such language somewhat automatically as an encounter note is created? And then, of course, our goal here is to think about tools and educational interventions that we can offer to clinicians, which hopefully would lead towards a reduction in the use of such language in clinical documentation.
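
[Editor's note: The spell-checker analogy can be made concrete with a small term-flagging sketch. The term list and suggested alternatives below are illustrative, loosely based on person-first language guidance; they are not a validated lexicon or the team's actual tool.]

    import re

    # Illustrative mapping from stigmatizing phrases to person-first
    # alternatives (assumed for this sketch, not clinically validated).
    STIGMA_TERMS = {
        r"\bjunkie\b": "person who uses drugs",
        r"\baddict\b": "person with a substance use disorder",
        r"\balcoholic\b": "person with alcohol use disorder",
        r"\bdrug abuser\b": "person who uses drugs",
        r"\bget clean\b": "achieve remission / stop using",
    }

    def flag_stigma(note_text: str) -> list:
        """Return (flagged phrase, suggested alternative) pairs, much like
        a spell-checker underlining a word as you type."""
        flags = []
        for pattern, suggestion in STIGMA_TERMS.items():
            for match in re.finditer(pattern, note_text, re.IGNORECASE):
                flags.append((match.group(0), suggestion))
        return flags

    print(flag_stigma("Patient is a junkie and needs to get clean."))
    # [('junkie', 'person who uses drugs'), ('get clean', 'achieve remission / stop using')]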


So, our very first study, which is still underway, is focused primarily on pregnant birthing people with a prior history of substance use, including opioids. As I mentioned, pregnancy is, of course, a highly vulnerable period in a woman's life. And again, research has shown that pregnant women with a current or past history of substance use are especially often marginalized. There are huge disparities when it comes to treatment, and stigma is a big component of their lived experience. So in this preliminary work, which we started late last year and which is still underway, we looked at medical record data on approximately 2,700 patients with substance use disorder. This was roughly about 1 million encounter notes, right? So, not a very large data set, but reasonably large. And again, our goal here was to develop a machine learning tool which could automatically detect the use of stigmatizing language in the encounter notes.


Daniel, we were so surprised that out of these roughly 1 million notes, more than 85% had such language. Terms like addict, alcoholic, drug abuser, junkie, again, terms that I personally consider unprofessional and offensive, were very, very common. And unsurprisingly in this case, and unfortunately, individuals from racial and ethnic minorities, Black patients, Hispanic patients, and individuals aged 35 and older had a much higher prevalence of such terms in their notes. We also looked at provider characteristics.


The good news, which you can probably relate to, was that notes written by psychiatrists and clinical psychologists had the lowest prevalence of stigmatizing language. So, something good is happening in this field.
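
[Editor's note: The subgroup comparisons reported here reduce to computing the prevalence of flagged notes per patient or provider group. A minimal pandas sketch with made-up values, assuming a per-note flag such as the one sketched earlier:]

    import pandas as pd

    # Hypothetical per-note table: one row per encounter note, with a
    # boolean flag from a detector like flag_stigma() above and simple
    # patient/author attributes (values fabricated for illustration).
    notes = pd.DataFrame({
        "has_stigma_term": [True, True, False, True, False, True],
        "patient_race": ["Black", "White", "Hispanic", "Black", "White", "Hispanic"],
        "author_specialty": ["IM", "Psych", "IM", "EM", "Psych", "EM"],
    })

    # Prevalence of stigmatizing language by patient group and by note
    # author's specialty, mirroring the comparisons described above.
    print(notes.groupby("patient_race")["has_stigma_term"].mean())
    print(notes.groupby("author_specialty")["has_stigma_term"].mean())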


Again, this is all a very preliminary study, but the point being that this is an area where we are truly seeing how machine learning tools can be applied to the natural language documented in clinical encounter notes, and how we are able to detect such stigmatizing language. We are now doing additional qualitative research, surveys with patients, and focus groups and interviews to understand some of these elements. And eventually, it is our hope that we can develop a tool that could be implemented, ideally within Epic, the medical record system, such that it will detect such language in near real time, either during dictation or while the note is being written. So, it will at least prompt the person who's writing the note that perhaps this is not the right language.


Dr. Daniel Knoepflmacher: As part of training in Psychiatry, there's talk about stigmatizing language when it comes to mental health or addiction, but it's hard in the moment; as people are writing notes, these things may bleed through. So, here's an example of technology helping a culture change in the use of language. It might be there intellectually in the learning, but then it's reinforced by the technology, which I think is a nice use of technology. I guess I want to ask you about potential risks with technology, because we all understand that with AI and other technological developments, there's some risk. So, could you speak briefly about what risks there might be?


Dr. Jyotishman Pathak: Yeah, no, absolutely. As with any new technology, there are many known risks and unknown risks. Obviously, these algorithms require training, and they are typically trained using very large clinical and biomedical data sets, which often are biased to begin with. So, as a result, the algorithms that you are developing could be biased, and they might perpetuate or even amplify biases that would marginalize vulnerable groups, right? An AI tool might misinterpret cultural expressions or may lead to overdiagnoses for certain racial and ethnic minority communities. And so, this is a very active area of research: how can we develop AI tools and algorithms that are fair and not biased? But it is certainly a risk.


I think the other risk that is also in my mind fairly high is privacy. So, we are again dealing with highly sensitive personal information that can be vulnerable to breaches, misuse, and potential commodification. And that could jeopardize a patient's ability to adhere to certain treatment regimens, their trust in the overall system. So, privacy and security is not something to be taken lightly in the context of these tools. And I guess the last point I'll say, Daniel, is that technology is good. But at the same time, overreliance on technology could have a negative impact. I think the term that comes to my mind is dehumanization. And by using too much technology, are we actually going to lose the human touch, the human connection? And I think more than in any other field, in my mind, it is absolutely important in the field of mental health care that there is a robust partnership and relationship between the patient and the provider.


And we have actually conducted a small survey of roughly 500 or so participants. The results were just published recently. We broadly asked this question: "Are you worried about overuse of technology and AI in mental health care?" And not surprisingly, an overwhelming majority of the participants expressed significant concerns about losing that personal touch and how it might impact the relationship with their provider. So, these are the three concerns that come to my mind.


Dr. Daniel Knoepflmacher: I think about humanism and dehumanization as real risks along with those other two potential risks. And I think it's important not to be binary in this because there is a tremendous risk of dehumanization or less humanism in the care. But at the same time, there's already lots of things that are dehumanizing our care. And if technology can help us take care of those so that we have more time to focus on the human elements, that would be a real plus.


So, we're short on time, but I have one final ask of you, and that's to use your personal crystal ball and look into the future: make some predictions about what this kind of research might bring and what impact it might have on the delivery of mental health care.


Dr. Jyotishman Pathak: I'll make two brief comments. One is, as we discussed already, I think we are going to see increasing interest in real-world implementation and adoption of these technologies in clinical care, right? I think we live in this healthcare AI chasm, where a lot of academic work and research in this area is not widely used in clinical care. And on the other hand, industry develops lots of tools which are being used for patient care, but they are not well researched; there is no RCT to validate their findings. So, I think we will see increasing interest in closing and narrowing that gap, and hopefully that will lead towards more robust academia-industry partnerships.


The other area where I think there will be more and more interest is how we use these technologies in low-resource settings, including in lower-middle-income countries, right? Much of the work that's happening right now is primarily in the Western, developed part of the world, and even there, mostly in more urban areas. Individuals and patients in rural parts of our country or in LMICs, lower-middle-income countries, are often not included, and their data is not part of these systems. So there, too, I see an opportunity to perhaps close the gap. And I think the last point that comes to my mind is that, because much of mental health care is impacted by both social and structural constructs, there is an opportunity here for these tools to be used to address some of these elements. And, hopefully, that will lead towards closer collaboration between technologists, clinical medicine, and public health agencies.


Dr. Daniel Knoepflmacher: Well, you're painting a hopeful picture, and I'm excited to see all of these developments unfold in the years to come. I have to say, Jyoti, it was an absolute pleasure speaking with you today. You really taught me a lot in this discussion, and your work is really helping to improve mental health care by identifying and overcoming barriers that so many people face in our country. And I know that this is just the beginning. I have to say, this is the first time I've had a mental health services researcher on this podcast. But given how crucial this work is, I have a feeling it might not be the last. So, thank you very, very much for joining me on this episode.


Dr. Jyotishman Pathak: Thank you again, Daniel. It was a real pleasure talking to you. Much appreciated.


Dr. Daniel Knoepflmacher: Much appreciated, too. And thank you to all who listened to this episode of On the Mind, which is, as most of you know, the official podcast of the Weill Cornell Medicine Department of Psychiatry. Our podcast is available on many major audio streaming platforms, and that includes Spotify, Apple Podcasts, YouTube, and iHeartRadio. So if you like what you heard today, please give us a rating and subscribe. That way, you can stay up to date with all of our latest episodes. And please tell your friends. So, help us get the word out so we can grow even more. We'll be back next month with another episode.


Promo: Every parent wants what's best for their children, but in the age of the internet, it can be difficult to navigate what is actually fact-based or pure speculation. Cut through the noise with Kids HealthCast, featuring Weill Cornell Medicine's expert physicians and researchers, discussing a wide range of health topics, providing information on the latest medical science.


Finally, a podcast to help you make informed choices for your family's health and wellness. Subscribe wherever you listen to podcasts. Also, don't forget to rate us five stars.


Disclaimer: All information contained in this podcast is intended for informational and educational purposes. The information is not intended nor suited to be a replacement or substitute for professional medical treatment or for professional medical advice relative to a specific medical question or condition. We urge you to always seek the advice of your physician or medical professional with respect to your medical condition or questions. Weill Cornell Medicine makes no warranty, guarantee, or representation as to the accuracy or sufficiency of the information featured in this podcast. And any reliance on such information is done at your own risk.


Participants may have consulting, equity, board membership, or other relationships with pharmaceutical, biotech, or device companies unrelated to their role in this podcast. No payments have been made by any company to endorse any treatments, devices, or procedures. And Weill Cornell Medicine does not endorse, approve, or recommend any product, service, or entity mentioned in this podcast.


Opinions expressed in this podcast are those of the speaker and do not represent the perspectives of Weill Cornell Medicine as an institution.