Grant Awardees 2023-24: Initiative for Responsible AI in Medicine

Showcasing the 2023-2024 Grant Awardees for Inspiring and Excellent Research in the Field of Responsible AI in Medicine

 

The Tel Aviv University Initiative for Responsible AI in Medicine is dedicated to fostering trust, safety, and responsibility in the research, development, and application of AI within the medical field. Our mission is to ensure that technological advancements are not only scientifically validated and ethically guided but also truly beneficial to all aspects of healthcare.

 

We are excited to announce the recipients of our 2023-2024 research grants, recognizing exceptional contributions to the field of Responsible AI in Medicine. Below, you'll find the awardees and an overview of their pioneering research.


Empathetic Wellbeing: Using AI Responsibly in the Art Museum

 

The grant-winning research "Empathetic Wellbeing: Using AI Responsibly in the Art Museum" is conducted by:

The research examines the intersection of art, empathy, and artificial intelligence within museum settings, building on a recent experiment demonstrating that gallery environments can enhance visitor trust, a foundation of empathy, in AI-generated art. This finding prompts critical consideration of responsible AI use in such contexts. Through an artistic-scientific dialogue, the researchers aim to create an art installation, an exhibition that doubles as a research lab, that challenges visitors to confront conflicts surrounding empathy, questioning the differing roles of sound and image in fostering human responsiveness and the involvement of AI in this process. Understanding the relationship between empathy and AI is vital, particularly in light of past research indicating that AI can surpass human doctors in empathetic communication, ultimately fostering greater understanding across various fields.

 

Please follow updates about the 'Re: Empathy' exhibition at the TAU Art Gallery here: https://www.facebook.com/TAUArtGallery/

 


(How can AI be integrated into an art museum to responsibly enhance empathy among visitors?)


From Responsible Genomics to Responsible Medical AI

 

The grant-winning research "From Responsible Genomics to Responsible Medical AI" is conducted by:

AI is poised to become ubiquitous in clinical practice, with diverse applications across all healthcare sectors. Because responsible AI use in healthcare is still in its nascent stages, there is an urgent need for responsible governance of AI healthcare systems, achieved by mapping the uncertainties, benefits, and risks of AI systems both in general and within specific healthcare sectors. This project aims to bridge that gap by developing frameworks that ensure the responsible use of AI technologies in medicine, addressing both ethical and legal considerations.

 

(Is Generative AI the Future of Ethical Clinical Decision-Making in Healthcare?)


Evaluation of Language Models and ICU Nurses for Clinical Decision Support

 

The team conducting the grant-winning research "Evaluation of Language Models and ICU Nurses for Clinical Decision Support" includes:

Advances in artificial intelligence have enabled new clinical decision support tools based on natural language processing models. However, important concerns remain regarding the safe and responsible integration of AI in high-acuity healthcare settings. This research compares the decision-making capabilities of AI language models and ICU nurses in common critical care scenarios. Focusing on the safe and responsible integration of AI into critical care environments, the study will examine key attributes such as transparency, robustness, and fairness in critical care decision-making. This approach aims to provide insights into AI's potential applications and limitations in intensive care settings.

 

(How can AI be Ethically and Legally Incorporated into Traditional Medical Practices?)


Discussion Groups

The Tel Aviv University Initiative for Responsible AI in Medicine proudly supports discussion groups within research grants, fostering thoughtful interdisciplinary dialogue on topics in responsible health and medicine. Through these discussion groups, the following researchers are exploring important and timely issues:


Navigating AI in Healthcare Organizations: A Managerial Perspective

This discussion group explores the pivotal role that the deployment and use of AI-based tools in healthcare organizations plays in shaping organizational processes, as well as physicians' usage patterns. This interdisciplinary platform welcomes scholars from management, information systems, industrial engineering and management, and sociology, fostering a comprehensive understanding of how AI-based tools draw new boundaries of responsibility and disrupt conventional work patterns and organizational processes. The discussion will encompass accountability for AI-assisted decisions, the impact on physician responsibilities and perceptions, monitoring of human-AI decisions, transparency with patients, and the measurement of AI usage patterns in healthcare organizations.

 

(What Should a Manager Consider When Deploying AI in Healthcare Settings?)


Interdisciplinary Explorations into Responsible Generative AI and Psychology

The integration of artificial intelligence capabilities into mental health broadly and psychotherapy specifically carries tremendous potential but also risks. In the past year, publicly available AI services like ChatGPT have exhibited impressive abilities to perform tasks in mental health care previously considered exclusive to humans. Amidst accelerated development attempting to harness such capabilities to advance care, there is a need to establish clinical and ethical guidance regarding responsible and safe AI use that minimizes harm in the psychotherapy context. This mandates interdisciplinary thinking spanning practicing psychotherapy clinicians, ethicists, lawyers, researchers, and social/community perspectives. This discussion group aims to formulate ethical principles for integrating generative AI in psychotherapy settings and provide guidelines for clinician supervision of such tools.

 

(What New Ethical Guidelines and Tools are Required for Using Generative AI in Psychotherapy?)


Understanding and Addressing Elderly Mental Health: The Role of Culturally Sensitive AI

This discussion group explores the intersection of AI, mental health, and cultural sensitivity in the context of older persons. It aims to provide valuable insights and recommendations on priorities and best practices for employing artificial intelligence in mental health care, emphasizing the importance of culturally sensitive AI implementations for elderly populations. Emerging topics will include measurements and methods to ensure fairness against ageism in AI and its impact on mental health care for older persons, as well as ethical AI as a diagnostic screening system for Alzheimer's disease and as a system for enhancing mental health in the elderly.


(How can AI Improve Elderly Mental Health?)

Tel Aviv University makes every effort to respect copyright. If you own copyright to the content contained here and/or believe the use of such content is infringing, please contact the referral system >>