Scholarship Awardees 2023-24: the Samueli Initiative for Responsible AI in Medicine

Introducing the 2023-2024 Scholarship Awardees for Exceptional and Pioneering Research in Responsible AI in Medicine

 

We are thrilled to announce the recipients of our 2023-2024 scholarships, awarded to students for their exceptional research in the field of Responsible AI in Medicine. Below, you will find the scholarship awardees and an overview of their groundbreaking work.

 

 

 

Nadav Gat, Computer Science M.Sc. Student

 

“Usable, Secure, Privacy-Preserving Genomic Data Sharing for AI”

 

Supervised by: Dr. Mahmood Sharif

 

 

(How Can We Ensure Privacy and Ethical Use in Genomic AI Research?)

 

 

We are proud to support Nadav in his research, 'Usable, Secure, Privacy-Preserving Genomic Data Sharing for AI'. Read his interesting research summary:

 

The artificial intelligence era has transformed many fields, including medicine, where genomic data, among other sources, is used to offer personalized and effective medical treatments. Still, privacy remains a major concern that may hinder the development and deployment of such approaches: to become accurate, AI models need to be trained on a multitude of real patient records, which may leak from the systems and the models themselves.

 

"Our research aims to close these gaps by offering a system to accurately measure the empirical privacy risk of genomic AI models, under different settings, and visualize these in an accessible, usable manner."

 

This research will also help us better understand how scientists and health professionals perceive privacy risk, and identify defenses that best address privacy concerns while maintaining model utility. Ultimately, the study will give scientists and health professionals a holistic view of the benefits and risks of genomic AI models, allowing them to advance science and medicine effectively while guarding individuals’ privacy.
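The summary does not say how empirical privacy risk is measured; a common approach in this area is a membership inference test, which checks how easily an attacker can tell training records ("members") from unseen records by their loss. The sketch below uses synthetic loss values purely for illustration; all numbers and names are invented:

```python
import numpy as np

# Hypothetical illustration: a loss-threshold membership inference test.
# A model that memorizes training records tends to assign them lower loss
# than unseen records; the size of that gap is an empirical privacy-risk signal.

rng = np.random.default_rng(0)

# Toy stand-ins for per-record losses (in practice: the model's loss on each
# genomic record, for training members vs. held-out non-members).
member_losses = rng.normal(loc=0.2, scale=0.1, size=1000)     # memorized -> low loss
nonmember_losses = rng.normal(loc=0.6, scale=0.2, size=1000)  # unseen -> higher loss

def attack_advantage(members, nonmembers):
    """Best accuracy of a single loss threshold at separating members from
    non-members, minus the 0.5 chance baseline (a simple empirical risk score)."""
    thresholds = np.concatenate([members, nonmembers])
    best = 0.0
    for t in thresholds:
        # Predict "member" when loss <= t; average the two class accuracies.
        acc = 0.5 * ((members <= t).mean() + (nonmembers > t).mean())
        best = max(best, acc)
    return best - 0.5

risk = attack_advantage(member_losses, nonmember_losses)
print(f"empirical membership-inference advantage: {risk:.2f}")
```

An advantage near 0 means the model leaks little about who was in the training set; an advantage near 0.5 means membership is almost fully exposed.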

 


(Nadav Gat on LinkedIn)

 

 

 

 

Nimrod Harel, Biomedical Engineering Ph.D. Student

 

"Advancing Responsible AI in Medicine to Design Trusted Decision Support Systems"

 

Supervised by: Prof. Ran Gilad Bachrach and Prof. Uri Obolski

 

(How Can We Use AI Responsibly for Medical Predictions, Like Birth Success?)

 

We are honored to support Nimrod in his work, 'Advancing Responsible AI in Medicine to Design Trusted Decision Support Systems'. Dive into his insightful research summary:

 

My research is centered on Explainable AI (XAI), wherein I aim to establish mathematical foundations for elucidating model predictions. The overarching goal is to discern the limitations and possibilities associated with explanations generated by AI models. My theoretical research delves into the interpretability of AI models, exploring feature importance scores with a focus on local (e.g., individual patient diagnosis) and global (e.g., gene impact on diseases) interpretations.

 

"I distinguish between explaining data, akin to a scientist drawing conclusions from encoded information, and explaining the model, resembling an engineer's vigilance for system reliability, crucial in healthcare contexts".

 

On the practical front, my research involves studying the risks of induction during childbirth, specifically for women opting for voluntary induction. Utilizing causality and statistical methods, I aim to unravel the nuanced connections between induction procedures and birth success, contributing to both the understanding of medical practices and potential policy implications for more informed and patient-centric birthing processes.

 

(Nimrod Harel on LinkedIn)

 

 

 

 

Chen Schiff Sacharen, Medicine Ph.D. Student

 

“Approaching Explainable Data in Medical Studies”

 

Supervised by: Prof. Noam Shomron

 

(How Can We Improve the Way AI Understands and Uses Data Responsibly in Medical Systems?)

 

We are proud to endorse Chen's research, 'Approaching Explainable Data in Medical Studies.' Explore her thought-provoking research summary:

 

The intersection of AI and medicine has opened exciting possibilities and raised many questions, particularly in responsible AI. Our lab focuses on improving feature importance scores to better explain real-world medical data, where current scores often struggle. My research centers on refining the approach to feature importance through the exploration and application of Marginal Contribution Feature Importance (MCI). By developing a set of axioms to guide feature importance scores in explaining data, MCI emerges as a singular score that encapsulates all the desired properties.

This novel approach not only addresses the challenges posed by correlated features but also ensures a more accurate and reliable understanding of the contribution of specific properties to medical outcomes. This line of research, emphasizing MCI, is poised to enhance the transparency, interpretability, and ethical deployment of AI models in healthcare settings.
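As a rough illustration of the idea behind MCI: the score credits each feature with the largest performance gain it can contribute to any subset of the other features, which keeps correlated (redundant) features from masking one another. The brute-force sketch below only works for tiny feature sets, and the evaluation function `v` and feature names are invented for this example:

```python
from itertools import chain, combinations

# Hypothetical sketch of Marginal Contribution Feature Importance (MCI):
#   MCI(f) = max over feature subsets S of  v(S ∪ {f}) - v(S),
# where v(S) scores how well the features in S explain the outcome.

def powerset(items):
    """All subsets of `items`, from the empty set up to the full set."""
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def mci(feature, features, v):
    rest = [f for f in features if f != feature]
    return max(v(set(s) | {feature}) - v(set(s)) for s in powerset(rest))

# Toy v: features "a" and "b" redundantly carry the same signal, "c" adds a
# little extra. Leave-one-out scoring gives a and b zero credit (dropping
# either changes nothing), but MCI still credits each of them fully.
def v(subset):
    score = 0.0
    if "a" in subset or "b" in subset:  # redundant signal
        score += 1.0
    if "c" in subset:
        score += 0.2
    return score

features = ["a", "b", "c"]
scores = {f: mci(f, features, v) for f in features}
print(scores)  # a and b each score 1.0 despite redundancy; c scores 0.2
```

In practice v would be, for example, the best achievable predictive performance using only the features in the subset, estimated from data rather than written by hand.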

 

"As we embark on these future studies, the goal is clear—to pioneer methodologies that not only unravel the complexities of AI in medicine but also fortify its responsible integration for the betterment of patient outcomes".

 

 

(Chen Schiff Sacharen on LinkedIn)
