4/12/24

Using Explainable AI to Enhance Biomedical Data Analysis

Deep neural networks (DNNs) are a powerful technology being utilized by a growing number and range of research projects, including disease risk prediction models. However, DNN models are known to be difficult to explain compared to traditional statistical models such as linear regression, which limits their adoption in many areas, including clinical practice. Dr. Zeng's team has developed novel explainable AI methods to quantify the impact of individual variables, and of interactions between variables, on model outcomes. They have also developed a method to quantify the impact of temporal trends in longitudinal data.
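To illustrate the general idea of quantifying a variable's impact on a model's output (this is a generic sketch, not Dr. Zeng's method), the example below trains a small neural network on synthetic risk-prediction data and scores each variable with permutation importance from scikit-learn; all names and data are illustrative.

```python
# A minimal sketch of variable-impact scoring for a neural network,
# using permutation importance on synthetic data (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic "disease risk" dataset: 5 variables, binary outcome.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)

# A small feed-forward neural network classifier.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000,
                      random_state=0).fit(X, y)

# Permutation importance: shuffle each variable in turn and measure
# how much the model's accuracy drops -- a model-agnostic impact score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"variable {i}: importance {score:.3f}")
```

Model-agnostic scores like these are one common way to open up an otherwise opaque DNN; methods such as those described above go further by also quantifying variable interactions and temporal trends.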

A 2024 BDSIL Mentor, Dr. Zeng is a tenured professor in the Department of Clinical Research and Leadership at the George Washington University and a research scientist at the DC Veterans Affairs (VA) Hospital. She is also the Director of the Biomedical Informatics Center at the George Washington University and Co-Director of the Center of Data Science Outcomes Research at the DC VA. She co-leads the informatics core for the GW-CN CTSI. She has published over 180 peer-reviewed articles and has served as PI or Co-PI on over 20 NIH-, VA-, CDC-, AHRQ-, DOD-, and industry-funded research projects.
