Seminar: How to Explain the Decisions of Machine Learning Models?

Photo of Dr. Sheeraz Ahmad

Friday, November 12 from 3:30 - 5:00 PM PST in SINE 100/110

Written by Wan Bae
October 29, 2021


Dr. Sheeraz Ahmad, Senior Engineer, Explainable AI Team at Google 


As machine learning (ML) models continue to find applications in critical domains like finance and medicine, there is an increasing need for their accountability. Questions like "Why was my loan application rejected?" or "Why am I being asked to start on a specific medication?" need to be answered to improve users' trust in ML systems. In this talk, Dr. Sheeraz Ahmad will further motivate the need for explainability and go over two major frameworks for explaining the decisions of ML models: feature-based and example-based explanations. He will also present some applications, as well as anecdotes where explainability techniques helped generate unique insights.
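To give a flavor of the feature-based framework mentioned in the abstract (this sketch is not from the talk, and the loan-scoring model and its weights are hypothetical): for a linear model, each feature's contribution to a decision can be read off directly as weight times feature value, answering "which inputs drove this prediction?"

```python
def linear_score(weights, x):
    """Score an application as a weighted sum of its features."""
    return sum(w * v for w, v in zip(weights, x))

def feature_attributions(weights, x):
    """Feature-based explanation for a linear model:
    attribute the score to each feature as contribution_i = w_i * x_i."""
    return [w * v for w, v in zip(weights, x)]

# Hypothetical loan applicant: features are [income, debt, years_employed]
weights = [0.5, -1.0, 0.3]
applicant = [4.0, 3.0, 2.0]

score = linear_score(weights, applicant)
contribs = feature_attributions(weights, applicant)
# The largest-magnitude negative contribution (here, debt) is the feature
# that most pushed the decision toward rejection.
```

For nonlinear models the same idea generalizes via gradient- or perturbation-based attribution methods, which is where the techniques discussed in the talk come in.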


Dr. Sheeraz Ahmad is a senior engineer on the Explainable AI team at Google. Prior to that, he was a scientist at Amazon working on problems ranging from recommender systems to modeling human behavior on crowdsourcing platforms. Sheeraz received his PhD from the University of California, San Diego, working at the intersection of machine learning and cognitive science, investigating how humans learn and make decisions in certain scenarios. His primary research interests include cognitive modeling and, more recently, explainability. Outside of work, he enjoys playing board games, weightlifting, reading, and singing with a local choir.

Visit the CS Seminars page for more information about Computer Science seminars.