I gave the annual Huntingford Humanities Lecture (When Your Grandpa Is a Bot: AI, Death, and Digital Doppelgangers) at the Jefferson County Library, WA
Sep 27, 2024
I was a panelist at the staging of the play CYpher in University Heights, Seattle. The play deals with the issues of digital immortality, identity, and family.
Aug 20, 2024
Gave a talk on Integrating Machine Learning and Complex Systems Perspectives on Chronic Critical Illness at the Air Force Institute of Technology.
This tutorial extensively covers the definitions, nuances, challenges, and requirements for the design of interpretable and explainable machine learning models and systems in healthcare. We discuss many use cases in which interpretable machine learning models are needed in healthcare and how they should be deployed. Additionally, we explore the landscape of recent advances that address the challenges of model interpretability in healthcare and describe how one would go about choosing the right interpretable machine learning algorithm for a given problem in healthcare.
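As a purely illustrative sketch of the kind of design choice the tutorial discusses (the code below is not part of the tutorial material), one simple option on the interpretability spectrum is an inherently interpretable model such as a depth-limited decision tree whose rules can be printed and audited directly; the dataset, depth, and split used here are arbitrary placeholders.

```python
# Illustrative sketch only: fitting an inherently interpretable model
# (a shallow decision tree) on a public clinical dataset and printing its
# decision rules so that a domain expert can audit them directly.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)

# A depth-limited tree trades some accuracy for rules a human can inspect.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("Held-out accuracy:", round(clf.score(X_test, y_test), 3))
print(export_text(clf, feature_names=list(data.feature_names)))
```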
arXiv
Creating trustworthy LLMs: Dealing with hallucinations in healthcare AI
Muhammad Aurangzeb Ahmad, Ilker Yaramis, and Taposh Dutta Roy
Large language models have proliferated across multiple domains in a short period of time. There is, however, hesitation in the medical and healthcare domain towards their adoption because of issues like factuality, coherence, and hallucinations. Given the high-stakes nature of healthcare, many researchers have even cautioned against their usage until these issues are resolved. The key to the implementation and deployment of LLMs in healthcare is to make these models trustworthy, transparent (as much as possible), and explainable. In this paper we describe the key elements in creating reliable, trustworthy, and unbiased models as a necessary condition for their adoption in healthcare. Specifically, we focus on the quantification, validation, and mitigation of hallucinations in the context of healthcare. Lastly, we discuss what the future of LLMs in healthcare may look like.
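As a rough, purely illustrative sketch (this is not the method proposed in the paper), one crude way to quantify hallucination in a retrieval-grounded setting is to flag generated sentences whose content words are poorly supported by the source passages; the threshold, helper names, and toy clinical example below are all hypothetical.

```python
# Illustrative sketch only (not the paper's method): a crude proxy for
# quantifying hallucination, flagging generated sentences whose content
# words are poorly supported by the retrieved source passages.
import re

def content_words(text: str) -> set:
    stop = {"the", "a", "an", "of", "in", "and", "or", "to", "is", "are", "for", "with"}
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in stop}

def hallucination_rate(generated_sentences, source_passages, support_threshold=0.5):
    """Fraction of generated sentences whose content-word overlap with the
    sources falls below the threshold (treated here as 'unsupported')."""
    source_vocab = set()
    for passage in source_passages:
        source_vocab |= content_words(passage)
    unsupported = 0
    for sent in generated_sentences:
        words = content_words(sent)
        overlap = len(words & source_vocab) / max(len(words), 1)
        if overlap < support_threshold:
            unsupported += 1
    return unsupported / max(len(generated_sentences), 1)

# Hypothetical example: one supported and one unsupported claim about a note.
sources = ["Patient was prescribed metformin 500 mg twice daily for type 2 diabetes."]
generated = [
    "The patient takes metformin 500 mg twice daily.",
    "The patient was started on insulin last month.",
]
print(hallucination_rate(generated, sources))  # 0.5 under these assumptions
```

In practice a token-overlap proxy like this would be replaced by stronger grounding checks (e.g., entailment models or clinician review); the point of the sketch is only to show quantification as a measurable rate rather than a vague property.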
FAccT
Fairness, accountability, transparency in AI at scale: Lessons from national programs
Muhammad Aurangzeb Ahmad, Ankur Teredesai, and Carly Eckert
In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020
The panel aims to elucidate how different national governmental programs are implementing accountability of machine learning systems in healthcare and how accountability is operationalized in different cultural settings in legislation, policy, and deployment. We have representatives from three different governments, the UAE, Singapore, and the Maldives, who will discuss what accountability of AI and machine learning means in their contexts and use cases. We hope to have a fruitful conversation around FAT ML as it is operationalized across cultures, national boundaries, and legislative constraints.
KDD
Fairness in machine learning for healthcare
Muhammad Aurangzeb Ahmad, Arpit Patel, Carly Eckert, and 2 more authors
In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020
The issue of bias and fairness in healthcare has been around for centuries. With the integration of AI in healthcare, the potential to discriminate and perpetuate unfair and biased practices in healthcare increases manyfold. The tutorial focuses on the challenges, requirements, and opportunities in the area of fairness in healthcare AI and the various nuances associated with it. Fairness in healthcare is a multi-faceted, systems-level problem that necessitates a careful mapping of different notions of fairness in healthcare to corresponding concepts in machine learning, which we elucidate via different real-world examples.
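As a minimal illustration of mapping fairness notions to machine learning quantities (not code from the tutorial), the sketch below computes two common group fairness metrics, demographic parity difference and equal opportunity difference, for a binary classifier; the data is synthetic and purely hypothetical.

```python
# Illustrative sketch only: two common group fairness metrics for a binary
# classifier, given predictions, true labels, and a protected attribute.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true positive rates (recall) between groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Hypothetical synthetic data standing in for model outputs on a cohort.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)    # protected attribute
y_true = rng.integers(0, 2, size=1000)   # observed outcome
y_pred = rng.integers(0, 2, size=1000)   # model prediction

print(demographic_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
```

Which of these (or other) metrics is appropriate depends on the clinical use case, which is precisely the systems-level mapping question the tutorial addresses.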
KDD
Software as a medical device: regulating AI in healthcare via responsible AI
Muhammad Aurangzeb Ahmad, Steve Overman, Christine Allen, and 3 more authors
In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021
With the increased adoption of AI in healthcare, there is a growing recognition and demand to regulate AI in healthcare to avoid potential harm and unfair bias against vulnerable populations. Around a hundred governmental bodies and commissions, as well as leaders in the tech sector, have proposed principles to create responsible AI systems. However, most of these proposals are short on specifics, which has led to charges of ethics washing. In this tutorial we offer a guide to help navigate complex governmental regulations and explain the various constituent practical elements of a responsible AI system in healthcare in light of the proposed regulations. Additionally, we break down these recommendations and emphasize that guidance from regulatory bodies like the FDA or the EU is a necessary but not sufficient element of creating a responsible AI system. We elucidate how regulations and guidelines often focus on epistemic concerns to the detriment of practical concerns, e.g., a requirement for fairness without explicating what fairness constitutes for a use case. The FDA’s Software as a Medical Device document and the EU’s GDPR, among other AI governance documents, talk about the need for implementing sufficiently good machine learning practices. In this tutorial we elucidate what that would mean from a practical perspective for real-world use cases in healthcare throughout the machine learning cycle, i.e., data management, data specification, feature engineering, model evaluation, model specification, model explainability, model fairness, reproducibility, and checks for data leakage and model leakage. We note that conceptualizing responsible AI as a process rather than an end goal accords well with how AI systems are used in practice. We also discuss how a domain-centric stakeholder perspective translates into balancing requirements for multiple competing optimization criteria.
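As one concrete, hedged example of the practical checks mentioned above (this sketch is not from the tutorial), the snippet below implements two simple data leakage checks: verbatim overlap between train and test splits, and a single feature that is suspiciously predictive of the label; the column names and correlation threshold are hypothetical.

```python
# Illustrative sketch only: two simple data leakage checks --
# (1) identical rows shared by train and test splits, and
# (2) a single feature that almost perfectly predicts the label.
import numpy as np
import pandas as pd

def train_test_overlap(train: pd.DataFrame, test: pd.DataFrame) -> int:
    """Number of test rows that also appear verbatim in the training set."""
    return len(test.merge(train.drop_duplicates(), how="inner"))

def suspiciously_predictive_features(df: pd.DataFrame, label: str, threshold=0.95):
    """Features whose absolute correlation with the label exceeds the threshold."""
    corr = df.corr(numeric_only=True)[label].drop(label).abs()
    return corr[corr > threshold].index.tolist()

# Hypothetical example: 'discharge_code' leaks the outcome it encodes.
df = pd.DataFrame({
    "age": np.random.default_rng(0).integers(20, 90, 200),
    "discharge_code": np.repeat([0, 1], 100),
    "readmitted": np.repeat([0, 1], 100),
})
print(suspiciously_predictive_features(df, "readmitted"))  # ['discharge_code']
```

Checks of this kind are cheap to run at every retraining cycle, which fits the view of responsible AI as an ongoing process rather than a one-time certification.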
Book Chapter
Survey of explainable machine learning with visual and granular methods beyond quasi-explanations
Boris Kovalerchuk, Muhammad Aurangzeb Ahmad, and Ankur Teredesai
Interpretable artificial intelligence: A perspective of granular computing, 2021
This chapter surveys and analyses visual methods of explainability of Machine Learning (ML) approaches, with a focus on moving from the quasi-explanations that dominate in ML to actual domain-specific explanations supported by granular visuals. The importance of visual and granular methods to increase the interpretability and validity of ML models has grown in recent years. Visuals have an appeal to human perception, which other methods do not. ML interpretation is fundamentally a human activity, not a machine activity. Thus, visual methods are more readily interpretable. Visual granularity is a natural way for efficient ML explanation. Understanding complex causal reasoning can be beyond human abilities without “downgrading” it to human perceptual and cognitive limits. The visual exploration of multidimensional data at different levels of granularity for knowledge discovery is a long-standing research focus. While multiple efficient methods for visual representation of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter continue to be a challenge, which leads to quasi-explanations. This chapter starts with the motivation and the definitions of different forms of explainability and how these concepts and information granularity can integrate in ML. The chapter focuses on a clear distinction between quasi-explanations and actual domain-specific explanations, as well as between a potentially explainable and an actually explained ML model, which is critically important for the further progress of the ML explainability domain. We discuss the foundations of interpretability, overview visual interpretability, and present several types of methods to visualize ML models. Next, we present methods of visual discovery of ML models, with a focus on interpretable models, based on the recently introduced concept of General Line Coordinates (GLC). This family of methods takes the critical step of creating visual explanations that are not merely quasi-explanations but are also domain-specific visual explanations, while the methods themselves are domain-agnostic. The chapter includes results on theoretical limits to preserve n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. The chapter also covers traditional visual methods for understanding multiple ML models, which include deep learning and time series models. We illustrate that many of these methods are quasi-explanations and need further enhancement to become actual domain-specific explanations. The chapter concludes with outlining open problems and current research frontiers.
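As a small numerical illustration of the Johnson-Lindenstrauss intuition referenced in the chapter (this snippet is not from the chapter itself), a scaled random Gaussian projection from 1,000 dimensions down to 200 preserves pairwise distances up to a modest distortion factor; the dimensions and point count below are arbitrary.

```python
# Illustrative sketch only: random projection roughly preserves pairwise
# distances, in the spirit of the Johnson-Lindenstrauss lemma.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_points, n_dim, k = 50, 1000, 200

X = rng.normal(size=(n_points, n_dim))
R = rng.normal(size=(n_dim, k)) / np.sqrt(k)   # scaled random projection
Y = X @ R

ratios = []
for i, j in combinations(range(n_points), 2):
    orig = np.linalg.norm(X[i] - X[j])
    proj = np.linalg.norm(Y[i] - Y[j])
    ratios.append(proj / orig)

# Ratios cluster near 1, i.e., distances are distorted only within a small factor.
print(round(min(ratios), 3), round(max(ratios), 3))
```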