Fr. 63.00

Robust Explainable AI

English · Paperback / Softback

Will be released 15.05.2025

Description


The area of Explainable Artificial Intelligence (XAI) is concerned with providing methods and tools to improve the interpretability of black-box learning models. While several approaches exist to generate explanations, they often lack robustness, e.g., they may produce completely different explanations for similar inputs. This phenomenon has troubling implications, as a lack of robustness indicates that explanations are not capturing the underlying decision-making process of a model and thus cannot be trusted.
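The phenomenon is easy to reproduce. The following toy sketch (not taken from the book; network weights and the gradient-based "saliency" are illustrative assumptions) shows how a gradient attribution for a tiny ReLU network can change sharply under a small input perturbation, even though the model's output barely moves:

```python
import numpy as np

# Tiny two-layer ReLU network: f(x) = w2 . relu(W1 x + b1)
W1 = np.array([[1.0, -1.0],
               [1.0,  1.0]])
b1 = np.array([-0.5, 0.0])
w2 = np.array([2.0, 1.0])

def forward(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

def saliency(x):
    # Gradient of the output w.r.t. the input. For a ReLU network this
    # is piecewise constant, so crossing an activation boundary can
    # change the attribution abruptly.
    active = (W1 @ x + b1 > 0).astype(float)  # ReLU activation pattern
    return (w2 * active) @ W1                 # d f / d x

x1 = np.array([0.49, 0.0])   # just below a ReLU kink
x2 = np.array([0.51, 0.0])   # a 0.02 perturbation away

print(forward(x1), forward(x2))    # outputs: 0.49 vs 0.53 (close)
print(saliency(x1), saliency(x2))  # attributions: [1, 1] vs [3, -1]
```

The two inputs receive nearly identical predictions, yet the attribution for the second feature flips sign, exactly the kind of instability that robust XAI techniques aim to rule out.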
This book introduces Robust Explainable AI, a rapidly growing field whose focus is to ensure that explanations for machine learning models adhere to the highest robustness standards. We introduce the most important concepts, methodologies, and results in the field, with a particular focus on techniques developed for feature attribution methods and counterfactual explanations for deep neural networks.
As prerequisites, some familiarity with neural networks and XAI approaches is desirable but not mandatory. The book is designed to be self-contained, and relevant concepts are introduced when needed, together with examples to ensure a successful learning experience.

List of contents

Foreword.- Preface.- Acknowledgements.- 1. Introduction.- 2. Explainability in Machine Learning: Preliminaries & Overview.- 3. Robustness of Counterfactual Explanations.- 4. Robustness of Saliency-Based Explanations.

About the author

Francesco Leofante is a researcher affiliated with the Centre for Explainable AI at Imperial College London. His research focuses on explainable AI, with special emphasis on counterfactual explanations for AI-based decision-making. His recent work has highlighted several vulnerabilities of counterfactual explanations and proposed innovative solutions to improve their robustness.
Matthew Wicker is an Assistant Professor (Lecturer) at Imperial College London and a Research Associate at The Alan Turing Institute. He works on formal verification of trustworthy machine learning properties with collaborators from academia and industry. His work focuses on provable guarantees for diverse notions of trustworthiness for machine learning models in order to enable responsible deployment.
