Fr. 69.00

Social Explainable AI - Communications of NII Shonan Meetings

English · Hardback

Will be released on 18.01.2026

Description

This open access book introduces social aspects relevant to research on and development of explainable AI (XAI). The recent surge of XAI responds to a societal challenge: many algorithmic approaches (such as machine learning or autonomous intelligent systems) are rapidly increasing in complexity, making justified use of their recommendations difficult for users. A large body of approaches now exists, with many ideas about how algorithms should be made explainable or even enabled to explain their output. However, few of them consider the users' perspective, and even fewer address the social aspects of using XAI. To fill this gap, the book offers a conceptualization of explainability as a social practice, a framework for contextual factors, and an operationalization of users' involvement in creating relevant explanations.
To this end, scholars from across disciplines gathered at the Shonan meeting to work out how explanation generation can be tailored to diverse users and their heterogeneous goals when interacting with XAI. Social interaction turned out to be the key to involving users. Accordingly, we define sXAI (social eXplainable AI) as systems that interact with users in such a way that explaining can be adapted incrementally to the users, along with the unfolding context of interaction, to yield a relevant explanation at the interface between both active partners, human and AI. To encourage novel interdisciplinary research, we propose to account for the following dimensions (illustrated by the sketch after this description):
•    Patternedness: XAI should account for different contexts that yield different social roles, which in turn shape the construction of explanations.
•    Incrementality: XAI should build on the contributions of the involved partners, who adapt to each other.
•    Multimodality: XAI needs to use different communication modalities (e.g., visual, verbal, and auditory).
This book also addresses how to evaluate social XAI systems and which ethical aspects must be considered when employing sXAI. Together, these contributions push forward the building of a community interested in sXAI. To increase readability across disciplines, each chapter offers rapid access to its content.
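The three dimensions above can be read as a rough recipe for an interaction loop. Below is a minimal sketch in Python; all names (ExplanationState, select_pattern, explain, update) are illustrative assumptions, since the book does not prescribe any particular API. It is a sketch of the idea, not an implementation of it:

```python
# Minimal sketch of an sXAI-style interaction loop. All names here are
# hypothetical illustrations, not an API defined by the book. The loop
# shows the three dimensions from the description:
#   patternedness  - the social context selects an explanation pattern,
#   incrementality - each user reaction refines the next explanation,
#   multimodality  - the same content is rendered via several modalities.
from dataclasses import dataclass, field


@dataclass
class ExplanationState:
    """What the system believes the user still needs explained."""
    context: str                       # e.g. "medical" or "finance" (patternedness)
    detail_level: int = 1              # grows as the user asks for more
    open_questions: list[str] = field(default_factory=list)


def select_pattern(context: str) -> str:
    """Patternedness: different contexts imply different social roles,
    and hence different ways of constructing an explanation."""
    patterns = {
        "medical": "cautious, risk-focused explanation for a patient",
        "finance": "regulation-oriented explanation for an auditor",
    }
    return patterns.get(context, "general step-by-step explanation")


def explain(state: ExplanationState) -> dict[str, str]:
    """Multimodality: produce the explanation in more than one modality."""
    pattern = select_pattern(state.context)
    text = f"[{pattern}] at detail level {state.detail_level}"
    return {"verbal": text, "visual": f"<chart for level {state.detail_level}>"}


def update(state: ExplanationState, user_reaction: str) -> ExplanationState:
    """Incrementality: adapt the explaining to the user's contribution."""
    if user_reaction == "more detail":
        state.detail_level += 1
    elif user_reaction.startswith("why"):
        state.open_questions.append(user_reaction)
    return state


# One co-constructive exchange: human and AI both act as partners.
state = ExplanationState(context="medical")
for reaction in ["more detail", "why this risk?"]:
    print(explain(state))
    state = update(state, reaction)
```

The point of the loop is co-construction: the system's next explanation depends on the user's previous reaction, rather than being produced once and left static.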

List of contents

Introducing Social Explainable AI
Scenarios of Social Explainable AI in Practice
Components of an Explanation for Co-Constructive sXAI
Context for Explanations
Practices: How to Establish an Explaining Practice
Explanation Goals
Structures Underlying Explanations
Roles and Relationships
Responsibilities in sXAI
Values and Norms in sXAI


Product details

Assisted by Suzana Alpsancar (Editor), Kary Främling (Editor), Brian Lim (Editor), Katharina J. Rohlfing (Editor), Kirsten Thommes (Editor)
Publisher Springer, Berlin
 
Languages English
Product format Hardback
Release 18.01.2026
 
EAN 9789819652891
ISBN 978-981-9652-89-1
Illustrations Approx. 400 p.
Subjects Natural sciences, medicine, IT, technology > IT, data processing > IT

Artificial Intelligence, Human-Computer Interaction, Open Access, HCI, User Interfaces and Human Computer Interaction, multimodal, XAI, Explainable Artificial Intelligence, Human-AI collaboration, Shonan Meeting
