Fr. 198.00

Designing the Conceptual Landscape for a XAIR Validation Infrastructure - Proceedings of the International Workshop on Designing the Conceptual Landscape for a XAIR Validation Infrastructure, DCLXVI 2024, Kaiserslautern, Germany

English · Paperback / Softback

Shipping usually within 6 to 7 weeks

Description

This book focuses on explainable-AI-ready (XAIR) data and models, offering a comprehensive perspective on the foundations needed for transparency, interpretability, and trust in AI systems. It introduces novel strategies for metadata structuring, conceptual analysis, and validation frameworks, addressing critical challenges in regulation, ethics, and responsible machine learning.
Furthermore, it highlights the importance of standardized documentation and conceptual clarity in AI validation, ensuring that systems remain transparent and accountable.
Aimed at researchers, industry professionals, and policymakers, this resource provides insights into AI governance and reliability. By integrating perspectives from applied ontology, epistemology, and AI assessment, it establishes a structured framework for developing robust, trustworthy, and explainable AI technologies.

List of contents

Synopsis of core concepts for explainable AI-ready data and models.- Conceptualizing validation systems for explainable AI: A design approach.- Balancing performance and transparency.- Explainable AI for battery health monitoring.- A minimalistic definition of XAI explanations.- A comparative analysis of deep learning architectures and explainable AI.- Conclusion.

Product details

Assisted by Fadi Al Machot (Editor), Martin T. Horsch (Editor), Sebastian Scholze (Editor)
Publisher Springer, Berlin
 
Languages English
Product format Paperback / Softback
Released 14.05.2025
 
EAN 9783031892738
ISBN 978-3-031-89273-8
No. of pages 193
Illustrations VI, 193 p. 23 illus., 16 illus. in color.
Series Lecture Notes in Networks and Systems
Subjects Natural sciences, medicine, IT, technology > Technology > General, dictionaries

Artificial Intelligence, Computational Intelligence, Research Data Management, Explainable Artificial Intelligence, Modelling and Simulation, Applied ontology, Semantic technology
