Fr. 82.00

Fairness of AI in Medical Imaging - Third International Workshop, FAIMI 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings

English · Paperback

Usually ships within 6 to 7 weeks

Description


This book constitutes the refereed proceedings of the Third International Workshop on Fairness of AI in Medical Imaging, FAIMI 2025, held in conjunction with MICCAI 2025 in Daejeon, South Korea, on September 23, 2025.
The 21 full papers presented in this book were carefully reviewed and selected from 29 submissions.
FAIMI aimed to raise awareness about potential fairness issues in machine learning within the context of biomedical image analysis.
 

Contents

.- LTCXNet: Tackling Long-Tailed Multi-Label Classification and Racial Bias in Chest X-Ray Analysis.
.- Fairness and Robustness of CLIP-Based Models for Chest X-rays.
.- ShortCXR: Benchmarking Self-Supervised Learning Methods for Shortcut Mitigation in Chest X-Ray Interpretation.
.- How Fair Are Foundation Models? Exploring the Role of Covariate Bias in Histopathology.
.- The Cervix in Context: Bias Assessment in Preterm Birth Prediction.
.- Identifying Gender-Specific Visual Bias Signals in Skin Lesion Classification.
.- Fairness-Aware Data Augmentation for Cardiac MRI using Text-Conditioned Diffusion Models.
.- Exploring the Interplay of Label Bias with Subgroup Size and Separability: A Case Study in Mammographic Density Classification.
.- Does a Rising Tide Lift All Boats? Bias Mitigation for AI-based CMR Segmentation.
.- MIMM-X: Disentangling Spurious Correlations for Medical Image Analysis.
.- Predicting Patient Self-reported Race From Skin Histological Images with Deep Learning.
.- Robustness and Sex Differences in Skin Cancer Detection: Logistic Regression vs CNNs.
.- Sex-based Bias Inherent in the Dice Similarity Coefficient: A Model-Independent Analysis for Multiple Anatomical Structures.
.- The Impact of Skin Tone Label Granularity on the Performance and Fairness of AI-Based Dermatology Image Classification Models.
.- Causal Representation Learning with Observational Grouping for CXR Classification.
.- Invisible Attributes, Visible Biases: Exploring Demographic Shortcuts in MRI-based Alzheimer's Disease Classification.
.- Fair Dermatological Disease Diagnosis through Auto-weighted Federated Learning and Performance-aware Personalization.
.- Assessing Annotator and Clinician Biases in an Open-Source-Based Tool Used to Generate Head CT Segmentations for Deep Learning Training.
.- meval: A Statistical Toolbox for Fine-Grained Model Performance Analysis.
.- Revisiting the Evaluation Bias Introduced by Frame Sampling Strategies in Surgical Video Segmentation Using SAM2.
.- Disentanglement and Assessment of Shortcuts in Ophthalmological Retinal Imaging Exams.

Product details

Edited by Veronika Cheplygina, Aasa Feragen, Enzo Ferrante, Melanie Ganz-Benjaminsen, Ben Glocker, Andrew King, Heisook Lee, Eike Petersen, and Esther Puyol-Antón
Publisher Springer, Berlin
 
Language English
Format Paperback
Publication date 19.10.2025
 
EAN 9783032058690
ISBN 978-3-032-05869-0
Pages 220
Dimensions 155 mm × 13 mm × 235 mm
Weight 359 g
Illustrations XI, 220 p., 62 illus., 59 illus. in color
Series Lecture Notes in Computer Science
Categories Natural Sciences, Medicine, Computer Science, Technology > Computer Science, EDP > Application Software

Artificial Intelligence, Machine Learning, Network Hardware, Computer Vision, Computer Communication Networks, Medical Imaging, Computer Application in Administrative Data Processing, Bias Mitigation, Algorithmic Fairness, Fairness in Medical Imaging, Legal Bias, Ethical Bias
