This reference text offers a clear, unified treatment for graduate students, academic researchers, and professionals interested in understanding and developing statistical procedures for high-dimensional data that are robust to deviations from idealized modeling assumptions, including model misspecification and adversarial outliers in the dataset.
List of contents
1. Introduction to robust statistics
2. Efficient high-dimensional robust mean estimation
3. Algorithmic refinements in robust mean estimation
4. Robust covariance estimation
5. List-decodable learning
6. Robust estimation via higher moments
7. Robust supervised learning
8. Information-computation tradeoffs in high-dimensional robust statistics
A. Mathematical background
References
Index
About the author
Ilias Diakonikolas is an associate professor of computer science at the University of Wisconsin-Madison. His current research focuses on the algorithmic foundations of machine learning. Diakonikolas is a recipient of a number of research awards, including the best paper award at NeurIPS 2019.

Daniel M. Kane is an associate professor at the University of California, San Diego in the departments of Computer Science and Mathematics. He is a four-time Putnam Fellow and two-time IMO gold medallist. Kane's research interests include number theory, combinatorics, computational complexity, and computational statistics.
Foreword
This book presents general principles and scalable methodologies to deal with adversarial outliers in high-dimensional datasets.
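To make the motivating problem concrete, here is a small illustrative sketch (not an algorithm from the book) of why adversarial outliers are dangerous in high dimensions: a single corrupted point can drag the empirical mean arbitrarily far from the true mean, while even a naive robust estimator such as the coordinate-wise median stays close. All data, dimensions, and sample sizes below are made up for illustration.

```python
import random

random.seed(0)
d, n = 50, 200  # dimension and number of clean samples (illustrative values)

# Clean samples: i.i.d. standard Gaussian coordinates, so the true mean is 0.
data = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]

# A single adversarial outlier placed far from the bulk of the data.
data.append([1000.0] * d)

def empirical_mean(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def coordinatewise_median(points):
    dim = len(points[0])
    out = []
    for i in range(dim):
        col = sorted(p[i] for p in points)
        m = len(col)
        out.append(col[m // 2] if m % 2 else (col[m // 2 - 1] + col[m // 2]) / 2)
    return out

emp = empirical_mean(data)
med = coordinatewise_median(data)

# One outlier shifts every coordinate of the mean by about 1000/201 ~ 5,
# while the coordinate-wise median remains near the true mean 0.
print("mean error:  ", max(abs(x) for x in emp))
print("median error:", max(abs(x) for x in med))
```

The coordinate-wise median resists a single outlier, but for eps-corrupted high-dimensional Gaussians its error grows with the dimension; designing estimators whose error is dimension-independent, and computable in polynomial time, is the subject of the book.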