A diverse selection of data science topics explored through a mathematical lens.
List of contents
Part I. Machine Learning: 1. Rudiments of Statistical Learning; 2. Vapnik-Chervonenkis Dimension; 3. Learnability for Binary Classification; 4. Support Vector Machines; 5. Reproducing Kernel Hilbert Spaces; 6. Regression and Regularization; 7. Clustering; 8. Dimension Reduction.
Part II. Optimal Recovery: 9. Foundational Results of Optimal Recovery; 10. Approximability Models; 11. Ideal Selection of Observation Schemes; 12. Curse of Dimensionality; 13. Quasi-Monte Carlo Integration.
Part III. Compressive Sensing: 14. Sparse Recovery from Linear Observations; 15. The Complexity of Sparse Recovery; 16. Low-Rank Recovery from Linear Observations; 17. Sparse Recovery from One-Bit Observations; 18. Group Testing.
Part IV. Optimization: 19. Basic Convex Optimization; 20. Snippets of Linear Programming; 21. Duality Theory and Practice; 22. Semidefinite Programming in Action; 23. Instances of Nonconvex Optimization.
Part V. Neural Networks: 24. First Encounter with ReLU Networks; 25. Expressiveness of Shallow Networks; 26. Various Advantages of Depth; 27. Tidbits on Neural Network Training.
Appendix A. High-Dimensional Geometry; Appendix B. Probability Theory; Appendix C. Functional Analysis; Appendix D. Matrix Analysis; Appendix E. Approximation Theory.
About the author
Simon Foucart is Professor of Mathematics at Texas A&M University, where he was named a Presidential Impact Fellow in 2019. With Holger Rauhut, he previously coauthored the influential book A Mathematical Introduction to Compressive Sensing (2013).
Summary
This text explores a diverse set of data science topics through a mathematical lens, helping mathematicians become acquainted with data science in general, and with machine learning, optimal recovery, compressive sensing, optimization, and neural networks in particular. It will also be valuable to data scientists seeking greater mathematical sophistication.