Summary
Sparsity-inducing methods have proven very useful in the analysis of high-dimensional data; examples include the Lasso, the group Lasso, and least squares with other norm penalties, such as the nuclear norm. Taking the Lasso method as its starting point, this book describes the main ingredients needed to study general loss functions and sparsity-inducing regularizers, and it develops a semi-parametric approach to establishing confidence intervals and tests. The illustrations include generalized linear models, density estimation, matrix completion and sparse principal components. Each chapter ends with a problem section, and the book can be used as a textbook for a graduate or PhD course.
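As a point of reference (a standard textbook definition, not a formula quoted from the book), the Lasso mentioned above is least squares with an $\ell_1$ penalty:
$$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{n}\|Y - X\beta\|_2^2 + \lambda \|\beta\|_1 \right\},$$
where $\lambda > 0$ is a tuning parameter controlling the sparsity of $\hat{\beta}$; replacing $\|\beta\|_1$ with a group norm or the nuclear norm gives the other norm-penalized estimators referred to above.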
List of contents
1. Introduction
2. The Lasso
3. The square-root Lasso
4. The bias of the Lasso and worst possible sub-directions
5. Confidence intervals using the Lasso
6. Structured sparsity
7. General loss with norm-penalty
8. Empirical process theory for dual norms
9. Probability inequalities for matrices
10. Inequalities for the centred empirical risk and its derivative
11. The margin condition
12. Some worked-out examples
13. Brouwer's fixed point theorem and sparsity
14. Asymptotically linear estimators of the precision matrix
15. Lower bounds for sparse quadratic forms
16. Symmetrization, contraction and concentration
17. Chaining including concentration
18. Metric structure of convex hulls
Additional text
“This book is presented as a series of lecture notes on the theory of penalized estimators under sparsity. … The level of detail is high, and almost all proofs are given in full, with discussion. Each chapter ends with a section of problems, which could be used in a study setting to improve understanding of the proofs.” (Andrew Duncan A. C. Smith, Mathematical Reviews, August, 2017)
“The book provides several examples and illustrations of the methods presented and discussed, while each of its 17 chapters ends with a problem section. Thus, it can be used as textbook for students mainly at postgraduate level.” (Christina Diakaki, zbMATH 1362.62006, 2017)