Combining essential theory with practical techniques for analysing system security and building robust machine learning systems in adversarial environments, and including case studies on email spam and network security, this complete introduction is an invaluable resource for researchers, practitioners, and students in computer security and machine learning.
About the author
Anthony D. Joseph is a Chancellor's Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He was formerly the Director of Intel Labs Berkeley.

Blaine Nelson is a Software Engineer in the Counter-Abuse Technologies (CAT) team at Google. He has previously worked at the University of Potsdam and the University of Tübingen.

Benjamin I. P. Rubinstein is a Senior Lecturer in Computing and Information Systems at the University of Melbourne. He has previously worked at Microsoft Research, Google Research, Yahoo! Research, Intel Labs Berkeley, and IBM Research.

J. D. Tygar is a Professor of Computer Science and a Professor of Information Management at the University of California, Berkeley.
Review
'Data Science practitioners tend to be unaware of how easy it is for adversaries to manipulate and misuse adaptive machine learning systems. This book demonstrates the severity of the problem by providing a taxonomy of attacks and studies of adversarial learning. It analyzes older attacks as well as recently discovered surprising weaknesses in deep learning systems. A variety of defenses are discussed for different learning systems and attack types that could help researchers and developers design systems that are more robust to attacks.' Richard Lippmann, Lincoln Laboratory, Massachusetts Institute of Technology