Fr. 56.90

Introduction to HPC with MPI for Data Science

English · Paperback / Softback

Shipping usually within 6 to 7 weeks

Description


This gentle introduction to High Performance Computing (HPC) for Data Science using the Message Passing Interface (MPI) standard has been designed as a first course for undergraduates on parallel programming on distributed memory models, and requires only basic programming notions.
The book is divided into two parts: the first covers high performance computing using C++ with the Message Passing Interface (MPI) standard, and the second provides high-performance data analytics on computer clusters.
In the first part, the fundamental notions of blocking versus non-blocking point-to-point communications, global communications (like broadcast or scatter) and collaborative computations (reduce), together with the Amdahl and Gustafson speed-up laws, are described before addressing parallel sorting and parallel linear algebra on computer clusters. The common ring, torus and hypercube topologies of clusters are then explained, and global communication procedures on these topologies are studied. This first part closes with the MapReduce (MR) model of computation, well-suited to processing big data using the MPI framework.
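As a flavour of the material in this first part, the following minimal sketch (an illustration only, not code from the book) uses the standard MPI C API from C++ to contrast a blocking MPI_Send/MPI_Recv exchange with its non-blocking MPI_Isend/MPI_Irecv counterpart, and ends with an MPI_Reduce collective that sums one value per process on the root. The Amdahl speed-up law mentioned above bounds the speed-up of a program with parallelisable fraction alpha on P processors by S(P) = 1 / ((1 - alpha) + alpha/P), while Gustafson's law gives the scaled speed-up S(P) = P - (1 - alpha)(P - 1).

    // blocking_vs_nonblocking.cpp -- illustrative sketch, not taken from the book.
    // Compile:  mpic++ blocking_vs_nonblocking.cpp -o demo
    // Run:      mpirun -np 2 ./demo
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Blocking point-to-point: MPI_Send/MPI_Recv return only when the
        // buffer is safe to reuse (and, for Recv, when the data has arrived).
        int token = rank;
        if (rank == 0 && size > 1) {
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        // Non-blocking variant: MPI_Isend/MPI_Irecv return immediately, so
        // computation could overlap communication until the matching MPI_Wait.
        MPI_Request req;
        int incoming = -1;
        if (rank == 0 && size > 1) {
            MPI_Isend(&token, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Irecv(&incoming, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }

        // Collaborative computation: sum one value per process onto rank 0.
        int local = rank + 1, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) std::printf("sum over %d processes = %d\n", size, total);

        MPI_Finalize();
        return 0;
    }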
In the second part, the book focuses on high-performance data analytics. Flat and hierarchical clustering algorithms are introduced for data exploration, along with how to program these algorithms on computer clusters, followed by machine learning classification and an introduction to graph analytics. This part closes with a concise introduction to data core-sets, which reduce big data problems to tiny data problems.
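To illustrate the data-parallel pattern behind the clustering chapters, here is a minimal sketch (again an illustration, not the book's code) of one Lloyd iteration of distributed k-means: each MPI process owns a shard of points, assigns them to the nearest centroid, and the per-cluster sums and counts are combined with MPI_Allreduce so that every rank can recompute identical centroids. The synthetic 1-D data and the choice k = 2 are assumptions made purely for the demonstration.

    // parallel_kmeans_step.cpp -- illustrative sketch of one Lloyd iteration.
    // Compile:  mpic++ parallel_kmeans_step.cpp -o kmeans
    // Run:      mpirun -np 4 ./kmeans
    #include <mpi.h>
    #include <cstdio>
    #include <cstdlib>
    #include <cmath>
    #include <vector>

    int main(int argc, char* argv[]) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int k = 2;                             // number of clusters (assumed)
        std::vector<double> centroid = {0.0, 10.0};  // current centroids, same on every rank

        // Each process holds its own shard of 1-D points (synthetic data here).
        std::srand(1234 + rank);
        std::vector<double> points(1000);
        for (double& p : points)
            p = (std::rand() % 2 == 0) ? std::rand() % 5 : 5 + std::rand() % 10;

        // Local step: assign each point to its nearest centroid, accumulate sums/counts.
        std::vector<double> local_sum(k, 0.0), global_sum(k, 0.0);
        std::vector<int> local_cnt(k, 0), global_cnt(k, 0);
        for (double p : points) {
            int best = 0;
            for (int c = 1; c < k; ++c)
                if (std::fabs(p - centroid[c]) < std::fabs(p - centroid[best])) best = c;
            local_sum[best] += p;
            local_cnt[best] += 1;
        }

        // Global step: combine partial sums and counts across all processes.
        MPI_Allreduce(local_sum.data(), global_sum.data(), k, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        MPI_Allreduce(local_cnt.data(), global_cnt.data(), k, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        // Every rank recomputes identical new centroids.
        for (int c = 0; c < k; ++c)
            if (global_cnt[c] > 0) centroid[c] = global_sum[c] / global_cnt[c];

        if (rank == 0)
            std::printf("[%d ranks] updated centroids: %.3f %.3f\n", size, centroid[0], centroid[1]);

        MPI_Finalize();
        return 0;
    }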
Exercises are included at the end of each chapter in order for students to practice the concepts learned, and a final section contains an overall exam which allows them to evaluate how well they have assimilated the material covered in the book.

List of contents

Preface.- Part I: High Performance Computing (HPC) with the Message Passing Interface (MPI).- A Glance at High Performance Computing (HPC).- Introduction to MPI: The Message Passing Interface.- Topology of Interconnection Networks.- Parallel Sorting.- Parallel Linear Algebra.- The MapReduce Paradigm.- Part II: High Performance Computing for Data Science.- Partition-based Clustering with k-Means.- Hierarchical Clustering.- Supervised Learning: Practice and Theory of Classification with the k-NN Rule.- Fast Approximate Optimization in High Dimensions with Core-sets and Fast Dimension Reduction.- Parallel Algorithms for Graphs.- Appendix A: Written Exam.- Appendix B: SLURM: A resource manager and job scheduler on clusters of machines.- Appendix C: List of Figures.- Appendix D: List of Tables.- Appendix E: Index.

About the author

Frank Nielsen is a Professor at École Polytechnique in France, where he teaches graduate (vision/graphics) and undergraduate (Java/algorithms) courses, and a senior researcher at Sony Computer Science Laboratories Inc. His research includes computational information geometry for imaging and learning, and he is the author of three textbooks and three edited books. He also serves on the editorial board of the Springer Journal of Mathematical Imaging and Vision.




Product details

Authors Frank Nielsen
Publisher Springer, Berlin
 
Languages English
Product format Paperback / Softback
Released 01.01.2016
 
EAN 9783319219028
ISBN 978-3-319-21902-8
No. of pages 282
Dimensions 157 mm x 234 mm x 18 mm
Weight 508 g
Illustrations XXXIII, 282 p. 101 illus. in color.
Series Undergraduate Topics in Computer Science
Subjects Natural sciences, medicine, IT, technology > IT, data processing > IT

machine learning, computer science, Programming Techniques, Computer programming

Customer reviews

No reviews have been written for this item yet.
