Fr. 83.00

Data-Intensive Workflow Management

English · Paperback

Usually ships within 1 to 2 weeks (title printed on demand)

Description


Workflows may be defined as abstractions used to model the coherent flow of activities in the context of an in silico scientific experiment. They are employed in many domains of science, such as bioinformatics, astronomy, and engineering. Such workflows usually comprise a considerable number of activities and activations (i.e., tasks associated with activities) and may require long execution times. Because of the continuous need to store and process data efficiently (making them data-intensive workflows), high-performance computing environments combined with parallelization techniques are used to run these workflows. At the beginning of the 2010s, cloud technologies emerged as a promising environment for running scientific workflows. By using clouds, scientists have expanded beyond single parallel computers to hundreds or even thousands of virtual machines.
More recently, Data-Intensive Scalable Computing (DISC) frameworks (e.g., Apache Spark and Hadoop) and environments have emerged and are being used to execute data-intensive workflows. DISC environments are composed of processors and disks in large commodity computing clusters connected by high-speed communication switches and networks. The main advantage of DISC frameworks is that they support efficient in-memory data management for large-scale applications, such as data-intensive workflows. However, executing workflows in cloud and DISC environments raises many challenges, such as scheduling workflow activities and activations, managing produced data, and collecting provenance data.
Several existing approaches address the challenges mentioned above; thus, there is a real need to understand how to manage these workflows on the various big data platforms that have been developed and introduced. As such, this book helps researchers understand how linking workflow management with Data-Intensive Scalable Computing can support the understanding and analysis of scientific big data.
In this book, we aim to identify and distill the body of work on workflow management in clouds and DISC environments. We start by discussing the basic principles of data-intensive scientific workflows. Next, we present the execution of workflows in single-site and multi-site clouds, taking advantage of provenance. Afterward, we move on to workflow management in DISC environments and present, in detail, solutions that enable optimized workflow execution using frameworks such as Apache Spark and its extensions.
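To give a flavor of what such Spark-based execution looks like, the sketch below (illustrative only, not taken from the book; the file paths and column names are assumptions) expresses a single data-intensive workflow activity as PySpark transformations, so that each partition-level task launched by Spark plays the role of an activation and intermediate data stays in memory between steps.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal PySpark sketch (illustrative, not from the book): one workflow
# "activity" expressed as Spark transformations; the tasks Spark schedules
# correspond to the "activations" of that activity.
spark = SparkSession.builder.appName("workflow-activity-sketch").getOrCreate()

# Hypothetical input: one row per raw observation (path and columns assumed).
raw = spark.read.csv("observations.csv", header=True, inferSchema=True)

# Activity: drop low-quality records, then aggregate per instrument.
# Intermediate results are kept in memory rather than written to disk.
filtered = raw.filter(F.col("quality") >= 0.9)
summary = filtered.groupBy("instrument").agg(
    F.count("*").alias("n_records"),
    F.avg("value").alias("mean_value"),
)

# Writing the output triggers the distributed activations on the cluster.
summary.write.mode("overwrite").parquet("summary.parquet")
spark.stop()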

Table of contents

Preface.- Acknowledgments.- Overview.- Background Knowledge.- Workflow Execution in a Single-Site Cloud.- Workflow Execution in a Multi-Site Cloud.- Workflow Execution in DISC Environments.- Conclusion.- Bibliography.- Authors' Biographies.

About the authors

Daniel C. M. de Oliveira obtained his Ph.D. in Systems and Computation Engineering at COPPE/Federal University of Rio de Janeiro, Brazil, in 2012. His current research interests include scientific workflows, provenance, cloud computing, data-intensive scalable computing, high-performance computing, and distributed and parallel databases. He serves or has served on the program committees of major international and national conferences (VLDB 2017, IPAW 2016 and 2018, SBBD 2015-2018, etc.) and is a member of the IEEE, the ACM, and the Brazilian Computer Society. In 2016, he received the Young Scientist scholarship from the State Agency for Research Financing of Rio de Janeiro (FAPERJ) and the Level 2 research productivity grant from CNPq. He has supervised 5 doctoral theses and 13 master's dissertations and has coordinated projects funded by CNPq and FAPERJ. He has 2,077 citations in Google Scholar, an h-index of 22, and 96 articles listed in DBLP.

Ji Liu is at the Microsoft Research-Inria Joint Centre and the Zenith team, which is part of INRIA Sophia-Antipolis Méditerranée and LIRMM at Montpellier. His research interests include scientific workflows, big data, cloud computing, and multisite management. He graduated from Xidian University in 2011, obtained his master's degree from Télécom SudParis in 2013, and received his Ph.D. from the University of Montpellier in 2016.
Esther Pacitti is a professor of computer science at the University of Montpellier. She is a senior researcher and co-head of the Zenith team at LIRMM, pursuing research in distributed data management. Previously, she was an assistant professor at the University of Nantes (2002-2009) and a member of the Atlas INRIA team. She obtained her "Habilitation à Diriger des Recherches" (HDR) degree in 2008 on the topic of data replication in different contexts (data warehouses, clusters, and peer-to-peer systems). Since 2004, she has served as a program committee member of major international conferences (VLDB, SIGMOD, CIKM, etc.) and has edited and co-authored several books. She has also published numerous technical and journal papers in well-known international conferences and journals.


Product details

Authors Daniel C. M. de Oliveira, Ji Liu, Esther Pacitti
Publisher Springer, Berlin
 
Original title Data-Intensive Workflow Management
Languages English
Format Paperback
Release date 01.01.2019
 
EAN 9783031007446
ISBN 978-3-031-00744-6
Pages 161
Dimensions 191 mm x 10 mm x 235 mm
Illustrations XVII, 161 p.
Series Synthesis Lectures on Data Management
Category Natural sciences, medicine, computer science, technology > Computer science, computers > Data communication, networks
