About the Author

Tarek El-Ghazawi received his PhD in electrical and computer engineering from New Mexico State University. Currently, he is an associate professor in the Electrical and Computer Engineering Department at The George Washington University. His research interests are in high-performance computing, computer architecture, reconfigurable computing, embedded systems, and experimental performance evaluation. He has over 70 technical journal and conference publications in these areas. He has served as the principal investigator for over two dozen funded research projects, and his research has been supported by NASA, DoD, NSF, and industry. He has served as a guest editor for IEEE Concurrency and was an associate editor for the International Journal of Parallel and Distributed Computing and Networking. El-Ghazawi has also served as a visiting scientist at NASA GSFC and NASA Ames Research Center. He is a senior member of the IEEE and a member of the advisory board for the IEEE Task Force on Cluster Computing.

William Carlson received his PhD in electrical engineering from Purdue University. From 1988 to 1990, he was an assistant professor at the University of Wisconsin-Madison. His research interests include performance evaluation of advanced computer architectures, operating systems, and languages and compilers for parallel and distributed computers.

Thomas Sterling received his PhD as a Hertz Fellow from the Massachusetts Institute of Technology. His research interests include parallel computer architecture, system software, and evaluation. He holds six patents, is the co-author of several books, and has published dozens of papers in the field of parallel computing.

Katherine Yelick received her PhD in electrical engineering and computer science from the Massachusetts Institute of Technology. Her research interests include parallel computing, memory hierarchy optimizations, programming languages, and compilers.
Currently, she is a professor of computer science at the University of California, Berkeley.

From the Back Cover

A must-have for UPC programmers and applications developers

This publication provides an in-depth interpretation of UPC language specifications for use in highly parallel systems. With its extensive use of examples, UPC programming case studies, and illustrations, it offers new insights into developing efficient and effective UPC applications such as high-speed signal processing and pattern recognition. As an added feature, readers have access to an FTP site containing an electronic copy of the full code and makefiles for all the examples given in the text.

The book provides all the information and guidance needed to use this powerful new programming language:

* Chapter 1 provides a quick tutorial of the major features of the UPC language
* Chapter 2 presents the UPC programming model and describes how shared and nonshared data are declared and used
* Chapter 3 covers the critically important concept of pointers in UPC, identifying the types, declarations, and usage of the various UPC pointers and how they work with arrays
* Chapter 4 explains how data and work can be distributed in UPC such that data locality is exploited through efficient data declarations and work-sharing constructs
* Chapter 5 provides extensive treatment of dynamic memory allocation in the shared space
* Chapter 6 covers thread and data synchronization, explaining the effective mechanisms provided by UPC for mutual exclusion, barriers, and memory consistency control
* Chapter 7 offers programmers the tools needed to write efficient applications
* Chapter 8 introduces two UPC standard libraries: the collective operations library and the parallel I/O library
* Appendices feature the UPC v1.1.1 specification; UPC v1.0 collective library specifications; UPC-IO v1.0 specifications; information on how to compile and run UPC programs; and a quick UPC reference card

UPC is ubiquitous.
It is supported on paralle...