Facta Univ. Ser.: Elec. Energ., vol. 16, No. 2, December 2003, pp. 442-444

Harry Jordan, Gita Alaghband
FUNDAMENTALS OF PARALLEL PROCESSING
Hardcover, XVIII + 536 pp., $32.99
Pearson Education, Upper Saddle River, NJ 07458, 2003
ISBN 0-13-901158-7
http://www.pearsoneduc.com

About the book

Thanks to advances in VLSI technology, it is nowadays possible to build a cost-effective, fast machine with a large number of processors that work concurrently on different parts of the same problem. Such architectures have great potential to enhance performance substantially. Parallel processing has been, and will continue to be, a dominant feature of modern computer architectures. Parallelism is generally accepted as the means by which these computers may transcend the physical limitations on the computational speed of single-processor machines. Parallel machines can operate in different forms, such as the SIMD or MIMD models. The major concern in the design of algorithms for such machines is the efficient use of the available parallelism; in many cases the peak performance is not achieved because of various constraints that prohibit full utilization of the hardware. This book is concerned with the fundamentals of parallel processing. Its main intent is to explain clearly, and in detail, how parallel algorithm design, programming language structure, and computer architecture combine to achieve better machine performance. The book is divided into eleven chapters, two appendices, a bibliography, and an index.

Chapter content

Chapter 1 (Parallel machines and computations, pp. 1 - 21) serves as an introduction to parallel processing. It begins with a brief review of the evolution of parallelism in computer architectures. The basic concepts of vector processing, multiprocessing, and interconnection networks are then presented. Finally, it points out some aspects of identifying parallelism in algorithms.

Chapter 2 (Potential for parallel computations, pp. 23 - 49) introduces the idea of data dependence, i.e., it considers what kinds of activities can be carried out in parallel. Parallel prefix algorithms are used to illustrate these algorithm characteristics.
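
To give the flavor of such an algorithm, the following is a minimal C sketch of an inclusive prefix sum (scan) computed by recursive doubling; this is one standard formulation, not necessarily the one used in the book.

```c
/* Inclusive prefix sum by recursive doubling: a standard
   parallel-prefix formulation (a sketch, not the book's code). */
#include <stdio.h>
#include <string.h>

#define N 8

int main(void) {
    int x[N] = {3, 1, 7, 0, 4, 1, 6, 3};
    int tmp[N];

    /* log2(N) sweeps; in the sweep with stride s, every element
       x[i] (i >= s) adds the value x[i-s] from the previous sweep.
       On a SIMD machine all updates of one sweep run in parallel. */
    for (int s = 1; s < N; s *= 2) {
        memcpy(tmp, x, sizeof x);
        for (int i = s; i < N; i++)        /* conceptually parallel */
            x[i] = tmp[i] + tmp[i - s];
    }

    for (int i = 0; i < N; i++)
        printf("%d ", x[i]);   /* prints: 3 4 11 11 15 16 22 25 */
    printf("\n");
    return 0;
}
```

A sequential loop computes the same prefix sums in N - 1 additions, but as N - 1 dependent steps; the doubling scheme needs only log2(N) parallel steps, which is the kind of trade-off the chapter examines.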

Chapter 3 (Vector algorithms and architectures, pp. 51 - 108) describes vector, or SIMD, architectures, which are characterized by elementary instructions that operate on all vector elements at once. Fortran 90 is adopted as a high-level language that efficiently supports SIMD computing. The structure of a pipelined SIMD processor and the memory interface of a pipelined SIMD computer are considered.
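
In Fortran 90 an element-wise operation on whole arrays is written as a single statement, such as C = A + B. A C rendering of the same computation, which a vectorizing compiler can map onto SIMD hardware, might look as follows (a generic illustration, not an example from the book):

```c
/* Element-wise vector addition: the operation Fortran 90 expresses
   as the single array statement  C = A + B. */
#include <stdio.h>

#define N 1000

int main(void) {
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {   /* initialize the operands */
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    for (int i = 0; i < N; i++)     /* conceptually one vector instruction */
        c[i] = a[i] + b[i];

    printf("c[%d] = %g\n", N - 1, c[N - 1]);   /* 999 + 1998 = 2997 */
    return 0;
}
```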

Chapter 4 (MIMD computers or multiprocessors, pp. 109 - 169) deals with multiple instruction stream, multiple data stream (MIMD) computers. The difference between shared- and distributed-memory multiprocessors is explained. OpenMP is introduced as an extension to a sequential programming language intended to support shared-memory MIMD computation. The basics of multithreading, as a form of pipelined MIMD, are also discussed.
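
As a minimal sketch of the shared-memory style the chapter describes, the following C fragment uses a standard OpenMP directive to split a reduction loop across a team of threads (my own example, not the book's):

```c
/* Shared-memory MIMD with OpenMP: compile with, e.g., gcc -fopenmp. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static double x[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        x[i] = 1.0;

    /* The iterations are divided among the threads of a team; the
       reduction clause combines each thread's private partial sum. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += x[i];

    printf("sum = %.0f\n", sum);   /* 1000000 */
    return 0;
}
```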

Chapter 5 (Distributed memory multiprocessors, pp. 171 - 222) introduces distributed-memory multiprocessors as collections of individual processors in which a local memory module is associated with each processor. The Message Passing Interface (MPI) is presented as an illustration of high-level language support for data communication in such architectures. The problems related to cache coherence and memory consistency are also described.
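
A minimal sketch of the message-passing style MPI supports: process 0 sends a value held in its local memory to process 1 (standard MPI-1 calls, though the book's own examples may differ). Compile with mpicc and run with, e.g., mpirun -np 2:

```c
/* Distributed-memory communication with MPI point-to-point calls. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;   /* exists only in process 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```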

Chapter 6 (Interconnection networks, pp. 223 - 267) discusses different types of interconnection networks as building blocks intended to facilitate communication and cooperation among the processing elements of SIMD and MIMD computers. Analyses of the performance, properties, and topologies of static and dynamic interconnection networks are also included.
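
As one small example of a static topology, in a d-dimensional hypercube node i is linked to the d nodes whose labels differ from i in exactly one bit; the following fragment (a hypothetical illustration, not taken from the book) enumerates them:

```c
/* Neighbors of a node in a d-dimensional hypercube network. */
#include <stdio.h>

int main(void) {
    int d = 3;      /* a 3-cube: 8 nodes, each of degree 3 */
    int node = 5;   /* binary 101 */

    printf("neighbors of node %d:", node);
    for (int k = 0; k < d; k++)
        printf(" %d", node ^ (1 << k));   /* flip bit k: 4, 7, 1 */
    printf("\n");
    return 0;
}
```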

For me as a reviewer, Chapter 7 (Data dependence and parallelism, pp. 269 - 330) is the most interesting one. In general, it is devoted to discovering parallel operations in sequential code, and it covers code optimizations that specify no control flow or other constraints on the order of operations beyond what is inherent in the flow dependences among the data.
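
The central notion can be illustrated with two small loops (my own example, not the book's): the first has no loop-carried dependence and may run fully in parallel, while the second carries a flow dependence from each iteration to the next and must execute in order.

```c
/* Flow (true) dependence in sequential loops. */
#include <stdio.h>

#define N 8

int main(void) {
    int a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[N], c[N];

    /* No loop-carried dependence: each iteration touches distinct
       locations, so all N iterations may execute in parallel. */
    for (int i = 0; i < N; i++)
        b[i] = 2 * a[i];

    /* Loop-carried flow dependence: iteration i reads c[i-1],
       written by iteration i-1, forcing sequential order. */
    c[0] = a[0];
    for (int i = 1; i < N; i++)
        c[i] = c[i - 1] + a[i];

    printf("b[7] = %d, c[7] = %d\n", b[7], c[7]);   /* 16 and 36 */
    return 0;
}
```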

Chapter 8 (Implementing synchronization and data sharing, pp. 331 - 365) deals with the implementation details of synchronization constructs. The main topics considered here relate to synchronizing different kinds of cooperative computations, waiting mechanisms, mutual exclusion, and synchronization barriers.
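
A minimal sketch of one such construct, mutual exclusion, rendered here with POSIX threads rather than the book's own notation (compile with, e.g., gcc -pthread):

```c
/* Mutual exclusion around a shared counter with a pthread mutex. */
#include <pthread.h>
#include <stdio.h>

#define THREADS 4
#define STEPS   100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < STEPS; i++) {
        pthread_mutex_lock(&lock);     /* enter the critical section */
        counter++;                     /* the shared update is now safe */
        pthread_mutex_unlock(&lock);   /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[THREADS];

    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);   /* joining all threads acts as a barrier */

    printf("counter = %ld\n", counter);   /* 400000, no lost updates */
    return 0;
}
```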

Chapter 9 (Parallel processor performance, pp. 367 - 408) provides a basic understanding of parallel program execution. It considers various performance models and illustrates their use in performance measurements on real computer systems.
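
Standard measures such as speedup, and bounds such as Amdahl's law, are central to any such model (whether or not in the book's exact notation):

```latex
% Speedup on p processors, and Amdahl's bound for serial fraction f:
S(p) = \frac{T(1)}{T(p)}, \qquad
S_{\max}(p) = \frac{1}{f + (1 - f)/p}
% Worked example: f = 0.1, p = 16 gives
% S_max = 1 / (0.1 + 0.9/16) = 1 / 0.15625 = 6.4
```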

Chapter 10 (Temporal behavior of parallel programs, pp. 409 - 437) concentrates on the balance between program execution demands, on the one hand, and system capabilities such as bandwidth, latency, scalability, and the memory hierarchy, on the other, and on their influence on the temporal behavior of the program structure.
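
A simple instance of such a demand/capability balance is the standard linear model of message-transmission time (a generic model, not necessarily the book's notation):

```latex
% Transmitting n bytes over a link with start-up latency t_0 and
% bandwidth B costs
T(n) = t_0 + \frac{n}{B}
% e.g. t_0 = 10 \mu s, B = 100 MB/s, n = 1 MB:
% T = 10 \mu s + 10^6/10^8 s = 10 \mu s + 10 ms  (bandwidth-dominated)
```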

Chapter 11 (Parallel I/O, pp. 439 - 472) covers numerous aspects of parallelism in input/output operations. Parallel-access disks are described, operations that depend on input/output are treated, and parallel input/output methods on files are discussed.

Appendix A presents the Message Passing Interface library routines and their attributes, while Appendix B describes different types of synchronization mechanisms. The extensive bibliographies (249 entries in all) will be particularly welcome to those wishing to locate the reference material most pertinent to their application area.

Useful for senior-level undergraduate and junior-level graduate students

The book is highly comprehensive, up to date, and well organized, and the presentation of the material is quite clear. I find the text both stimulating and interesting. The individual chapters are organized almost identically: each begins with a short introduction, the central part discusses the main topics, and the last part comprises the subsections Conclusion, Bibliographic Notes, and Problems. The organization of the individual chapters and their order of presentation appear to have been carefully thought out. The net effect of the authors' efforts is a text that provides a sound, step-by-step introduction to a rapidly expanding body of knowledge. Fundamentals of Parallel Processing will probably find its best use as a textbook for senior-level undergraduate or junior-level graduate students in computer science.

In summary, as a well-written source for the fundamental concepts and methods of parallel processing, this book represents a valuable contribution to the literature in this field. It should find a place on the active bookshelf of everyone interested in parallel computing.

All in all, I highly recommend this book.

Prof. Mile Stojcev
Faculty of Electronic Engineering Nis
Beogradska 14, PO BOX 73
18000 Nis, Serbia and Montenegro