Vol. 13, No. 3, December 2000, 384--387

Gregory R. Andrews
Foundations of Multithreaded, Parallel and
Distributed Programming
Addison-Wesley, Reading Massachusetts, 2000
Paperbound, pp. 664, USA $45.99
ISBN 0-201-35752-6
http://www.pearsoneduc.com

About the book in general

Very often, applications need more computing power than a sequential computer can provide. One way of overcoming this limitation is to use a distributed and parallel system, a collection of hardware and software components that together can offer the power required by computationally intensive applications. However, many of the old problems remain: programming is still difficult, and getting the maximum performance out of all the processors, all the time, remains a challenge. Concurrent, Distributed and Parallel (C-D-P) programming, writing programs that do more than one thing at a time, is a good candidate for improving performance. Writing C-D-P programs requires looking at things in ways that are outside the experience of many programmers. As a consequence, there is a need for a better understanding of what C-D-P programming can offer, how C-D-P programs can be constructed, and what the impact of C-D-P programming on high-performance computing will be.

The techniques and principles of C-D-P programming have been developed over the past several years by many contributors, and their efforts have been reported extensively in technical journals and books. Relatively few attempts have been made to present this material in a single-volume textbook useful both to students and to specialists with an interest in parallel and distributed computing. The increased emphasis on this field in computational science has created a need for such a book. This book is a serious attempt to present in one comprehensive volume: (1) the principles of concurrent programming mechanisms that use shared variables; (2) the basics of distributed programming, in which processes communicate and synchronize by means of messages; and (3) some crucial concepts of parallel programming, focusing on high-performance computation.

Chapter content

The author has organized the book into three parts divided into 12 chapters, and has added a glossary and an index. Chapter 1 is a general introduction to concurrency, hardware, and applications, and contains the usual arguments about why studies of this type are important.

The remaining chapters are organized into three parts. Part I (Chapters 2-6) deals with concurrent programming mechanisms that use shared variables, and hence are directly suitable for shared-memory systems. The complexity of these systems arises from the combinations of ways in which their parts interact, so their design requires ways of keeping these interactions under control. Concurrent programming mechanisms such as locks and barriers, semaphores, and monitors provide a way of understanding, and thereby controlling, such phenomena.

Chapter 2 deals with fundamental concepts of processes and synchronization.

Chapter 3 presents two basic kinds of synchronization: locks and barriers.
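To give a flavor of the two mechanisms Chapter 3 covers, here is a minimal sketch in Python rather than the book's notation (the thread count and loop bound are arbitrary illustration choices): a lock protects a shared counter, and a barrier makes every thread wait until all of them have finished incrementing.

```python
import threading

NUM_THREADS = 4
counter = 0
lock = threading.Lock()                    # mutual exclusion for the counter
barrier = threading.Barrier(NUM_THREADS)   # all threads rendezvous here
results = [0] * NUM_THREADS

def worker(i):
    global counter
    for _ in range(1000):
        with lock:                         # critical section
            counter += 1
    barrier.wait()                         # wait until every thread is done
    results[i] = counter                   # every thread now sees the final value

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_THREADS)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 4000
```

Without the lock the increments could be lost to interleaving; without the barrier a fast thread could read the counter before the others had finished.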

Chapter 4 discusses the semaphore, a powerful tool that can be used to solve most process-coordination problems.
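The classic example of semaphore-based coordination, which the book develops, is the bounded buffer. A Python sketch (buffer size and item count are arbitrary choices here): two counting semaphores track free and filled slots, so the producer blocks when the buffer is full and the consumer blocks when it is empty.

```python
import threading

N = 3                              # buffer capacity
buffer, consumed = [], []
empty = threading.Semaphore(N)     # counts free slots
full = threading.Semaphore(0)      # counts filled slots
mutex = threading.Lock()           # protects the buffer itself

def producer():
    for item in range(10):
        empty.acquire()            # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()             # signal one filled slot

def consumer():
    for _ in range(10):
        full.acquire()             # wait for an item
        with mutex:
            consumed.append(buffer.pop(0))
        empty.release()            # free the slot

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(consumed)   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```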

Chapter 5 describes the monitor as an encapsulation of a resource definition together with all the operations that manipulate the resource. It illustrates the use of monitors through several interesting examples, explains Java's threads, and shows how to program monitors using the Pthreads library.
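The monitor idea can be sketched in Python as a class whose methods all acquire the same lock, with a condition variable for waiting (a sketch, not the book's monitor notation; the one-slot mailbox is an illustrative resource):

```python
import threading

class Mailbox:
    """A monitor: the resource (one message slot) and the operations on it
    are encapsulated together, with mutual exclusion on every operation."""
    def __init__(self):
        self._lock = threading.Lock()
        self._nonempty = threading.Condition(self._lock)
        self._slot = None
        self._has_item = False

    def deposit(self, item):
        with self._lock:
            self._slot = item
            self._has_item = True
            self._nonempty.notify()        # wake a waiting fetcher

    def fetch(self):
        with self._lock:
            while not self._has_item:      # guard re-checked after wakeup
                self._nonempty.wait()
            self._has_item = False
            return self._slot

box = Mailbox()
t = threading.Thread(target=lambda: box.deposit("hello"))
t.start()
msg = box.fetch()                          # blocks until the deposit happens
t.join()
print(msg)   # hello
```

The `while` loop around `wait()` is the standard discipline for signal-and-continue monitors, which is the semantics most libraries provide.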

Chapter 6 is a brief discussion of how to implement processes, semaphores and monitors on single processors and shared-memory multiprocessors.

Distributed systems, collections of heterogeneous computers and processors, have grown in popularity nowadays due to the rapid increase in the computing power per dollar of PCs and workstations. Such a collection works closely together to accomplish a common goal, i.e., to provide a common, consistent global view of the system, including the file system, time, security, access to resources, etc. Part II (Chapters 7-10) is devoted to distributed computing, in which processes communicate and synchronize by means of messages.

Chapter 7 covers message passing as an interprocess communication mechanism, i.e., it shows how two processes communicate by physically copying shared data into the other process's address space.
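In message-passing style the communicating parties share no variables: the sender puts a copy of the data on a channel and the receiver takes its own copy off. A minimal Python sketch using a queue as the channel (the squared values and end-of-stream marker are illustration choices):

```python
import queue
import threading

channel = queue.Queue()          # the message channel
received = []

def sender():
    for i in range(5):
        channel.put(i * i)       # send: data is copied into the channel
    channel.put(None)            # end-of-stream marker

def receiver():
    while True:
        msg = channel.get()      # receive: blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

threads = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in threads: t.start()
for t in threads: t.join()
print(received)   # [0, 1, 4, 9, 16]
```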

The book extensively treats two additional communication primitives, remote procedure call and rendezvous, in Chapter 8.
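Both primitives of Chapter 8 are two-way: the caller sends a request and then blocks for the reply. A rough Python sketch of the rendezvous pattern, with two queues standing in for the call and reply channels (the doubling service is an arbitrary example):

```python
import queue
import threading

requests, replies = queue.Queue(), queue.Queue()

def server():
    """Loop accepting requests and sending back results."""
    while True:
        arg = requests.get()
        if arg is None:          # shutdown marker
            break
        replies.put(arg * 2)     # service the call and reply

def remote_double(x):
    requests.put(x)              # "call": send the argument
    return replies.get()         # block until the reply arrives

t = threading.Thread(target=server)
t.start()
result = remote_double(21)
requests.put(None)               # tell the server to stop
t.join()
print(result)   # 42
```

Unlike the one-way send of Chapter 7, the caller here cannot proceed until the server has serviced its request, which is the essence of both remote procedure call and rendezvous.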

Chapter 9 covers several important paradigms for process interaction in distributed programs.

Chapter 10 describes how to implement message passing, remote procedure call, and rendezvous communication primitives.

Part III deals with parallel programming, primarily intended for high-performance scientific computing. In essence, there is no universal parallel programming system. Parallel programming involves all of the challenges of serial programming, plus additional ones such as data partitioning, task partitioning, task scheduling, parallel debugging, and synchronization. Unlike on a single processor, interconnection bandwidth and message latency dominate the performance of distributed and parallel systems. The parallel programs described in Part III are written using shared variables or message passing, as described in Parts I and II of this book.

Chapter 11 describes three important classes of scientific applications: (1) grid computations for approximating solutions to partial differential equations; (2) particle computations for modeling systems of interacting bodies; and (3) matrix computations for solving systems of linear equations.
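The first of these, the grid computation, is easy to sketch sequentially before parallelizing. Here is a Python sketch of Jacobi iteration for Laplace's equation on a small square grid (the grid size, iteration count, and boundary values are arbitrary illustration choices): each interior point is repeatedly replaced by the average of its four neighbors.

```python
N = 6
grid = [[0.0] * N for _ in range(N)]
grid[0] = [1.0] * N                      # top boundary held at 1.0

for _ in range(200):                     # iterate toward convergence
    new = [row[:] for row in grid]       # Jacobi: update from the old grid
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = (grid[i - 1][j] + grid[i + 1][j] +
                         grid[i][j - 1] + grid[i][j + 1]) / 4.0
    grid = new

print(round(grid[1][1], 3))
```

The parallel versions the book develops partition the grid among processes, and a barrier (shared-variable version) or message exchange of boundary rows (distributed version) separates the iterations.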

Chapter 12 surveys additional languages, compilers, libraries, and tools for high-performance parallel computing.

Each part of the book begins with an introduction providing an overview of the topics it will cover. The organization of the individual chapters is nearly identical: each begins with an introduction, continues with the central part where the main topics are presented, and concludes with the subsections "Historical Notes" (summarizing the origin and evolution of each topic and how the topics relate to each other), "References", and "Exercises" (a set of problems that readers can use to study the material and test their understanding).

A useful book

The book is readable by anyone with a computer programming background. It does not get bogged down in mathematical rigor and detail; rather, it emphasizes concepts. The treatment of the material proceeds in a logical manner, and the chapters are organized in a sensible sequence and are generally well balanced. Excellent examples and exercises are presented throughout to aid teaching and to help motivate students and engineers studying this interesting subject. Computer science students and programmers of concurrent, distributed, and parallel systems will find the presentation both useful and stimulating. I therefore recommend this book to readers who are interested in a theoretical and practical view of concurrent, distributed, and parallel programming, and I encourage everyone who uses or teaches these topics to get a copy.

Mile Stojcev
Faculty of Electronic Engineering
Beogradska 14, P.O. Box 73
18000 Nish, Yugoslavia