Modeling and Exploiting the Power and Parallelism of Tomorrow’s Computers
Major changes in the way computers are being built require major revisions in the way computational tasks are abstractly described and reasoned about.

Summary

For the past four decades, growth in computational power has been achieved by putting more and more transistors on chips; this trend (known as Moore’s law), however, seems to be reaching its limits. Today, increased performance is being achieved through parallelism, and this trend is likely to continue, with all computing devices likely to have many processors. Given this shift, the single-processor computational models of Turing and von Neumann are fast losing their predictive power. What is the right framework in which to design algorithms and programs for these multiple-processor machines?

Rationale

Historically, mathematical models of computing machines have often predated the machines themselves. The successful computational models of Turing and of von Neumann paved the way for the art of computing as we know it today, and they still serve as the primary frameworks used to describe and reason about computing tasks. Their ongoing success and impact show the power of abstraction as a tool for anticipating the future.

The era of the single-processor computer, however, appears to have ended, and these fundamental models are less relevant to future computer architectures. Moore’s law has turned from delivering more speed to delivering more processors, and parallelism is our way forward: in the future, all computing is best viewed as involving many processors. At the same time, massive data sets mean that communication bottlenecks (even within a single computer) and memory speeds increasingly dominate the cost of computing. Efficient programming for many processors therefore requires that more work be done locally on each processor between expensive communication steps.

Given these changes, the single-processor computing models of Turing and von Neumann are fast losing their relevance and predictive power, and multiprocessor models derived from them are unlikely to scale as the number of cores grows. Good models and frameworks for parallel computing are desperately needed by industry. For example, application-software vendors may avoid investing in large projects rather than risk committing to the wrong programming approach.

As an example of past successes, the MapReduce model for parallel systems uses the expressive power of functional programming to hide the details of parallelism and communication, leaving the programmer free to specify the essence of massively distributed algorithms. The model is also influenced by work done in the theory community on the parallel prefix operation. This demonstrates how abstract algorithmic thinking enables efficient programming for the computer architectures and systems of the future; a minimal sketch of this programming style appears below.
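To make the MapReduce style concrete, here is a minimal, sequential Python sketch of a word-count job. The function names (map_phase, shuffle, reduce_phase) and the toy data are illustrative assumptions, not the API of any particular framework; in a real system the framework would run the map and reduce functions in parallel across many machines and handle the shuffle communication itself, while the programmer writes only the two small functions.

    from collections import defaultdict
    from functools import reduce

    # Illustrative word-count job in MapReduce style: the programmer supplies
    # only a map function and a reduce function; a real framework (not shown)
    # would handle partitioning, communication, and parallel execution.

    def map_phase(document):
        """Emit (word, 1) pairs for every word in one input document."""
        return [(word, 1) for word in document.split()]

    def shuffle(pairs):
        """Group intermediate pairs by key (done by the framework in practice)."""
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        """Combine all counts for a single word."""
        return key, reduce(lambda a, b: a + b, values)

    if __name__ == "__main__":
        documents = ["the quick brown fox", "the lazy dog", "the fox"]
        intermediate = [pair for doc in documents for pair in map_phase(doc)]
        grouped = shuffle(intermediate)
        counts = dict(reduce_phase(k, v) for k, v in grouped.items())
        print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}

Because the map and reduce functions are pure, the same program text describes both the sequential toy run above and a massively parallel execution; this separation of algorithmic essence from parallel machinery is exactly what the paragraph above credits to the model.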

Contributors and Credits

Anupam Gupta, Rajmohan Rajaraman, Udi Wieder, David S. Wise, Lisa Zhang.

Also includes material from a nugget proposed by Uzi Vishkin.

