
Multicore Processing: Breaking Through the Programming Wall.

Scientific Computing (08/12/10) Conway, Steve

Significant challenges remain for applications seeking to exploit the first petascale supercomputers, which feature distributed-memory architectures and multicore systems with more than 100,000 processor cores each. Although a few high-performance computing (HPC) applications run well on parallel systems, the vast majority were originally written to run on a single processor with direct access to main memory. Multicore HPC systems raise further issues: to save energy and control heat, many do not operate at their top speed. In addition, computing clusters based on standard x86 processors dominate HPC, and as these processors have added cores they have increased peak performance without corresponding increases in memory bandwidth. The resulting poor bytes/flops ratio limits cluster efficiency and productivity by making it increasingly difficult to move data into and out of each core fast enough to keep the cores busy. Meanwhile, massive parallelism from growing core counts and system sizes has outgrown existing programming paradigms, creating a parallel performance wall that will reshape the nature of HPC code design and system usage.
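The bytes/flops point can be made concrete with a small sketch. The ratio (often called "machine balance") is sustainable memory bandwidth divided by peak floating-point rate; when cores multiply but bandwidth stays flat, the ratio shrinks and cores starve for data. The function and the specific figures below are illustrative assumptions for this sketch, not numbers from the article.

```python
def bytes_per_flop(mem_bandwidth_gbs: float, peak_gflops: float) -> float:
    """Machine balance: bytes of memory bandwidth available per flop.

    mem_bandwidth_gbs -- sustainable memory bandwidth in GB/s
    peak_gflops       -- peak floating-point rate in Gflop/s
    """
    return mem_bandwidth_gbs / peak_gflops

# Hypothetical x86 node (illustrative figures, not measured values):
# ~25 GB/s of memory bandwidth feeding ~100 Gflop/s of peak compute.
quad_core = bytes_per_flop(25.0, 100.0)   # 0.25 bytes/flop

# Doubling the core count roughly doubles peak flops, but the memory
# bandwidth of the socket is unchanged -- so the balance halves.
octo_core = bytes_per_flop(25.0, 200.0)   # 0.125 bytes/flop
```

A stencil or sparse-matrix kernel that needs several bytes of traffic per flop will therefore run at a small fraction of peak on such a node, which is the efficiency problem the article describes.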

