Researchers Squeeze GPU Performance From 11 Big Science Apps

HPC Wire (07/18/12) Michael Feldman

The Oak Ridge Leadership Computing Facility has published a report documenting that graphics processing unit (GPU)-equipped supercomputers accelerated a range of science applications by factors of 1.4 to 6.1, indicating that the technology is delivering good results across diverse workloads. The 11 simulation programs, which include S3D, Denovo, LAMMPS, WL-LSMS, CAM-SE, NAMD, Chroma, QMCPACK, SPECFEM-3D, GTC, and CP2K, are used by tens of thousands of researchers around the world. The report was written by researchers from Oak Ridge National Laboratory, the National Center for Supercomputing Applications, and the Swiss National Supercomputing Center (CSCS). The researchers ran the programs on CSCS’ Monte Rosa, which has two AMD Interlagos central processing units (CPUs) per node, and on TitanDev, whose hybrid nodes each contain one NVIDIA Fermi GPU and one Interlagos CPU. They found that only Chroma fully exploited the performance advantage of GPU-based processing. Another factor to consider when comparing application performance is power usage, since GPU accelerators draw roughly twice as much power as high-end x86-based systems.
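Because energy to solution is power multiplied by run time, a higher-power hybrid node can still come out ahead if the speedup outweighs the extra draw. The short Python sketch below illustrates this at the low and high ends of the reported 1.4x-6.1x range, under the simplifying assumption that the hybrid node draws about twice the power of the CPU-only node (a rough reading of the "twice as much power" figure above); the power ratio is an illustrative assumption, not a measurement from the report.

# Back-of-the-envelope energy comparison: hybrid (CPU+GPU) node vs. CPU-only node.
# The 1.4x-6.1x speedup range comes from the article; the 2.0 power ratio is an
# assumed simplification for illustration, not a figure from the report.

def energy_ratio(speedup, power_ratio=2.0):
    """Return Energy(hybrid) / Energy(CPU-only).

    Energy = power * time. If the hybrid node draws `power_ratio` times the
    power but finishes in 1/speedup of the time, the ratio is
    power_ratio / speedup. Values below 1.0 mean the hybrid node uses less
    total energy despite its higher draw.
    """
    return power_ratio / speedup

if __name__ == "__main__":
    for speedup in (1.4, 6.1):  # low and high ends of the reported range
        r = energy_ratio(speedup)
        print("speedup %.1fx -> hybrid uses %.2fx the energy of CPU-only" % (speedup, r))
    # speedup 1.4x -> hybrid uses 1.43x the energy of CPU-only
    # speedup 6.1x -> hybrid uses 0.33x the energy of CPU-only

Under this simple model, applications near the top of the speedup range (such as Chroma) would also use substantially less energy per run on the hybrid nodes, while those near the bottom would not.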
