Keeneland Project Deploys New GPU Supercomputing System for the National Science Foundation

Georgia Tech News (11/13/12) Joshua Preston

The Keeneland Project, a collaborative effort between Georgia Tech, the University of Tennessee-Knoxville, the National Institute for Computational Sciences, and Oak Ridge National Laboratory, recently completed the installation and acceptance of the Keeneland Full Scale System (KFS). KFS is a supercomputing system designed to meet the compute-intensive needs of a wide range of applications through the use of NVIDIA graphics processing unit (GPU) technology. The researchers note that KFS is the most powerful GPU-based supercomputer available for research through the U.S. National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE) program. KFS has 264 nodes, each containing two Intel Sandy Bridge processors, three NVIDIA M2090 GPU accelerators, 32 GB of host memory, and a Mellanox InfiniBand FDR interconnect. During KFS’ installation and acceptance testing, the Keeneland Initial Delivery System provided production capacity for XSEDE users who had received Keeneland allocations through a peer-review process and wanted to run their applications on the system. “Our Keeneland Initial Delivery system has hosted over 130 projects and 200 users over the past two years,” notes the Keeneland Project’s principal investigator, Jeffrey Vetter.
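
The per-node figures above are enough for a rough, back-of-the-envelope tally of the system's aggregate resources. The short Python sketch below is illustrative arithmetic only; the inputs come from the numbers quoted in this item, not from Keeneland documentation.

```python
# Back-of-the-envelope aggregate figures for KFS, using only the per-node
# numbers quoted above: 264 nodes, 3 NVIDIA M2090 GPUs and 32 GB of host
# memory per node. Illustrative arithmetic, not an official spec sheet.
nodes = 264
gpus_per_node = 3
host_mem_gb_per_node = 32

total_gpus = nodes * gpus_per_node                        # 792 GPU accelerators
total_host_mem_tb = nodes * host_mem_gb_per_node / 1024   # about 8.25 TB

print(f"Total GPU accelerators: {total_gpus}")
print(f"Aggregate host memory:  {total_host_mem_tb:.2f} TB")
```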

MORE

Bug Repellent for Supercomputers Proves Effective

Lawrence Livermore National Laboratory (11/14/12) Anne M. Stark

Lawrence Livermore National Laboratory (LLNL) researchers have developed the Stack Trace Analysis Tool (STAT), a highly scalable, lightweight debugging tool that has been used to debug a program running more than one million MPI processes on the IBM Blue Gene/Q-based Sequoia supercomputer. The tool is the product of a multi-year collaboration between LLNL, the University of Wisconsin-Madison, and the University of New Mexico. The researchers say STAT has helped early-access users and system integrators quickly isolate a wide range of errors, including complicated issues that appeared only at extremely large scales. “STAT has been indispensable in this capacity, helping the multi-disciplined integration team keep pace with the aggressive system scale-up schedule,” says LLNL’s Greg Lee. During testing, STAT was able to identify, out of more than one million MPI processes, one particular rank that was consistently stuck in a system call, according to LLNL’s Dong Ahn. “It is critical that our development teams have a comprehensive parallel debugging tool set as they iron out the inevitable issues that come up with running on a new system like Sequoia,” says LLNL’s Kim Cupps.
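
To make the idea concrete, here is a minimal sketch of the core notion behind stack-trace-based debugging at scale: collect a simplified call stack from every MPI rank, merge ranks that share the same trace, and flag the small outlier groups. This is illustrative Python only, not STAT's actual implementation or interface, and the rank data is invented for the example.

```python
from collections import defaultdict

def group_by_trace(traces):
    """Group MPI ranks by their stack-trace signature.

    `traces` maps rank -> tuple of call frames. Ranks sharing a signature
    are behaving alike; unusually small groups are the likely suspects.
    """
    groups = defaultdict(list)
    for rank, frames in traces.items():
        groups[frames].append(rank)
    return groups

# Invented example: three ranks wait in MPI_Allreduce, one is stuck in a read().
traces = {
    0: ("main", "solve", "MPI_Allreduce"),
    1: ("main", "solve", "MPI_Allreduce"),
    2: ("main", "solve", "MPI_Allreduce"),
    3: ("main", "solve", "read"),  # the outlier rank
}

for frames, ranks in sorted(group_by_trace(traces).items(), key=lambda kv: len(kv[1])):
    print(f"{len(ranks)} rank(s) {ranks}: {' -> '.join(frames)}")
```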

MORE

Cray Bumps IBM From Top500 Supercomputer Top Spot

IDG News Service (11/12/12) Joab Jackson

Oak Ridge National Laboratory’s Titan supercomputer system was named the world’s fastest supercomputer in the latest edition of the Top500 list.  Titan, a Cray XK7, took the top spot from Lawrence Livermore National Laboratory’s Sequoia, an IBM BlueGene/Q system, which came in second.  On the Linpack benchmark, Titan executed 17.59 petaflops, compared to Sequoia’s 16.32 petaflops.  Also in the top five were the RIKEN Advanced Institute for Computational Science’s K computer, performing at 10.5 petaflops; the U.S. Department of Energy’s Argonne National Laboratory’s Mira, a BlueGene/Q system performing at 8.16 petaflops; and Forschungszentrum Juelich’s Juqueen in Germany, also an IBM BlueGene/Q system, performing at 4.14 petaflops.  The most recent Top500 list had 23 systems demonstrating petaflop performance, just four and a half years after Los Alamos National Laboratory’s Roadrunner became the first petaflop-scale system.  The latest list also reveals other trends in supercomputing.  It includes 62 systems using accelerator and co-processor technology such as NVIDIA graphics processing units, up from 58 six months ago.  The United States hosts 251 of the top 500 systems, while Asia and Europe host 123 and 105 systems, respectively.

MORE

Speeding Algorithms by Shrinking Data

MIT News (11/13/12) Kimberly Allen

Massachusetts Institute of Technology (MIT) researchers have developed a technique for representing data so that it takes up much less space in memory but can still be processed in conventional ways. The researchers say the approach will lead to faster computations, and it could be more generally applicable than other big-data techniques because it works with existing algorithms. The researchers tested the technique on two-dimensional location data generated by global positioning system receivers in cars. The algorithm approximates a car’s path as a set of straight line segments connecting the points at which the car turns. The most important aspect of the algorithm is that it can compress data on the fly, using a combination of linear approximations and random samples, says MIT’s Dan Feldman. Although some information is lost during compression, the researchers were able to provide mathematical guarantees that the error introduced will stay beneath a low threshold. The researchers are now investigating how the algorithm can be applied to video data analysis, in which each line segment represents a scene and the junctures between line segments represent cuts.
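
As a rough illustration of the line-segment idea (not the MIT team's actual coreset construction, which also mixes in random sampling and carries provable error bounds), the following Python sketch greedily replaces runs of GPS points with a single segment whenever every skipped point stays within a chosen error threshold. The track data and threshold are made up for the example.

```python
import math

def point_to_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b (2D)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def compress(points, eps=10.0):
    """Greedy segmentation: extend the current segment while every skipped
    point lies within eps of the chord, otherwise start a new segment."""
    kept = [points[0]]
    anchor = 0
    for i in range(2, len(points)):
        if any(point_to_line_dist(points[j], points[anchor], points[i]) > eps
               for j in range(anchor + 1, i)):
            kept.append(points[i - 1])
            anchor = i - 1
    kept.append(points[-1])
    return kept

# A made-up track that heads east and then turns north: 39 points shrink to 3.
track = [(float(x), 0.0) for x in range(0, 100, 5)] + \
        [(95.0, float(y)) for y in range(5, 100, 5)]
print(len(track), "points ->", len(compress(track)), "points")
```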

MORE

New NCSA Team to Focus on Big Science and Engineering Data Challenges

NCSA News (11/08/12) Trish Barker

The U.S. National Center for Supercomputing Applications’ (NCSA’s) new Data Cyberinfrastructure Directorate will combine NCSA projects, personnel, and capabilities to focus on data-driven science.  “Science and engineering are being revolutionized by the increasingly large amounts and diverse types of data flowing from new technologies, such as digital cameras in astronomy, highly automated sequencers in biology, and the detailed simulations enabled by the new generation of petascale computers, including NCSA’s Blue Waters,” says NCSA’s Thom Dunning.  The Data Cyberinfrastructure team will bridge the gap between raw data and the research and educational uses of the information derived from it, says NCSA’s Rob Pennington, who leads the effort.  He says data-driven science builds on advanced information systems to analyze raw data, making the resulting conclusions accessible, searchable, and usable by the wider community of scientists and engineers.  “Many disciplines and projects face the same or very similar issues, so it is more productive for the researchers if we leverage solutions across multiple domains rather than each community separately grappling with its data challenges in isolation,” Pennington says.  The Data Cyberinfrastructure Directorate will initially focus on five areas of science and engineering: astronomy, biomedicine, sustainability, industry, and geographic information systems.

MORE