
Blue Waters Petascale Supercomputer Now in Friendly User Phase

NCSA News (11/06/12)

U.S. National Science Foundation-approved science and engineering teams now have access to the full Blue Waters petascale computing system. The National Center for Supercomputing Applications (NCSA) and Cray are currently conducting functionality, feature, performance, and reliability testing of the system at full scale. As the tests are completed, a representative production workload of science and engineering applications will run on the full Blue Waters system. The selected users will have access to the entire system during this window to help the Blue Waters team test and evaluate it. University of Illinois researchers used the system to explore aspects of the HIV capsid’s structural properties in a 100-nanosecond simulation. University of California, Santa Barbara researchers employed the system to complete their calculation of the spectroscopy of charmonium, the positronium-like states of a charm quark and an anticharm quark. NCSA notes that Blue Waters is designed for the most data-, memory-, and compute-intensive computational science and engineering work, and to provide sustained performance of 1 petaflop on a range of science and engineering applications.
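For scale, sustained 1-petaflop performance means 10^15 floating-point operations completed every second. A back-of-envelope sketch in plain Python (the 10^21-operation workload at the end is an illustrative number, not a figure from the article):

```python
PETAFLOP = 1e15  # floating-point operations per second, sustained

# Operations completed in an hour and in a day of sustained petaflop computing.
ops_per_hour = PETAFLOP * 3_600
ops_per_day = PETAFLOP * 86_400

print(f"{ops_per_hour:.2e} ops/hour")  # 3.60e+18
print(f"{ops_per_day:.2e} ops/day")    # 8.64e+19

# Wall-clock time for a hypothetical workload of 1e21 operations
# (illustrative only), assuming perfectly sustained throughput.
workload = 1e21
days = workload / PETAFLOP / 86_400
print(f"{days:.1f} days")  # 11.6 days
```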


How to Steal Data From Your Neighbor in the Cloud

Technology Review (11/08/12) Tom Simonite

RSA researchers have shown that it is possible for software hosted by a cloud-computing provider to steal secrets from software hosted on the same cloud.  The researchers ran malware on hardware designed to mimic the equipment used by cloud companies such as Amazon, and they were able to steal an encryption key used to secure emails from the software belonging to another user.  “The basic lesson is that if you’ve got a highly sensitive workload, you shouldn’t run it alongside some unknown and potentially untrustworthy neighbor,” says RSA’s Ari Juels.  The researchers found that, since virtual machines running on the same physical hardware share resources, the actions of one can hinder the performance of another.  This phenomenon allows an attacker in control of one virtual machine to spy on the data stored in memory attached to one of the processors running in the cloud environment.  The RSA software abused a feature that allows software to get priority access to a physical processor when it needs it.  By regularly asking to use the processor, the attacker could probe the memory cache for evidence of the calculations the victim was performing with the email encryption key.  A worrisome application of this attack would be to use the method to steal the encryption keys used to secure Web sites offering services such as email, shopping, and banking.
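The probing technique described above belongs to the cache side-channel family (often called prime-and-probe). A toy simulation, using a hypothetical 8-line direct-mapped cache in place of real hardware, shows the principle of watching evictions to recover a secret bit; none of the names, sizes, or addresses here come from the RSA work:

```python
import random

CACHE_LINES = 8  # toy cache with 8 lines (illustrative, not real hardware)

class ToyCache:
    """Direct-mapped toy cache: loading an address evicts whatever shared its line."""
    def __init__(self):
        self.lines = [None] * CACHE_LINES

    def access(self, addr):
        line = addr % CACHE_LINES
        hit = self.lines[line] == addr
        self.lines[line] = addr
        return hit  # True = fast access (hit), False = slow (miss)

def victim(cache, key_bit):
    # The victim touches a different address depending on a secret bit,
    # mimicking key-dependent table lookups inside a crypto routine.
    cache.access(100 + key_bit)

def attacker_recover_bit(cache, run_victim):
    # Prime: fill every cache line with attacker-owned addresses 0..7.
    for addr in range(CACHE_LINES):
        cache.access(addr)
    run_victim(cache)
    # Probe: re-access our addresses; a miss reveals which line the victim used.
    evicted = [addr for addr in range(CACHE_LINES) if not cache.access(addr)]
    # Address 100 maps to line 4 (bit 0); address 101 maps to line 5 (bit 1).
    return 1 if 5 in evicted else 0

cache = ToyCache()
secret = random.randint(0, 1)
guess = attacker_recover_bit(cache, lambda c: victim(c, secret))
assert guess == secret
```

On real hardware the attacker distinguishes hits from misses by timing memory accesses rather than being told directly, and noise from other workloads makes recovery statistical, but the prime/probe structure is the same.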


Supercomputing for a Superproblem: A Computational Journey Into Pure Mathematics

University of Leicester (United Kingdom) (11/06/12)

Mathematician Yuri Matiyasevich is focusing on finding a solution to the challenging Riemann Zeta Function (RZF) hypothesis, and he has published a research report through the University of Leicester concerning the zeros of the function.  The paper details how supercomputers have helped mathematicians explore the hypothesis.  “The goal of this paper is to present numerical evidence for a new method for revealing all divisors of all natural numbers from the zeroes of the RZF,” says Leicester professor Alexander Gorban.  “This approach required supercomputing power.”  Gorban notes there is precedent for bringing massive computation to bear on celebrated mathematical problems.  “Unfortunately, the Riemann hypothesis is not reduced to a finite problem and, therefore, the computations can disprove but cannot prove it,” he observes.  “Computations here provide the tools for guessing and disproving the guesses only.”  The RZF hypothesis appears on the list of Hilbert’s Problems and also is one of the Millennium Problems listed by the Clay Mathematics Institute.
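For context, the zeta function itself is elementary to evaluate for real s > 1, and its Euler product over the primes is the classical identity that ties its behavior (and ultimately its zeros) to primes and divisors. A minimal pure-Python sketch, nothing like the supercomputer-scale computations in the report, checks both routes against the known value ζ(2) = π²/6:

```python
import math

def zeta_series(s, terms=200_000):
    """Partial sum of the Dirichlet series zeta(s) = sum_{n>=1} 1/n^s (s > 1)."""
    return sum(n ** -s for n in range(1, terms + 1))

def zeta_euler(s, limit=10_000):
    """Euler product zeta(s) = prod over primes p of 1/(1 - p^-s), truncated
    at `limit`; this identity is what connects zeta to the primes."""
    sieve = bytearray([1]) * (limit + 1)  # Sieve of Eratosthenes
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    prod = 1.0
    for p in range(2, limit + 1):
        if sieve[p]:
            prod *= 1.0 / (1.0 - p ** -s)
    return prod

# Both truncations should approach pi^2/6 = 1.6449... for s = 2.
target = math.pi ** 2 / 6
assert abs(zeta_series(2) - target) < 1e-3
assert abs(zeta_euler(2) - target) < 1e-3
```

Locating the function's nontrivial zeros on the critical line, the object of the hypothesis, requires far heavier analytic machinery (e.g., the Riemann-Siegel formula) and is where the supercomputing enters.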


CMU Joins Forces in Repurposing Supercomputers

Pittsburgh Tribune-Review (10/23/12) Debra Erdley

Researchers at Los Alamos National Laboratory (LANL), the U.S. National Science Foundation, New Mexico Consortium, and Carnegie Mellon University (CMU) recently launched the Parallel Reconfigurable Observational Environment (PRObE) program, a supercomputer research center built around a cluster of 2,048 computers drawn from recently retired LANL supercomputers.  “They decommission them every three or four years because the new computers make so much better results,” says CMU professor Garth Gibson.  PRObE partners successfully decommissioned and saved the computer clusters for reuse.  Although the main facility will stay in Los Alamos, CMU’s Parallel Data Lab in Pittsburgh will house two similar but smaller centers.  The Pittsburgh facilities will enable researchers to perform small experiments and demonstrate to the PRObE committee that they are ready to request time on the facility in Los Alamos.  “Unless they leave universities for government or industry jobs, researchers and students rarely have access to these expensive large-scale clusters,” Gibson says.  “That means they don’t get the training and education necessary to develop innovations.”  PRObE’s launch means that researchers will have the opportunity to experiment with supercomputers.  “We are taking a resource, handing it to scientists and saying, ‘Do your research on a dedicated facility,’” Gibson notes.


China Is Building a 100-Petaflop Supercomputer

IDG News Service (10/31/12) Michael Kan

The Chinese National University of Defense Technology is developing Tianhe-2, a supercomputer expected to run at 100 petaflops when it is launched in 2015.  Tianhe-2 could help keep China competitive with the future supercomputers of other countries, as industry experts estimate computers will start reaching 1,000-petaflop performance by 2018.  The Chinese government is aiming for China’s supercomputers to reach 100 petaflops in 2015, and then 1 exaflop in 2018, according to professor Zhang Yunquan of the Institute of Software, Chinese Academy of Sciences.  Chinese supercomputers previously have relied on U.S.-made chips and software, but the Chinese government wants to develop more homegrown technology in future supercomputer systems.  “I think in the future, as China tries to reach for exascale computing, the designs of these new supercomputers could fully rely on domestic processors,” Zhang says.  “I wouldn’t dismiss the possibility.”  The European Union, Japan, and the U.S. have similar goals to create 100-petaflop systems by 2015, according to University of Tennessee professor Jack Dongarra.  However, he notes that building more powerful supercomputers is rife with technical and financial challenges.