April 2010

High-Performance Computing Reveals Missing Genes.

Virginia Bioinformatics Institute (04/13/10) Whyte, Barry

Researchers at Virginia Tech’s (VT’s) Department of Computer Science and the Virginia Bioinformatics Institute used a supercomputer to locate small genes that have been overlooked by scientists in their search for the microbial DNA sequences of life. The researchers used the mpiBLAST computational tool, which enabled them to conduct the study in 12 hours, instead of the 90 years it would have taken using a standard personal computer. The researchers say the study is the first large-scale attempt to identify undetected genes of microbes in the GenBank DNA sequence repository, which currently contains more than 100 billion bases of DNA sequences. “This is a perfect storm, where an overwhelming amount of data is analyzed by state-of-the-art computational approaches, yielding important new information about genes,” says VT professor Skip Garner. There are currently more than 1,200 genome sequences of microbes stored in the GenBank database. “To facilitate the rapid discovery of missing genes in genomes, we used our mpiBLAST sequence-search tool to perform an all-to-all sequence search of the 780 microbial genomes that we investigated,” says VT professor Wu Feng.
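The all-to-all search above can be pictured as partitioning the query set across workers, each of which scores its queries against every genome in the database. The sketch below is a toy, single-process analogue: the chunk partitioning mirrors how mpiBLAST farms queries out over MPI ranks, but the k-mer overlap score is purely illustrative (real BLAST uses seeded, gapped alignments), and all names here are hypothetical.

```python
def kmers(seq, k=3):
    """Return the set of length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Toy score: number of k-mers the two sequences share.
    (Real BLAST computes seeded, gapped alignments, not this.)"""
    return len(kmers(a, k) & kmers(b, k))

def partition(items, n_workers):
    """Split the query set into roughly equal chunks, one per worker,
    loosely mirroring how mpiBLAST distributes queries over MPI ranks."""
    return [items[i::n_workers] for i in range(n_workers)]

def all_vs_all(database, n_workers=4, k=3):
    """Each 'worker' chunk is scored against every other sequence."""
    results = {}
    for chunk in partition(list(database), n_workers):
        for name in chunk:
            for other in database:
                if other != name:
                    results[(name, other)] = similarity(
                        database[name], database[other], k)
    return results
```

In the real tool the per-chunk work runs concurrently on separate processors, which is what collapses 90 CPU-years into hours.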


Dynamic Nimbus Cloud Deployment Wins Challenge Award at Grid5000 Conference.

Argonne National Laboratory (04/26/10) Taylor, Eleanor

The Argonne National Laboratory and University of Chicago’s Nimbus toolkit, an open source set of software tools for providing cloud computing implementations, was demonstrated at the recent Grid5000 conference in France. Grid5000 is a testbed for studying large-scale systems using thousands of nodes distributed across nine sites in France and Brazil. University of Rennes student Pierre Riteau deployed Nimbus on hundreds of nodes spread across three Grid5000 sites to create a distributed virtual cluster. The deployment won Riteau the Grid5000 Large Scale Deployment Challenge Award. Argonne computer scientist Kate Keahey says the deployment was one of the largest to date and created a distributed environment that opens up computational opportunities for scientists by creating a “sky computing” cluster.


Mastering Multicore.

MIT News (04/26/10) Hardesty, Larry

Massachusetts Institute of Technology (MIT) researchers have developed software that makes computer simulations of physical systems run more efficiently on multicore chips. The system breaks a simulation into much smaller chunks, which it loads into a queue. When a core finishes a calculation, it moves on to the next chunk in the queue, which saves the system from having to estimate how long each chunk will take to execute. Smaller chunks also help the system handle the problem of boundaries: the management system can divide a simulation into chunks small enough to fit in the cache along with information about the adjacent chunks, so a core working with one chunk can rapidly update factors along the boundaries of adjacent chunks. Using existing management systems, the MIT team found that a 24-core machine ran 14 times faster than a single-core machine; with the new management system, the same machine ran 22 times faster. The new system would also allow individual machines within clusters to operate more efficiently.
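The pull-from-a-queue scheduling described above can be sketched as follows. This is a minimal, hypothetical illustration (not MIT's system): worker threads grab the next small chunk as soon as they finish, so load balances dynamically instead of relying on up-front estimates of chunk runtimes.

```python
import queue
import threading

def run_simulation(chunks, n_workers, step):
    """Dynamic scheduling: workers pull the next chunk from a shared
    queue as soon as they finish, so no core sits idle behind a
    statically assigned (and possibly slow) chunk.
    `chunks` is a list of (index, data) pairs; `step` advances one chunk."""
    q = queue.Queue()
    for c in chunks:
        q.put(c)
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                idx, data = q.get_nowait()
            except queue.Empty:
                return                  # queue drained; this worker is done
            out = step(data)            # advance this chunk one timestep
            with lock:
                results[idx] = out

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The trade-off the article highlights: chunks small enough to fit in cache (with their neighbors' boundary data) make both the queue discipline and the boundary updates cheap.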


New Research Offers Security for Virtualization, Cloud Computing.

NCSU News (04/27/10) Shipman, Matt

North Carolina State University (NCSU) researchers have developed HyperSafe, software for resolving security concerns related to data privacy in virtualization and cloud computing. A key threat to virtualization and cloud computing is malicious software that enables computer viruses to spread to the underlying hypervisor, which allows different operating systems to run in isolation from one another, and eventually to the systems of other users. HyperSafe leverages existing hardware features to secure hypervisors against such attacks. “We can guarantee the integrity of the underlying hypervisor by protecting it from being compromised by any malware downloaded by an individual user,” says NCSU professor Xuxian Jiang. HyperSafe uses non-bypassable memory lockdown, which blocks the introduction of new code by anyone other than the hypervisor administrator. HyperSafe also uses restricted pointer indexing, which characterizes a hypervisor’s normal behavior and prevents any deviation from that profile.
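Restricted pointer indexing, as described above, replaces free-form indirect control transfers with indices into a table of targets recorded during profiling. The class below is a toy analogue in Python (HyperSafe itself works at the hypervisor/hardware level, not in application code); all names are hypothetical.

```python
class RestrictedDispatch:
    """Toy analogue of restricted pointer indexing: control transfers
    are permitted only through indices into a table of call targets
    fixed at profile time, so a corrupted 'pointer' cannot redirect
    execution to arbitrary code."""

    def __init__(self, allowed_targets):
        self._table = tuple(allowed_targets)  # immutable after profiling

    def call(self, index, *args):
        if not (0 <= index < len(self._table)):
            raise PermissionError("call target outside recorded profile")
        return self._table[index](*args)
```

An attacker who overwrites the index can only select among the pre-recorded targets, which is the deviation-prevention property the article attributes to HyperSafe.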


Opinion: Challenges to Exascale Computing.

International Science Grid This Week (04/07/10) Wladawsky-Berger, Irving

Former IBM researcher and visiting lecturer at the Massachusetts Institute of Technology Irving Wladawsky-Berger writes that supercomputing and the information technology industry will need to undergo a major technology and architectural transition in order to reap the benefits of exascale computing. Wladawsky-Berger cites the U.S. Defense Advanced Research Projects Agency’s recognition of four key technology challenges through its ExaScale Computing Study. Those challenges encompass energy and power, memory and storage, concurrency and locality, and resiliency. One of exascale computing’s most persuasive arguments is its facilitation of a tipping point in predictive science, with a potentially huge impact on massively complex problems. Dealing with such problems, which contain innate uncertainties and unpredictability, entails the concurrent running of multiple copies of the same applications using numerous distinctive combinations of parameters. Areas that stand to benefit from this new style of predictive modeling include nuclear reactor design, climate studies, economics, medicine, government, and business, says Wladawsky-Berger.
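The ensemble style of predictive simulation described above, running many copies of the same application over distinct parameter combinations, can be sketched as a parameter sweep. This is a minimal, hypothetical illustration: at exascale each run would occupy its own partition of the machine, while here they run sequentially.

```python
from itertools import product

def ensemble(model, parameter_space):
    """Run one copy of the same model per distinct combination of
    parameters. `parameter_space` maps parameter names to the list of
    values to try; the sweep covers the Cartesian product."""
    names = sorted(parameter_space)
    runs = {}
    for values in product(*(parameter_space[n] for n in names)):
        params = dict(zip(names, values))
        runs[tuple(values)] = model(**params)
    return runs
```

The combinatorics are the point: with even a handful of uncertain parameters, the number of runs multiplies quickly, which is why this style of uncertainty quantification is cited as a driver for exascale capacity.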



World Record: Jülich Supercomputer Simulates Quantum Computer.

EurekAlert (03/30/10) Schinarakis, Kosta

University of Groningen and Jülich Supercomputing Center (JSC) researchers recently set a world record by simulating the largest quantum computer system to date, running their software on almost 300,000 processors. “Our software is optimized so that thousands of processors can work seamlessly together,” says JSC professor Kristel Michielsen. The new simulation methods will make it possible to explore the phenomena and dynamics of quantum mechanical systems. Though current laboratory prototypes have only reached 8 qubits in size, simulations can be used to study the properties of larger systems. Simulations make it possible to test the impact of external influences on the sensitive quantum system and how to compensate for resulting errors.
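Why does simulating even a modest quantum computer demand hundreds of thousands of processors? An n-qubit state requires 2**n complex amplitudes, so memory doubles with every added qubit. The sketch below, a toy stdlib-only illustration and not the researchers' code, stores a state vector as a flat list and applies a single-qubit Hadamard gate.

```python
import math

def n_amplitudes(n_qubits):
    """An n-qubit state needs 2**n complex amplitudes, which is why
    memory (and processor count) explodes with system size."""
    return 2 ** n_qubits

def apply_hadamard(state, target):
    """Apply a Hadamard gate to qubit `target` of a state vector stored
    as a flat list of 2**n complex amplitudes."""
    h = 1 / math.sqrt(2)
    new = list(state)
    stride = 1 << target               # pairs amplitudes differing in that bit
    for i in range(len(state)):
        if i & stride == 0:
            a, b = state[i], state[i | stride]
            new[i] = h * (a + b)
            new[i | stride] = h * (a - b)
    return new
```

At 42 qubits the vector alone holds 2**42 amplitudes, far beyond one machine's memory, which is why the record-setting simulation had to distribute the state across a massively parallel system.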