September 2013

A Better Way for Supercomputers to Search Large Data Sets

Government Computer News (09/09/13) Kevin McCaney 

Lawrence Berkeley National Laboratory researchers have developed techniques for analyzing huge data sets using “distributed merge trees,” which take better advantage of a high-performance computer’s massively parallel architecture. Distributed merge tree algorithms can scan a huge data set, tag the values a researcher is looking for, and create a topological map of the data. The approach separates the data into blocks and distributes the work across a supercomputer’s thousands of nodes, according to Berkeley Lab’s Gunther Weber, who notes the algorithms can also separate important data from irrelevant data. Weber says the new technique will enable researchers to get more out of future supercomputers. “This is also an important step in making topological analysis available on massively parallel, distributed memory architectures,” he notes.
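The core merge-tree computation can be sketched serially as a union-find sweep over sorted values; this toy 1-D example is our own illustration (not Berkeley Lab’s code), showing components being born at maxima and merging at saddles, which is the structure the distributed algorithm computes per block before stitching blocks together:

```python
# Illustrative sketch of a (serial) merge tree computation via union-find.
# The actual distributed algorithm splits the domain into blocks across
# nodes; here we show the core idea on a tiny 1-D scalar field.

def merge_tree(values):
    """Sweep values from high to low, tracking when superlevel-set
    components appear (maxima) and merge (saddles)."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    events = []  # (value, "birth" or "merge")
    order = sorted(range(n), key=lambda k: -values[k])
    active = set()
    for i in order:
        neighbors = [j for j in (i - 1, i + 1) if j in active]
        roots = {find(j) for j in neighbors}
        if not roots:
            events.append((values[i], "birth"))   # local maximum
        elif len(roots) > 1:
            events.append((values[i], "merge"))   # saddle joins components
        for r in roots:
            parent[r] = i
        active.add(i)
    return events

# Two peaks (values 5 and 4) merge at the valley value 1.
print(merge_tree([1, 5, 2, 1, 4, 0]))
```

In the distributed setting, each node would run this sweep on its own block and the per-block trees would then be merged across nodes, which is where the massively parallel architecture pays off.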


Managing Multicore Memory

MIT News (09/13/13) Larry Hardesty 

Massachusetts Institute of Technology (MIT) researchers have developed Jigsaw, a system that monitors the computations being performed by a multicore chip and manages cache memory accordingly. In experiments simulating the execution of hundreds of applications on 16- and 64-core chips, Jigsaw accelerated execution by an average of 18 percent while reducing energy consumption by as much as 72 percent. Jigsaw monitors which cores are accessing which data most frequently and calculates the most efficient assignment of data to cache banks. It also varies the amount of cache space allocated to each type of data depending on how it is accessed, with frequently reused data receiving more space than data that is accessed less often. In addition, by ignoring some scenarios that are extremely unlikely to arise in practice, the researchers developed an approximate optimization algorithm that runs efficiently even as the number of cores and types of data increase dramatically. MIT professor Daniel Sanchez notes that because the optimization is based on Jigsaw’s observations of the chip’s activity, “it’s the optimal thing to do, assuming that the programs will behave in the next 20 milliseconds the way they did in the last 20 milliseconds.”
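The allocation step can be sketched as a greedy marginal-utility assignment of cache capacity to data classes based on measured miss curves. The curves and the greedy rule below are illustrative assumptions on our part, not Jigsaw’s actual optimizer, which is more sophisticated:

```python
# Hedged sketch: greedy utility-based allocation of cache capacity across
# data classes, in the spirit of monitoring-driven cache partitioning.
# The miss curves are invented illustrative numbers.

def allocate(miss_curves, total_units):
    """miss_curves[d][k] = misses for data class d with k cache units.
    Greedily give each unit to the class whose misses drop the most."""
    alloc = {d: 0 for d in miss_curves}
    for _ in range(total_units):
        def gain(d):
            k = alloc[d]
            curve = miss_curves[d]
            if k + 1 >= len(curve):
                return 0  # curve exhausted: no further benefit
            return curve[k] - curve[k + 1]
        best = max(alloc, key=gain)
        alloc[best] += 1
    return alloc

# "hot" data keeps improving with more space; "streaming" data barely reuses.
curves = {
    "hot":       [100, 60, 30, 15, 10],
    "streaming": [100, 95, 94, 93, 93],
}
print(allocate(curves, 4))
```

A greedy pass like this is only guaranteed optimal when the miss curves are convex; handling the general case efficiently, and doing so online from hardware monitors, is the harder problem the MIT work addresses.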


DOE: Federal Spending Necessary for Exascale Supercomputer Development

FierceGovernmentIT (09/15/13) David Perera 

Federal agencies must spend money on the development of an exascale supercomputer if a viable machine is to be built by 2022, according to a report from the U.S. Energy Department to Congress. The report notes that an exaflop-class computer would consume more than 1 gigawatt of power, “roughly half the output of Hoover Dam,” if the approach to exascale supercomputing follows the one taken for petascale machines. The Energy Department achieved petascale-level performance by networking clusters of commodity processors and memory. Still, at the normal pace of technological improvement and without the benefit of commercially risky investments, an exascale machine would require more than 200 megawatts of power at an estimated cost of $200 million to $300 million annually. Developing an exascale system is expected to cost $1 billion to $1.4 billion. Supercomputers currently need 10 times more energy to bring two numbers from memory into the processor than to execute the subsequent operation itself, and the Energy Department estimates that by 2020, that ratio could reach 50 times more. The department would also need to address memory and data movement issues, as well as determine how to cope with runtime errors in an exascale machine.
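The data-movement ratios above show why power balloons at exascale. A rough sanity check, assuming a round 10 picojoules per floating-point operation (our illustrative number, not the report’s) and the worst case in which every operation pulls its operands from memory:

```python
# Back-of-the-envelope power estimate for an exascale machine.
# Only the 10x and 50x data-movement ratios come from the article;
# the 10 pJ/flop figure is an assumed round number.

def exascale_power_mw(ratio, energy_per_flop=10e-12, rate=1e18):
    """Power in megawatts at `rate` ops/s if each operation also pays
    `ratio` times its own energy to move operands from memory."""
    movement = ratio * energy_per_flop
    return rate * (energy_per_flop + movement) / 1e6

for ratio in (10, 50):
    print(f"{ratio}x ratio -> {exascale_power_mw(ratio):,.0f} MW "
          f"if every operation touches memory")
```

Even this crude bound (110 MW at today’s 10x ratio, 510 MW at the projected 50x) lands in the same range as the report’s 200-megawatt figure, illustrating why data movement, not arithmetic, dominates the exascale power budget.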


The Masters of Uncertainty

HPC Wire (09/13/13) Nicole Hemsoth

The California Institute of Technology’s Houman Owhadi and Clint Scovel recently spoke with HPC Wire about Bayesian methods and the role of uncertainty in supercomputing. Bayesian inference lets researchers estimate outcomes of interest by combining a model of their uncertainty with prior data. Uncertainty quantification is especially useful with high-performance computing, as more powerful machines enable researchers to compute the Bayesian posterior, the conditional probability assigned to an uncertain quantity after known evidence is taken into account, which previously could not be calculated. Fields such as risk analysis and climate modeling can particularly benefit from uncertainty quantification. For example, Boeing must certify that new airplane models have a probability of catastrophic failure of less than 10⁻⁹ per hour of flight. To perform safety assessments, Boeing cannot fly a billion airplanes to determine how many crash, Owhadi says, so the company takes limited data and processes it in an optimal way to predict risk. Owhadi notes that he and other researchers are developing an algorithmic framework that enables optimal processing of information. “We’re saying we want to formulate this problem that we’re trying to solve, and we’re going to use our computing capability, in particular high-performance computing, to solve these problems,” Scovel says.
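As a deliberately simple instance of computing a Bayesian posterior from limited data, consider a conjugate Beta-Binomial update; the model and the numbers are our own illustration, not Owhadi and Scovel’s framework:

```python
# Minimal Bayesian update, assuming a Beta-Binomial model (our choice of
# illustration). A Beta(a, b) prior on a per-flight failure probability is
# updated with observed flights; conjugacy keeps the posterior in closed form.

def posterior(a, b, flights, failures):
    """Beta prior + binomial likelihood -> Beta posterior parameters."""
    return a + failures, b + flights - failures

# Weak Beta(1, 1) prior, then 100,000 failure-free flights.
a, b = posterior(1, 1, 100_000, 0)
mean = a / (a + b)
print(f"posterior mean failure probability: {mean:.2e}")
```

Note what the update cannot do: even 100,000 failure-free flights only pull the posterior mean down to roughly 10⁻⁵, nowhere near a 10⁻⁹ certification target. That gap is exactly why optimal processing of limited information, rather than brute-force data collection, is the interesting problem.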