The U.S. Is Busy Building Supercomputers, but Needs Someone to Run Them

Daily Beast (12/28/11) Dan Lyons

The United States is rapidly adding to its collection of supercomputers, with new high-performance computing (HPC) systems under development at various labs. However, there are not enough people who know how to make use of all the new supercomputing power, say HPC industry experts. This talent shortage is the “missing middle,” meaning there are enough specialists to run the handful of world-beating supercomputers that cost a few hundred million dollars, and plenty of people who can manage ordinary personal computers and servers, but there are not nearly enough people who know how to use the small and midsized HPC machines that cost between $1 million and $10 million. “We need people who can build the applications and algorithms needed to effectively use the equipment,” says the University of Tennessee’s Jack Dongarra. The Virtual School for Computational Science and Engineering is a program that offers online courses for graduate students who want to learn how to use HPC systems. This year, 1,000 students participated, up from 40 in 2008 when the program began, according to National Center for Supercomputing Applications director Thom Dunning.

Congress Funds Exascale Computing

InformationWeek (12/22/11) J. Nicholas Hoover

The U.S. Department of Energy recently won full Congressional funding to support the pursuit of exascale computing. House and Senate conferees agreed to provide $442 million for advanced scientific computing research, of which $126 million will go toward exascale computing. Researchers moved from terascale to petascale computing in about 12 years, and the jump to exascale will likely take about as long. According to an Oak Ridge National Laboratory report, exascale computing could enable researchers to more deeply understand nanotechnology, model climate processes at very high resolutions, and simulate nuclear interactions to a level not possible today. However, to secure future funding the Energy Department will need to provide Congress with an exascale computing plan. By Feb. 10, 2012, the department must deliver a strategy that includes target dates, interim milestones, minimum requirements for an exascale system, multi-year budget estimates, breakdowns of each office and lab involved in exascale research, and a more granular budget request for 2013.
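
For scale: tera-, peta-, and exa- are standard SI prefixes, so each milestone named above is a factor of 1,000 beyond the last, roughly 10^12, 10^15, and 10^18 floating-point operations per second. The short Python sketch below does nothing more than make those ratios explicit; only the standard prefix definitions are used, and no figures from the article are assumed.

```python
# Orders of magnitude behind the terascale -> petascale -> exascale progression.
# These are the standard prefix definitions, not performance figures for any real machine.
SCALES = {
    "terascale": 1e12,  # 10^12 floating-point operations per second (FLOPS)
    "petascale": 1e15,  # 10^15 FLOPS
    "exascale":  1e18,  # 10^18 FLOPS
}

for name, flops in SCALES.items():
    print(f"{name:>9}: {flops:.0e} FLOPS")

# Each step is 1,000x the previous one, which is why the petascale-to-exascale jump
# is expected to take roughly as long as the ~12-year terascale-to-petascale transition.
print(SCALES["petascale"] / SCALES["terascale"])  # 1000.0
print(SCALES["exascale"] / SCALES["petascale"])   # 1000.0
```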

Russia Building 10-Petaflop Supercomputer

Computerworld (12/23/11) Patrick Thibodeau

Moscow-based T-Platforms is developing a 10-petaflop supercomputer for M.V. Lomonosov Moscow State University. The system may indicate Russia’s intent to become a major participant in the race to build an exascale-class supercomputer. Russia “is committed to having exascale computation capabilities by 2018-2020 and is prepared to make the investments to make that happen,” says Exascale Report author Mike Bernhardt. The newest system at Lomonosov will be water-cooled, use Intel and NVIDIA chips, and should be operational by the end of 2013. “You can expect to see Russia holding its own in the exascale race with little or no dependence on foreign manufacturers,” Bernhardt says. Russia’s efforts to build an exascale-class system mirror those of other European nations, which also want to be less dependent on U.S.-made components. “At this point, there is unity in believing any company, on a global scale, would be foolish to state that they know the exact technology or components they will use to build an exascale machine,” Bernhardt says. “Systems will be hybrid, heterogeneous, and unique.”

Multi-Purpose Photonic Chip Paves the Way to Programmable Quantum Processors

University of Bristol News (12/11/11)

University of Bristol researchers have developed an optical chip that generates, manipulates, and measures two quantum phenomena, entanglement and mixture, which are essential for building quantum computers. The researchers showed that entanglement can be generated, manipulated, and measured on a silicon chip. The chip also has been able to measure mixture, which can be used to characterize quantum circuits. “To build a quantum computer, we not only need to be able to control complex phenomena, such as entanglement and mixture, but we need to be able to do this on a chip, so that we can scalably and practically duplicate many such miniature circuits, in much the same way as the modern computers we have today,” says Bristol professor Jeremy O’Brien. “Our device enables this and we believe it is a major step forward towards optical quantum computing.” The chip consists of a network of tiny channels that guide, manipulate, and interact with single photons. “It’s exciting because we can perform many different experiments in a very straightforward way, using a single reconfigurable chip,” says Bristol’s Peter Shadbolt. The researchers are now scaling up the complexity of the device for use as a building block for quantum computers.
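
For readers unfamiliar with the two phenomena the chip manipulates, the NumPy sketch below contrasts a maximally entangled two-qubit (Bell) state with a maximally mixed state using density matrices and the purity Tr(ρ²). It is a textbook illustration of entanglement and mixture only, not a model of the Bristol device or its photonic circuitry.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2): a pure, maximally entangled two-qubit state.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(phi_plus, phi_plus.conj())   # density matrix of the pure state

# Maximally mixed two-qubit state: complete mixture, no quantum coherence at all.
rho_mixed = np.eye(4, dtype=complex) / 4

def purity(rho):
    """Tr(rho^2): 1 for a pure state, 1/d for the maximally mixed state in dimension d."""
    return float(np.real(np.trace(rho @ rho)))

# Tracing out the second qubit of the Bell state leaves a maximally mixed single qubit,
# the hallmark of entanglement: the whole is pure while each part alone looks random.
rho_one_qubit = rho_bell.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(purity(rho_bell))       # 1.0  (pure, entangled whole)
print(purity(rho_mixed))      # 0.25 (maximally mixed two-qubit state)
print(purity(rho_one_qubit))  # 0.5  (each half of the Bell pair is maximally mixed)
```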

In Race for Fastest Supercomputer, China Outpaces U.S.

Newsweek (11/28/11) Dan Lyons

China is outpacing the United States in supercomputer development. In November 2010 China debuted the Tianhe-1A, a supercomputer with five times the processing power of the biggest computer at the U.S.’s Lawrence Livermore National Laboratory. The country that develops a superior high-performance computing system gains massive economic and military advantages, and a loss to a competing nation in the field of supercomputing could jeopardize the U.S. edge in many scientific, security, and military areas. To close the gap, Livermore scientists are developing Sequoia, a supercomputer that will combine 1.6 million processor cores and trump the Tianhe-1A’s computing power by a factor of eight. Although the United States has 263 of the world’s 500 fastest supercomputers, China has built 74 in just 10 years. Adding to the pressure, Chinese organizations are devising supercomputer components that will allow China to end its reliance on U.S. vendors for parts. Livermore scientists also project the emergence of an exascale machine that taps the computing muscle of about 1 billion processor cores and delivers six times the power of Sequoia within a decade. Crucial to this milestone will be a new kind of microprocessor that is far more energy-efficient than today’s chips.
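
To put these comparisons in absolute terms: Tianhe-1A’s published Linpack result was about 2.6 petaflops, so an eight-fold improvement puts Sequoia near 20 petaflops, while exascale is by definition 10^18 operations per second, or 1,000 petaflops. The sketch below just spells out that arithmetic; the Tianhe-1A benchmark figure and the exascale definition are known facts, and the Sequoia number is simply the article’s multiplier applied to them, not a measured result.

```python
# Back-of-the-envelope arithmetic for the comparisons above.
# Tianhe-1A's ~2.6-petaflop Linpack result (Nov 2010 TOP500) and the 10^18 FLOPS
# definition of exascale are known figures; the Sequoia estimate merely applies the
# "factor of eight" quoted in the article and is not a measured number.
PETAFLOP = 1e15  # 10^15 floating-point operations per second

tianhe_1a = 2.6 * PETAFLOP        # measured Linpack performance
sequoia_implied = 8 * tianhe_1a   # article: eight times Tianhe-1A
exascale = 1e18                   # definition: 10^18 FLOPS = 1,000 petaflops

print(f"Tianhe-1A:          {tianhe_1a / PETAFLOP:6.1f} petaflops")
print(f"Sequoia (implied):  {sequoia_implied / PETAFLOP:6.1f} petaflops")
print(f"Exascale threshold: {exascale / PETAFLOP:6.1f} petaflops")
```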

Supercomputers Take a Cue From Microwave Ovens

Lawrence Berkeley National Laboratory (12/01/11) Linda Vu

To develop more efficient supercomputers, Lawrence Berkeley National Laboratory (LBNL) researchers are studying consumer electronics such as microwave ovens, cameras, and cell phones, in which chips, batteries, and software are optimized to the device’s application. The co-design approach makes scientists and computer engineers a part of the supercomputer design process, so that systems are purpose-built for a scientific application from the bottom up. “Co-design allows us to design computers to answer specific questions, rather than limit our questions by available machines,” says LBNL’s Michael Wehner. The researchers recently published a paper arguing that the scientific supercomputing community should follow consumer electronics by starting with an application and using that as a metric for successful hardware and software design. “Because the ultimate goal of the embedded market is to maximize battery life, these technologies have always been driven by maximizing performance-per-watt and minimizing cost,” says LBNL’s John Shalf. He notes that co-designed supercomputers will be less general purpose than typical supercomputers, but he says that much of what is included in modern supercomputers is of little use to scientific computing.
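
The embedded-market metric Shalf points to, performance per watt, is simply delivered operations per second divided by power draw. The sketch below illustrates the comparison with two hypothetical node configurations; the names and every number are invented for illustration and describe no actual Berkeley Lab design.

```python
# Illustration of the performance-per-watt metric behind the co-design argument.
# Both configurations and all numbers here are hypothetical, chosen only to show how a
# purpose-built (co-designed) node can win on FLOPS per watt despite a lower peak rating.

def flops_per_watt(flops: float, watts: float) -> float:
    """Performance per watt: floating-point operations per second divided by power draw."""
    return flops / watts

nodes = {
    # name: (peak FLOPS, power draw in watts); hypothetical values
    "general-purpose node": (2.0e12, 400.0),  # 2 teraflops at 400 W
    "co-designed node":     (1.5e12, 150.0),  # 1.5 teraflops at 150 W
}

for name, (flops, watts) in nodes.items():
    print(f"{name:>20}: {flops_per_watt(flops, watts) / 1e9:5.1f} gigaflops per watt")

# The co-designed node delivers about 10 GF/W versus 5 GF/W: twice the science per joule,
# which is the trade-off the co-design approach accepts in exchange for less generality.
```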
