February 2011

DOE Research Group Makes Case for Exascale.

HPC Wire (02/21/11) Tiffany Trader

The U.S. Department of Energy’s (DOE’s) Office of Advanced Scientific Computing Research recently published an article stating that although exascale computing has the potential to lead to scientific breakthroughs, the technology will not be easy or inexpensive to develop. Exascale computing could lead to precise long-range weather forecasting, new alternative fuels, and advances in disease research, according to the DOE paper. However, creating an exascale system faces many obstacles, says Argonne National Laboratory’s Rick Stevens. An exascale system will require billions of cores, so there needs to be an effective programming model that can take advantage of all of them in what will likely be an extremely parallel system. An exascale system also would require more than a gigawatt of electricity, enough to demand its own dedicated power plant. Stevens says researchers are looking to graphics processing units as a way to minimize energy requirements. He also notes that computer reliability issues will be magnified a thousandfold in an exascale system. All of these issues will require government funding to solve, so “complex and coordinated [research and development] efforts [are required] to bring down the cost of memory, networking, disks, and all of the other essential components of an exascale system,” Stevens says.


Toward Computers That Fit on a Pen Tip: New Technologies Usher in the Millimeter-Scale Computing Era.

University of Michigan News Service (02/22/11) Nicole Casal Moore

University of Michigan researchers, led by professors Dennis Sylvester, David Blaauw, and David Wentzloff, recently presented papers at the International Solid-State Circuits Conference describing two millimeter-scale devices: a prototype implantable eye-pressure monitor for glaucoma patients, and a compact radio that needs no tuning to find a signal and could be used to track pollution, monitor structural integrity, or perform surveillance. The research uses millimeter-scale technologies to create devices for ubiquitous computing environments. The glaucoma eye-pressure monitor is slightly larger than one cubic millimeter and contains an ultra-low-power microprocessor, a pressure sensor, memory, a thin-film battery, a solar cell, and a wireless radio transmitter that sends data to an external reading device. “This is the first true millimeter-scale complete computing system,” Sylvester says. Wentzloff and doctoral student Kuo-Ken Huang have developed a tiny radio with an on-chip antenna that can keep its own time and serve as its own reference, which enables the system to precisely communicate with other devices. “By designing a circuit to monitor the signal on the antenna and measure how close it is to the antenna’s natural resonance, we can lock the transmitted signal to the antenna’s resonant frequency,” Wentzloff says.


Obama Sets $126M for Next-Gen Supercomputing.

Computerworld (02/17/11) Patrick Thibodeau

President Obama’s 2012 budget proposal calls for $126 million for the development of next-generation exascale supercomputers, with about $91 million going to the U.S. Department of Energy’s (DOE’s) Office of Science and $36 million going to the National Nuclear Security Administration. The funding is part of an overall DOE advanced computing request of $465 million for 2012, a 21 percent increase over the 2010 budget. Exascale systems will be 1,000 times more powerful than the fastest current supercomputers, and the White House’s funding for such systems reflects its aim to set a predictable development path for high-performance computing. An exascale system is expected by 2020, but that depends on the development of software that can use what may amount to 100 million cores. Meanwhile, DOE is building 10-petaflop systems. Modeling and simulation are the chief supercomputing applications, and with an increase in system size comes a gain in resolution. Faster networking and other technological milestones that must be achieved to build exascale systems may eventually migrate to business-class servers.
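The figures above imply a concrete per-core target. Taking the article’s numbers at face value (an exaflop is 10^18 floating-point operations per second; software may need to drive some 100 million cores), a back-of-the-envelope sketch in Python — my arithmetic, not the article’s:

```python
# Rough arithmetic on the figures quoted in the article.
# An exaflop is 10**18 floating-point operations per second; the
# 100-million-core count is the article's estimate, not a spec.

EXAFLOP = 10**18
PETAFLOP = 10**15

cores = 100_000_000  # article's rough estimate of core count

# Speedup over the 10-petaflop systems DOE was building at the time
speedup = EXAFLOP / (10 * PETAFLOP)

# Sustained throughput each core would need to deliver
per_core_flops = EXAFLOP / cores

print(f"exaflop vs. 10 PF: {speedup:.0f}x")                    # 100x
print(f"per-core target:   {per_core_flops / 1e9:.0f} GFLOPS")  # 10 GFLOPS
```

Ten sustained gigaflops per core was roughly the peak of a contemporary CPU core, which is why the software challenge — keeping every core usefully busy — loomed as large as the hardware one.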


Energy Aims to Retake Supercomputing Lead From China.

Government Computer News (02/11/11) Henry Kenyon

The U.S. Department of Energy’s (DOE’s) Argonne National Laboratory has commissioned the development of a supercomputer that will be capable of 10 petaflops. IBM will build the machine, which will be based on a version of the latest Blue Gene supercomputer architecture. The supercomputer will be operational in 2012, and its performance will be vastly superior to today’s most powerful supercomputer, China’s Tianhe-1A system, which has a peak performance of 2.67 petaflops. The system also will be the most energy-efficient computer in the world due to a combination of new microchip designs and very efficient water cooling. The supercomputer, which will be housed at the Argonne Leadership Computing Facility, will be used to conduct a variety of modeling and simulation tests that current machines are unable to perform. By 2012, IBM also will be responsible for two other systems operating at 10 petaflops or higher: the 20-petaflop Sequoia for DOE’s Lawrence Livermore National Laboratory and the 10-petaflop Blue Waters system for the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. The new class of supercomputers is expected to pave the way for the emergence of exascale computers, machines that are 1,000 times faster than petascale systems, by the end of the decade.


UF Leads World in Reconfigurable Supercomputing.

University of Florida News (02/15/11) Ron Word

University of Florida researchers say the Novo-G is the world’s fastest reconfigurable supercomputer and that it is capable of executing some key science applications faster than the Chinese Tianhe-1A system, which was rated the world’s most powerful supercomputer in the Top500 list in November. Florida professor Alan George notes that the Top500 list scores systems on their performance on a few basic linear-algebra routines using 64-bit floating-point arithmetic. He says many important applications do not fit that mold, and software for most computers must conform to fixed-logic hardware structures that can slow down computing speed and boost energy consumption. However, reconfigurable systems feature architecture that can adjust to match each application’s unique requirements, leading to higher speed and greater energy efficiency through adaptive hardware customization. Novo-G employs 192 reconfigurable processors and “can rival the speed of the world’s largest supercomputers at a tiny fraction of their cost, size, power, and cooling,” according to the researchers, who say it is particularly well suited for applications in genome research, cancer diagnosis, plant science, and large data set analysis.


New Supercomputers Boost Imaging Grunt.

ZDNet Australia (02/08/11) Colin Ho

IBM recently announced that the Australian Synchrotron and Monash University have purchased two supercomputers, to be used in collaboration with the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and the Victorian Government, to create a near real-time atomic-level imaging and visualization facility. The supercomputers will enable researchers to study objects at an atomic level, create three-dimensional images, and process large amounts of data collected by the program. The joint Monash-CSIRO program, called the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) facility, will study a variety of topics, ranging from biology to geology. “The unique nature of these facilities is the focus on imaging and visualization,” says Monash University’s Wojtek Goscinski. The merger of atomic-level detail and semi-real-time analysis makes MASSIVE an important step forward for scientific research, says Australian Synchrotron director Andrew Peele.


Supercomputer to Be Used for Agricultural Research.

Daily News & Analysis (India) (02/09/11) Arun Jayan

The Indian Council of Agricultural Research is building a national agricultural bioinformatics grid with assistance from the Center for Development of Advanced Computing (C-DAC). The grid is designed to improve agricultural productivity and help address issues such as food security. “Now scientists have to wait for a production cycle to get over to analyze various issues like quality of seed, produce, and weather pattern,” says C-DAC’s Goldi Misra. However, high-performance computers could be used for such analysis instead, Misra says. The first phase of the project will focus on connecting government agencies with high-speed networks. Agricultural universities and research centers also could be added to the grid to enable researchers to perform complex analytical processes. The grid will provide computational support for high-quality research in agriculture and biotechnology, says Indian Agricultural Statistics Research Institute researcher Anil Rai. “This will lead to the development of superior varieties [of] seeds, the right fertilizers, and will help various other processes to enhance agricultural productivity on sustainable basis,” he says.


Exabytes: Documenting the ‘Digital Age’ and Huge Growth in Computing Capacity.

Washington Post (02/10/11) Brian Vastag

The global capacity to store digital information totaled 276 exabytes in 2007, according to a University of Southern California (USC) study. However, that data is not distributed equally, with a distinct line dividing rich and poor countries in the digital world, says USC’s Martin Hilbert. In 2007, people in developed countries had access to about 16 times more bandwidth than those in poor countries. “If we want to understand the vast social changes underway in the world, we have to understand how much information people are handling,” Hilbert says. The study found that 2002 marked the first year that worldwide digital storage capacity was greater than total analog capacity. “You could say the digital age started in 2002,” Hilbert says. “It continued tremendously from there.” Digital media accounted for 25 percent of all information stored in the world in 2000, but just seven years later 94 percent of all the information storage capacity on Earth was digital, with the remaining 6 percent consisting of books, magazines, video tapes, and other non-digital media. The study found that digital storage capacity grew 23 percent a year from 1986 to 2007, while computing power increased 58 percent a year over the same period. Hilbert notes that people now generate 276 exabytes of digital data every eight weeks, but much of that information is not stored long term.
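Those annual growth rates compound dramatically over the study’s 1986-2007 window. A quick Python sketch — the multipliers below are my arithmetic applied to the article’s percentages, not figures from the study itself:

```python
# Compound the annual growth rates reported in the article over the
# study's 1986-2007 window. The resulting multipliers are derived
# here, not quoted from the study.

years = 2007 - 1986                 # 21 years
storage_growth = 1.23 ** years      # storage grew 23% per year
compute_growth = 1.58 ** years      # computing power grew 58% per year

print(f"storage capacity grew  ~{storage_growth:,.0f}x")   # roughly 77x
print(f"computing power grew ~{compute_growth:,.0f}x")     # roughly 15,000x
```

The gap between the two curves is the article’s underlying point: raw computation outran storage by two orders of magnitude over the period, which is consistent with much of the generated data never being kept long term.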


100-Petaflop Supercomputers Predicted By 2017.

InformationWeek (02/08/11) Antone Gonsalves

University of Tennessee’s Jack Dongarra predicts that there will be 100-petaflop supercomputer systems by 2017, and exascale systems, which could be 1,000 times faster than petascale systems, by 2020. “This will happen if the funding is in place for those machines,” says Dongarra, who helped design the benchmark used to generate the Top500 supercomputing list. IBM plans to build 10-petaflop computer systems at both Argonne National Laboratory and the University of Illinois at Urbana-Champaign in the next year, as well as a 20-petaflop system at Lawrence Livermore National Laboratory in 2012. Cray also is ready to launch a 10-petaflop system. “Everything is moving along according to Moore’s law, so things are doubling every 18 months, roughly,” Dongarra says. IBM’s new systems will be Blue Gene supercomputers, called Blue Gene/Q, and will run on specially designed systems-on-a-chip. The Argonne system will be used for industry, academic, and government work, while the Livermore system will be used for advanced uncertainty quantification simulations and weapons development.
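Dongarra’s 18-month doubling rule lines up with his dates: the jump from the 10-petaflop machines of 2012 to 100 petaflops is 10x, and 10x at one doubling per 18 months takes about five years. A small Python sketch of that arithmetic (the starting point of 10 petaflops in 2012 is taken from the article; the rest is my calculation):

```python
import math

# Years needed to multiply performance by `factor` when it doubles
# every `doubling_period` years (Dongarra's rough Moore's-law rule).
def years_to_scale(factor, doubling_period=1.5):
    return doubling_period * math.log2(factor)

# 10 PF (2012) -> 100 PF is a 10x jump
t_100pf = years_to_scale(10)
print(f"10x takes ~{t_100pf:.1f} years -> ~{2012 + t_100pf:.0f}")   # ~2017

# 10 PF -> 1 exaflop (1,000 PF) is a 100x jump
t_exa = years_to_scale(100)
print(f"100x takes ~{t_exa:.1f} years -> ~{2012 + t_exa:.0f}")      # ~2022
```

Pure 18-month doubling from 2012 lands closer to 2022 than 2020, so the exascale-by-2020 prediction implicitly assumes slightly faster-than-Moore progress — hence Dongarra’s caveat about funding.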


Next-Generation Supercomputers.

IEEE Spectrum (02/11) Peter Kogge

Supercomputing performance upgrades are unlikely to be as spectacular in the next decade as they were in the last two, writes University of Notre Dame professor Peter Kogge. The U.S. Defense Advanced Research Projects Agency hoped that an exaflops-class supercomputer would be practically realizable by 2015, but a panel Kogge organized to debate this question concluded that such a breakthrough requires a complete rethinking of supercomputer construction in order to dramatically reduce power consumption. An additional challenge is keeping a massive number of microprocessor cores busy at the same time. Kogge says that “unless memory technologies emerge that have greater densities at the same or lower power levels than we assumed, any exaflops-capable supercomputer that we sketch out now will be memory starved.” Another daunting challenge is providing long-term storage with sufficient speed and density to retain checkpoint files. Meanwhile, reducing the operating voltage to save power would make the transistors susceptible to new and more frequent faults. Nevertheless, Kogge thinks exaflops systems are attainable, but creating such a supercomputer “will demand a coordinated cross-disciplinary effort carried out over a decade or more, during which time device engineers and computer designers will have to work together to find the right combination of processing circuitry, memory structures, and communications conduits.”