May 2011

A New Generation of Smarter, Not Faster, Supercomputers.

 HPC Wire (05/19/11) Nicole Hemsoth

Exascale-level supercomputing offers much promise, but many barriers remain to taking advantage of the technology. Argonne National Laboratory’s Rick Stevens says that although the size of the programming challenges is intimidating, power also is a major concern: a billion-processor computer, relying on current technologies, would require more than 1 gigawatt of electricity. Another barrier to exascale computing systems is general reliability, in that it will be difficult to keep the systems running for more than a few minutes at a time, Stevens says. Since simply supplying more power will become increasingly difficult, the role of hyper-smart cluster management software will become more critical, says Altair Engineering’s Bill Nitzberg. Rather than focusing on making the next generation of supercomputers run on less power, the focus should be on making better use of the power available, Nitzberg says. “When I think of making the future generation of computers smarter, the computer scientist in me thinks about optimization and the environmental side of me thinks about power,” he says.
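Stevens' gigawatt figure follows from simple arithmetic; the sketch below is a back-of-the-envelope check, where the roughly 1-watt-per-processor draw is an illustrative assumption (it is not stated in the article), chosen to show how a billion processors reach the gigawatt scale.

```python
# Back-of-the-envelope check of the exascale power problem described above.
# The per-processor draw is an assumed, illustrative figure.
PROCESSORS = 1_000_000_000       # a billion-processor machine
WATTS_PER_PROCESSOR = 1.0        # assumed draw per processor, in watts

total_watts = PROCESSORS * WATTS_PER_PROCESSOR
total_gigawatts = total_watts / 1e9
print(f"Estimated draw: {total_gigawatts:.1f} GW")  # ~1 GW, Stevens' order of magnitude
```

Even at a modest 1 W per processor, the machine lands at the output of a large power plant, which is why the emphasis shifts to using the available power better rather than adding more.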


Forecast for Processing and Storing Ever-Expanding Science Data: Cloudy.

Scientific American (05/04/11) Larry Greenemeier

Scientists who previously relied on time-shared access to high-performance computers to analyze large datasets are now turning to cloud-based services. The U.S. National Science Foundation and Microsoft recently awarded about $4.5 million in funding to 13 research projects dedicated to studying cloud services for scientific uses. The projects include the J. Craig Venter Institute’s effort to computationally model protein-to-protein interactions, the University of North Carolina at Charlotte’s research on gene regulatory systems in single-celled organisms, and a project co-led by researchers at the universities of South Carolina and Virginia to study the management of large watershed systems. Likewise, the European Space Agency (ESA) uses Amazon Web Services to provide Earth-related data to scientists, governmental agencies, and other organizations. Amazon says that during peak usage times, the service enables ESA to simultaneously provide 30 terabytes of images and data to more than 50,000 users worldwide. “The perfect scenario for using the cloud in biotech is to outsource small amounts of data into the cloud that require a massively parallel computing system for processing and then have the results of that processing returned to you,” says Distributed Bio’s Giles Day.
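The scatter/process/gather pattern Day describes, shipping small slices of data to a parallel backend and keeping only the results, can be sketched in miniature. The example below uses Python's local `multiprocessing` pool as a stand-in for a cloud service, and the `gc_content` task (GC fraction of a DNA fragment) is purely illustrative, not from the article.

```python
from multiprocessing import Pool

def gc_content(seq):
    """Toy per-fragment task: fraction of G/C bases in a DNA fragment."""
    return (seq.count("G") + seq.count("C")) / len(seq)

if __name__ == "__main__":
    # "Outsource" many small inputs to a parallel pool; gather only the results.
    fragments = ["ACGT", "GGCC", "ATAT", "CGCG"]
    with Pool() as pool:
        results = pool.map(gc_content, fragments)
    print(results)  # [0.5, 1.0, 0.0, 1.0]
```

In the biotech scenario Day sketches, the pool would be a fleet of cloud instances and only the small per-fragment results would travel back, keeping data transfer costs low.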


Julich Supercomputing Center Boots Up GPU Cluster.

HPC Wire (05/05/11)

Germany’s Julich Supercomputing Center (JSC) has gone live with its new Julich Dedicated GPU Environment (JUDGE) cluster, which will be used for ensemble simulations in climate and atmospheric research, as well as for data analysis and simulations on big data sequences in biology and brain research. JUDGE will enable JSC to optimize the applications for the highest performance. The hybrid system combines general-purpose graphics processing units (GPGPUs) with conventional processors. GPGPUs can help boost performance without significantly increasing energy consumption, which is important because improvements in energy efficiency will allow further supercomputing advancements in the future. NVIDIA’s Stefan Kraemer says the JUDGE cluster is a good example of how computers need to continue to be developed in the future, following the target of exascale computing. “This is valid not only in regard to performance, but also to energy consumption and energy efficiency,” he says. “Pilot projects like JUDGE play a key role in this process and are a key step on the way to hybrid systems.”


World’s Servers Process 9.57ZB of Data a Year.

Computerworld (05/09/11) Lucas Mearian

University of California, San Diego (UCSD) researchers estimate that the world’s 27 million business servers processed 9.57 zettabytes of information in 2008. “Most of this information is incredibly transient: It is created, used, and discarded in a few seconds without ever being seen by a person,” says UCSD professor Roger Bohn. The study included estimates of the amount of data processed as input and delivered by servers as output. The researchers used cost and performance benchmarks for online transaction processing, Web services, and virtual machine processing tasks to reach their estimates. “The exploding growth in stored collections of numbers, images, and other data is well known, but mere data becomes more important when it is actively processed by servers as representing meaningful information delivered for an ever-increasing number of uses,” says UCSD researcher James Short. The study found that entry-level servers processed about 65 percent of the world’s data in 2008, while midrange servers processed about 30 percent, and high-end servers processed about 5 percent.
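The study's headline figures imply a striking per-server workload; the arithmetic below simply divides the reported 9.57 zettabytes across the estimated 27 million servers (using decimal units, an assumption on my part, since the article does not specify binary vs. decimal prefixes).

```python
# Rough per-server workload implied by the UCSD estimate.
ZETTABYTE = 10**21                  # bytes (decimal prefix assumed)
total_bytes = 9.57 * ZETTABYTE      # data processed worldwide in 2008
servers = 27_000_000                # estimated business servers

per_server_tb = total_bytes / servers / 10**12
print(f"~{per_server_tb:.0f} TB processed per server in 2008")  # ~354 TB
```

That works out to hundreds of terabytes flowing through a typical server in a year, consistent with Bohn's point that most of this information is transient and never stored.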


Chinese Chip Wins Energy-Efficiency Crown.

IEEE Spectrum (05/11) Joseph Calamia

The next Chinese supercomputer will use the Godson-3B processor, which can perform 128 billion floating-point operations per second while consuming only 40 watts, which tops the performance per watt of competing systems by at least 100 percent. The processor relies on a modified mesh network that features additional direct core connections to move data efficiently. The eight-core chip consists of two four-core clusters where each core sits on a corner of a square of interconnects. Each corner also is linked to its opposite through two diagonal interconnects that form an X through the square’s center. Both four-core units are connected via a crossbar interconnect, and the chip’s developers expect the scalability of their modified mesh to be an advantage as designers place more cores on future chips. Boosting the number of cores in a mesh puts a strain on the system, but Tilera’s Matthew Mattina says a mesh interconnect offers bandwidth scaling superior to that of the ring configuration typical of most microprocessors. Godson architect Yunji Chen says a mesh design also supports more favorable latency.
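The efficiency claim reduces to a single ratio of the two figures given in the article:

```python
# Performance per watt of the Godson-3B, from the figures above.
gflops = 128.0   # billion floating-point operations per second
watts = 40.0     # power consumption

gflops_per_watt = gflops / watts
print(f"{gflops_per_watt:.1f} GFLOPS per watt")  # 3.2 GFLOPS/W
```

At 3.2 GFLOPS per watt, doubling a competitor's efficiency (the "at least 100 percent" claim) would put rival chips at roughly 1.6 GFLOPS per watt or below.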


Panel: Wall Ahead in Multicore Programming.

EE Times (05/03/11) Rick Merritt

A panel of experts at the recent Multicore Expo said that programmers will need new tools and methods to reap the benefits of increasingly parallel chips. “The wall is there,” says Nokia Siemens Networks’ Alex Bachmutsky. “We probably won’t have any more products without multicore processors [but] we see a lot of problems in parallel programming.” Rewriting existing programs is expensive, and although some algorithms can be changed, Bachmutsky notes that replacing all the cell towers and phones is not doable. An audience member also warned that developers can no longer expect next-generation processors to boost the performance of their apps. LSI engineer Rob Munoz said that parallel software is difficult to develop, maintain, and evolve. Managing multithreaded applications whose threads may migrate between cores is another problem, points out consultant Mike Anderson. He says the industry needs to understand what it means to be parallel before even thinking about a new programming language.
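The maintenance headaches the panel describes show up even in tiny programs. The sketch below (an illustrative example of my own, not from the panel) uses an explicit lock to keep a shared counter correct when several threads update it; remove the lock and the read-modify-write steps can interleave, which is exactly the kind of subtle hazard that makes parallel software hard to develop and evolve.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    """Add to the shared counter; the lock prevents interleaved updates."""
    global counter
    for _ in range(increments):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- deterministic only because of the lock
```

Each added lock is also a serialization point, illustrating Munoz's point: correctness, performance, and maintainability pull against one another in parallel code.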


Graphene Optical Modulators Could Lead to Ultrafast Communications.

UC Berkeley News Center (05/08/11) Sarah Yang

University of California, Berkeley researchers have developed graphene-based technology that could revolutionize digital communications. The researchers, led by Berkeley professors Xiang Zhang and Feng Wang, built a tiny optical device using graphene that can switch light on and off, a fundamental function of a network modulator. The researchers say that graphene-based modulators could enable consumers to stream full-length high-definition movies onto a smartphone in just a few seconds. “This new technology will significantly enhance our capabilities in ultrafast optical communication and computing,” Zhang says. The researchers achieved a modulation speed of one gigahertz, but theorized that speeds as high as 500 gigahertz on a single modulator are possible. Graphene can absorb a broad spectrum of light, allowing the material to carry more data than conventional modulators, which operate only over a bandwidth of up to 10 nanometers. “What we see here and going forward with graphene-based modulators are tremendous improvements, not only in consumer electronics, but in any field that is now limited by data transmission speeds, including bioinformatics and weather forecasting,” Zhang says.