March 2013

Engineers Develop Techniques to Improve Efficiency of Cloud Computing Infrastructure

UCSD News (CA) (03/06/13) Ioana Patringenaru

Researchers at the University of California, San Diego (UCSD) and Google say they have developed a method for running cloud computing's infrastructure up to 20 percent more efficiently.  The researchers first gathered live data from Google's computers, and then conducted experiments with the data in a controlled environment on an isolated server.  "If we can bridge the current gap between hardware designs and the software stack and access this huge potential, it could improve the efficiency of Web service companies and significantly reduce the energy footprint of these massive-scale data centers," says UCSD's Lingjia Tang.  After analyzing the data, the researchers found that the application ran significantly better when it accessed data located nearby on the server, rather than in remote locations.  Although data location is important, the researchers found that competition for shared resources within a server also plays a role.  Based on these results, the researchers developed a metric, called the NUMA score, which determines how well random-access memory is allocated in warehouse-scale computers.
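The article does not give the NUMA score's actual formula.  As a minimal illustrative sketch, assuming the idea is to summarize how often a workload's memory accesses are served by the CPU's local NUMA node, one could compute a locality ratio like this (the function name and trace format are hypothetical, not from the research):

```python
def numa_score(accesses):
    """Fraction of memory accesses served from the CPU's local NUMA node.

    `accesses` is a list of (cpu_node, memory_node) pairs -- a hypothetical
    trace format; the article does not specify how the real metric is computed.
    A score near 1.0 means allocation kept data close to the cores using it.
    """
    if not accesses:
        return 0.0
    local = sum(1 for cpu_node, mem_node in accesses if cpu_node == mem_node)
    return local / len(accesses)

# Example trace on a two-node server: 4 of 6 accesses hit local memory.
trace = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (1, 1)]
print(round(numa_score(trace), 2))  # 0.67
```

A scheduler or allocator could then compare such scores across placement decisions, which is in the spirit of the locality finding described above.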


Big Blue, Big Bang, Big Data: Telescope Funds Computing R&D

CNet (03/05/13) Stephen Shankland

IBM is attempting to advance supercomputing technology in processing, optical communications, and memory through an international initiative to study the Big Bang's radio remnants using the Square Kilometer Array radio telescope.  Before construction of the telescope begins, IBM is working to devise the required computing technology through a five-year alliance with the Netherlands Institute for Radio Astronomy.  The telescope will generate 14 exabytes of data daily, and this data must be reduced to about 1/1000 of its size for further processing.  Processing functions will be handled by IBM microservers, packed together densely and enhanced by hot-water cooling.  The microservers will communicate over a system data pathway that can accommodate 10-gigabit Ethernet links and support communications with disks, USB devices, and other system plug-ins.  IBM intends to use optical interconnects instead of copper wiring to transmit data to the processors.  The company also is exploring phase-change memory technology as the project's data storage instrument, as it is faster and more durable than flash memory and retains data even when power is off.  In addition, the project is probing the use of programmable accelerator chips specialized for extremely rapid performance on jobs such as pattern recognition, data filtering, or mathematical transformation.
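To put those figures in perspective, a back-of-the-envelope calculation (using decimal byte prefixes; the 1/1000 reduction factor is the article's approximation) shows that even after reduction the pipeline must still handle petabytes per day:

```python
# Back-of-the-envelope scale check for the Square Kilometer Array pipeline.
# Figures are the approximate ones quoted in the article.
EXABYTE = 10**18   # bytes (decimal SI prefix)
PETABYTE = 10**15  # bytes

raw_per_day = 14 * EXABYTE            # telescope output: 14 EB/day
reduced_per_day = raw_per_day // 1000  # after the ~1000x reduction step

print(reduced_per_day // PETABYTE)  # 14 -- still ~14 PB/day to process
```

That residual volume is why the project targets dense microservers, optical interconnects, and fast persistent storage rather than a conventional cluster design.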


Supercomputing Challenges and Predictions

HPC Wire (02/27/13) Richard L. Brandt

The future of supercomputing was a popular topic at SC12 in Salt Lake City last November. Flying cars were not on the agenda, but experts made smarter predictions, such as improved weather forecasting and faster discovery of new drugs. IEEE has created a summary of the supercomputing predictions and challenges made by its members at the event. Materials sciences research will lead to cheaper batteries with more capacity, says Rajeev Thakur, technical program chair of SC12 and deputy director of the Mathematics and Computer Science Division at Argonne National Laboratory. Thakur also believes more questions about the universe will be answered as a result of cosmological simulations. Bronis de Supinski, co-leader of the Advanced Simulation and Computing program's Application Development Environment and Performance Team at Lawrence Livermore National Laboratory, predicts supercomputing will enable a better power grid, but says the need for cheaper power and lower heat dissipation will present problems. He also sees issues with memory bandwidth and capacity, while Thakur cites funding as an area of concern.