NSF’s Most Powerful Computing Resource Has Opened Its Doors to Six Science Teams

National Science Foundation (03/21/12) Lisa-Joy Zgorski

The U.S. National Science Foundation (NSF) and the University of Illinois’ National Center for Supercomputing Applications (NCSA) recently selected six teams to use the Blue Waters Early Science System before the full supercomputing system is deployed later this year. “It began as an idea, and now thanks to sustained collaborative efforts by the entire project team, the vendor, and researchers, this computational tool is beginning to advance fundamental understanding in a wide range of scientific topics,” says NSF’s Irene Qualters. NSF and NCSA have awarded more than 24 research teams time on Blue Waters to pursue compelling research questions; a smaller group of six teams was chosen to use the Early Science System. The six teams will pursue research in areas including modeling high-temperature plasmas, simulating the formation and evolution of the Milky Way in its distant past, examining the protein that encases the HIV-1 genome, exploring explosive burning in Type Ia supernovae, and running climate simulations of the end of both the 20th and 21st centuries to explore changes in the frequency and intensity of extreme events. Once fully deployed, Blue Waters is expected to perform calculations at a sustained rate of more than 1 petaflops, or one quadrillion floating-point operations per second.

Scale-Out Processors: Bridging the Efficiency Gap Between Servers and Emerging Cloud Workloads

HiPEAC (03/19/12)

Ecole Polytechnique Federale de Lausanne (EPFL) professor Babak Falsafi recently presented “Clearing the Clouds: A Study of Emerging Workloads on Modern Hardware,” which received the best paper award at ASPLOS 2012. “While we have been studying and tuning conventional server workloads (such as transaction processing and decision support) on hardware for over a decade, we really wanted to see how emerging scale-out workloads in modern data centers behave,” Falsafi says. “To our surprise, we found that much of a modern server processor’s hardware resources, including the cores, caches, and off-chip connectivity, are overprovisioned when running scale-out workloads, leading to huge inefficiencies.” Efficiently executing scale-out workloads requires optimizing the instruction-fetch path for up to a few megabytes of program instructions, reducing core complexity while increasing core counts, and shrinking the capacity of on-die caches to reduce area and power overheads, says EPFL Ph.D. student Mike Ferdman. The research was partially funded by the EuroCloud Server project. “Our goal is a 10-fold increase in overall server power efficiency through mobile processors and [three-dimensional] memory stacking,” says EuroCloud Server project coordinator Emre Ozer.

Schools Pool to Stay Cool

IEEE Spectrum (03/21/12) Mark Anderson

When one of the world’s most ambitious university computer centers opens later this year, it will be a “green” facility: green in its environmental credentials and green in its bottom line. The US $95 million, 8,400-square-meter Massachusetts Green High-Performance Computing Center (MGHPCC), located in Holyoke, Mass., will be on a par with the data centers that house the world’s fastest supercomputers, say its proponents. The center has pooled the computing resources of five of the top research universities in the northeastern United States: Boston University, Harvard University, MIT, Northeastern University, and the University of Massachusetts.

CU and NIST Scientists Reveal Inner Workings of Magnets, a Finding That Could Lead to Faster Computers

University of Colorado (03/14/12) Margaret Murnane

Researchers at the University of Colorado Boulder and the U.S. National Institute of Standards and Technology (NIST) used specialized X-ray lasers to reveal the inner workings of magnets, a breakthrough they say could lead to faster and smarter computers. Using a light source that creates X-ray pulses one quadrillionth of a second in length, the researchers were able to observe how magnetism in nickel and iron atoms works, and found that each metal behaves differently. “The discovery that iron and nickel are fundamentally different in their interaction with light at ultrafast time scales suggests that the magnetic alloys in hard drives could be engineered to enhance the delivery of the optical energy to the spin system,” says NIST’s Tom Silva. The researchers found that different kinds of magnetic spins in metal scramble on different time scales. “What we have seen for the first time is that the iron spins and the nickel spins react to light in different ways, with the iron spins being mixed up by light much more readily than the nickel spins,” Silva says. The discovery could help researchers develop a magnetic system optimized for maximum disk drive performance.

Pi Day: How the ‘Irrational’ Number Pushed the Limits of Computing

Government Computer News (03/14/12) William Jackson

The challenge of determining the value of Pi has helped push the envelope of computing. “It has played a role in computer programming and memory allocation and has led to ingenious algorithms that allow you to calculate this with high precision,” says mathematician Daniel W. Lozier, retired head of the mathematical software group in the U.S. National Institute of Standards and Technology’s Applied and Computational Mathematics Division. “It’s a way of pushing computing machinery to its limits.” Memory is crucial in executing the calculations, as are techniques for computing efficiently with the very long strings of digits involved. The calculation of Pi to longer and longer digit strings has advanced along with computers themselves, and the current record-holder is Japan’s T2K supercomputer, which calculated the value to 2.6 trillion digits in about 73 hours and 36 minutes. That is a considerable advance over the ENIAC computer’s 1949 calculation of Pi to 2,037 digits, which took 70 hours. Ten years later an IBM 704 was able to calculate Pi to 16,157 places in four hours and 20 minutes. “Pi serves as a test case for mathematical studies in the area of number theory,” Lozier notes.
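
The article does not identify the algorithm behind the T2K run, but a small sketch conveys what high-precision Pi calculation looks like in practice. The example below is illustrative only: it uses Python’s decimal module and Machin’s 1706 arctangent formula rather than the far faster series and FFT-based arithmetic that record attempts rely on.

    from decimal import Decimal, getcontext

    def arctan_inv(x: int, digits: int) -> Decimal:
        """Taylor series for arctan(1/x), x > 1, to roughly `digits` digits."""
        getcontext().prec = digits + 10            # working precision with guard digits
        eps = Decimal(10) ** -(digits + 10)
        power = Decimal(1) / x                     # 1 / x**(2k+1), starting at k = 0
        total = power
        k = 1
        while power > eps:
            power /= x * x
            term = power / (2 * k + 1)
            total += -term if k % 2 else term      # the series alternates in sign
            k += 1
        return total

    def machin_pi(digits: int) -> Decimal:
        """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
        pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
        getcontext().prec = digits
        return +pi                                 # unary plus rounds to `digits` digits

    print(machin_pi(100))                          # 3.14159265358979323846...

With a simple series like this the work grows roughly quadratically in the number of digits, which is why record attempts depend on better algorithms and on enough memory to hold numbers trillions of digits long.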

The Hidden Risk of a Meltdown in the Cloud

Technology Review (03/13/12)

Despite the rising popularity of cloud-based computing, the risks of a full-scale cloud migration have yet to be properly explored, says Yale University professor Bryan Ford. He notes that in the worst-case scenario, a cloud could experience a full meltdown that could seriously threaten any business that relies on it. As an example, Ford describes how a lack of transparency between different cloud providers could allow their internal control loops to conflict, with layered systems reacting to one another in destabilizing ways. “This simplistic example might be unlikely to occur in exactly this form on real systems–or might be quickly detected and ‘fixed’ during development and testing–but it suggests a general risk,” Ford says. “Non-transparent layering structures … may create unexpected and potentially catastrophic failure correlations, reminiscent of financial industry crashes,” Ford warns. A more general risk arises in complex systems, whose seemingly unrelated parts can become intertwined in unexpected ways. He notes that only recently have industry experts begun to realize that bizarre and unpredictable behavior often occurs in systems consisting of networks of networks. “We should study [these unrecognized risks] before our socioeconomic fabric becomes inextricably dependent on a convenient but potentially unstable computing model,” Ford says.
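
To make the control-loop risk concrete, here is a deliberately crude toy model, written for this summary rather than taken from Ford’s paper. An application-level autoscaler adjusts its replica count from the load it observes, while an infrastructure-level controller underneath provisions physical capacity from the replica count it hosts; each rule is reasonable in isolation, but neither layer can see the other.

    # Toy model (not from Ford's paper) of two stacked, mutually invisible control loops.
    def simulate(app_gain: float, infra_gain: float, steps: int = 15):
        demand = 100.0           # external workload, arbitrary units (held constant)
        per_replica = 10.0       # physical capacity the lower layer reserves per replica
        target_util = 0.8        # each layer independently aims for 80% utilization
        replicas, capacity = 12.0, 120.0     # start slightly off the joint equilibrium (12.5, 125)
        trace = []
        for _ in range(steps):
            util = demand / capacity         # pressure the app layer feels, set by the layer below
            # Each controller reacts only to its own local, one-step-delayed view.
            new_replicas = replicas + app_gain * (util - target_util) * replicas
            new_capacity = capacity + infra_gain * (replicas * per_replica - capacity)
            replicas = max(new_replicas, 1.0)
            capacity = max(new_capacity, per_replica)
            trace.append(round(capacity))
        return trace

    print("gentle gains    :", simulate(app_gain=0.5, infra_gain=0.5))  # settles near 125
    print("aggressive gains:", simulate(app_gain=2.0, infra_gain=0.8))  # swings grow instead of damping

With gentle gains the stacked pair settles near its equilibrium; with aggressive gains each loop treats the other’s correction as a fresh disturbance and the capacity swings grow instead of damping out, a small-scale version of the instability Ford warns about.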

Deutsche Telekom Claims Data Transfer Record

BBC News (03/06/12)

Deutsche Telekom researchers recently achieved a usable bit rate of 400 Gbps over a single channel of the company’s fiber-optic network, more than double the 186 Gbps record set by researchers in the United States and Canada last year. The researchers sent data along the company’s network from Berlin to Hanover and back again, a total distance of 456 miles. The experiment delivered a maximum of 512 Gbps down the channel, of which 400 Gbps was usable data, equivalent to transmitting the contents of 77 music CDs in a single second. Each optical fiber can carry up to 48 such channels, so the total potential throughput of a single fiber could reach 24.6 Tbps. Deutsche Telekom says the achievement was realized by working with Alcatel-Lucent to develop new technologies installed in the terminal stations at either end of the fiber. Much of the speed gain came from enhancements to the software used for forward error correction, which allows a limited number of corrupted bits to be corrected without resending the data. “It means improvements can be carried out without digging up the existing fiber, without massive hardware replacement–that’s actually the charm of the thing,” says Deutsche Telekom’s Heinrich Arnold.
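
The codes used on production optical links are far stronger than anything that fits in a few lines, but the principle behind forward error correction can be sketched with a textbook Hamming(7,4) code, shown here purely as an illustration and not as Deutsche Telekom’s or Alcatel-Lucent’s scheme. Three parity bits protect four data bits, letting the receiver locate and flip a single corrupted bit instead of requesting a retransmission.

    # Illustrative Hamming(7,4) forward error correction: 4 data bits, 3 parity bits,
    # and any single flipped bit can be located and repaired at the receiver.
    def hamming74_encode(d: list[int]) -> list[int]:
        """Encode 4 data bits as the codeword [p1, p2, d1, p3, d2, d3, d4]."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4                    # parity over codeword positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4                    # parity over positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4                    # parity over positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c: list[int]) -> list[int]:
        """Correct up to one flipped bit, then return the 4 data bits."""
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # recompute each parity check
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of the bad bit, 0 if none
        if syndrome:
            c = c.copy()
            c[syndrome - 1] ^= 1             # flip it back; no retransmission needed
        return [c[2], c[4], c[5], c[6]]

    codeword = hamming74_encode([1, 0, 1, 1])
    received = codeword.copy()
    received[5] ^= 1                         # the channel flips one bit in transit
    assert hamming74_decode(received) == [1, 0, 1, 1]

Production systems use much longer soft-decision codes that correct many errors per block at a comparable overhead, which is how improvements made in software can raise the usable bit rate without touching the fiber, as Arnold notes.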
