
Reaching for Sky Computing.

International Science Grid This Week (11/10/10) Miriam Boon

Researchers are developing tools to combine independent cloud computing environments into a single federated platform, an architectural concept dubbed sky computing by researchers at the Universities of Chicago (UC) and Florida in a recent paper. “[In the paper] we talked about standards and cloud markets and various mechanics that might lead to sky computing over multiple clouds, and then that idea was picked up by many projects,” says UC’s Kate Keahey. Inspired by the paper, the Université de Rennes 1’s Pierre Riteau contacted Keahey to explore the sky computing concept further. The researchers used Grid’5000 and FutureGrid, two cyberinfrastructure platforms for large-scale parallel and distributed computing research, to create a heterogeneous environment for testing the concept. “The biggest challenge was being able to make it go to a large scale, because when you switch from about 30 machines to 1,000, you have a lot of issues that appear, and for this we had to improve some parts of the system,” Riteau says. “The fact is that cloud computing created new patterns, and we are having to figure out now how to build tools that will take advantage of those patterns,” Keahey says.
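
The core pattern here is provisioning uniform nodes across clouds that expose different interfaces. A minimal Python sketch of that idea using the Apache Libcloud abstraction layer follows; it is an illustration only, not the Nimbus-based tooling the researchers used, and the provider choices, credentials, and image/size selections are placeholders.

    # Sky-computing illustration: provision identical nodes on several
    # independent clouds through one abstraction layer (Apache Libcloud).
    # NOT the Nimbus/Grid'5000 tooling from the paper; the credentials,
    # providers, and image/size choices below are placeholders, and real
    # drivers usually need extra provider-specific arguments.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    CLOUDS = [
        (Provider.EC2, "EC2_KEY", "EC2_SECRET"),         # placeholder credentials
        (Provider.OPENSTACK, "OS_USER", "OS_PASSWORD"),  # placeholder credentials
    ]

    def provision_everywhere(prefix, per_cloud=2):
        """Start per_cloud identical nodes on each configured cloud."""
        nodes = []
        for provider, key, secret in CLOUDS:
            driver = get_driver(provider)(key, secret)
            image = driver.list_images()[0]  # placeholder: first available image
            size = driver.list_sizes()[0]    # placeholder: first available size
            for i in range(per_cloud):
                nodes.append(driver.create_node(
                    name="%s-%s-%d" % (prefix, provider, i),
                    image=image, size=size))
        return nodes

Scaling a loop like this from tens to a thousand nodes is exactly where the issues Riteau mentions appear: image distribution, contextualization, and cross-cloud networking all need work beyond naive per-node provisioning.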

-MORE-

Supercomputers ‘Will Fit in a Sugar Cube,’ IBM Says.

BBC News (11/12/10) Jason Palmer

IBM researchers led by Bruno Michel have developed a water-cooling method for stacked computer processors that could eventually shrink a supercomputer to the size of a sugar cube. The approach, called Aquasar, stacks many processors on top of one another and cools them with water flowing between each layer. Aquasar is almost 50 percent more energy efficient than the world’s leading supercomputers, according to IBM. “In the future, the ‘Green 500’ will be the important list, where computers are listed according to their efficiency,” Michel says. The water-cooling system is based on a slimmed-down, more efficient circulation of water that borrows ideas from the human body’s circulatory system. “But several challenges remain before this technology can be implemented – issues concerning thermal dissipation are among the most critical engineering challenges facing [three-dimensional] semiconductor technology,” Michel says.
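
Michel’s “Green 500” remark refers to ranking machines by performance per watt rather than raw speed. A back-of-envelope Python sketch of the metric, using the article’s “almost 50 percent” figure with made-up baseline numbers:

    # Green500-style efficiency is performance per watt (MFLOPS/W).
    # The baseline figures below are illustrative placeholders, not
    # Aquasar's published numbers; only the ~50% gain comes from IBM.
    def mflops_per_watt(gflops, watts):
        return gflops * 1000.0 / watts

    baseline = mflops_per_watt(gflops=5000.0, watts=10000.0)  # hypothetical machine
    water_cooled = baseline * 1.5  # "almost 50 percent more energy efficient"
    print("baseline: %.0f MFLOPS/W, water-cooled: %.0f MFLOPS/W"
          % (baseline, water_cooled))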

-MORE-

New Standard for Supercomputing Proposed.

Sandia National Laboratories (11/15/10) Neal Singer

Sandia National Laboratories has developed Graph500, a new supercomputing rating system that will be released at the Supercomputing Conference 2010. Graph500 tests supercomputers for their skill in analyzing large, graph-based structures that link the many data points used in biological, social, and security problems. “By creating this test, we hope to influence computer makers to build computers with the architecture to deal with these increasingly complex problems,” says Sandia’s Richard Murphy. However, Graph500 was not created to compete with the Linpack standard test for supercomputers. “There have been lots of attempts to supplant it, and our philosophy is simply that it doesn’t measure performance for the applications we need, so we need another, hopefully complementary, test,” Murphy says. The Graph500 benchmark constructs a large graph that links huge numbers of participants, then performs a parallel search of that graph. Machines designed to do well on the Graph500 test could be used for problems in cybersecurity, medical informatics, data enrichment, social networks, and symbolic networks. “Many of us on the steering committee believe that these kinds of problems have the potential to eclipse traditional physics-based high-performance computing over the next decade,” Murphy says.
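
At its core, the benchmark’s kernel is a breadth-first search over a huge generated graph, scored in traversed edges per second (TEPS). A toy serial Python sketch of that kernel and metric follows; the real benchmark uses a Kronecker graph generator, parallel search, and a precise TEPS definition, so everything here is simplified.

    # Toy sketch of the Graph500 idea: build a random graph, run a
    # breadth-first search from a root, and report edge traversals per
    # second as a rough TEPS proxy. Serial and simplified; not the
    # official Kronecker generator or parallel BFS kernel.
    import random
    import time
    from collections import deque

    def random_graph(n, edges):
        """Build an undirected graph as adjacency lists."""
        adj = [[] for _ in range(n)]
        for _ in range(edges):
            u, v = random.randrange(n), random.randrange(n)
            adj[u].append(v)
            adj[v].append(u)
        return adj

    def bfs_teps(adj, root):
        """BFS from root, returning edge inspections per second."""
        parent = {root: root}
        frontier = deque([root])
        traversed = 0
        start = time.perf_counter()
        while frontier:
            u = frontier.popleft()
            for v in adj[u]:
                traversed += 1
                if v not in parent:
                    parent[v] = u
                    frontier.append(v)
        return traversed / (time.perf_counter() - start)

    adj = random_graph(n=1 << 16, edges=1 << 20)
    print("%.2e TEPS (toy estimate)" % bfs_teps(adj, root=0))

Unlike Linpack’s dense floating-point kernel, this workload is dominated by irregular, pointer-chasing memory accesses, which is why Murphy argues it stresses a different dimension of machine architecture.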

-MORE-

U.S. Building Next Wave of Supercomputers.

Computerworld (11/12/10) Patrick Thibodeau

Both the Oak Ridge National Laboratory (ORNL) and the Lawrence Livermore National Laboratory (LLNL) are building 20-petaflop supercomputer systems that are expected to be ready in 2012. The new systems would be much more powerful than today’s fastest supercomputers. James Hack, director of ORNL’s National Center for Computational Sciences, says their machine will use accelerators to boost performance, but he offered no other details about its design. Lawrence Livermore’s machine is being built by IBM and could be eligible for consideration on the June 2012 Top500 list, says LLNL’s Don Johnston. Meanwhile, China also is looking to build more powerful supercomputers, and analysts say the global attention on the supercomputing race may raise the profile of the industry and boost government funding. The international supercomputing competition is occurring in conjunction with the development of new architectures and programming models to support exascale systems, which would be 1,000 times more powerful than petascale systems. Exascale will have “tremendous implications for human health, biology, and many other fields, too,” says ORNL’s Jeremy Smith.

-MORE-

SDSC Part of DARPA Program to Deliver Extreme Scale Supercomputer.

University of California, San Diego (11/04/10) Jan Zverina

The University of California, San Diego’s San Diego Supercomputer Center (SDSC) will provide expertise to the U.S. Defense Advanced Research Projects Agency’s Ubiquitous High Performance Computing multi-year program, which is developing the next generation of extreme scale supercomputers. The program will be completed in four phases; the first two are expected to be finished in 2014, and the final two are expected to produce a full prototype system in 2018. The program’s applications of interest include rapid processing of real-time sensor data, establishing complex connectivity relationships within graphs, and complex strategy planning. SDSC’s Performance Modeling and Characterization (PMaC) laboratory will analyze and map applications for efficient operation on the project’s Intel hardware. “We are working to build an integrated hardware/software stack that can manage data movement with extreme efficiency,” says SDSC’s Allan Snavely. “The Intel team includes leading experts in low-power device design, optimizing compilers, expressive program languages, and high-performance applications, which is PMaC’s special expertise.”
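
Snavely’s emphasis on data movement can be made concrete with a roofline-style bound, in which runtime is limited by either compute throughput or memory traffic. The Python sketch below uses placeholder machine parameters; it is a generic textbook model, not PMaC’s actual methodology or the UHPC hardware targets.

    # Roofline-style bound illustrating why data movement dominates:
    # runtime is limited by whichever is slower, compute or memory
    # traffic. Machine parameters are placeholders, not UHPC figures.
    def roofline_time(flops, bytes_moved, peak_flops, bandwidth):
        """Lower-bound runtime (seconds) under the roofline model."""
        return max(flops / peak_flops, bytes_moved / bandwidth)

    peak_flops = 1e12  # 1 TFLOP/s peak compute (placeholder)
    bandwidth = 100e9  # 100 GB/s memory bandwidth (placeholder)

    # A low-intensity kernel (1 flop per 8 bytes) is bandwidth-bound:
    t = roofline_time(flops=1e9, bytes_moved=8e9,
                      peak_flops=peak_flops, bandwidth=bandwidth)
    print("runtime bound: %.1f ms (memory-bound)" % (t * 1e3))

On such a machine the kernel finishes no faster than its memory traffic allows, no matter how much compute is added, which is why an extreme-scale stack must optimize data movement first.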

-MORE-

Supercomputing Fest Will Spotlight World’s Fastest Computers, High-Performance Issues.

Network World (11/11/10) Jon Brodkin

SC10, the annual supercomputing conference, will highlight high-performance computing (HPC) advances in computation, networking, storage, and analysis. Harvard Business School professor Clayton Christensen will lead a discussion focused on the challenges that the HPC industry faces “as it seeks new paradigms to frame its emerging enabling technologies for continued performance growth.” Researchers from the University of California, San Diego will present a paper that studies the effects of flash memory and the role that other nonvolatile storage types play in supercomputing. The paper says that nonvolatile solid-state storage promises to address slow performance issues and facilitate “faster, cheaper, and more agile” high-performance systems. Meanwhile, researchers from NASA, the California Institute of Technology, and the University of Southern California will present a joint paper that examines data-sharing options for data workflows on the Amazon EC2 cloud computing service. The paper says that “one of the advantages of cloud computing and virtualization is that the user has control over what software is deployed, and how it is configured. However, this flexibility also imposes a burden on the user to determine what system software is appropriate for their application.”

-MORE-

Internet2’s New Leader Outlines Vision for Superfast Education Networks.

Chronicle of Higher Education (11/02/10) Jeff Young

New Internet2 CEO H. David Lambert says superfast computer networks will enable universities to connect to global satellite campuses, noting that U.S. colleges and universities now have more than 160 campuses overseas. Universities also need superfast computer networks to participate in international research and to build better ties with communities near their campuses, Lambert says. However, he says there are many financial and cultural obstacles that stand in the way of the development of superfast computer networks. Lambert cites a need for better cooperation among various national and regional university networking projects as one of the many challenges. The project to bring broadband to communities, which received $62.5 million in federal stimulus money, will help universities show lawmakers that such initiatives are worthy of support, Lambert says. People in academe should continue to play a leading role in building the Internet, he says.

-MORE-