20 Petaflops: New Supercomputer for Oak Ridge Facility to Regain Speed Lead Over the Chinese

PhysOrg.com (03/23/11) Bob Yirka

The U.S. Department of Energy has commissioned Titan, a supercomputer that is expected to achieve 20 petaflops (20 quadrillion floating-point operations per second), which would make it the fastest computer in the world. Last fall China’s National University of Defense Technology unveiled Tianhe-1A, a 2.5-petaflop machine that currently ranks first on the Top500 list. Cray Inc. will build Titan, which will use XT3, 4, and 5 processor boxes configured in a three-dimensional torus topology rather than as an array. Titan will use a Gemini XE interconnect, one of two new pieces of proprietary hardware being added to build the computer. The other is a graphics processing unit (GPU) co-processor, which will enable Titan to perform calculations more quickly. The computer also will use globally addressable memory, which will enable data to move through input and output channels without slowing down. Titan will join a collection of some of the fastest computers in the world at Oak Ridge National Laboratory, where it will be used to model complex energy systems. The project will cost approximately $100 million, with the first phase expected by the end of the year and the second phase to be completed in 2012.
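
To make the torus layout concrete, here is a minimal sketch (illustrative only, not Cray’s implementation) of neighbor addressing in a three-dimensional torus: every node at coordinates (x, y, z) has exactly six neighbors, and coordinates wrap around in each dimension, so there are no boundary nodes the way there are in a plain array.

    # Illustrative sketch: neighbor addressing in a 3D torus.
    # Coordinates wrap around (modulo the grid size), so every node
    # has six neighbors -- unlike a plain array/mesh, where nodes on
    # the boundary have fewer links.
    def torus_neighbors(x, y, z, dims):
        nx, ny, nz = dims  # grid size in each dimension (assumed values)
        return [
            ((x + 1) % nx, y, z), ((x - 1) % nx, y, z),
            (x, (y + 1) % ny, z), (x, (y - 1) % ny, z),
            (x, y, (z + 1) % nz), (x, y, (z - 1) % nz),
        ]

    # On a 4x4x4 torus, node (0, 0, 0) wraps to (3, 0, 0), (0, 3, 0), etc.
    print(torus_neighbors(0, 0, 0, (4, 4, 4)))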


Homemade CPUs on the Way for Local Supercomputers

People’s Daily (China) (03/07/11)

China’s supercomputers will use Chinese-made chips by the end of this year, says Hu Weiwu, the chief developer of the Loongson series of processors at the Chinese Academy of Sciences (CAS). Hu says the forthcoming Dawning 6000 supercomputer will use Loongson microchips as its core component and will have a computing speed of more than 1,000 trillion operations per second (one petaflop). “Just like a country’s industry cannot always depend on foreign steel and oil, China’s information industry needs its own [central processing unit],” he says. The Dawning 6000 will employ fewer than 10,000 Loongson microchips and will boast greater energy efficiency. The Institute of Computing Technology of CAS, the Jiangnan Institute of Computing Technology, and the National University of Defense Technology all have their own supercomputer projects that are scheduled to be running on Chinese-made microchips by the end of 2011. However, Hu notes that few applications have been developed for them so far. “We have enough supercomputers in China but still can’t fully utilize them,” he says.
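
As a rough back-of-envelope check (our arithmetic, not a figure from the article), the two numbers together imply a minimum per-chip throughput:

    more than 1,000 trillion ops/s ÷ fewer than 10,000 chips
        ⇒ at least 100 billion ops/s (100 gigaflops) per chip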


Oak Ridge Looks Toward 20 Petaflop Supercomputer

HPCwire (03/07/11) Michael Feldman

Oak Ridge National Laboratory (ORNL) is planning to build a third supercomputer, called Titan, that will run at 20 petaflops and should be completed in 2012. ORNL’s other two supercomputers, Jaguar and Kraken, run at 2.3 and 1.0 petaflops, respectively. The Titan system will cost about $100 million, according to ORNL associate lab director Jeff Nichols, making it less expensive than the Department of Energy’s other 20-petaflop system, the IBM Blue Gene/Q Sequoia supercomputer, which is expected to cost more than $200 million. Titan will be a Cray supercomputer powered by NVIDIA Tesla graphics processing units, and both it and Sequoia could challenge Chinese systems for the title of most powerful supercomputer in 2012. Sequoia will be used primarily for classified nuclear weapons simulations as part of the National Nuclear Security Administration’s Stockpile Stewardship program, in addition to running scientific applications in astronomy, energy, genomics, and climatology. Titan will be dedicated to running a variety of open science applications.


White House Announces Project to Spur HPC Adoption in US Manufacturing

HPCwire (03/03/11) Michael Feldman

The White House hosted a press conference on Wednesday to announce a new public-private partnership that aims to bring HPC technology to the have-nots of the US manufacturing sector. Using a $2 million grant from the US Department of Commerce and an additional $2.5 million investment from industrial partners, a consortium has been formed to broaden the use of HPC technology among small and medium-sized manufacturing enterprises (SMEs).


Retooling Algorithms

MIT News (02/25/11) Larry Hardesty

Massachusetts Institute of Technology (MIT) professor Charles Leiserson says the best method for rewriting algorithms to run on parallel processors is a divide-and-conquer technique, which splits a problem into smaller parts that are easier to solve and lets the computer tailor an algorithm’s execution to the resources available. However, the divide-and-conquer method does not reveal where or how to divide a problem, which must be answered on a case-by-case basis. The strategy also means continually splitting up and recombining data as it is passed between different cores, which can create further difficulties, such as where to store the data. MIT graduate student Tao Benjamin Schardl developed a new way of organizing data, called a bag, which led to a new algorithm for searching data trees that provides linear speedup, often considered the holy grail of parallelization: doubling the number of cores doubles the speed of the algorithm.
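
The divide-and-conquer pattern the article describes can be illustrated with a minimal sketch. The following toy parallel sum in Python is an assumed example, not the MIT group’s code (their work targets multicore runtimes such as Cilk): the input is split into chunks (divide), each chunk is summed on a separate core (conquer), and the partial results are merged (combine). Deciding where to split is exactly the case-by-case question the article mentions; for a simple sum, one chunk per worker is enough.

    # Illustrative divide-and-conquer parallel sum (a sketch, not
    # the MIT group's actual code).
    from concurrent.futures import ProcessPoolExecutor

    def chunk_sum(chunk):
        # Conquer: solve one small subproblem sequentially.
        return sum(chunk)

    def parallel_sum(data, workers=4):
        # Divide: split the input into one chunk per worker.
        size = max(1, (len(data) + workers - 1) // workers)
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        # Conquer: sum each chunk on its own core.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            partials = pool.map(chunk_sum, chunks)
        # Combine: merge the partial results.
        return sum(partials)

    if __name__ == "__main__":
        print(parallel_sum(list(range(1_000_000))))  # 499999500000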


Grid Pioneer Ian Foster Discusses the Future of Science in the Cloud

HPC in the Cloud (02/24/11) Rich Wellner

University of Chicago professor Ian Foster, director of the Computation Institute, has a vision to facilitate a transformational change in scientific research, to the point where research capabilities such as massive data and exponentially faster computers become accessible to researchers everywhere. “We need to take the [information technology (IT)] required for research and deliver that IT in a convenient and cost-effective manner, just as Google delivers email and Salesforce.com delivers customer relationship management,” Foster says. He notes that the Globus Alliance has started to move in this direction by concentrating on transferring large volumes of data between locations, bundling this capability into a service called Globus Online. Foster describes Globus Online as “our first foray into what you might call a computational science cloud: Hosted services that let you ‘use the grid’ without installing software.” Foster cites an instance in which an early user employed Globus Online to move data from a single source to 11 sites across the United States as an example of how Globus Online can enhance computing processes. Another goal of Foster’s is making the sharing of files and results with collaborators more intuitive, manageable, and direct in order to eliminate much of the tedium inherent in scientific research.
