June 2011

Japanese ‘K’ Computer Is Ranked Most Powerful

New York Times (06/19/11) Verne G. Kopytoff

Japan’s K Computer was ranked number one on the latest Top500 list of the world’s fastest supercomputers. The K Computer achieved a top speed of 8.2 petaflops, or 8.2 quadrillion calculations per second, three times faster than China’s Tianhe-1A supercomputer, which previously held the record as the world’s fastest computer. The K Computer, which was built by Fujitsu and is located at the Riken Advanced Institute for Computational Science, is composed of 672 cabinets filled with system boards and uses enough electricity to power almost 10,000 homes, at a cost of about $10 million per year, according to University of Tennessee, Knoxville professor Jack Dongarra, who keeps the official rankings of computer performance. “It’s a very impressive machine,” Dongarra says. “It’s a lot more powerful than the other computers.” Japan and China hold four of the top five spots on the latest Top500 list, while the United States has five of the top 10. It is the first time since 2004 that Japan has topped the supercomputer rankings, but Dongarra notes that the Blue Waters supercomputer being built at the University of Illinois at Urbana-Champaign could offer speeds comparable to the K Computer’s.


IBM Helps Build Students’ Software Development Skills

 eWeek (06/07/11) Darryl K. Taft

IBM announced during its Innovate 2011 conference that it is bringing its Jazz development environment to universities.  IBM says JazzHub is part of an effort to help students and professionals build the software development skills needed to create complex, intelligent product designs.  JazzHub will enable university teams to develop directly on the IBM Jazz.net Web site at no cost.  The cloud-based service is designed to serve as an open ecosystem in which students can build new and innovative software applications.  North Carolina State University (NCSU), which is participating in the JazzHub beta program, plans to incorporate JazzHub into its coursework immediately.  Previously, the university used Jazz in research analyzing information about artifacts and in an online course on Agile software development.  “Software engineering courses are meant to prepare students for the practice of designing, developing, understanding, and maintaining software in the real world, and the effectiveness of these courses has a tremendous impact on the software industry,” says NCSU professor Jim Yuill.  “IBM’s continued commitment to provide collaborative tools, at no charge to students, greatly improves the quality of their learning.”


New Parallelization Technique Boosts Our Ability to Model Biological Systems

 NCSU News (06/09/11) Matt Shipman

A new technique for using multi-core chips more efficiently has been developed by researchers at North Carolina State University (NCSU).  The team created a way to pass information back and forth between cores on a single chip, using threads that acquire locks to control access to shared data, says project leader and NCSU professor Cranos Williams.  “This allows all of the cores on the chip to work together to solve a unified problem,” Williams says.  The team tested the approach by running three models on chips that utilized a single core, as well as on chips that used the parallelization technique to utilize two, four, and eight cores.  Across the models, the eight-core chips ran at least 7.5 times faster than the single-core chips.  The technique improved the efficiency of algorithms used to build models of biological systems, yielding more realistic models that can account for uncertainty and biological variation.  Drug development and biofuels engineering are among the research areas that stand to benefit from the parallelization technique.
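The coordination pattern described above, threads that each work on a slice of a shared problem while a lock guards access to shared data, can be sketched in Python.  This is a hypothetical illustration of the general technique, not the NCSU implementation; the function name and chunking scheme are assumptions for the example.

```python
import threading

def parallel_sum_of_squares(values, n_workers=4):
    """Split `values` across worker threads; a lock serializes
    updates to the shared accumulator."""
    total = 0
    lock = threading.Lock()

    def worker(chunk):
        nonlocal total
        partial = sum(v * v for v in chunk)  # thread-local computation
        with lock:                           # lock controls shared access
            total += partial

    size = -(-len(values) // n_workers)  # ceiling division for chunk size
    threads = [threading.Thread(target=worker,
                                args=(values[i * size:(i + 1) * size],))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

Note that in CPython the global interpreter lock limits real speedup for pure-Python arithmetic; the sketch shows only the lock-and-thread coordination pattern, while the NCSU work targets native execution across physical cores.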


The Million-Dollar Puzzle That Could Change the World

 New Scientist (06/01/11) Jacob Aron

The single biggest problem in computer science, for which the Clay Mathematics Institute is offering a $1 million prize to whoever solves it, is determining whether P equals NP, which raises the possibility that computation has a fundamental, innate limitation that goes beyond hardware.  The complexity class NP, or non-deterministic polynomial time, comprises problems whose solutions may be difficult to find but can be verified in polynomial time.  Every problem in the set P is also in the set NP, but the core of the P=NP? question is whether the reverse also holds.  If P turns out not to equal NP, it would demonstrate that some problems are so inherently complex that solving them efficiently is impossible, and such a proof would offer insight into the performance of the latest computing hardware, which divides computations across multiple parallel processors, says the University of Massachusetts, Amherst’s Neil Immerman.  The field of cryptography also could be affected, as even the toughest codes could be cracked by a polynomial-time algorithm for solving NP problems.  On the other hand, finding a polynomial-time algorithm for any NP-complete problem would enable every NP problem to be solved in polynomial time, in effect providing a universal efficient solution method.  This could support the creation of algorithms that execute near-perfect speech recognition and language translation, and that facilitate computerized visual information processing equal to that of humans.
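The asymmetry at the heart of the P=NP? question, solutions that are hard to find but easy to check, can be illustrated with subset sum, a classic NP-complete problem.  The Python sketch below (illustrative only, not drawn from the article) contrasts a polynomial-time verifier with a brute-force search that examines up to 2^n subsets.

```python
from itertools import combinations

def verify(candidate, numbers, target):
    """Polynomial-time check: is `candidate` a sub-multiset of
    `numbers` that sums to `target`?"""
    pool = list(numbers)
    for x in candidate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(candidate) == target

def brute_force(numbers, target):
    """Exponential-time search over all subsets of `numbers`;
    returns a subset summing to `target`, or None."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

If P were shown to equal NP, the gap between these two functions would collapse: some polynomial-time procedure would replace the exponential search, with the sweeping consequences for cryptography and optimization described above.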


The Largest Telescope Ever Built Will Rely on Citizens to Analyze Its Reams of Data

 Popular Science (05/27/11) Clay Dillow

To prove that they can handle the Square Kilometer Array (SKA), the world’s biggest and most sensitive radio telescope, researchers from Australia and New Zealand plan to launch a huge cloud computing initiative that will replicate the data flow required to run the giant telescope.  The SKA will consist of 3,000 radio dishes spread as far as 2,000 miles from the central core.  The telescope will generate such a massive volume of data that the SKA could need data links with a capacity greater than that of the current Internet.  To support the SKA, Australia is investing $80 million in the Pawsey Center supercomputing hub, which will be petaflop-capable and the third fastest supercomputer in the world when it goes online in 2013.  However, even that capacity may be insufficient, so researchers plan to use cloud computing to distribute the data across thousands of computers and mainframes at universities and institutes around the world.  The researchers say the plan would solve the problem of constantly having to add computing capacity, while engaging the public and the global academic community.


PRACE Offers Access to Europe’s Fastest Supercomputers

 PRACE (05/27/11)

The PRACE Research Infrastructure is making three Tier-0 supercomputers at the highest performance level and 17 national Tier-1 systems available to European researchers in academia and industry.  The Tier-0 systems will include HERMIT, a Cray XE6 system to be installed next fall at the University of Stuttgart’s High Performance Computing Center.  HERMIT will have an initial peak performance of 1 petaflop/s, which will be raised to 4-5 petaflop/s in a second installation step in 2013.  Researchers will also have access to JUGENE, a 1 petaflop/s IBM BlueGene/P system hosted by the Jülich Supercomputing Center, and to CURIE, a Bull bullx cluster at Bruyères-le-Châtel that will reach a peak performance of more than 1.6 petaflop/s in its second installation phase in October.  “PRACE is proving to be the European supercomputer infrastructure,” says European Union commissioner Neelie Kroes.  “PRACE is a key driver for the development of European science and technology and provides vital support to researchers addressing the major challenges of our time like climate change, energy saving, and the aging population.”