Android Phones Can Substitute for Supercomputers.

Wired News (08/20/10) Ganapati, Priya

Researchers at the Massachusetts Institute of Technology (MIT) and the Texas Advanced Computing Center (TACC) have developed an Android application that takes simulations prepared on supercomputers and continues solving them on a mobile phone. “The idea of using a phone is to show we can take a device with one chip and low power to compute a solution so it comes as close as possible to the one solved on a supercomputer,” says TACC’s John Peterson. The program uses a technique called certified reduced basis approximation, which enables researchers to take a complex problem, identify the parameters most relevant to it, and set upper and lower bounds on the solution. “The payoff for model reduction is large when you can go from an expensive supercomputer solution to a calculation that takes a couple of seconds on a smartphone,” says MIT’s David Knezevic.
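
The article does not include the researchers' code, but the offline/online split behind reduced basis methods is standard and can be sketched in a few lines. In this hypothetical toy problem (all names and numbers are illustrative, not the MIT/TACC software), an expensive parameterized system is solved at a few sample parameters on the "big" machine, and later queries reuse that small basis; the residual shown is only an error indicator, not the certified bound the method's name refers to.

```python
# Minimal sketch of reduced basis approximation on a toy parameterized
# system A(mu) u = f with affine dependence A(mu) = A0 + mu * A1.
import numpy as np

N = 500                                    # full-order problem size
A0 = np.diag(2.0 * np.ones(N)) \
   + np.diag(-1.0 * np.ones(N - 1), 1) \
   + np.diag(-1.0 * np.ones(N - 1), -1)    # stiffness-like matrix
A1 = np.diag(np.linspace(0.1, 1.0, N))     # parameter-dependent part
f = np.ones(N)

def full_solve(mu):
    """Expensive 'supercomputer' solve of A(mu) u = f."""
    return np.linalg.solve(A0 + mu * A1, f)

# --- Offline phase (done once, on the big machine) ---
snapshots = np.column_stack([full_solve(mu) for mu in (0.5, 1.0, 2.0, 4.0)])
V, _ = np.linalg.qr(snapshots)             # orthonormal reduced basis

# --- Online phase (cheap enough for a phone) ---
def reduced_solve(mu):
    A = A0 + mu * A1
    Ar = V.T @ A @ V                       # tiny n x n system
    ur = np.linalg.solve(Ar, V.T @ f)
    u = V @ ur                             # lift back to full space
    residual = np.linalg.norm(f - A @ u)   # error indicator only; a
    return u, residual                     # certified bound needs more

u_rb, res = reduced_solve(1.7)
print("residual:", res,
      "true error:", np.linalg.norm(u_rb - full_solve(1.7)))
```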

-MORE-

Multicore Processing: Breaking Through the Programming Wall.

Scientific Computing (08/12/10) Conway, Steve

Significant challenges remain for applications to take advantage of the first petascale supercomputers, which feature distributed memory architectures and multicore systems with more than 100,000 processor cores each. Although some high-performance computing (HPC) applications were built for parallel computing systems, the vast majority were originally written to run on a single processor with direct access to main memory. Multicore HPC systems raise other issues as well: to save energy and control heat, many processors do not operate at their top speed. In addition, computing clusters based on standard x86 processors dominate HPC, and as those processors have added cores, their peak performance has risen without corresponding increases in memory bandwidth. The resulting poor bytes/flops ratio limits cluster efficiency and productivity by making it increasingly difficult to move data into and out of each core fast enough to keep the cores busy. Meanwhile, massive parallelism from growing core counts and system sizes has outgrown existing programming paradigms, creating a parallel performance wall that will reshape HPC code design and system usage.
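
The bytes/flops squeeze described above is simple arithmetic: peak floating-point rate grows with core count while memory bandwidth per socket stays roughly flat. A back-of-envelope sketch with illustrative numbers (not any specific product):

```python
# Bytes-per-flop arithmetic: doubling cores at constant memory bandwidth
# halves the bytes/flop the socket can deliver to each unit of compute.
def bytes_per_flop(cores, ghz, flops_per_cycle, mem_bw_gbs):
    peak_gflops = cores * ghz * flops_per_cycle
    return mem_bw_gbs / peak_gflops

# Hypothetical x86 socket, core count growing, bandwidth fixed:
for cores in (2, 4, 8):
    r = bytes_per_flop(cores, ghz=2.5, flops_per_cycle=4, mem_bw_gbs=25.0)
    print(f"{cores} cores: {r:.2f} bytes/flop delivered")

# A streaming kernel such as y[i] = a * x[i] + y[i] does 2 flops per
# element but moves ~24 bytes (read x, read y, write y), i.e. it needs
# ~12 bytes/flop -- far more than the socket delivers, so cores idle.
```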

-MORE-

Award-Winning Supercomputer Application Solves Superconductor Puzzle.

Oak Ridge National Laboratory (08/09/10) Freeman, Katie

Oak Ridge National Laboratory (ORNL) researchers have found that superconducting materials perform best when high- and low-charge density varies at the nanoscale. The researchers rewrote computational code for the numerical Hubbard model, which previously assumed copper-compound superconducting materials known as cuprates to be homogeneous from atom to atom. “Cuprates and other chemical compounds used as superconductors require very cold temperatures, nearing absolute zero, to transition from a phase of resistance to no resistance,” says ORNL’s Jack Wells. The colder a conductive material must get to reach the resistance-free superconducting phase, the more costly and less efficient superconductor power infrastructures become. “The goal following the Gordon Bell Prize was to take that supercomputing application and learn whether these inhomogeneous stripes increased or decreased the temperature required to reach transition,” Wells says. The researchers hope a material could become superconductive at an easily achieved and maintained low temperature, eliminating much of the cost of the cooling infrastructure.
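
For readers unfamiliar with the model named above, the standard single-band Hubbard Hamiltonian is well known; as described, the ORNL work relaxes the assumption that its couplings are identical at every site (the article gives no further detail on their formulation):

```latex
% Standard single-band Hubbard Hamiltonian: hopping t between neighboring
% sites plus on-site repulsion U. A homogeneous code uses one t and one U
% everywhere; modeling stripes means letting them vary from site to site.
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```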

-MORE-

Glaucoma Sufferers to Benefit From Supercomputer.

University of Melbourne (08/10/10) O’Neill, Emma

The Victorian Life Sciences Computation Initiative (VLSCI), led by researchers at the University of Melbourne, will use the IBM Blue Gene supercomputer’s large-scale processing capacity to conduct computer simulations of eye tests that assess the whole field of vision. The researchers say the results could lead to the development of faster and more accurate vision tests. “Currently these take days on a standard computer, but with Blue Gene we can do them in minutes, allowing even more complex approaches to be evaluated,” says Melbourne professor Andrew Turpin. He notes that current clinical tests of the vision field are highly variable, and adds that a reliable determination of whether vision is deteriorating due to glaucoma can take several years. “Our novel combination of data from both images of the optic nerve, and our new visual field testing strategies, will hopefully markedly reduce this time,” Turpin says.
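
The article does not describe the simulation code, but the expense is easy to picture: a visual field test is, at each retinal location, a staircase procedure run against a noisy observer, and evaluating a testing strategy means replaying it over thousands of virtual patients. A minimal hypothetical sketch of one such staircase (illustrative only, not the VLSCI software):

```python
# Simulate one staircase at one retinal location against a noisy virtual
# patient. A full strategy evaluation repeats this over 50+ locations,
# many candidate strategies, and thousands of simulated patients.
import random

def simulated_observer(stimulus_db, true_threshold_db, slope=1.0):
    """Probability of 'seen' falls as the stimulus gets dimmer (higher dB)."""
    p_seen = 1.0 / (1.0 + 10 ** ((stimulus_db - true_threshold_db) / slope))
    return random.random() < p_seen

def staircase(true_threshold_db, start_db=25, steps=(4, 2)):
    """Simple 4-2 dB staircase: dimmer after 'seen', brighter after 'missed'."""
    level, reversals, last_seen = start_db, 0, None
    step, presentations = steps[0], 0
    while reversals < 2:
        seen = simulated_observer(level, true_threshold_db)
        presentations += 1
        if last_seen is not None and seen != last_seen:
            reversals += 1                      # direction change
            step = steps[min(reversals, len(steps) - 1)]
        level += step if seen else -step
        last_seen = seen
    return level, presentations

est, n = staircase(true_threshold_db=20)
print(f"estimated threshold ~{est} dB after {n} presentations")
```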

-MORE-

Data World Record Falls as Computer Scientists Break Terabyte Sort Barrier.

UCSD News (CA) (07/27/10) Kane, Daniel

University of California, San Diego (UCSD) researchers recently broke the terabyte barrier by sorting more than 1 terabyte of data in 60 seconds. The researchers also were able to sort 1 trillion data records in 172 minutes. “In data centers, sorting is often the most pressing bottleneck in many higher-level activities,” says UCSD professor Amin Vahdat. To break the terabyte barrier the researchers built a system consisting of 52 computer nodes. Each node is a commodity server with two quad-core processors, 24 GB of memory, and 16 500-GB disks. “If a major corporation wants to run a query across all of their page views or products sold, that can require a sort across a multi-petabyte dataset and one that is growing by many gigabytes every day,” Vahdat says. “Companies are pushing the limit on how much data they can sort, and how fast. This is data analytics in real time.”
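
The researchers' implementation is not shown in the article, but record-setting sorts conventionally use a two-phase scheme: range-partition records by key across nodes, sort each partition locally, and concatenate the partitions in order to get a globally sorted output. A single-process sketch of that idea (illustrative only, not the UCSD system):

```python
# Two-phase partitioned sort, simulated in one process. On a real cluster
# phase 1 is the network shuffle and phase 2 runs in parallel per node.
import random

NODES = 4
records = [(random.getrandbits(32), f"payload-{i}") for i in range(100_000)]

# Phase 1: route each record to the node owning its key range.
span = (2 ** 32) // NODES
partitions = [[] for _ in range(NODES)]
for key, payload in records:
    partitions[min(key // span, NODES - 1)].append((key, payload))

# Phase 2: each node sorts its partition independently.
for p in partitions:
    p.sort(key=lambda r: r[0])

# Concatenating partitions in range order yields a globally sorted run.
out = [r for p in partitions for r in p]
assert all(out[i][0] <= out[i + 1][0] for i in range(len(out) - 1))
print(f"sorted {len(out)} records across {NODES} simulated nodes")
```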

-MORE-