August 2012

Supercomputers Solve Riddle of Congenital Heart Defects

University of Copenhagen (08/13/12)

University of Copenhagen researchers used a supercomputer to analyze millions of data points relating to congenital heart defects and found that a wide variety of risk factors influence the molecular biology of heart development.  “The discovery of a biological common denominator among many thousands of risk factors is an important step in health research, which in time can improve the prevention and diagnosis of congenital heart defects,” says Copenhagen professor Lars Allan Larsen.  The researchers analyzed several thousand genetic mutations and environmental risk factors associated with heart malformations with the goal of finding a pattern.  “Our investigations show that many different genetic factors together with environmental factors can influence the same biological system and cause disease,” says Harvard University’s Kasper Lage.  “The results are also interesting in a broader perspective, because it is probable that such interactions are also valid for diseases such as schizophrenia, autism, diabetes, and cancer.”  Copenhagen professor Søren Brunak notes the study’s outcomes illustrate how different combinations of variations in hereditary material can predispose an individual to disease, which may be useful in improving the efficiency of treatment by customizing an optimal approach for each individual patient.


Discovery May Simplify Quantum Computer Development

Computerworld Australia (08/07/12) Byron Connolly

Australian National University (ANU), National University of Singapore (NUS), and University of Queensland researchers have suggested that background interference in quantum-level measurements, known as quantum discord, could be the key to discovering quantum computing’s potential.  Previously, quantum computing researchers had thought that quantum entanglement, a very difficult phenomenon to achieve, was the only way to develop quantum computing technologies.  However, the ANU, NUS, and Queensland researchers have demonstrated that quantum discord, a more robust and more easily accessed phenomenon, also could be used to develop quantum technology.  “In the long term, we want to have a revised understanding of what makes a quantum computer tick,” says ANU professor Ping Koy Lam.  “The hope is that we can simplify how quantum computers work and [make them] more accessible.”  NUS researchers first discovered the direct connection between quantum power and quantum discord, and then ANU scientists encoded information onto laser light to demonstrate the unlocking of this quantum resource.  “These results show that discord has potential that can be unlocked for quantum technologies,” Lam says.


Vipin Kumar to Receive 2012 ACM SIGKDD Innovation Award

CCC Blog (08/06/12) Erwin Gianchandani

University of Minnesota professor Vipin Kumar will receive ACM’s Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD) 2012 Innovation Award at the 18th international ACM SIGKDD Conference.  The award is given to “one individual or one group of collaborators whose outstanding technical innovations in the KDD field have had a lasting impact on advancing the theory and practice of the field.”  SIGKDD notes that “professor Kumar has made numerous significant and impactful contributions to a wide range of core data mining areas including graph partitioning, clustering, association analysis, high performance and parallel data mining, anomaly/change detection, and data-driven discovery methods for analyzing global climate and ecosystem data.”  Kumar’s research group has been at the forefront of developing data-driven discovery methods for analyzing global climate and ecosystem data.  SIGKDD also praises Kumar’s work on change detection in spatio-temporal data, which has advanced the state of the art in monitoring global forest cover using satellite data.  Global-scale application of these techniques has yielded comprehensive histories of large-scale ecosystem changes caused by fires, logging, droughts, floods, and farming, helping researchers understand how such disturbances correspond with global climate variability and human activity.


A New Effective Approach for Image Stylization as Proposed by Researchers from China

Science in China Press (08/04/12) Li Ping

Researchers at the Chinese University of Hong Kong, Shanghai Jiao Tong University, and the Beijing Institute of Technology have developed a structure-aware image stylization method that creates artistic drawing and painting effects from a single digital image as input.  They applied graphics-processing unit (GPU) parallelism to enable real-time non-photorealistic rendering for more efficient processing.  Central to the method is an image structure map, which systematically models the boundary information within the imagery and accentuates the underlying inner structure detail for stylization; this preserves the fine structure between the original and stylized images while avoiding the loss of salience information caused by contrast abstraction.  The researchers note that structure-aware stylized images usually require less time for picture recognition, and they found that viewers judged the stylized test images more attractive.


Mars Rover Curiosity Will Phone Home on NASA’s Interplanetary Internet

Government Computer News (08/06/12) Kevin McCaney

The U.S. National Aeronautics and Space Administration’s (NASA’s) Deep Space Network (DSN) will be key to what scientists learn about Mars.  DSN, which functions like an interplanetary Internet, carries the data collected by the Mars Science Laboratory Curiosity rover about 139 million miles back to Earth.  DSN comprises large dish antenna arrays at three locations approximately 120 degrees apart on Earth.  Each complex has a 70-meter antenna and several 34-meter antennas that provide strong signals and the ability to send and receive large quantities of information.  Mission controllers at the Jet Propulsion Laboratory will communicate with Curiosity primarily via orbiters about 250 miles above Mars.  The rover can more easily connect with the orbiters using UHF software-defined radio.  It also could send signals directly to Earth via its X-band transmitter, but power limitations would restrict this approach to about three hours a day.  During an eight-minute period, Curiosity will be able to transmit about 60 megabits of data to the orbiter, which can then relay the information to Earth.  NASA is working to improve transmission speeds through space and plans to use arrays of smaller antennas to achieve the highest possible data rates.
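The numbers above lend themselves to quick back-of-the-envelope arithmetic (a rough sketch; the function names are illustrative, and the inputs are the article’s own figures):

```python
# Quick arithmetic on the relay-pass and distance figures quoted in the article.

LIGHT_SPEED_MILES_PER_SEC = 186_282

def effective_rate_kbps(megabits: float, minutes: float) -> float:
    """Average data rate (kbit/s) for a relay pass."""
    return megabits * 1000 / (minutes * 60)

def one_way_light_minutes(miles: float) -> float:
    """One-way signal travel time in minutes at the speed of light."""
    return miles / LIGHT_SPEED_MILES_PER_SEC / 60

# 60 megabits transmitted during an 8-minute orbiter pass:
print(f"{effective_rate_kbps(60, 8):.0f} kbit/s")     # 125 kbit/s
# ~139 million miles from Mars to Earth:
print(f"{one_way_light_minutes(139e6):.1f} minutes")  # 12.4 minutes
```

So each relay pass averages roughly 125 kbit/s, and every command or reply takes over twelve minutes to cross the distance one way, which is why the rover must operate largely autonomously between passes.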


National Lab Replaces Supercomputer With Newer, Faster Model

InformationWeek (07/31/12) Patience Wait

Argonne National Laboratory recently started accepting applications from scientists who want to use its new Mira supercomputer, which is ranked the third fastest in the world, has 768,000 processor cores, and operates at more than eight petaflops. Mira’s initial applications include studying the quantum mechanics of new materials, measuring the role and impact of clouds on climate, and modeling earthquakes. Those and 13 other projects are part of Argonne’s Early Science Program and are intended to advance science, as well as evaluate Mira’s performance, according to Argonne’s Mike Papka. “A new architecture with a new system software stack, and at a scale that is larger than anyone else has run previously, results in a system that will have issues never seen before,” Papka says. “These issues need to be exposed and addressed before we go into production, and it often requires real users running real code on the system.” About 60 percent of Mira’s processing cycles will be devoted to projects selected for the U.S. Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment program, and 30 percent will go to projects accepted into the Advanced Scientific Computing Research Leadership Computing Challenge.


Writing Graphics Software Gets Much Easier

MIT News (08/02/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers have developed Halide, a programming language they say creates software that is easier to read, write, and revise than image-processing programs written in a conventional language.  Halide also automates code-optimization procedures, making the coding process much faster than with other languages.  The researchers used Halide to rewrite several common image-processing algorithms that had already been optimized by professional programmers.  They say the Halide versions achieved performance gains of as much as six-fold.  A Halide program has one section for the algorithms and another for the processing schedule, which can specify the size and shape of the image chunks that each core needs to process at each step in the schedule.  After the schedule has been developed, Halide automatically handles the accounting.  “When you have the idea that you might want to parallelize something a certain way or use stages a certain way, when writing that manually, it’s really hard to express that idea correctly,” notes MIT’s Jonathan Ragan-Kelley.  University of California, Davis professor John Owens says Halide “really has all the pieces you want from a completed system, and it’s in a really important application domain.”
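Halide’s central idea, separating what is computed from the order in which it is computed, can be illustrated with a plain-Python analogy (this is not Halide syntax; the toy blur and all names here are hypothetical): the same per-pixel definition is realized under two different traversal schedules, and both yield identical output.

```python
# Illustrative analogy to Halide's algorithm/schedule split (not Halide syntax).
# The "algorithm" defines each output pixel purely as a function of the input;
# the "schedule" decides the traversal order used to realize it.

def blur_at(img, x, y):
    """Algorithm: 3-tap horizontal box blur at one pixel (clamped edges)."""
    w = len(img[0])
    left = img[y][max(x - 1, 0)]
    mid = img[y][x]
    right = img[y][min(x + 1, w - 1)]
    return (left + mid + right) // 3

def schedule_row_major(img):
    """Schedule 1: plain row-major traversal."""
    h, w = len(img), len(img[0])
    return [[blur_at(img, x, y) for x in range(w)] for y in range(h)]

def schedule_tiled(img, tile=2):
    """Schedule 2: visit pixels tile by tile (better locality on real images)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            for y in range(ty, min(ty + tile, h)):
                for x in range(tx, min(tx + tile, w)):
                    out[y][x] = blur_at(img, x, y)
    return out

img = [[0, 30, 60, 90], [90, 60, 30, 0]]
assert schedule_row_major(img) == schedule_tiled(img)  # same algorithm, same result
```

In real Halide the compiler, not hand-written loop nests, generates the tiled, vectorized, or parallel traversal from a declarative schedule, which is what makes it practical to try many schedules while keeping the algorithm provably unchanged.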


Chasing Science as a Service

Texas Advanced Computing Center (07/18/12) Aaron Dubrow

The Texas Advanced Computing Center (TACC) has developed the A Grid and Virtualized Environment (AGAVE) application programming interface (API), which aims to extend the U.S.’s advanced computing resources to a much larger audience.  “When services have been built to that level, research starts moving really fast,” says TACC’s Rion Dooley.  “You can start leveraging manpower and focus exclusively on the science rather than the computation and technology needed to accomplish that science.”  Dooley says AGAVE is a flexible, Web-friendly platform that enables researchers with little programming experience to add functionality to their scientific computing software.  “If we can give thousands of researchers a few percent of their time back, that’s a win,” he says.  AGAVE also gives developers access to some of the U.S.’s most powerful supercomputers to facilitate their research.  For example, AGAVE is being used as part of the iPlant project, leveraging supercomputing resources at the Pittsburgh Supercomputing Center, the San Diego Supercomputer Center, and TACC.  The second major release of the AGAVE API will include support for new types of systems, such as public and private clouds, that will give users faster turnaround times on their experiments.


Hawking Launches Supercomputer

University of Cambridge (07/20/12)

Stephen Hawking launched Europe’s most powerful shared-memory supercomputer during the recent Numerical Cosmology 2012 workshop at the University of Cambridge’s Centre for Mathematical Sciences.  SGI manufactured the COSMOS supercomputer, which will help expand understanding of the universe.  “Cosmology is now a precision science, so we need machines like COSMOS to reach out and touch the real universe, to investigate whether our mathematical models are correct,” Hawking says.  He notes that significant advances have recently been made in cosmology and particle physics, pointing out that finding an ultimate theory would in principle enable researchers to predict everything in the universe.  “Even if we do find the ultimate theory, we will still need supercomputers to describe how something as big and complex as the universe evolves, let alone why humans behave the way they do,” Hawking says.  COSMOS is part of the Science and Technology Facilities Council DiRAC High Performance Computing facility, which serves Britain’s cosmologists, astronomers, and particle physicists, as well as non-academic users.  The current research program of the COSMOS consortium focuses on advancing understanding of the origin and structure of the universe.


Researchers Squeeze GPU Performance From 11 Big Science Apps

HPC Wire (07/18/12) Michael Feldman

The Oak Ridge Leadership Computing Facility published a report in which researchers documented that graphics processing unit (GPU)-equipped supercomputers increased application speeds by a factor of between 1.4 and 6.1 across a range of science applications. The performance gains using GPU-based supercomputers indicate the technology is generating good results across a range of applications. The 11 simulation programs (S3D, Denovo, LAMMPS, WL-LSMS, CAM-SE, NAMD, Chroma, QMCPACK, SPECFEM-3D, GTC, and CP2K) are used by tens of thousands of researchers around the world. The report was written by researchers from Oak Ridge National Laboratory, the National Center for Supercomputing Applications, and the Swiss National Supercomputing Center (CSCS). The researchers ran the programs on CSCS’ Monte Rosa, which has two AMD Interlagos central processing units (CPUs) per node, and TitanDev, which consists of hybrid nodes that each contain one NVIDIA Fermi GPU and one Interlagos CPU. The researchers found that only Chroma fully exploited the performance advantage of GPU-based processing. Meanwhile, another factor to consider in comparing application performance is power usage, since GPU accelerators use about twice as much power as high-end x86-based systems.
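The closing power comparison can be turned into a quick energy estimate. Treating the report’s “about twice as much power” figure as a flat node-level approximation (an assumption; real power draw varies by application), the energy a GPU run consumes per job relative to a CPU-only run scales as the power factor divided by the speedup:

```python
# Back-of-the-envelope energy comparison using the article's figures:
# a GPU-accelerated node draws roughly 2x the power of a CPU-only node,
# but finishes a job speedup-times faster, so energy per job ~ 2 / speedup.

def energy_ratio(speedup: float, power_factor: float = 2.0) -> float:
    """Energy of the GPU run relative to the CPU-only run for one job."""
    return power_factor / speedup

for s in (1.4, 2.0, 6.1):
    print(f"speedup {s}x -> {energy_ratio(s):.2f}x the CPU-only energy")
```

Under this rough model, a speedup above the power factor (here 2x) means the GPU run also saves energy: the 1.4x cases would consume more energy per job, while a 6.1x speedup like Chroma’s would use only about a third of the CPU-only energy.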