September 2012

UCSB Researchers Demonstrate That 15=3×5 About Half of the Time

University of California, Santa Barbara (08/20/12) Andrea Estrada

University of California, Santa Barbara (UCSB) researchers have designed and fabricated a quantum processor that can factor a composite number into its constituent prime factors.  The researchers were able to factor the number 15 into its prime factors, three and five.  “We chose the number 15 because it is the smallest composite number that satisfies the conditions appropriate to test [Peter Shor’s prime factoring] algorithm: it is a product of two prime numbers, and it’s not even,” says UCSB researcher Erik Lucero.  The researchers say their achievement represents a milestone in the effort to build a quantum computer capable of factoring much larger numbers, with ramifications for cryptography and cybersecurity.  “What is important is that the concepts used in factoring this small number remain the same when factoring much larger numbers,” says UCSB professor Andrew Cleland.  The research represents a significant step toward a scalable quantum architecture while meeting a benchmark for quantum computation.  “After repeating the experiment 150,000 times, we showed that our quantum processor got the right answer just under half the time,” Lucero says.
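Shor's algorithm uses the quantum processor only for the hard step, finding the order r of a base a modulo N; turning that order into the prime factors is classical number theory.  The sketch below, with the order found by brute force in place of the quantum step, walks through that post-processing for N = 15 and base a = 7 (the specific base is our choice for illustration; the article does not say which base the UCSB experiment used):

```java
import java.math.BigInteger;

public class ShorClassical {
    // Order finding: smallest r > 0 with a^r = 1 (mod n).  In Shor's
    // algorithm this step runs on the quantum processor; here it is
    // computed classically by brute force for illustration.
    static int order(int a, int n) {
        int r = 1;
        long x = a % n;
        while (x != 1) {
            x = (x * a) % n;
            r++;
        }
        return r;
    }

    // Classical post-processing: with an even order r, the factors of n
    // divide gcd(a^(r/2) - 1, n) and gcd(a^(r/2) + 1, n).
    static int[] factors(int a, int n) {
        int r = order(a, n);
        if (r % 2 != 0)
            throw new IllegalArgumentException("odd order; pick another base");
        BigInteger half = BigInteger.valueOf(a).pow(r / 2);
        BigInteger N = BigInteger.valueOf(n);
        int p = half.subtract(BigInteger.ONE).gcd(N).intValue();
        int q = half.add(BigInteger.ONE).gcd(N).intValue();
        return new int[] { p, q };
    }

    public static void main(String[] args) {
        // For a = 7, N = 15: 7^4 = 2401 = 1 (mod 15), so r = 4, and
        // gcd(7^2 - 1, 15) = 3, gcd(7^2 + 1, 15) = 5.
        int[] pq = factors(7, 15);
        System.out.println(pq[0] + " x " + pq[1]); // prints "3 x 5"
    }
}
```

The quantum speedup lies entirely in the order-finding loop, which is exponential classically but polynomial on a quantum computer; the gcd arithmetic stays the same no matter how large N grows, which is the point Cleland makes above.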


Recreating a Slice of the Universe

Center for Astrophysics (08/15/12) David A. Aguilar

Researchers at the Harvard-Smithsonian Center for Astrophysics (CfA) and the Heidelberg Institute for Theoretical Studies (HITS) have developed Arepo, software that can accurately follow the birth and evolution of thousands of galaxies over billions of years.  “We’ve created the full variety of galaxies we see in the local universe,” says CfA’s Mark Vogelsberger.  Arepo generates a full simulation of the universe, taking as input only the observed afterglow of the Big Bang and evolving forward in time for 14 billion years.  “We took all the advantages of previous codes and removed the disadvantages,” says HITS researcher Volker Springel.  Arepo utilizes a grid that flexes and moves in space to match the motions of the underlying gas, stars, dark matter, and dark energy.  The simulations ran on Harvard’s Odyssey high-performance supercomputer, using 1,024 processor cores, which enabled the program to compress 14 billion years of cosmic evolution into a few months.  “Our simulations improve over previous ones as much as the Giant Magellan Telescope will improve upon any telescope that exists now,” notes CfA’s Debora Sijacki.
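The payoff of a mesh that moves with the gas can be seen in one dimension.  The toy comparison below (our illustration, far simpler than Arepo's 3-D Voronoi mesh) advects a top-hat density profile two ways: on a fixed Eulerian grid with first-order upwind differencing, which smears the profile, and on a mesh whose cells simply ride the flow, which preserves it:

```java
public class MovingMeshDemo {
    // Advect a top-hat density profile at uniform velocity v and return
    // { L1 error on a fixed upwind grid, L1 error on a moving mesh }.
    static double[] run() {
        int n = 100;
        double dx = 1.0 / n, v = 1.0, dt = 0.4 * dx;
        int steps = 60;                    // total drift = 60 * dt = 0.24
        double[] rhoE = new double[n];     // densities on the fixed grid
        double[] xM = new double[n];       // moving-mesh cell centers
        double[] rhoM = new double[n];     // densities on the moving mesh
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) * dx;
            double r = (x > 0.2 && x < 0.4) ? 1.0 : 0.0;
            rhoE[i] = r; rhoM[i] = r; xM[i] = x;
        }
        for (int s = 0; s < steps; s++) {
            // Eulerian: first-order upwind update, numerically diffusive.
            double[] next = new double[n];
            for (int i = 0; i < n; i++)
                next[i] = rhoE[i] - v * dt / dx * (rhoE[i] - rhoE[(i - 1 + n) % n]);
            rhoE = next;
            // Moving mesh: cells drift with the flow; the densities they
            // carry are untouched, so advection is exact.
            for (int i = 0; i < n; i++) xM[i] += v * dt;
        }
        double shift = steps * dt, errE = 0, errM = 0;
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) * dx;
            errE += Math.abs(rhoE[i] - exact(x, shift)) * dx;
            errM += Math.abs(rhoM[i] - exact(xM[i], shift)) * dx;
        }
        return new double[] { errE, errM };
    }

    static double exact(double x, double shift) {
        return (x > 0.2 + shift && x < 0.4 + shift) ? 1.0 : 0.0;
    }

    public static void main(String[] args) {
        double[] e = run();
        System.out.printf("upwind error %.3f, moving-mesh error %.3f%n", e[0], e[1]);
    }
}
```

Real codes must also rebuild the mesh geometry and exchange fluxes as cells deform, which is where Arepo's engineering lives; the sketch only shows why letting the grid follow the gas removes a whole class of advection error.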


Rootbeer Brings GPGPU Integration to Java

bit-tech.net (08/13/12) Gareth Halfacree

Syracuse University researchers recently released the source code for the Rootbeer compiler, a tool designed to make it easier to write code for execution on a graphics processor.  The researchers say Rootbeer enables programmers to access the power of a graphics processing unit directly within Java.  “Rootbeer allows developers to simply write code in Java and the (de)serialization, kernel code generation, and kernel launch is done automatically,” says Syracuse researcher Phil Pratt-Szeliga.  “This is in contrast to Java language bindings for CUDA or OpenCL, where the developer still has to do these things manually.”  The Rootbeer compiler supports all of the standard Java features, except for dynamic method invocation, reflection, and native methods.  During testing, the researchers developed three performance example applications, with the best demonstrating a 100-fold performance boost compared with central processing unit (CPU)-based execution.
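The programming model is the interesting part: the per-thread work lives in an ordinary Java method on an ordinary Java object, and the compiler handles moving object state to and from the GPU.  The sketch below mimics that shape with a stand-in interface and a serial loop in place of the real kernel launch, since running Rootbeer itself requires the library and a CUDA-capable GPU; the names here are illustrative, not Rootbeer's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a Rootbeer-style kernel interface: one object
// per GPU thread, with that thread's work in a single no-argument method.
interface Kernel {
    void gpuMethod();
}

// A kernel that squares one element of a shared array.  Under Rootbeer,
// the compiler would serialize these objects to the GPU, generate CUDA
// from gpuMethod, launch the threads, and deserialize the results back.
class SquareKernel implements Kernel {
    private final int[] data;
    private final int index;

    SquareKernel(int[] data, int index) {
        this.data = data;
        this.index = index;
    }

    public void gpuMethod() {
        data[index] = data[index] * data[index];
    }
}

public class RootbeerSketch {
    public static void main(String[] args) {
        int[] data = { 1, 2, 3, 4 };
        List<Kernel> jobs = new ArrayList<>();
        for (int i = 0; i < data.length; i++)
            jobs.add(new SquareKernel(data, i));
        // Stand-in for the library's kernel launch: run every thread's work.
        for (Kernel k : jobs)
            k.gpuMethod();
        System.out.println(java.util.Arrays.toString(data)); // [1, 4, 9, 16]
    }
}
```

The contrast Pratt-Szeliga draws is that with raw CUDA or OpenCL bindings the programmer would write the (de)serialization of `data` and the launch configuration by hand; here all of that is the compiler's job.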


Climate Science Triggers Torrent of Big Data Challenges

HPC Wire (08/15/12) Dawn Levy

Oak Ridge National Laboratory (ORNL) supercomputers running models to assess climate change ramifications and mitigation tactics are rapidly generating a wide variety of big data in vast volumes. ORNL’s Galen Shipman says climate researchers have significantly boosted the temporal and spatial resolution of climate models as well as their physical and biogeochemical complexity, contributing to the amount of data produced by the models. Shipman notes it often takes weeks or months to analyze the climate models’ data sets with traditional analysis tools, and the Department of Energy’s (DOE’s) Office of Biological and Environmental Research (BER) is striving to address this challenge through multiple projects that have yielded parallel analysis and visualization tools. Shipman also says substantial efforts have been made to deliver the infrastructure to support the geographically distributed data, especially between DOE supercomputing centers, while DOE BER continues to invest in the software technologies needed to maintain a distributed data archive with multiple petabytes of climate data stored worldwide through the Earth System Grid Federation project. Shipman says data movement is the biggest challenge for most current visualization workloads, and he cites in situ analysis, in which visualization and analysis are embedded within the simulation, as a promising approach.
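The in situ idea can be shown in miniature.  Instead of writing every timestep's full field to disk and post-processing it later, the analysis runs inside the simulation loop and reduces each field to a few summary numbers, so only those numbers ever need to move off the machine.  The sketch below is our toy illustration of that pattern, with a random field standing in for a climate model timestep:

```java
import java.util.Random;

public class InSituSketch {
    // Run `steps` fake timesteps over a field of `cells` temperatures and
    // return { time-averaged mean, global maximum } computed in situ.
    static double[] simulate(int cells, int steps, long seed) {
        double[] temp = new double[cells];
        Random rng = new Random(seed);
        double globalMax = Double.NEGATIVE_INFINITY;
        double meanOfMeans = 0.0;
        for (int t = 0; t < steps; t++) {
            // Stand-in for one simulation timestep: fill the field with
            // temperatures drawn uniformly from 280-290 K.
            for (int i = 0; i < cells; i++)
                temp[i] = 280.0 + 10.0 * rng.nextDouble();
            // In situ reduction: O(1) numbers retained per step, instead
            // of writing all `cells` values to disk for later analysis.
            double sum = 0.0, max = Double.NEGATIVE_INFINITY;
            for (double v : temp) {
                sum += v;
                if (v > max) max = v;
            }
            globalMax = Math.max(globalMax, max);
            meanOfMeans += sum / cells / steps;
        }
        return new double[] { meanOfMeans, globalMax };
    }

    public static void main(String[] args) {
        double[] stats = simulate(1_000_000, 10, 42L);
        System.out.printf("mean %.2f K, max %.2f K%n", stats[0], stats[1]);
    }
}
```

The trade-off Shipman's comment points at: the raw fields are never archived, so any statistic not computed during the run is lost, which is why in situ analysis complements rather than replaces the distributed archives of the Earth System Grid Federation.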