October 2012

Signature of Long-Sought Particle That Could Revolutionize Quantum Computing Seen by Purdue Physicist

Purdue University News (09/25/12) Elizabeth K. Gardner

Purdue University professor Leonid Rokhinson is the leader of a team that successfully demonstrated the fractional a.c. Josephson effect, a signature of Majorana fermions that could make fault-tolerant quantum computing a reality.  The particles could potentially encode quantum information in a way that addresses quantum bits’ vulnerability to small disruptions in the local environment.  “Information could be stored not in the individual particles, but in their relative configuration, so that if one particle is pushed a little by a local force, it doesn’t matter,” Rokhinson says.  He also notes Majorana fermions are novel in their ability to retain an interaction history that can be used for quantum information encoding.  “When you swap two Majorana fermions, it leaves a mark by altering their quantum mechanical state,” Rokhinson points out.  “This change in state is like a passport book full of stamps and provides a record of exactly how the particle arrived at its current destination.”  University of Maryland professor Victor Yakovenko notes the observation of the fractional a.c. Josephson effect does not necessarily mean fault-tolerant quantum computing will be realized soon.  However, he says it “could open the door to a whole new field of the topological effects of quantum mechanics.”
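Rokhinson's "passport full of stamps" remark describes the fact that swap operations on such particles do not commute, so the final state records the order in which they happened.  A toy sketch of that idea, using two stand-in 2x2 unitaries (quarter-turn rotations about different axes, purely illustrative, not the actual Majorana braid matrices):

```python
import cmath, math

def apply(u, v):
    # Apply a 2x2 complex matrix u to a state vector v
    return [u[0][0]*v[0] + u[0][1]*v[1], u[1][0]*v[0] + u[1][1]*v[1]]

theta = math.pi / 4
c, s = math.cos(theta / 2), math.sin(theta / 2)
# Stand-in "swap" unitaries: rotations about different Bloch-sphere axes
# (hypothetical illustrations, not the real braid operators)
B1 = [[cmath.exp(-1j*theta/2), 0], [0, cmath.exp(1j*theta/2)]]  # Rz(pi/4)
B2 = [[c, -1j*s], [-1j*s, c]]                                   # Rx(pi/4)

psi0 = [1 + 0j, 0 + 0j]
order12 = apply(B2, apply(B1, psi0))  # swap 1, then swap 2
order21 = apply(B1, apply(B2, psi0))  # swap 2, then swap 1

# The two histories leave distinguishable final states: the order of
# the swaps is "stamped" into the quantum state
differ = any(abs(x - y) > 1e-12 for x, y in zip(order12, order21))
print(differ)
```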


Meet Mira, the Supercomputer That Makes Universes

The Atlantic (09/25/12) Ross Andersen

One of the largest and most complex universe simulations ever attempted will be run in October by Mira, the world’s third fastest supercomputer.  The model will condense more than 12 billion years’ worth of cosmic evolution into two weeks, tracking trillions of particles as they form into the universe’s web-like structure.  Argonne National Laboratory physicist Salman Habib says this structure remains consistent over many universe simulations of increasing scale.  He says supercomputers such as Mira, which has nearly a petabyte of memory, make universe simulations possible thanks to their tremendous capacity and speed.  “If you tried to do a simulation like this on a normal computer, you wouldn’t be able to fit it, and even if you could fit it, if you tried to run it, it would never finish,” Habib notes.  He predicts that next-generation computers may require new models for programming, powering, or error correction because the physical limits of Moore’s Law will have been reached.  “There is some hope that there will be investment [in technologies to exponentially ramp up computer speed], because supercomputer simulations are increasingly being used outside the basic sciences,” Habib says.
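A back-of-envelope estimate shows why the particle state alone overwhelms an ordinary machine.  The figures below are assumptions for illustration (the article says only "trillions of particles" and "nearly a petabyte"):

```python
# Rough memory estimate for an N-body universe simulation.
particles = 1_000_000_000_000            # 1 trillion (low end of "trillions")
bytes_per_particle = 6 * 8               # 3 position + 3 velocity doubles

needed = particles * bytes_per_particle  # ~48 TB just for particle state
mira_memory = 0.75 * 1024**5             # "nearly a petabyte" (~0.75 PiB, assumed)
desktop_memory = 16 * 1024**3            # a well-equipped 2012 desktop, 16 GiB

print(f"particle state alone: {needed / 1024**4:.0f} TiB")
print(f"fits on Mira: {needed < mira_memory}")
print(f"fits on a desktop: {needed < desktop_memory}")
```

Even before forces, grids, or output buffers are counted, the raw state is thousands of times larger than a desktop's memory, which is Habib's point about not being able to "fit it."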


Breakthrough in Bid to Create First Quantum Computer

University of New South Wales (09/20/12) Miles Gough

University of New South Wales (UNSW) researchers say they have created the first working quantum bit based on a single atom in silicon, which could lead to the development of ultra-powerful computers.  The breakthrough enables researchers to both read and write information using the spin of an electron bound to a single phosphorus atom embedded in a silicon chip.  “For the first time, we have demonstrated the ability to represent and manipulate data on the spin to form a quantum bit, or ‘qubit,’ the basic unit of data for a quantum computer,” says UNSW professor Andrew Dzurak.  The finding follows a 2010 study by the same UNSW researchers, who demonstrated the ability to read the state of an electron’s spin.  The discovery of how to write the spin state completes the two-stage process required to operate a quantum bit.  The researchers achieved their result using a microwave field to gain control over an electron bound to a single phosphorus atom.  “We have been able to isolate, measure, and control an electron belonging to a single atom, all using a device that was made in a very similar way to everyday silicon computer chips,” says UNSW’s Jarryd Pla.
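The "write" step can be pictured as a resonant microwave pulse rotating the spin on the Bloch sphere; a pulse of area pi deterministically flips it.  A minimal, idealized sketch (ignoring noise and decoherence, which is exactly what makes real qubits hard):

```python
import math

def rx(theta):
    # Rotation about the x axis of the Bloch sphere: the idealized effect
    # of a resonant microwave drive with total pulse area theta
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def apply(u, v):
    # Apply a 2x2 complex matrix u to a state vector v
    return [u[0][0]*v[0] + u[0][1]*v[1], u[1][0]*v[0] + u[1][1]*v[1]]

spin_down = [1 + 0j, 0 + 0j]                     # "write": start in |0>
after_pi_pulse = apply(rx(math.pi), spin_down)   # a pi-pulse flips the spin

# "read": probability of finding the electron spin in |1>
p_up = abs(after_pi_pulse[1]) ** 2
print(round(p_up, 6))   # 1.0 -- the qubit was deterministically flipped
```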


India Plans Fastest Supercomputer by 2017

Indian Express (09/16/12)

India’s Center for Development of Advanced Computing (C-DAC) has drafted a proposal for developing a range of petaflop and exaflop computers over five years.  The exaflop supercomputers would be at least 61 times faster than the Sequoia, the world’s most powerful supercomputer, which has registered a top computing speed of 16.32 petaflops.  India’s telecom and information technology minister Kapil Sibal shared the roadmap in a letter to Prime Minister Manmohan Singh.  Sibal also wants to return the task of coordinating overall supercomputing activities to the Department of Electronics and Information Technology (DEITY).  The proposal calls for the Indian government to task DEITY with setting up a National Apex Committee to oversee implementation of the proposed Supercomputing Mission, and for C-DAC to establish petaflop and exascale supercomputing facilities and development projects.
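The "at least 61 times faster" figure follows directly from the unit definitions (an exaflop is 1,000 petaflops):

```python
# One exaflop versus Sequoia's recorded peak of 16.32 petaflops
exaflop = 1e18        # floating-point operations per second
sequoia = 16.32e15    # 16.32 petaflops

ratio = exaflop / sequoia
print(f"{ratio:.1f}x")   # ~61.3x
```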


A Network to Guide the Future of Computing

CORDIS News (09/06/12)

The European Network of Excellence on High Performance and Embedded Architecture and Compilation (HiPEAC) aims to steer and increase European research in the area of high-performance and embedded computing systems. A new edition of the HiPEAC Roadmap, which has become a guidebook for the future of computing systems research in Europe, will be published later this year. “We didn’t really set out doing it with that aim in mind, but the [European Commission] took notice of it, consulted with industry on it, found the challenges we had identified to be accurate, and started to use it to focus research funding,” says Ghent University professor Koen De Bosschere. The latest edition of the HiPEAC report concludes that specializing computing devices is the most promising path for dramatically improving the performance of future computing. Therefore, HiPEAC has identified seven concrete research objectives, ranging from energy efficiency to system complexity and reliability, related to the design and the exploitation of specialized heterogeneous systems. “We can only go so far by following current trends and approaches, but in the long run we will nonetheless want and require more processing power that is more reliable, consumes less energy, produces less heat, and can fit into smaller devices,” De Bosschere says.


Southampton Engineers a Raspberry Pi Supercomputer

University of Southampton (United Kingdom) (09/12/12)

University of Southampton researchers have developed Iridis-Pi, a supercomputer made from 64 Raspberry Pi computers and Lego.  “We installed and built all of the necessary software on the Pi, starting from a standard Debian Wheezy system image, and we have published a guide so you can build your own supercomputer,” says Southampton professor Simon Cox.  Iridis-Pi runs off of one 13 Amp mains socket and uses Message Passing Interface to communicate between nodes using Ethernet.  The researchers note the entire system cost less than 2,500 pounds Sterling and includes 64 processors and one terabyte of memory.  The researchers used the Python Tools for Visual Studio plug-in to develop software for the system.  “The team wants to see this low-cost system as a starting point to inspire and enable students to apply high-performance computing and data handling to tackle complex engineering and scientific challenges as part of our ongoing outreach activities,” Cox says.
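The cluster's nodes cooperate through the standard MPI pattern: each rank computes a partial result and the root rank gathers and combines them.  A thread-based sketch of that pattern, with threads standing in for the 64 Pi nodes and a queue standing in for the Ethernet link (the workload is invented for illustration; real Iridis-Pi jobs use an actual MPI library):

```python
import threading, queue

NODES = 64                 # one worker per Raspberry Pi node
inbox = queue.Queue()      # root node's receive channel

def node(rank):
    # Each "node" sums its own slice of 0..6399, then sends (rank, result)
    # to rank 0, as it would with MPI point-to-point sends
    partial = sum(range(rank * 100, (rank + 1) * 100))
    inbox.put((rank, partial))

threads = [threading.Thread(target=node, args=(r,)) for r in range(NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Root gathers and reduces, as MPI_Gather / MPI_Reduce would
total = sum(partial for _, partial in (inbox.get() for _ in range(NODES)))
print(total)   # sum of 0..6399
```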


Researchers Craft Program to Stop Cloud Computer Problems Before They Start

NCSU News (09/10/12) Matt Shipman

North Carolina State University (NCSU) researchers have developed software that prevents performance disruptions in cloud computing systems by automatically identifying and responding to abnormal activities before they develop into larger problems.  The program analyzes the amount of memory being used, network traffic, central processing unit (CPU) usage, and other data in a cloud computing infrastructure to determine what behaviors can be classified as normal.  The software defines normal behavior for every virtual machine in the cloud and then looks for changes that could affect the system’s ability to provide service to its users.  If the program identifies a virtual machine that is acting abnormally, it runs a diagnostic that can determine which metrics are affected without exposing other data.  “If we can identify the initial deviation and launch an automatic response, we can not only prevent a major disturbance, but actually prevent the user from even experiencing any change in system performance,” says NCSU professor Helen Gu.  She notes that once the system is operational, it uses less than one percent of the CPU load and 16 megabytes of memory.  During testing, the system identified 98 percent of anomalies.
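The core idea, learn what "normal" looks like per virtual machine and flag deviations before they grow, can be sketched with a simple statistical threshold.  The z-score method below is an assumption for illustration; the article does not describe NCSU's actual algorithm:

```python
import statistics

def learn_baseline(samples):
    # "Normal behavior" summarized as the mean and spread of one metric
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    # Flag a reading that deviates more than `threshold` standard
    # deviations from the learned baseline
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# e.g. CPU-usage percentages observed while a VM behaves normally
cpu_baseline = learn_baseline([22, 25, 24, 23, 26, 24, 25, 23])
print(is_anomalous(24, cpu_baseline))   # typical reading -> False
print(is_anomalous(91, cpu_baseline))   # sudden spike -> True, investigate
```

A real system would track many metrics per VM (memory, network traffic, CPU), but the principle is the same: catch the initial deviation while it is still invisible to users.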


Intel and HP to Build World’s Most Efficient Supercomputer

Techworld (09/11/12) Sophie Curtis

Researchers at Hewlett-Packard and Intel are developing an energy-efficient supercomputing system for the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL). The system will be powered by a combination of current 32nm Xeon E5 processors and future 22nm Ivy Bridge processors, together with about 600 Xeon Phi co-processors. When fully operational, the system’s total peak performance should exceed one petaflop, making it the largest supercomputer dedicated solely to renewable energy and energy efficiency research. The installation will use warm water liquid cooling technology to maximize the reuse of heat. Excess heat from the system will be guided into neighboring offices and labs and sent to other areas of the NREL campus to reduce central heating costs. The cooling system should help the NREL facility to become the world’s most efficient data center, with a power usage effectiveness (PUE) rating of 1.06 or better. “At NREL, we have taken a holistic approach to sustainable computing,” says NREL’s Steve Hammond. “This new system will allow NREL to increase our computational capabilities while being mindful of energy and water used.” The system is scheduled for completion next summer.
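Power usage effectiveness is the ratio of total facility power to the power delivered to the computing equipment, so a PUE of 1.06 means only 6 percent overhead for cooling, lighting, and power distribution.  The kilowatt figures below are invented purely to illustrate the arithmetic:

```python
def pue(total_facility_kw, it_equipment_kw):
    # PUE = total facility power / IT equipment power (1.0 is the ideal)
    return total_facility_kw / it_equipment_kw

# Illustrative figures (assumed): 1 MW of compute plus 60 kW of overhead
print(round(pue(1060, 1000), 2))   # 1.06
```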