April 2012

Simulating Tomorrow’s Chips

MIT News (04/13/12) Larry Hardesty

Researchers at the Massachusetts Institute of Technology’s (MIT’s) Computer Science and Artificial Intelligence Laboratory have developed Arete, a method for improving the efficiency of hardware simulations of multicore chips.  The researchers say Arete guarantees the simulator will not fall into a deadlock state in which cores get stuck waiting for each other to give up system resources.  Arete also could make it easier for designers to develop simulations and for outside observers to understand what those simulations are meant to accomplish.  The method involves a circuit design that enables the ratio between real clock cycles and simulated cycles to fluctuate as needed, which allows for faster simulations and more economical use of the field-programmable gate array’s (FPGA’s) circuitry.  “What we’re proposing is, instead of having this in your head, let’s start with a specification,” says MIT graduate student Asif Khan.  The researchers’ high-level language, known as StructuralSpec, builds on the Bluespec hardware design language developed at MIT in the late 1990s.  StructuralSpec users provide a high-level specification of a multicore model, and the program produces the code that implements that model on an FPGA.
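The benefit of a fluctuating cycle ratio can be illustrated with a toy model (purely illustrative Python, not Arete's actual design; all names here are hypothetical): each simulated cycle consumes only as many host FPGA cycles as its events require, so simple cycles finish quickly instead of padding out to a fixed worst-case ratio.

```python
# Illustrative sketch (hypothetical, not Arete itself): a simulator in
# which each simulated cycle may take a variable number of host cycles,
# so the host-to-simulated cycle ratio fluctuates with model activity.

def simulate(host_cycles_needed, n_sim_cycles):
    """host_cycles_needed[i] = host cycles required to model simulated cycle i."""
    host_cycles = 0
    for i in range(n_sim_cycles):
        # A complex event (e.g., a modeled cache miss) may need several
        # host cycles; a simple one finishes in a single host cycle.
        host_cycles += max(1, host_cycles_needed[i])
    return host_cycles / n_sim_cycles  # average host/simulated ratio

# A mostly idle model runs near ratio 1; bursts of activity raise it.
print(simulate([1, 1, 4, 1, 2], 5))  # → 1.8
```

With a fixed ratio, every simulated cycle would cost as much as the worst case (here, 4 host cycles); letting the ratio float recovers that slack.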


EU Investigates Internet’s Spread to More Devices

BBC News (04/12/12)

The European Commission (EC) expects rapid growth in the number of household appliances and other devices connected to the Internet by 2020, and as a result is launching a consultation on controls for the way information is gathered, stored, and processed.  The typical person currently has at least two devices connected to the Internet.  However, by 2015 that number is expected to grow to seven Web-connected devices per person, for a total of 25 billion worldwide, and more than 50 billion by 2020.  The EC notes that previous technological advances have led to new legislation, such as the European Union’s (EU’s) Privacy and Electronic Communications Directive, which requires users to give permission before Web sites install tracking cookies in their browsers.  “Technologies like these need to be carefully designed if they are to enhance our private lives, not endanger them,” says EU representative Emma Draper.  “Sharing highly sensitive personal data–like medical information–to a network of wireless devices automatically creates certain risks and vulnerabilities, so security and privacy need to be built in at the earliest stages of the development process.”


AFOSR Seeking “Transformational Computing”

CCC Blog (04/12/12) Erwin Gianchandani

The U.S. Air Force Office of Scientific Research (AFOSR) has launched a basic research initiative that aims to bring together the computational hardware, software, aerospace sciences, physics, and applied mathematics communities to design high-performance computing platforms that support the development of Air Force systems.  The initiative aims to resolve the issues of increasing power consumption and the slowing of Moore’s law.  “There is a clear need for a fundamental basic research program wherein the hardware and algorithms are placed on an equal footing to develop specialized, heterogeneous, and very high-performance systems to answer the computational objectives of the Air Force and the Department of Defense,” AFOSR’s announcement says.  The goal is to develop the fundamental research needed to enable highly focused, potentially heterogeneous computational platforms that offer orders of magnitude better performance for single application areas.  “This effort will be focused on fundamental and interdisciplinary research in computational hardware, computational software, and mathematical models, and is especially interested in work that characterizes the relationship between hardware and algorithms,” the announcement says.


Exascale Storage Group Aims to Bring I/O Up to Speed

HPC Wire (04/05/12) Michael Feldman

The European Open File System consortium has created an Exascale IO Workgroup (EIOW) with the goal of designing and building open source input/output (I/O) middleware that meets the needs of exascale storage.  Xyratex’s Peter Braam, one of the principal drivers behind EIOW, helped facilitate much of the initial discussion on the topic.  “One point that we all agreed with is that we need to start with the applications, putting aside current models, and what is it that applications will require in the exascale era,” Braam says.  He notes the new initiative focuses on what users need, and the increased focus on I/O will address previous imbalances.  Managing data at the exascale demands a new paradigm, which is why exascale development is the focus of the new initiative, Braam says.  The first working group focused on how application programmers envision using huge data stores and what their requirements are.  He says the three most prominent requirements were ways for applications to influence the life cycle of their data in terms of reuse, longevity, and importance; nested metadata that bundles all the data belonging to an application; and schemas to describe metadata and data structure.
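As a purely hypothetical illustration of the "bundle" idea (none of these field names come from EIOW), an application-level bundle might group the data with nested metadata, lifecycle hints, and a schema describing its layout:

```python
# Hypothetical sketch of an application data bundle of the kind the
# working group describes: data plus lifecycle hints, a layout schema,
# and nested sub-bundles.  All field names are invented for illustration.
bundle = {
    "schema": {"fields": [("timestep", "int64"), ("temperature", "float64")]},
    "lifecycle": {"reuse": "high", "longevity_days": 3650, "importance": "archive"},
    "data": [(0, 285.3), (1, 285.7)],
    "children": [],  # nested metadata: sub-bundles for related datasets
}
print(bundle["lifecycle"]["importance"])  # → archive
```

The point of such hints is that the storage system, rather than the application, can then decide placement and retention from declared intent.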


Discovery May Lead to Significantly More Efficient Method of Data Storage

UNL News (04/06/12) Jean Jones

University of Nebraska-Lincoln (UNL) researchers say they have discovered a more efficient method of data storage that could lead to new generations of technology.  The key to the research is the scanning probe microscopy technique, which is based on exerting highly localized mechanical, electrical, or magnetic influence on an object through a tiny physical probe and measuring the object’s response.  The tip of the probe can scan a surface and offer researchers feedback.  The probe also can be used to change the local properties of ferroelectric materials.  Conventionally, applying an electric potential to the probe stores a nanoscale bit of electrical information in the ferroelectric material, a principle central to data storage in devices such as hard disk drives.  The UNL researchers instead switched the material’s polarization mechanically, using pressure from the probe tip.  “It’s a completely voltage-free switching of polarization, which is what makes the results of this research unique,” says UNL’s Alexei Gruverman.  He notes the research opens up a way to store data much more densely than previously possible.  The researchers hope to build on the discovery by investigating possible applications.


Opening the Gate to Robust Quantum Computing

Ames Laboratory (04/09/12)

Researchers at the U.S. Department of Energy’s Ames Laboratory, the University of California, Santa Barbara, and the University of Southern California have developed a method to protect quantum information from degradation by the environment while performing computation in a solid-state quantum system. They say the discovery could lead to robust quantum computation with solid-state devices and to the use of quantum technologies for magnetic measurements with single-atom precision. “The big step forward here is that we were able to decouple individual qubits from the environment, so they retain their information, while preserving the coupling between the qubits themselves,” says Ames researcher Viatcheslav Dobrovitski. Solid-state hybrid systems are important for quantum information processing because they consist of different types of qubits that each perform separate functions. “This type of hybrid system may be particularly good for quantum information processing because electrons move fast, can be manipulated easily, but they also lose quantum information quickly,” Dobrovitski notes. The researchers found a point at which both the electron and the nucleus can be decoupled from their environment while retaining their coupling to each other. They showed that this technique could be used for small-scale quantum information processing.


Quantum Computer Built Inside a Diamond

USC News (04/04/12) Robert Perkins

University of Southern California (USC) scientists and a team of researchers have built a quantum computer in a diamond to demonstrate the viability of solid-state quantum computers.  The quantum computing system featured two quantum bits, known as qubits, made of subatomic particles.  The researchers took advantage of impurities in the diamond, using a rogue nitrogen nucleus as the first qubit and an electron in a lattice flaw as the second qubit.  The researchers say the diamond-based quantum computer is the first to incorporate decoherence protection, using microwave pulses to continually switch the direction of the electron spin rotation.  The researchers demonstrated that the diamond-encased system operates in quantum fashion by having it run Grover’s algorithm, a quantum search of an unsorted database.  The system found the correct answer on the first attempt about 95 percent of the time.  The researchers say the future of quantum computing may reside in solid-state systems because they can be easily scaled up in size, in contrast to earlier gas- and liquid-state systems.
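The ideal behavior the 95 percent figure is measured against can be sketched in a few lines of NumPy: for a two-qubit (four-entry) database, a single Grover iteration (an oracle sign flip followed by inversion about the mean) concentrates all amplitude on the marked entry. The marked index below is arbitrary.

```python
import numpy as np

N = 4                    # 2 qubits → a 4-entry unsorted database
marked = 2               # the "correct answer" (arbitrary choice)
state = np.full(N, 1 / np.sqrt(N))   # uniform superposition over entries

# One Grover iteration: oracle (sign flip on the marked entry),
# then inversion about the mean (the diffusion operator).
state[marked] *= -1
state = 2 * state.mean() - state

# For N = 4 a single iteration is exact: probability 1 of measuring
# the marked entry, versus roughly 95 percent in the USC device.
print(abs(state[marked]) ** 2)  # → 1.0
```

A classical search of four unsorted entries needs 2.25 lookups on average; Grover's algorithm finds the answer in one query, which is why a 95 percent single-attempt success rate is strong evidence of quantum behavior.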


UK’s Fastest Supercomputer to Be Built in Halton

Runcorn and Widnes (04/05/12) Oliver Ellis

Daresbury Laboratory will be the site of the fastest supercomputer in the United Kingdom. The Science and Technology Facilities Council (STFC) announced that it plans to build one of the world’s top software research centers, using the most powerful hardware from IBM. The supercomputer is expected to reach speeds of 1.4 petaflops. According to the International Center of Excellence for Computational Science and Engineering (ICE-CSE), the goal is to make high-performance computing (HPC) accessible for U.K. industry. A representative for Daresbury Laboratory says HPC can aid research and innovation, which companies need to compete effectively. The research center will provide the ability to simulate complex systems, such as mapping the human brain or modeling the Earth’s climate. “The ICE-CSE is a key component of the government’s e-infrastructure initiative,” says professor John Womersley, chief executive of STFC. “It is also essential to the U.K. maintaining its position as a major innovative economy and a global scientific research leader.”


Forecasting a Warming World Via Thousands of PCs

Computerworld (04/02/12) Patrick Thibodeau

Oxford University researchers recently conducted a climate change study using 50,000 PCs to run simulations that were originally written for a high-performance computing system.  The researchers conducted the study on ClimatePrediction.net, which uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework for distributed computing.  ClimatePrediction, which is the only distributed network for climate change research, has more than 500,000 registered hosts, according to Oxford researcher Daniel Rowlands.  He says the research, which is focused on continent-wide changes around the world, would have cost more than $1 million to do on a commercial cloud service.  “We are completely indebted to our volunteers,” Rowlands says.  He notes the study took about 5,000 years of central processing unit (CPU) time.  The BOINC framework also is used for the Rosetta@home project, which is investigating protein folding; PrimeGrid, which conducts mathematical research; and MilkyWay@home, which creates three-dimensional models of the galaxy.
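A quick sanity check on those figures (my arithmetic, not the article's, and assuming an even split across hosts with one core each, which real volunteer machines never achieve) shows why volunteer computing makes such a workload tractable:

```python
# Back-of-the-envelope check: 5,000 CPU-years of simulation spread over
# the 50,000 participating PCs, assuming one core per host and an even,
# fully utilized split (an idealization of real volunteer computing).
cpu_years = 5_000
hosts = 50_000
days_per_host = cpu_years / hosts * 365
print(days_per_host)  # → 36.5
```

Roughly five weeks of background computation per volunteer machine, versus five millennia on a single core.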


Supercomputing Education in Russia

HPC Wire (04/04/12) Vladimir Voevodin

Cultivating a national system for training highly skilled supercomputing professionals is the goal of Russia’s Supercomputing Education project, which has completed its second year, writes Moscow State University’s Vladimir Voevodin.  In its first year the project sought to develop and implement the fundamental elements of such a system in Russia’s top academic institutions, and 62 Russian universities and more than 600 people participated in the effort.  The foundation of the project’s success is the national System of Research and Education Centers for Supercomputing Technologies, whose primary goal is organizing the training and retraining of supercomputing professionals in academia, institutes, industry, and business.  Last year saw the initiation of large-scale training of entry-level specialists on supercomputing technologies, and the development of a knowledge base on parallel computing and supercomputing technologies was a key project component.  Some 37 courses covering the main chapters in the knowledge base were devised and disseminated among universities.  In progress is a broad program for developing and reviewing educational literature on supercomputing technologies, and retraining programs for professors and faculty were deployed in all Russian federal districts in 2011.