May 2012

Tiny Crystal Revolutionizes Computing

University of Sydney (04/26/12) Verity Leatherdale

Researchers at the University of Sydney, the U.S. National Institute of Standards and Technology, Georgetown University, North Carolina State University, and the Council for Scientific and Industrial Research have developed a tiny crystal that enables a computer to perform calculations too difficult for the world’s most powerful supercomputers.  “The system we have developed has the potential to perform calculations that would require a supercomputer larger than the size of the known universe–and it does it all in a diameter of less than a millimeter,” says Sydney’s Michael Biercuk.  The new quantum simulator is potentially faster than any known computer by a factor of 10 to the power of 80, according to the researchers.  They say the crystal goes beyond all previous experimental attempts in providing “programmability” and in reaching the critical threshold of qubits needed for the simulator to exceed the capability of most supercomputers.  The simulator also can be used to gain insights into complex quantum systems.  “We are studying the interactions of spins in the field of quantum magnetism–a key problem that underlies new discoveries in materials science for energy, biology, and medicine,” Biercuk says.
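The scale of the claim follows from a basic fact of quantum simulation: the classical state of n interacting spin-1/2 particles requires 2 to the power of n complex amplitudes, so memory demands explode exponentially.  The sketch below is illustrative only (the spin counts are assumptions, not figures from the article) and shows why a crystal of a few hundred trapped-ion qubits outruns any conceivable classical machine.

```python
# Illustrative sketch (not from the article): the classical cost of
# simulating n interacting spin-1/2 particles grows as 2**n.

def classical_state_size(n_spins: int) -> int:
    """Number of complex amplitudes needed to store the full quantum
    state of n spin-1/2 particles."""
    return 2 ** n_spins

def memory_bytes(n_spins: int, bytes_per_amplitude: int = 16) -> int:
    """Memory (in bytes) to hold that state, assuming one complex
    double (16 bytes) per amplitude."""
    return classical_state_size(n_spins) * bytes_per_amplitude

if __name__ == "__main__":
    # Hypothetical spin counts chosen to show the exponential blow-up.
    for n in (30, 50, 300):
        print(f"{n} spins -> {classical_state_size(n):.3e} amplitudes")
```

At 30 spins the state fits in a workstation’s memory; by a few hundred spins the amplitude count dwarfs the number of atoms in the observable universe, which is the regime the Sydney simulator targets.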


Disruptive Innovation–in Education

KurzweilAI.net (04/20/12)

A new online-learning initiative from the Massachusetts Institute of Technology (MIT) has the potential to reinvent education, says Anant Agarwal, who will head the Open Learning Enterprise and oversee the development of MITx.  Agarwal says MITx is an effort to move the complete MIT classroom experience online, including video lectures, homework assignments, lab work, and a final grade.  The prototype course has more than 120,000 enrollees.  A pioneer in the development of parallel computing, Agarwal also has pursued innovations in online education, and he developed a program called WebSim that enabled students to process real-world electrical signals by assembling virtual circuits on a computer screen rather than physical circuits at a lab bench.  The development of tools such as WebSim will be crucial to the expansion of MITx.  Agarwal says the automation of lectures and grading could give professors and teaching assistants more time to work directly with students, including in open-ended research projects, and tools developed through MITx could enable students to learn in a more interactive fashion, and at their own pace and schedule.  “No one knows how it’s going to evolve,” he says.  “But it has the potential to change the world.”


NASA’s Space Apps Competition Takes on Big Ideas

Government Technology (04/23/12) Sarah Rich

The U.S. National Aeronautics and Space Administration (NASA) recently held its first International Space Apps Challenge, attracting participants from 24 countries for the opportunity to help find solutions for some of the space agency’s biggest challenges.  The two-day apps challenge also focused on software, open hardware, data visualizations, and citizen science platform solutions.  The participants were presented with various challenges, and were able to choose which projects they wanted to work on.  At the San Francisco site, participants worked with project members in other locations by blogging, video chatting, and tweeting.  One team developed a deployment capsule for experiments that would be sent into space, a mechanism that could be used for engaging students in science education.  OpenROV, which developed additional capabilities for an underwater robot prototype, and Daily Myths, an interactive Web tool for learning about astronomy through trivia-style questions, were nominated to advance to compete globally in the NASA competition.  The event is “part of the Obama administration’s open government strategy and the notion there is that if we open information data access to government, to citizens and people, then they will be able to do amazing things with that data,” says NASA’s Linda Cureton.


China Mulls National CPU Architecture Spec

EE Times (04/23/12)

China’s Ministry of Industry and Information Technology recently launched a program that aims to define a national processor architecture.  If the initiative is successful, the processor could become a requirement for use in any projects seeking government funding.  Chinese government officials also recently hosted the first China National Instruction Set Architecture meeting, which is one of several efforts to set Chinese standards and establish Chinese intellectual property (IP) instead of paying for IP from foreign countries.  “I got the impression it’s a matter of months” before the processor group chooses a national standard, says MIPS Technologies’ Robert Bismuth.  Conventional ARM cores are too expensive for some Chinese electronics companies that want lower-cost alternatives, according to an anonymous Chinese executive.  “We understand China’s initial desire to have its own [instruction set architecture (ISA)], and we continue to cooperate and discuss with the key people involved to reach a good solution,” says ARM president Tudor Brown.  The Chinese government wants “a common software ecosystem and the only way to get that is with a common ISA,” Bismuth notes.


CISE Researchers Discuss ‘Security for Cloud Computing’

CCC Blog (04/20/12) Erwin Gianchandani

Researchers at the University of Illinois at Urbana-Champaign and the U.S. National Science Foundation recently organized a workshop on security for cloud computing.  The goal of the workshop was to identify the research challenges in securing cloud computing services and systems and to rally the broader computer science and engineering research community behind the challenges that need to be solved.  The researchers identified challenges in adversary models for cloud computing, delegation and authorization in cloud computing, end-to-end security in cloud computing, and new problems in security for cloud computing.  The workshop brought forward many new challenges in well-known areas of security as well as new security problems that are emerging in the cloud computing domain.  The researchers noted that the effort is needed because clouds are complex systems with hundreds of service dependencies, competing solutions, and multi-tenancy demands, and are being held back by a lack of standards and by pressures on interoperability, bandwidth, and other resources.


This Supercomputer Is Rethinking the Future of Software

Tech Republic (04/18/12) Nick Heath

The International Center of Excellence for Computational Science and Engineering’s Daresbury lab recently installed an IBM BlueGene/Q supercomputer in order to help re-engineer software to run on future computers with millions of cores.  Daresbury lab director Adrian Wander says that most existing software will not run on future machines with millions of cores because of the different hardware architecture.  “There’s a whole bunch of technical issues around the application software that we need to address now if we are going to have applications that will run on these systems in five years’ time,” Wander notes.  He says the rapid increase in the number of cores means that the conventional x86 computer architecture is not an option for supercomputers of the future.  “Today, the Blue Gene/Q has 16 GB of memory per core, if we are going to 1 GB or 0.5 GB per core [in future machines] we’re going to have to do a major redesign of the code,” Wander says.  Future central processing units also will include additional specialized circuitry to aid processing, and new software will be needed to take advantage of the more diverse range of processing units.  Wander notes the additional cores also will give business access to much more processing power.
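The memory pressure Wander describes comes from simple arithmetic: node memory is growing far more slowly than core counts, so the per-core budget shrinks.  The sketch below illustrates that trend; the node-memory and core-count figures are hypothetical, not taken from the article or from BlueGene/Q specifications.

```python
# Back-of-the-envelope sketch of shrinking per-core memory budgets.
# All node sizes below are invented for illustration.

def gb_per_core(node_memory_gb: float, cores_per_node: int) -> float:
    """Memory budget available to each core on a shared-memory node."""
    return node_memory_gb / cores_per_node

if __name__ == "__main__":
    node_memory_gb = 64  # hypothetical fixed node memory
    for cores in (16, 64, 256):
        budget = gb_per_core(node_memory_gb, cores)
        print(f"{cores} cores/node -> {budget:.2f} GB per core")
```

Once the per-core budget falls below what an application’s per-process working set needs, the code must be restructured (for example, to share read-only data among cores) rather than simply recompiled, which is the redesign Wander anticipates.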


A New Breed of Heterogeneous Computing

HPC Wire (04/18/12) Michael Feldman

The foundation of high-performance computing (HPC) is undergoing a revolution, with the introduction of add-on accelerators such as graphics processing units, Intel’s MIC chip, and field-programmable gate arrays (FPGAs), writes Michael Feldman.  However, he says an emerging variant of this heterogeneous computing approach could replace the current accelerator model in the near future.  ARM recently announced its big.LITTLE design, a chip architecture that integrates large, performant ARM cores with small, power-efficient ones.  This approach aims to minimize the power draw in order to extend the battery life of mobile devices.  The big core/little core model was developed by researchers at the University of California, San Diego (UCSD) and Hewlett-Packard Labs in 2003.  “The key insight was that even if you map an application to a little core, it’s not going to perform much worse than running it on a big core,” says UCSD researcher Rakesh Kumar.  The big/little model has both types of cores on the same die, and they share a single, homogeneous instruction set.  Feldman says assigning tasks to cores would be more static for HPC, because maximizing throughput is the overall goal.  The most likely architectures to adopt the big/little model are x86 and ARM.
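The practical consequence of the shared instruction set is that a scheduler can place any task on either core type and only has to decide where it runs best.  The following sketch is an invented illustration of such a policy, not ARM’s or anyone’s actual big.LITTLE scheduler; the compute-intensity metric and threshold are assumptions made for the example.

```python
# Minimal illustrative big/little placement policy (hypothetical).
# Because both core types execute the same ISA, correctness never
# depends on placement; only performance and power do.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compute_intensity: float  # 0.0 (light) .. 1.0 (heavy); invented metric

def assign_core(task: Task, threshold: float = 0.5) -> str:
    """Route compute-heavy tasks to a big core, light ones to a
    power-efficient little core."""
    return "big" if task.compute_intensity >= threshold else "little"

if __name__ == "__main__":
    tasks = [Task("matrix-multiply", 0.9), Task("log-flush", 0.1)]
    for t in tasks:
        print(f"{t.name} -> {assign_core(t)} core")
```

For HPC, as Feldman notes, this decision would typically be made once at launch rather than dynamically at runtime, since sustained throughput, not battery life, is the objective.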