Next Generation of Supercomputers Requires Radical Redesign

Mother Nature Network (01/22/12) Jeremy Hsu

The next generation of exascale supercomputers could complete one billion billion calculations per second, 1,000 times faster than today’s most powerful supercomputers. However, a single exascale system would require power equivalent to the maximum output of the Hoover Dam. Researchers recently gathered to discuss the challenges of supercomputing energy efficiency during a workshop held by the Institute for Computational and Experimental Research in Mathematics (ICERM) at Brown University. “We’ve been increasing computing power by 1,000-fold every few years for a while now, but now we’ve reached the limits,” says ICERM director Jill Pipher. The U.S. Department of Energy wants to develop an exascale supercomputer that would use less than 20 megawatts of power by 2020, which would require a drastic change in computer architecture. Changing the architecture would also require rewriting the software programs that run on conventional computers. “Now, if you’re building these new machines, you’re going to have to try writing programs in different ways,” Pipher says. One promising approach is to use graphics processing units (GPUs) instead of central processing units (CPUs), since a GPU uses roughly one-eighth the energy of a CPU per calculation.
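
As a rough illustration of the efficiency gap those figures imply, the Python sketch below works only from the numbers quoted in the summary (one exaflop per second, a 20-megawatt target, and the roughly eightfold GPU-versus-CPU energy ratio); it is back-of-the-envelope arithmetic, not a model of any real machine.

    # Back-of-the-envelope check of the exascale efficiency target mentioned above.
    EXAFLOP = 1e18            # 1 exaflop/s = one billion billion calculations per second
    POWER_TARGET_W = 20e6     # DOE target: less than 20 megawatts

    # Efficiency an exascale machine would need to hit the 20 MW target
    flops_per_watt = EXAFLOP / POWER_TARGET_W
    print(f"Required efficiency: {flops_per_watt / 1e9:.0f} gigaflops per watt")  # -> 50

    # If a GPU really uses about one-eighth the energy of a CPU per calculation,
    # a CPU-only design would need roughly 8x the power for the same work.
    cpu_only_power_mw = (POWER_TARGET_W * 8) / 1e6
    print(f"Same machine built from CPUs alone: roughly {cpu_only_power_mw:.0f} MW")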

Quantum Mechanics Enables Perfectly Secure Cloud Computing

University of Vienna (01/19/12)

University of Vienna researchers say that secure cloud computing can be achieved using the principles of quantum mechanics and quantum computing. The researchers envision a future in which quantum computing capabilities exist only in a few specialized facilities around the world, and users interact with those facilities to outsource their quantum computations. However, the researchers say that global cloud computing needs to become safer to ensure that users’ data remains private. Quantum computing “can preserve data privacy when users interact with remote computing centers,” says Vienna researcher Stefanie Barz. She says quantum computing enables a user who holds no quantum computational power to delegate a computation to a quantum server while guaranteeing that the user’s data remains private, a process known as blind quantum computing. In blind quantum computing, the user prepares multiple qubits in a secret state and sends them to the quantum computer, which entangles the qubits according to a standard scheme. The user then tailors measurement instructions to the specific state of each qubit and sends them to the quantum server. Finally, the results are sent back to the user, who alone can interpret and use them.
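
The protocol steps above can be pictured as a simple message exchange. The Python sketch below is a purely schematic, non-quantum illustration of that flow; the Client and Server classes, their methods, and the random placeholder "measurements" are invented for illustration and do not simulate any actual quantum state.

    import secrets

    class Client:
        def prepare_qubits(self, n):
            # The client prepares each qubit in a secret state (a random angle
            # stands in for the hidden preparation) and keeps the secrets.
            self.secret_angles = [secrets.randbelow(8) for _ in range(n)]
            return [("qubit", i) for i in range(n)]   # what gets sent to the server

        def measurement_instruction(self, i, desired_angle):
            # Instructions are shifted by the secret preparation, so the server
            # learns nothing about the underlying computation.
            return (desired_angle + self.secret_angles[i]) % 8

        def interpret(self, raw_results):
            # Only the client, knowing the secrets, can undo the blinding.
            return [r ^ (a % 2) for r, a in zip(raw_results, self.secret_angles)]

    class Server:
        def entangle(self, qubits):
            # The server entangles the received qubits per a standard scheme.
            self.register = qubits

        def measure(self, instructions):
            # The server measures as instructed and returns blinded outcomes
            # (random placeholders here).
            return [secrets.randbelow(2) for _ in instructions]

    client, server = Client(), Server()
    server.entangle(client.prepare_qubits(4))
    instructions = [client.measurement_instruction(i, a) for i, a in enumerate([0, 2, 4, 6])]
    results = client.interpret(server.measure(instructions))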

China’s Dark Horse Supercomputing Chip: FeiTeng

HPC Wire (01/19/12) Michael Feldman

Chinese researchers at the National University of Defense Technology (NUDT) are developing the FeiTeng processor, an architecture that could carry Chinese supercomputing past the exascale barrier. FeiTeng was designed specifically for high-performance computing (HPC); its original version delivered a peak performance of 16 gigaflops while consuming just 8.6 watts of power, an energy efficiency of about 1.8 gigaflops per watt. The NUDT researchers also developed a programming language called SF95, which extends Fortran 95 with 10 compiler directives to exploit the architecture. China wants to develop and use domestic microprocessors in its HPC industry, and FeiTeng processors are likely to replace both Intel and NVIDIA chips in a future NUDT supercomputer. China’s most powerful machine at present, Tianhe-1A, is powered by Intel Xeon and NVIDIA Tesla chips.
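
Working only from the figures quoted above, the short Python sketch below checks the implied efficiency and naively scales it to an exaflop; the scaling ignores memory, interconnect, and cooling, so it is illustrative arithmetic only.

    # Quick arithmetic on the FeiTeng figures quoted above (16 gigaflops at 8.6 watts).
    peak_gflops = 16.0
    power_watts = 8.6

    efficiency = peak_gflops / power_watts
    print(f"Energy efficiency: {efficiency:.2f} gigaflops per watt")   # ~1.86 GF/W

    # At that efficiency, scaling naively to an exaflop (10^9 gigaflops) would need
    # far more than the 20 MW exascale target mentioned earlier.
    power_megawatts = 1e9 / efficiency / 1e6
    print(f"Naive exascale power draw: about {power_megawatts:.0f} MW")   # ~538 MW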

Data Mine This: Government Challenges Scientists

InformationWeek (01/11/12) Elizabeth Montalbano

Data-mining experts from academia as well as public- and private-sector organizations have until Feb. 22 to submit applications for the National Institute of Standards and Technology’s (NIST’s) 21st annual Text Retrieval Conference (TREC). The conference will focus on advanced search techniques for large digital data collections, and will feature Contextual Suggestion and Knowledge Base Acceleration as new tracks. Contextual Suggestion will investigate how to search information that depends on context and user interests, while Knowledge Base Acceleration will aim to improve the efficiency of people who maintain knowledge bases by having the system itself monitor data streams and suggest modifications or extensions. The Crowdsourcing, Legal, Medical Records, Microblog, Session, and Web task topics will carry over from TREC 2011. The conference is scheduled for November, and participants will be able to begin data-mining document sets related to the eight tracks in March. NIST will release the tracks and data sets to be mined, and scientists will be able to focus on developing algorithms for finding information in the data collections.

New Storage Device Is Very Small, at 12 Atoms

New York Times (01/12/12) John Markoff

IBM researchers have devised a method for magnetically storing data on arrays composed of just a few atoms, a breakthrough that could lead to a new class of nanomaterials for more power-efficient devices and perhaps open new avenues in quantum computing research. The researchers stored data on a 12-atom structure, whereas the most advanced magnetic storage systems have until now required roughly 1 million atoms to store a single bit. The storage was achieved by configuring two rows of six iron atoms on a surface of copper nitride atoms, using a scanning tunneling microscope. The antiferromagnetic properties of the atomic cluster make such close packing possible. The array was constructed at a temperature near absolute zero, but the researchers say the same experiment could be performed at room temperature with no more than 150 atoms. They also note that smaller atom groups start to exhibit quantum mechanical behavior and could theoretically serve as qubits. Antiferromagnetic materials are currently used in the recording heads of today’s hard disk drives and in the new spin-transfer-torque RAM memory chip. IBM researcher Andreas Heinrich says the design of novel materials using self-assembly techniques is a focus of many research groups.
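
For scale, the sketch below simply compares the two atom counts mentioned in the summary; it is arithmetic only, not a claim about any practical device density.

    # Scale comparison drawn from the figures in the summary above.
    atoms_per_bit_today = 1_000_000   # conventional magnetic storage (approximate)
    atoms_per_bit_ibm = 12            # the IBM antiferromagnetic array

    reduction = atoms_per_bit_today / atoms_per_bit_ibm
    print(f"Roughly {reduction:,.0f}x fewer atoms per stored bit")   # ~83,333x

    # One byte on the experimental scheme would occupy 8 * 12 = 96 atoms.
    print(f"Atoms per byte: {8 * atoms_per_bit_ibm}")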

NASA: Prize Money a Bargain for Better Software

Government Computer News (01/09/12) William Jackson

Researchers at the U.S. National Aeronautics and Space Administration (NASA) and the Harvard Business School in 2010 launched the NASA Tournament Lab, an online platform for contests in which independent programmers compete to create software and algorithms and solve computational problems. “We’re always looking at ways to fill gaps in our technical capabilities,” says NASA’s Jason Crusan. The researchers use the Tournament Lab to commission a program or algorithm for a relatively small amount of prize money. The first challenge presented in the Tournament Lab was developing an algorithm to optimize the contents of the medical kits that accompany astronauts on missions. NASA developed the specifications and 516 programmers worked on the problem. A total of $1,000 in prize money was awarded to the top five performers in each group. The best submission was three times more effective than NASA’s previous algorithm, and NASA is still using it today. “We didn’t think we would have as high a success rate as we’ve had,” Crusan says. “There are a lot of smart people in the world.” NASA has also used crowdsourcing to identify, characterize, and count lunar craters in NASA images.
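
The summary does not describe the winning algorithm, but optimizing a medical kit’s contents is a classic constrained-selection task. The Python sketch below is a generic 0/1-knapsack illustration of that kind of optimization, not NASA’s method; all item names, masses, and benefit scores are invented.

    def best_kit(items, mass_limit):
        """0/1 knapsack: choose items maximizing total benefit within the mass limit."""
        # dp[m] = best achievable benefit using at most m units of mass
        dp = [0.0] * (mass_limit + 1)
        for name, mass, benefit in items:
            for m in range(mass_limit, mass - 1, -1):
                dp[m] = max(dp[m], dp[m - mass] + benefit)
        return dp[mass_limit]

    # Invented example data, purely for illustration.
    items = [
        ("bandages", 2, 5.0),
        ("antibiotics", 3, 8.0),
        ("splint", 4, 6.0),
        ("analgesics", 1, 3.0),
    ]
    print(best_kit(items, mass_limit=6))   # -> 16.0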

Chinese Crunch Human Genome With Videogame Chips

Wired News (01/06/12) Eric Smalley

BGI, the Chinese genomics institute, recently switched to servers that use graphics processing units (GPUs) built by NVIDIA, which enabled it to cut its genome analysis time by more than an order of magnitude. The key to this feat was porting key genome analysis tools to NVIDIA’s GPU architecture, a nontrivial accomplishment that the open source community and others have been working toward, says the Jackson Laboratory’s Gregg TeHennepe. With GPUs, BGI gets faster results from its existing algorithms, or it can use more sensitive algorithms to achieve better results, says bioinformatics consultant Martin Gollery. In addition, GPU-accelerated genome analysis could help researchers in biology and drug development better treat patients. “The researcher now no longer has to own a sequencer or a cluster, and does not have to have employees to manage both of these technologies,” Gollery says. GPU-enabled cloud services will be useful once the data is in the cloud, and cloud service providers are increasingly adding GPU capabilities. Another advantage of GPU-enabled cloud services is that research organizations can test GPU versions of algorithms without having a GPU system in-house, Gollery notes.
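
The summary does not describe BGI’s actual pipeline, but the general pattern of such a port is moving array-style computations from the CPU to the GPU. The Python sketch below illustrates that pattern with a trivial GC-content calculation, assuming the CuPy package is available as the GPU backend; the reads and the computation itself are invented for illustration.

    # Toy illustration of moving a genome-analysis-style array computation to a GPU.
    # This is NOT BGI's pipeline; it counts GC content, first with NumPy on the CPU,
    # then with CuPy on the GPU (assumes the cupy package and a CUDA GPU are available).
    import numpy as np

    reads = ["ACGTGGCA", "TTGCACGT", "GGGCCATA"]          # invented example reads
    data = np.frombuffer("".join(reads).encode("ascii"), dtype=np.uint8)

    # CPU version
    gc_cpu = ((data == ord("G")) | (data == ord("C"))).sum() / data.size
    print(f"GC content (CPU): {gc_cpu:.2f}")

    # GPU version: the same array expression, executed on the device
    import cupy as cp
    data_gpu = cp.asarray(data)
    gc_gpu = ((data_gpu == ord("G")) | (data_gpu == ord("C"))).sum() / data_gpu.size
    print(f"GC content (GPU): {float(gc_gpu):.2f}")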
