NASA Begins Exploring Quantum Computing

Federal Computer Week (11/22/13) Frank Konkel

U.S. National Aeronautics and Space Administration (NASA) researchers have started running applications on a novel machine, the D-Wave Two, to explore quantum computing.  Rupak Biswas with NASA’s Exploration Technology Directorate says the agency’s initiatives on the D-Wave Two have focused on planning missions, scheduling processes, and re-analyzing portions of data collected by the Kepler telescope.  NASA also wants to use D-Wave Two to schedule supercomputing tasks.  For example, Biswas says, the machine should be capable of mining an immense number of node combinations to tell engineers precisely which nodes to use for best results.  D-Wave Two’s calibration complexity means that it takes about a month to boot up, and its 512-qubit Vesuvius processor operates at 20 millikelvin, which is 100 times colder than outer space.  Using the machine requires engineers to map a problem into quadratic unconstrained binary optimization (QUBO) form, and an even bigger challenge is embedding the QUBO model onto the machine’s quantum architecture.  Once this is done, D-Wave Two generates answers as probabilities, and Biswas says the device so far demonstrates quantum tunneling and superposition.  NASA will share access to the machine over the next five years as part of an alliance with Google and the Universities Space Research Association.
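
For readers unfamiliar with the QUBO formulation mentioned above, the sketch below shows what such a problem looks like at toy scale: a small cost matrix and a brute-force search for the lowest-energy binary assignment.  The matrix values are invented for illustration; a quantum annealer such as the D-Wave Two explores this energy landscape physically rather than by enumeration, and real problem mappings are far larger.

# Illustrative sketch only: a tiny quadratic unconstrained binary
# optimization (QUBO) problem solved by brute force.  The Q matrix is
# made up; NASA's actual mission-planning and scheduling mappings, and
# the embedding onto the D-Wave hardware graph, are far more complex.
from itertools import product
import numpy as np

# Symmetric QUBO matrix: diagonal entries are per-variable costs,
# off-diagonal entries are pairwise interaction costs.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 2.0, -1.0,  2.0],
              [ 0.0,  2.0, -1.0]])

def qubo_energy(x, Q):
    """Energy of a binary assignment x under the objective x^T Q x."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Enumerate all 2^n binary assignments and keep the lowest-energy one.
best = min(product([0, 1], repeat=Q.shape[0]), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))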

MORE

Opening Up the Accelerator Advantage

HPC Wire (11/26/13) Tiffany Trader

The U.S. National Science Foundation (NSF) has awarded a grant of nearly $2 million for a project that seeks to move supercomputing capabilities beyond the domain of elite scientists.  Researchers at the Georgia Institute of Technology and the University of Southern California will create tools designed to help developers take advantage of hardware accelerators in a cost-effective and power-efficient manner.  They will make use of tablets, smartphones, and other Internet-era devices, says Georgia Tech’s David Bader.  “We want to take science that used to be only available to elite scientists and bring that to everyone around the planet,” Bader says.  “We are bringing supercomputing to the masses.”  Over the next three years, the researchers will work on different types of optimizations for XScala, their software framework for developing efficient accelerator kernels.  The project also will address security and social network analysis.  In addition, the team will focus on XBazaar, a public software repository and forum that is similar to an app store.  “XBazaar will serve as a one-stop shop for high-performance algorithms and software for multi-core and many-core processors,” according to the NSF announcement.

MORE

Argonne Lab Taking Next Steps to Exascale Computing

InformationWeek (11/26/13) Patience Wait

Scientists working on the Argo project at the Argonne National Laboratory’s Mathematics and Computer Science Division are creating prototype operating system and runtime software for exascale machines, systems that would reach the exaflop mark.  Scientists from the Lawrence Livermore and Pacific Northwest National Laboratories and several universities are collaborating with the Argonne team on the project, which is funded through a $9.75-million Department of Energy Office of Science grant.  Computer chips are no longer gaining performance from higher clock speeds, says Argo program manager Pete Beckman.  “We’ve been turning up the clock every year, but we got to this point at about 3 gigahertz where it really hasn’t gotten any faster,” Beckman says.  “Instead, now companies are making them more parallel.”  Massively parallel processing requires both hardware and software changes, which Argo will address by developing an open source prototype operating system that can run on various architectures.  The researchers aim to have the first prototype systems by the end of the three-year project, but experts predict full-scale exaflop computing will not be feasible until 2018.  The researchers say the additional computing power will enable breakthroughs in the most challenging scientific problems, such as understanding the workings of subatomic particles.
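
Beckman’s point about parallelism replacing clock-speed gains can be shown with a minimal sketch, assuming nothing about Argo itself: the same computation split across several worker processes instead of running on one core.

# Minimal sketch of "more parallel, not faster": one sum split across
# worker processes.  Purely illustrative; unrelated to Argo's actual
# operating system or runtime design.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:   # the chunks are processed on all cores at once
        total = sum(pool.map(partial_sum, chunks))
    print(total)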

MORE

Data Mining Reveals the Secret to Getting Good Answers

Technology Review (12/03/13)

Although question-and-answer websites are hugely popular and can be very helpful, they often have trouble dealing with the massive volume of questions and answers submitted daily.  To help filter the information, many websites let users rank both the questions and the answers, with contributors gaining a reputation as they participate.  Still, it can be difficult to weed out off-topic and irrelevant questions and answers.  However, State Key Laboratory for Novel Software Technology researchers have developed an algorithm that automates this filtering.  “To the best of our knowledge, we are the first to quantitatively validate the correlation between the question quality and its associated answer quality,” says State Key researcher Yuan Yao.  The researchers started the study by examining 2 million questions from 800,000 users who produced more than 4 million answers and 7 million comments on the Stack Overflow website.  They examined the relationship between well-received questions and their answers and found the two to be strongly correlated.  The algorithm can predict the quality score of a question and its expected answers, which enables it to find the best and worst questions and answers.
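
As a rough illustration of the idea, assuming nothing about the researchers’ actual model, the sketch below fits a straight line to a handful of invented (question score, answer score) pairs and uses it to predict an expected answer score for a new question.

# Toy illustration of the correlation idea; the data points and the
# linear fit are invented and are not the researchers' algorithm or
# the Stack Overflow data set described above.
import numpy as np

# (question score, best-answer score) pairs -- fabricated for the sketch.
pairs = np.array([[2, 3], [5, 7], [1, 1], [8, 10], [3, 4], [6, 8]])
q, a = pairs[:, 0], pairs[:, 1]

# Strength of the question/answer quality correlation in the toy data.
print("correlation:", np.corrcoef(q, a)[0, 1])

# Least-squares line: expected answer score as a function of question score.
slope, intercept = np.polyfit(q, a, 1)
print("expected answer score for a question scored 4:", slope * 4 + intercept)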

MORE

Cambridge U Deploys U.K.’s Fastest Academic-Based Supercomputer

Campus Technology (12/11/13) Leila Meyer

The University of Cambridge now has the fastest academic supercomputer in the United Kingdom.  Cambridge has deployed the supercomputer as part of the computing system development in the Square Kilometre Array Open Architecture Lab, which is building the world’s largest radio telescope.  The university partnered with Dell, NVIDIA, and Mellanox to build the system, named Wilkes.  The supercomputer consists of 128 Dell T620 servers and 256 NVIDIA K20 graphics processing units connected by 256 Mellanox Connect-IB cards.  Wilkes has a computational performance of 240 teraflops and ranked 166th on the November 2013 Top500 list of supercomputers.  The system also delivers 3,631 megaflops per watt, which placed it second on the November 2013 Green500 list.  Wilkes uses Mellanox’s FDR InfiniBand solution as its interconnect, with NVIDIA RDMA communication acceleration significantly increasing its parallel efficiency.  The supercomputer will “enable fundamental advances in many areas of astrophysics and cosmology,” says Mellanox’s Gilad Shainer.
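
As a quick back-of-the-envelope check on the figures above, the quoted peak performance and efficiency imply a total power draw of roughly 66 kilowatts; the snippet below shows the arithmetic (both inputs are rounded published numbers, so the result is approximate).

# Implied power draw from the published figures: flops divided by
# flops per watt.  Rounded inputs, so the result is approximate.
peak_flops = 240e12        # 240 teraflops
flops_per_watt = 3_631e6   # 3,631 megaflops per watt
print(f"implied power draw: {peak_flops / flops_per_watt / 1e3:.0f} kW")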

MORE

Massachusetts Launches Open Cloud to Spur Big Data R&D

Government Computer News (12/16/13) Rutrell Yasin

The Massachusetts state government is working with local research universities and technology firms to establish the Massachusetts Open Cloud (MOC), a cloud framework that will serve as a regional hub for big data research and innovation.  The MOC will be a marketplace that enables users to supply, buy, and resell hardware capacity, software, and services.  The framework will focus on analyzing large data sets such as those targeted by the Massachusetts Big Data Initiative.  State officials say the effort should improve computational infrastructure and turn cloud computing and big data analysis into strong, locally hosted industries.  The universities are collaborating with tech firms on the open-cloud technology, which will provide infrastructure as a service, application development, and big data platform services.  The cloud platform should eliminate the time and financial obstacles that make it difficult for small companies to enter markets.  Participants providing services will be responsible for operating them, setting fees, managing shared cloud services, and collecting charges, plus a small overhead to fund MOC operations.  Meanwhile, several research projects are already using the open cloud, including a user interface for the marketplace and a test cloud by Boston University for department-scale user testing.

MORE

‘Approximate Computing’ Improves Efficiency, Saves Energy

Purdue University News (12/17/13) Emil Venere

Purdue University researchers are developing computers capable of approximate computing, meaning they perform imperfect calculations that are good enough for tasks that do not require exact results, potentially doubling efficiency and reducing energy consumption.  “The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency,” says Purdue professor Anand Raghunathan.  The researchers developed a range of hardware techniques to demonstrate approximate computing, showing its potential for improving energy efficiency.  They also have shown how to apply approximate computing to programmable processors, which are found in computers, servers, and consumer electronics.  “In order to have a broad impact we need to be able to apply this technology to programmable processors,” says Purdue professor Kaushik Roy.  “And now we have shown how to design a programmable processor to perform approximate computing.”  The researchers achieved this by altering the instruction set, which is the interface between software and hardware.  Quality fields added to the instruction set let the software tell the hardware the level of accuracy required for a given task.  The researchers also produced a prototype programmable processor based on this approach.
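
A minimal conceptual sketch, not Purdue’s processor design: a per-call “quality” parameter, loosely analogous to the quality fields added to the instruction set, controls how many refinement steps a calculation gets, trading accuracy for less work.

# Conceptual sketch of approximate computing in software.  The 'quality'
# argument stands in, very loosely, for the per-instruction quality
# fields described above; fewer iterations mean less work (and energy)
# but a less accurate result.
import math

def approx_inv_sqrt(x, quality=2):
    """Inverse square root via Newton's method with a tunable number of
    refinement iterations."""
    y = 1.0 / x if x > 1 else 1.0      # crude starting guess
    for _ in range(quality):           # the accuracy-versus-effort knob
        y = y * (1.5 - 0.5 * x * y * y)
    return y

exact = 1.0 / math.sqrt(2.0)
for q in (1, 2, 4):
    err = abs(approx_inv_sqrt(2.0, quality=q) - exact)
    print(f"quality={q}: error={err:.2e}")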

MORE