
UT Austin’s New Supercomputer Stampede2 Storms Out of the Corral in Support of U.S. Scientists

UT News (07/28/17) Faith Singer-Villalobos

The University of Texas at Austin’s (UT Austin) Texas Advanced Computing Center (TACC) has launched Stampede2, the most powerful supercomputer at any U.S. university, which UT Austin president Gregory L. Fenves says will enable researchers “to take on the greatest challenges facing society.” Stampede2 was built with a $30-million National Science Foundation (NSF) grant, and its applications will range from large-scale models and data analyses using thousands of processors simultaneously to smaller computations and interactions via Web-based community platforms. TACC executive director Dan Stanzione predicts Stampede2 “will serve as the workhorse for our nation’s scientists and engineers, allowing them to improve our competitiveness and ensure that UT Austin remains a leader in computational research for the national open science community.” Stampede2 will have a peak performance of 18 petaflops while consuming half the power of Stampede1. It will be made available to researchers via NSF’s Extreme Science and Engineering Discovery Environment (XSEDE).


Google Plans to Demonstrate the Supremacy of Quantum Computing

IEEE Spectrum (05/24/17) Rachel Courtland

Google researchers say they plan to increase the number of superconducting qubits built on integrated circuits (ICs) to create a 7×7 array and push operations to the limits of even the best supercomputers, demonstrating “quantum supremacy” by year’s end. The team says it will perform operations on a 49-qubit system that will trigger chaotic evolution yielding output that appears random, behavior classical computers can simulate only for smaller systems. University of California, Santa Barbara professor John Martinis says the qubits constituting the array also could be employed to build larger “universal” quantum systems with error correction, capable of performing useful tasks such as decryption. Martinis says the challenge of scaling up the quantum IC is maintaining qubit function without losing fidelity or increasing error rates. “Error rate and scaling tend to kind of compete against each other,” he notes. The team also sees the possibility of scaling up systems beyond 50 qubits without error correction.
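Why a 49-qubit array strains even the best supercomputers comes down to memory: a full state-vector simulation of n qubits must store 2^n complex amplitudes, so the cost grows exponentially. The sketch below is an illustrative back-of-the-envelope calculation (not from the article), assuming double-precision complex amplitudes at 16 bytes each.

```python
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to hold a full quantum state vector.

    A system of n qubits has 2**n complex amplitudes; at double
    precision each complex amplitude takes 16 bytes (two 8-byte floats).
    """
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 49):
    petabytes = statevector_bytes(n) / 1e15
    print(f"{n} qubits -> {petabytes:,.6f} PB")
```

At 49 qubits the state vector alone needs roughly 9 PB of memory, which is why direct classical simulation around the 50-qubit mark is considered the threshold for a supremacy demonstration.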



The Next Supercomputing Superpower: Chinese Technology Comes of Age

Asian Scientist (01/03/17) Rebecca Tan

China has held the top spot on the Top500 list of the world’s most powerful supercomputers since June 2013, with growth unmatched by any other country, according to University of Tennessee professor Jack Dongarra. The ascent of China, which did not even appear on the Top500 list until 2001, raised fears its supercomputers would be used for nuclear applications, given the growing reliance on such resources to simulate nuclear tests. Despite a U.S. ban on selling microchips to China, Stony Brook University professor Deng Yuefan says China’s supercomputing progress has continued unabated. One result was the rollout of China’s Shenwei SW26010 chips, which powered the Sunway TaihuLight system to the top of the Top500 list with a Linpack benchmark of 93 petaflops while tripling its predecessor’s energy efficiency. Deng says China is investing in software development to put its supercomputers to good use, as evidenced by the use of Sunway TaihuLight by three of the six finalists for the 2016 ACM Gordon Bell Prize, including the winning team, at the SC16 conference in November. Meanwhile, China also is racing Japan and the U.S. to build the first exascale supercomputers.
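The efficiency-tripling claim can be sanity-checked with simple arithmetic. The Rmax and measured Linpack power figures below come from the public Top500 listings for the two systems, not from this article, so treat them as approximate:

```python
# (Linpack Rmax in flops, measured Linpack power in watts),
# per the public Top500 listings for each system.
systems = {
    "Sunway TaihuLight": (93.0e15, 15.4e6),  # SW26010-based
    "Tianhe-2":          (33.9e15, 17.8e6),  # its predecessor at #1
}

# Energy efficiency in gigaflops per watt.
efficiency = {
    name: flops / watts / 1e9 for name, (flops, watts) in systems.items()
}

for name, gflops_per_watt in efficiency.items():
    print(f"{name}: {gflops_per_watt:.2f} Gflops/W")

ratio = efficiency["Sunway TaihuLight"] / efficiency["Tianhe-2"]
print(f"improvement: {ratio:.1f}x")
```

The ratio works out to roughly 3x, consistent with the tripling the article describes.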


U.S. Exascale Computing Update With Paul Messina

HPC Wire (12/08/16) Tiffany Trader

In an interview, Distinguished Argonne Fellow Paul Messina discusses stewardship of the Exascale Computing Project (ECP), which has received $122 million in funding (with $39.8 million to be committed to 22 application development projects, $34 million to 35 software development proposals, and $48 million to four co-design centers). Messina notes experiments now can be validated in multiple dimensions. “With exascale, we expect to be able to do things in much greater scale and with more fidelity,” he says. Among the challenges Messina expects exascale computing to help address are precision medicine, additive manufacturing with complex materials, climate science, and carbon capture modeling. “The mission [of ECP] is to create an exascale ecosystem so that towards the end of the project there will be companies that will be able to bid exascale systems in response to [requests for proposals] by the facilities, not the project, but the typical DOE facilities at Livermore, Argonne, Berkeley, Oak Ridge, and Los Alamos,” Messina says. In addition to a software stack to meet exascale application needs, Messina says there should be a high-performance computing (HPC) software stack “to help industry and the medium-sized HPC users more easily get into HPC.” Messina also stresses the need for exascale computing to be a sustainable ecosystem.


China’s Policing Robot: Cattle Prod Meets Supercomputer

Computerworld (10/31/16) Patrick Thibodeau

Chinese researchers have developed AnBot, an “intelligent security robot” deployed in a Shenzhen airport. The backend of AnBot is linked to China’s Tianhe-2 supercomputer, where it has access to cloud services. AnBot uses these technologies to conduct patrols, recognize threats, and identify people with multiple cameras and facial recognition. The cloud services give the robots petascale processing power, well beyond the processing capabilities in the robot itself. The supercomputer connection enhances the intelligent learning capabilities and human-machine interface of the devices, according to a U.S.-China Economic and Security Review Commission report that focuses on China’s autonomous systems development efforts. The report found the ability of robotics to improve depends on the linking of artificial intelligence (AI), data science, and computing technologies. In addition, the report notes the simultaneous development of high-performance computing systems and robotic mechanical manipulation gives AI the potential to unleash smarter robotic devices that are capable of learning as well as integrating inputs from large databases. The report says the U.S. government should increase its own efforts in developing manufacturing technology in critical areas, as well as monitoring China’s growing investments in robotics and AI companies in the U.S.


A Billion Billion Calculations per Second: Where No Computer Has Gone Before

South China Morning Post (Hong Kong) (10/29/16) Viola Zhou

China has launched development of its first exascale high-performance computer to maintain its lead position in the global supercomputing race. The system will run at 1,000 petaflops, topping the speed of China’s Sunway TaihuLight computer by a factor of 10. China’s Ministry of Science and Technology has allocated funding to three research institutions to devise prototypes to meet its five-year target of putting an exascale computer into operation. The participating institutions are Sugon, the National University of Defense Technology, and the National Research Center of Parallel Computer Engineering and Technology (developer of Sunway TaihuLight, currently the top supercomputer in the world). University of Science and Technology of China professor An Hong says once the prototypes are finished, the ministry will choose two teams with the best designs to construct the fully functioning exascale system. For the first time this year, China dethroned the U.S. as the country with the most supercomputers in the Top500 ranking. “The demand for computing speed has no limits,” An says. “Now we have the money and technology, we can build better computers for scientists to use.”
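The headline’s “billion billion calculations per second” is just unit arithmetic: one exaflop is 10^18 floating-point operations per second, i.e. 1,000 petaflops. A quick sanity check of the figures quoted in the article:

```python
BILLION = 10 ** 9
EXAFLOP = BILLION * BILLION   # "a billion billion" operations per second
PETAFLOP = 10 ** 15

target = 1000 * PETAFLOP      # the planned exascale system
taihulight = 93 * PETAFLOP    # Sunway TaihuLight's Linpack score

assert EXAFLOP == target      # 1,000 petaflops is exactly one exaflop
print(f"speed-up over TaihuLight: {target / taihulight:.1f}x")
```

The ratio comes out slightly above 10, matching the article’s “factor of 10” as a round figure.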


More Computations for Less Energy

Electronic Specifier (11/03/16) Enaie Azambuja

EUROSERVER, a leading European Union-funded research project, is clearing a path toward lower energy consumption in data centers. EUROSERVER combines the concept of chiplets, where multiple silicon subsystems are mounted in an integrated device, with a new system architecture to enable more energy-efficient servers. The project has yielded system architecture and runtime software innovations that include sharing of peripheral devices, access to system-wide memory, data compression to better use memory, and lightweight hypervisor capabilities. The growing capacity and number of data centers are accompanied by increasing financial and environmental impacts of their energy consumption. EUROSERVER will develop a new type of server that combines efficient and scalable ARM processors with the flexibility of a system-on-chip (SoC) design. “The SoC architectures and advanced packaging solutions being developed bring us one step closer to scalability and power efficiency in data centers,” says EUROSERVER coordinator Isabelle Dor. “We are also delighted that two startups have been created to leverage innovations from the project.” The startups include KALEAO, which has rolled out a new generation of Web-scale, true-converged server appliances with physicalized resource sharing, OpenStack virtualization services, and extreme core density, supporting low energy consumption and significant computing capabilities.