U.S. Exascale Computing Update With Paul Messina

HPC Wire (12/08/16) Tiffany Trader

In an interview, Distinguished Argonne Fellow Paul Messina discusses stewardship of the Exascale Computing Project (ECP), which has received $122 million in funding (with $39.8 million committed to 22 application development projects, $34 million to 35 software development proposals, and $48 million to four co-design centers). Messina notes experiments now can be validated in multiple dimensions. “With exascale, we expect to be able to do things in much greater scale and with more fidelity,” he says. Among the challenges Messina expects exascale computing to help address are precision medicine, additive manufacturing with complex materials, climate science, and carbon capture modeling. “The mission [of ECP] is to create an exascale ecosystem so that towards the end of the project there will be companies that will be able to bid exascale systems in response to [requests for proposals] by the facilities, not the project, but the typical DOE facilities at Livermore, Argonne, Berkeley, Oak Ridge, and Los Alamos,” Messina says. In addition to a software stack that meets exascale application needs, Messina says there should be a high-performance computing (HPC) software stack “to help industry and the medium-sized HPC users more easily get into HPC.” Messina also stresses the need for exascale computing to be a sustainable ecosystem.

Revealed: Google’s Plan for Quantum Computer Supremacy

New Scientist (08/31/16) Jacob Aron

Google expects to have the world’s largest working quantum computer ready soon, with researchers saying the company is on the verge of a breakthrough. Hints were dropped in July when Google published a study announcing a plan to achieve “quantum supremacy” by building the first quantum computer that can perform a task beyond the capabilities of classical computers. Google has publicly announced a 9-quantum-bit (qubit) system, but its goal is a 50-qubit computer that can model the behavior of a random arrangement of quantum circuits. “They’re doing a quantum version of chaos,” says Simon Devitt at Japan’s RIKEN Center for Emergent Matter Science. To establish the target it hopes to surpass, Google has pushed classical simulation of quantum circuit behavior to its limit on the Edison supercomputer, and it has hired University of California, Santa Barbara professor John Martinis to design its superconducting qubits. Devitt thinks quantum supremacy could be achieved by the end of 2017, although meeting the challenge even within the next five years would still be a major accomplishment. Building a 50-qubit quantum device would be the first step toward a fully scalable machine, which Devitt says will indicate the technology is “ready to move out of the labs.”
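
The random-circuit task is hard classically because brute-force simulation must track all 2^n amplitudes of an n-qubit state, so memory alone becomes prohibitive near 50 qubits. Below is a minimal state-vector sketch of that scaling, assuming NumPy; the gate choices and circuit depth are illustrative only, not Google’s actual benchmark.

```python
# Brute-force state-vector simulation of a random circuit.
# Memory for n qubits: 2**n complex128 amplitudes (16 bytes each),
# so ~16 TiB at n = 40 and ~16 PiB at n = 50 -- far beyond a single machine.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to the target qubit of an n-qubit state vector."""
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

def random_circuit(n_qubits, depth, rng):
    """Simulate `depth` layers of random single-qubit rotations (illustrative only)."""
    state = np.zeros(2**n_qubits, dtype=np.complex128)
    state[0] = 1.0  # start in |00...0>
    for _ in range(depth):
        for q in range(n_qubits):
            theta = rng.uniform(0, 2 * np.pi)
            gate = np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]], dtype=np.complex128)
            state = apply_single_qubit_gate(state, gate, q, n_qubits)
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state = random_circuit(n_qubits=20, depth=10, rng=rng)  # 20 qubits is easy; 50 is not
    print("state vector entries:", state.size)               # 2**20 = 1,048,576
    print("memory at 50 qubits (PB):", 2**50 * 16 / 1e15)    # ~18 PB
```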

Chameleon: Why Computer Scientists Need a Cloud of Their Own

HPC Wire (05/05/16) Tiffany Trader

In less than a year of operation, the U.S. National Science Foundation-funded Chameleon cloud testbed has contributed to innovative research in high-performance computing (HPC) containerization, exascale operating systems, and cybersecurity. Chameleon principal investigator Kate Keahey, a Computation Institute fellow at the University of Chicago, describes the tool as “a scientific instrument for computer science where computer scientists can prove or disprove hypotheses.” Co-principal investigator Dan Stanzione, executive director of the Texas Advanced Computing Center at the University of Texas at Austin, says Chameleon can meet the oft-denied request from the software or computer science research community to make fundamental changes to the way the machine operates. With Chameleon, users can configure and test distinct cloud architectures on various problems, such as machine learning and adaptive operating systems, climate modeling, and flood prediction. Keahey says support for research at multiple scales was a key design element of the instrument. One project using Chameleon involves comparing the performance of containerization and virtualization as they apply to HPC applications. Keahey says it is “a good example of a project that really needs access to scale.” Another major Chameleon user is the Argo Project, an initiative for designing and prototyping an exascale operating system and runtime.
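
At its simplest, the containerization-versus-virtualization comparison mentioned above comes down to running the same benchmark on bare metal and inside a container and comparing wall-clock times. A rough sketch of that kind of measurement follows; the container image and benchmark command are placeholders rather than the actual Chameleon experiment, and container start-up time is included in the timing.

```python
# Time the same benchmark natively and inside a Docker container.
# Image name and benchmark command are placeholders for illustration;
# the containerized timing also includes container start-up overhead.
import subprocess
import time

BENCH_CMD = ["python3", "-c", "sum(i * i for i in range(10**7))"]  # stand-in benchmark
IMAGE = "python:3.11-slim"                                         # placeholder image

def timed(cmd):
    """Run a command and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

if __name__ == "__main__":
    native = timed(BENCH_CMD)
    containerized = timed(["docker", "run", "--rm", IMAGE] + BENCH_CMD)
    print(f"native:        {native:.3f} s")
    print(f"containerized: {containerized:.3f} s")
    print(f"overhead:      {100 * (containerized / native - 1):.1f} %")
```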

The Exascale Revolution

HPC Wire (10/23/14) Tiffany Trader

Experts are coming to a consensus that the shift from the petascale era to the exascale era of supercomputing will be more challenging than many previously anticipated. At the recent Argonne National Laboratory Training Program in Extreme Scale Computing, Pete Beckman, director of Argonne’s Exascale Technology and Computing Institute, highlighted some of the possible problems. One major concern is power and the costs associated with it. Although supercomputers have been getting more energy-efficient, Beckman uses the most recent generations of IBM supercomputers to show that even on a 5x trajectory of energy-efficiency gains, an exascale system would still require 64 megawatts of power, which could cost tens of millions of dollars a year. These cost concerns are prompting many countries to pursue exascale computing on an international scale, forming multinational partnerships to share the massive costs. The U.S. and Japan recently entered such an agreement, and Europe is looking to join them. However, China is proceeding on its own, largely on the strength of its homegrown technology. Beckman also addressed challenges relating to memory and resilience and the need to update software to make use of exascale resources.
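
Beckman’s power figure translates directly into operating cost: at an assumed industrial electricity rate of roughly $0.07 per kilowatt-hour (actual facility rates vary), a 64-megawatt machine running around the clock lands squarely in the tens of millions of dollars a year. A back-of-the-envelope check:

```python
# Back-of-the-envelope annual electricity cost for a 64 MW exascale system.
# The $/kWh rate is an assumption; actual facility rates vary by site.
POWER_MW = 64
HOURS_PER_YEAR = 24 * 365              # 8,760 hours
RATE_USD_PER_KWH = 0.07                # assumed industrial rate

energy_kwh = POWER_MW * 1000 * HOURS_PER_YEAR   # 560,640,000 kWh
annual_cost = energy_kwh * RATE_USD_PER_KWH     # ~$39 million
print(f"annual energy: {energy_kwh:,.0f} kWh")
print(f"annual cost:   ${annual_cost:,.0f}")
```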

New Degrees of Parallelism, Old Programming Planes

HPC Wire (08/28/14) Nicole Hemsoth

Exploiting the capabilities of HPC hardware is now more a matter of pushing into deeper levels of parallelism than of adding more cores or raising clock speeds, which means the time is ripe for a revolution in programming. The question is whether that revolution should torch the landscape or handle things “diplomatically” with the existing infrastructure.
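
One concrete reading of “deeper levels of parallelism” is data-level (vector/SIMD) parallelism within each core, on top of parallelism across cores and nodes. The toy comparison below uses NumPy purely as a stand-in for vectorized execution; the measured gap is mostly interpreter overhead, but the structural point, expressing the computation so the runtime can use data-parallel hardware, is the same one facing HPC codes.

```python
# Scalar loop vs. vectorized kernel: the same reduction, two levels of parallelism.
# NumPy is used only as a convenient stand-in for vectorized/SIMD execution.
import time
import numpy as np

x = np.random.default_rng(0).random(10_000_000)

start = time.perf_counter()
scalar_sum = 0.0
for value in x:                    # one element at a time, no data parallelism
    scalar_sum += value * value
scalar_time = time.perf_counter() - start

start = time.perf_counter()
vector_sum = float(np.dot(x, x))   # vectorized kernel over the whole array
vector_time = time.perf_counter() - start

print(f"scalar loop: {scalar_time:.2f} s, vectorized: {vector_time:.4f} s")
print(f"results agree: {abs(scalar_sum - vector_sum) < 1e-6 * vector_sum}")
```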

Big Data Reaches to the Stratosphere

HPC Wire (04/03/14) Tiffany Trader

A position paper by Berlin Technical University professor Volker Markl, developed at the recent Big Data and Extreme-scale Computing workshop, lays out the goals and challenges of big data analytics. “Today’s existing technologies have reached their limits due to big data requirements, which involve data volume, data rate and heterogeneity, and the complexity of the analysis algorithms, which go beyond relational algebra, employing complex user-defined functions, iterations, and distributed state,” Markl writes. Addressing this requires bringing declarative language concepts to big data systems. The effort presents several challenges, however, including designing a programming language specification that does not demand systems programming skills, mapping programs expressed in this language onto a computing platform of the user’s choosing, and executing them in a scalable fashion. Markl says next-generation big data analytics frameworks such as Stratosphere can enable deeper data analysis. Stratosphere combines the advantages of MapReduce/Hadoop with programming abstractions in Java and Scala and a high-performance runtime to facilitate massively parallel in-situ data analytics. Markl says Stratosphere is so far the only big data analytics system featuring a query optimizer for advanced data analysis programs that transcend relational algebra, and the goal is to enable data scientists to concentrate on their main task without spending too much time on achieving scalability.
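
The abstraction Markl describes, user-defined map and reduce functions composed into a dataflow the system can parallelize and optimize, can be sketched in a few lines. Stratosphere exposes such operators through its Java and Scala APIs; the Python below is only a conceptual stand-in, not Stratosphere’s API.

```python
# Word count expressed as user-defined map and reduce functions over a dataflow.
# A conceptual stand-in for the map/reduce-style abstractions described above,
# not Stratosphere's actual Java/Scala API.
from collections import defaultdict
from typing import Iterable, Iterator, Tuple

def mapper(line: str) -> Iterator[Tuple[str, int]]:
    """User-defined function: emit (word, 1) for every word in a line."""
    for word in line.lower().split():
        yield word, 1

def reducer(word: str, counts: Iterable[int]) -> Tuple[str, int]:
    """User-defined function: aggregate all counts for one key."""
    return word, sum(counts)

def run_dataflow(lines: Iterable[str]) -> dict:
    """Toy single-node 'runtime': shuffle map output by key, then reduce."""
    groups = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

if __name__ == "__main__":
    sample = ["big data reaches to the stratosphere", "big data analytics"]
    print(run_dataflow(sample))  # {'big': 2, 'data': 2, ...}
```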

Researchers Implement HPC-First Cloud Approach

HPC Wire (01/29/14) Tiffany Trader

North Carolina State University researchers have demonstrated a proof of concept for a novel high-performance cloud computing platform by merging a cloud computing environment with a supercomputer. The implementation shows that a fully functioning production cloud computing environment can be embedded entirely within a supercomputer, allowing users to benefit from the underlying high-performance computing hardware infrastructure. The supercomputer’s hardware provided the foundation for a software-defined system capable of supporting a cloud computing environment. This “novel methodology has the potential to be applied toward complex mixed-load workflow implementations, data-flow oriented workloads, as well as experimentation with new schedulers and operating systems within an HPC environment,” the researchers say. The software utility package Kittyhawk serves as a provisioning engine and offers basic low-level computing services within a supercomputing system; it is what allowed the researchers to construct an embedded elastic cloud computing infrastructure within the supercomputer. The HPC-first design approach to cloud computing leverages the “localized homogeneous and uniform HPC supercomputer architecture usually not found in generic cloud computing clusters,” according to the researchers. This type of system has the potential to support multiple workloads, including traditional HPC simulation jobs, workflows that combine HPC and non-HPC analytics, and data-flow oriented work.
