May 2016

U.S. Efforts to Build Next-Gen Supercomputer Take Shape

Computerworld (04/25/16) Patrick Thibodeau

The U.S. government has set a 2023 deadline to develop an exascale computer that can solve science problems 50 times faster than is possible with currently available 20-petaflop computers, while consuming between 20 MW and 30 MW of power. “The U.S. faces serious and urgent economic, environmental, and national security challenges based on energy, climate, and growing security threats,” says the U.S. Department of Energy (DOE). “High-performance computing (HPC) is a requirement for addressing such challenges, and the need for the development of capable exascale computers has become critical for solving these problems.” The government plans to spend almost $300 million on exascale system development in 2016, and its proposed 2017 budget calls for a slightly larger allocation. Published DOE planning documents estimate the total cost of an exascale system at about $3 billion. University of Tennessee professor Jack Dongarra notes China is pursuing 100-petaflop systems in two separate projects, and an announcement is expected soon. The competition to build an exascale system “is really up for grabs at this point,” says IDC analyst Steve Conway.
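The cited speedup is a straightforward unit calculation: one exaflop is 10^18 floating-point operations per second, exactly 50 times the throughput of a 20-petaflop machine. A quick check:

```python
# Verify the cited 50x figure: 1 exaflop vs. a 20-petaflop system.
petaflop = 1e15  # floating-point operations per second
exaflop = 1e18

speedup = exaflop / (20 * petaflop)
print(speedup)  # prints 50.0
```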


Nvidia GPU-Powered Autonomous Car Teaches Itself to See and Steer

Network World (04/28/16) Steven Max Patterson

An Nvidia engineering team built an autonomous car that combines a camera, a Drive-PX embedded computer, and 72 hours of training data. The researchers trained a convolutional neural network (CNN) to map raw pixels from the camera directly to steering commands. The training system used three cameras and two computers to capture three-dimensional video images, along with the corresponding steering angles, from a vehicle driven by a human. With changes in the steering angle serving as the training signal, the CNN learned internal representations of the processing steps of driving by mapping the human driving patterns onto the bitmap images recorded by the cameras. The open source machine-learning framework Torch 7 was used to train the network, which then autonomously recognized the road, other vehicles, and obstacles in order to steer the test vehicles. The system learned to see and steer by comparing the steering commands the CNN produced, in simulated response to the 10-frames-per-second images captured from the human-driven car, against the human steering angles. On-road testing showed CNNs can learn lane detection and road following without manually and explicitly deconstructing the task into classifying road or lane markings, semantic abstraction, path planning, and control.
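The end-to-end idea above — raw pixels in, a single steering angle out, with the human's steering angle as the training signal — can be sketched roughly as follows. This is an illustrative toy in NumPy, not Nvidia's actual network or its Torch 7 code; the image size, kernel shape, and variable names are all invented for the example.

```python
# Toy sketch of end-to-end steering: a convolutional feature map feeds a
# linear readout producing one steering value, scored against the human angle.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def predict_steering(img, kernel, weights, bias):
    feat = np.maximum(conv2d(img, kernel), 0.0)  # ReLU feature map
    return float(feat.ravel() @ weights + bias)  # single steering angle

# Hypothetical data: a 16x16 "camera frame" and the human's steering angle.
frame = rng.random((16, 16))
human_angle = 0.1

kernel = rng.standard_normal((3, 3)) * 0.1
weights = rng.standard_normal(14 * 14) * 0.01   # 14x14 = valid conv output
bias = 0.0

pred = predict_steering(frame, kernel, weights, bias)
loss = (pred - human_angle) ** 2  # the training signal to minimize
```

In the real system this loss would be backpropagated through many convolutional layers over millions of frames; the sketch only shows the shape of the pixels-to-angle mapping.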


Future Directions for NSF Advanced Computing Infrastructure Report Now Available

National Science Foundation (05/04/16) Aaron Dubrow

A just-issued report, commissioned by the U.S. National Science Foundation (NSF) and conducted by the National Academies of Sciences, Engineering, and Medicine, examines priorities and associated trade-offs for advanced computing investments and strategy. The report, Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020, drew on community input from more than 60 individuals, research groups, and organizations. NSF’s 2013 study request prompted the Computer Science and Telecommunications Board to organize a committee to make recommendations on establishing a framework to position the U.S. for continued science and engineering leadership, guarantee resources fulfill community needs, help the scientific community keep pace with the revolution in computing, and sustain the advanced computing infrastructure. NSF has funded a cyberinfrastructure ecosystem combining superfast and secure networks, cutting-edge parallel computing, efficient software, state-of-the-art scientific instruments, and massive datasets with expert staff across the country. The agency requested $227 million for its 2016 advanced cyberinfrastructure budget, up from $211 million in 2014. “[The report’s] timing and content give substance and urgency to NSF’s role and plans in the National Strategic Computing Initiative,” says NSF’s Irene Qualters.


Chameleon: Why Computer Scientists Need a Cloud of Their Own

HPC Wire (05/05/16) Tiffany Trader

In less than a year of operation, the U.S. National Science Foundation-funded Chameleon cloud testbed has contributed to innovative research in high-performance computing (HPC) containerization, exascale operating systems, and cybersecurity. Chameleon principal investigator Kate Keahey, a Computation Institute fellow at the University of Chicago, describes the tool as “a scientific instrument for computer science where computer scientists can prove or disprove hypotheses.” Co-principal investigator Dan Stanzione, executive director of the Texas Advanced Computing Center at the University of Texas at Austin, says Chameleon can meet a request often denied to the software and computer science research community: permission to make fundamental changes to the way the machine operates. With Chameleon, users can configure and test distinct cloud architectures on various problems, such as machine learning and adaptive operating systems, climate modeling, and flood prediction. Keahey says support for research at multiple scales was a key design element of the instrument. One project using Chameleon compares the performance of containerization and virtualization for HPC applications; Keahey calls it “a good example of a project that really needs access to scale.” Another major Chameleon user is the Argo Project, an initiative for designing and prototyping an exascale operating system and runtime.