December 2010

Intel: Why a 1,000-Core Chip Is Feasible.

ZDNet UK (12/25/10) Jack Clark

Intel has developed experimental chips with 48 and 80 cores through its Terascale Computing Research Program. At the Supercomputing Conference 2010, Intel’s Timothy Mattson claimed that the Terascale Program’s 48-core chip could theoretically scale to 1,000 cores. The 48-core chip’s architecture could support 1,000 cores because it forgoes hardware cache coherency and the overhead that comes with it, Mattson says. “The challenge this presents to those of us in parallel computing at Intel is, if our [fabrication department] could build a 1,000-core chip, do we have an architecture in hand that could scale that far?” he says. “And if built, could that chip be effectively programmed?” Mattson says that there is no theoretical limit to the number of cores that can be used, but the practical number depends on how much of the program can be parallelized, and how much overhead and load imbalance a program can tolerate. He says a key question is whether there are usage models and applications that need that many cores. “As I see it, my job is to understand how to scale out as far as our fabs will allow and to build a programming environment that will make these devices effective,” Mattson says. “I leave it to others in our applications research groups and our product groups to decide what number and combination of cores makes the most sense.”
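Mattson’s point about parallelizable fraction versus core count is Amdahl’s law. As a rough sketch (the formula is standard; the 95 percent figure below is an illustrative assumption, not from the article):

```python
# Amdahl's law: ideal speedup on n cores when a fraction p of the
# program's work can be parallelized (the rest stays serial).
def amdahl_speedup(p, n):
    """Return the ideal speedup of an n-core run over a 1-core run."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 1,000 cores, a program that is 95% parallel (an assumed
# figure for illustration) tops out at roughly 20x speedup -- the
# serial 5% dominates.
print(round(amdahl_speedup(0.95, 1000), 1))  # 19.6
```

This is why Mattson stresses that core count alone is not the limiter: the program’s structure is.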


Cloud Essential to R&D in Australia: NICTA.

Computerworld Australia (12/10/10) Chloe Herrick

National ICT Australia (NICTA) is using cloud computing to access computational and storage resources at an unprecedented scale. “The ability to process literally billions and billions of records of data at a very short completion time means we can conduct science experiments in particular domains that we haven’t been able to do so before,” says NICTA principal research leader Anna Liu. “The other value of cloud computing is we can use it right now, we do not necessarily have to spend a lot of time to secure a large infrastructure grant in order to build up our own compute clusters and then to do science experiments with it.” Microsoft recently announced a partnership with NICTA, the Australian National University, and the Commonwealth Scientific and Industrial Research Organization to provide the organizations with three years of free access to Microsoft’s Windows Azure Cloud computing platform. “What we need to do, is let scientists be scientists, they don’t want to be system administrators, they want to focus on the science and be able to access very large amounts of data and the tools to analyze that data in easy ways, and they want to be able to do it from their desktop,” says Microsoft Research director Dennis Gannon.


IBM Xeon-Based Supercomputer to Hit Three Petaflops.

eWeek Europe (United Kingdom) (12/14/10) Matthew Broersma

IBM plans to build an Intel Xeon-based supercomputer that will reach a peak speed of three petaflops and use a hot-water cooling system, which will result in 40 percent less power consumption than an air-cooled machine. The system, called SuperMUC, will be housed at the Leibniz Supercomputing Centre (LRZ) as part of the Partnership for Advanced Computing in Europe high-performance computing (HPC) infrastructure, according to Intel. The SuperMUC system will be IBM’s second water-cooled supercomputer, following the Aquasar system that was set up at the Swiss Federal Institute of Technology Zurich in July. IBM’s hot-water cooling technique cools HPC components with warm water and uses micro-channel liquid coolers attached directly to the processors. Water generally removes heat 4,000 times more efficiently than air, according to Intel. “SuperMUC will provide previously unattainable energy efficiency along with peak performance by exploiting the massive parallelism of Intel’s multicore processors and leveraging the innovative hot-water cooling technology pioneered by IBM,” says LRZ’s Arndt Bode. The system will use more than 14,000 Xeon processors. IBM’s development team will rely on Intel researchers for energy-efficiency contributions, and LRZ researchers for their expertise in high-end supercomputing systems.


Japanese Supercomputer Gets Faster But Draws No More Power.

Computerworld (12/06/10) Martyn Williams

The Tokyo Institute of Technology recently developed Tsubame 2.0, a high-performance computer that is the second most energy-efficient supercomputer in the world, while achieving a peak performance of 2.4 petaflops. Tsubame 2.0 runs on a combination of central processing units (CPUs) and graphics processing units (GPUs), which specialize in quickly performing computations on large amounts of data while using much less power than CPUs. The university’s CIO put a limit on how much electricity and physical space the researchers could use to build their new supercomputer. “It wasn’t the money, it wasn’t the space, it wasn’t our knowledge or capability, it was the power that basically was the limiter,” says Satoshi Matsuoka, director of the university’s Global Scientific Information and Computing Center. Tsubame 2.0 features 1,408 computing nodes equipped with GPUs of 448 processing cores each, which results in nearly 1.9 million GPU cores and gives Tsubame 2.0 much of its power. The machine ranked fourth in the recent Top500 supercomputing list and second in the Green 500 energy-efficiency list.
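The roughly 1.9 million figure can be sanity-checked with simple arithmetic, assuming the configuration reported in contemporaneous coverage of three 448-core NVIDIA Tesla GPUs per node (the per-node GPU count is an assumption, not stated in this article):

```python
# Back-of-the-envelope check of Tsubame 2.0's GPU-core total.
nodes = 1408          # computing nodes, per the article
gpus_per_node = 3     # assumed from contemporaneous reports
cores_per_gpu = 448   # cores per GPU, per the article

total_gpu_cores = nodes * gpus_per_node * cores_per_gpu
print(total_gpu_cores)  # 1892352 -- i.e., nearly 1.9 million
```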


Intel Charts Its Multicore and Manycore Future for HPC.

HPC Wire (12/01/10) Michael Feldman

Intel recently mapped out its strategy for multicore and manycore architectures. Intel’s Rajeeb Hazra says the company’s objective “is to bring to the high-performance computing (HPC) marketplace innovations that drive essentially all of HPC, from the very high end of supercomputing to volume workstations.” Intel’s new Many Integrated Core (MIC) architecture is positioned to be a primary tool in that effort, with Hazra noting that it will form the foundation for the company’s manycore processor design for the next 10 years and beyond. MIC is designed to pack many floating-point operations into an extremely energy-efficient bundle. Hazra points out that performance upgrades in the top 100 supercomputers over the last decade were chiefly achieved through the scale-out model, but this solution will be practical for only a few more years. He says Intel plans to provide performance per watt similar to that of general-purpose computing on graphics processing units, but in an x86 architecture that enables applications to migrate from single-threaded code to highly parallel code without revising the underlying programming model. Intel will provide compiler and runtime software support for MIC, as well as a common set of development tools to be employed across the Xeon and MIC products, with a goal of maximizing programmer productivity.


Python 3.2 Tweaked for Parallel Development.

IDG News Service (12/08/10) Joab Jackson

Python’s developers plan to offer greater support for writing multithreaded applications in the 3.2 version of the open source programming language. The first beta version of Python 3.2 has been released, with developers concentrating on bug fixes, general improvements, and preserving the syntax and semantics of Python 3.0. The pre-release version offers a package that brings together a set of functions that could make concurrent programming easier on multicore processors. “Python currently has powerful primitives to construct multi-threaded and multi-process applications but parallelizing simple operations requires a lot of work,” according to the original proposal for the project. A new top-level library will include several classes that could ease concurrent programming, such as the ability to execute calls asynchronously. Other new features include an improved Secure Sockets Layer module, a new module for accessing configuration information, and an extension that enables the programming language’s source code files to be shared among different versions of the Python interpreter.
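The package described here shipped in Python 3.2 as the concurrent.futures module (PEP 3148). A minimal sketch of the asynchronous-call style it enables, using a thread pool (the task function below is an illustrative stand-in, not from the article):

```python
# concurrent.futures (new in Python 3.2) wraps thread/process pools
# behind a common Executor API; submit() schedules a call
# asynchronously and returns a Future.
from concurrent.futures import ThreadPoolExecutor, as_completed

def word_length(word):
    # Stand-in for an I/O-bound task worth running concurrently.
    return len(word)

with ThreadPoolExecutor(max_workers=4) as pool:
    # Map each Future back to its input word.
    futures = {pool.submit(word_length, w): w for w in ["spam", "eggs", "bacon"]}
    # as_completed() yields Futures as their results become ready.
    results = {futures[f]: f.result() for f in as_completed(futures)}

print(results)  # e.g. {'spam': 4, 'eggs': 4, 'bacon': 5}
```

Swapping ThreadPoolExecutor for ProcessPoolExecutor moves the same code onto multiple processes, which is the module’s answer to “parallelizing simple operations requires a lot of work.”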