
Exabytes: Documenting the ‘Digital Age’ and Huge Growth in Computing Capacity.

Washington Post (02/10/11) Brian Vastag

The global capacity to store digital information totaled 276 exabytes in 2007, according to a University of Southern California (USC) study. However, that data is not distributed equally, with a distinct line dividing rich and poor countries in the digital world, says USC’s Martin Hilbert. In 2007, people in developed countries had access to about 16 times the bandwidth available to those in poor countries. “If we want to understand the vast social changes underway in the world, we have to understand how much information people are handling,” Hilbert says. The study found that 2002 marked the first year that worldwide digital storage capacity exceeded total analog capacity. “You could say the digital age started in 2002,” Hilbert says. “It continued tremendously from there.” Digital media accounted for 25 percent of all information stored in the world in 2000, but just seven years later 94 percent of the world’s information storage capacity was digital, with the remaining 6 percent consisting of books, magazines, video tapes, and other non-digital media. The study found that digital storage capacity grew 23 percent a year from 1986 to 2007, while computing power increased 58 percent a year during the same period. Hilbert notes that people generate 276 exabytes of digital data every eight weeks, but much of that information is not stored long term.
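Those annual growth rates compound dramatically over two decades. As a rough illustration (only the 23 percent and 58 percent rates and the 1986-2007 window come from the study; the calculation itself is ours), a few lines of Java show the cumulative factors involved:

// Back-of-the-envelope check of the cumulative growth implied by the
// USC study's figures: 23%/yr for storage, 58%/yr for computing power,
// compounded over 1986-2007. Illustrative only.
public class GrowthFactors {
    static double compound(double annualRate, int years) {
        return Math.pow(1.0 + annualRate, years);
    }

    public static void main(String[] args) {
        int years = 2007 - 1986; // 21 years covered by the study
        System.out.printf("Storage capacity: roughly %.0fx growth at 23%%/yr over %d years%n",
                compound(0.23, years), years);
        System.out.printf("Computing power:  roughly %.0fx growth at 58%%/yr over %d years%n",
                compound(0.58, years), years);
    }
}

Run as written, that works out to roughly a 77-fold increase in storage capacity and an increase of nearly 15,000-fold in computing power over those 21 years.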

-MORE-

100-Petaflop Supercomputers Predicted By 2017.

InformationWeek (02/08/11) Antone Gonsalves

University of Tennessee’s Jack Dongarra predicts that there will be 100-petaflop supercomputers by 2017, and exascale systems, which could be 1,000 times faster than petascale systems, by 2020. “This will happen if the funding is in place for those machines,” says Dongarra, who helped design the testing system used to generate the Top500 supercomputing list. IBM plans to build 10-petaflop systems at both the Argonne National Laboratory and the University of Illinois at Urbana-Champaign in the next year, as well as a 20-petaflop system at the Lawrence Livermore National Laboratory in 2012. Cray also is ready to launch a 10-petaflop system. “Everything is moving along according to Moore’s law, so things are doubling every 18 months, roughly,” Dongarra says. IBM’s new systems will be Blue Gene supercomputers, called Blue Gene/Q, running on specially designed systems-on-a-chip. The Argonne system will be used for industry, academic, and government work, while the Livermore system will be used for advanced uncertainty quantification simulations and weapons development.
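As a rough check (not from the article) of how Dongarra’s “doubling every 18 months” squares with those dates, the sketch below projects forward from the 10-petaflop systems planned for 2012; the starting figure and dates are the article’s, the projection is only illustrative:

// Project peak performance forward from a 10-petaflop system in 2012,
// assuming performance doubles every 18 months. Illustrative only.
public class PerformanceProjection {
    public static void main(String[] args) {
        double petaflops = 10.0;                          // 10-petaflop systems planned for ~2012
        final double perYear = Math.pow(2.0, 1.0 / 1.5);  // doubling every 18 months
        for (int year = 2012; year <= 2020; year++) {
            System.out.printf("%d: ~%.0f petaflops%n", year, petaflops);
            petaflops *= perYear;
        }
    }
}

On that projection the 100-petaflop mark lands right around 2017; reaching an exaflops by 2020 would mean scaling machines up faster than chip-level doubling alone, which is where the funding Dongarra mentions comes in.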

-MORE-

Next-Generation Supercomputers.

IEEE Spectrum (02/11) Peter Kogge

Supercomputing performance upgrades are unlikely to be as spectacular in the next decade as they were in the last two, writes University of Notre Dame professor Peter Kogge. The U.S. Defense Advanced Research Projects Agency hoped that an exaflops-class supercomputer would be practically realizable by 2015, but a panel Kogge organized to examine the question concluded that such a breakthrough requires a complete rethinking of supercomputer construction in order to dramatically reduce power consumption. An additional challenge is keeping a massive number of microprocessor cores busy at the same time. Kogge says that “unless memory technologies emerge that have greater densities at the same or lower power levels than we assumed, any exaflops-capable supercomputer that we sketch out now will be memory starved.” Providing long-term storage with sufficient speed and density to retain checkpoint files is another daunting challenge, and lowering the operating voltage to save power would make the transistors susceptible to new and more frequent faults. Kogge nevertheless believes exaflops systems are attainable, but creating such a supercomputer “will demand a coordinated cross-disciplinary effort carried out over a decade or more, during which time device engineers and computer designers will have to work together to find the right combination of processing circuitry, memory structures, and communications conduits.”
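One way to see the checkpoint problem Kogge describes is a back-of-the-envelope timing estimate. The memory size and storage bandwidth below are purely hypothetical placeholders (the article gives no figures); the point is only that checkpoint time scales as memory divided by storage bandwidth:

// Illustrative only: how long a full-memory checkpoint would take for a
// HYPOTHETICAL machine. Both numbers below are placeholders, not figures
// from the article.
public class CheckpointEstimate {
    public static void main(String[] args) {
        double memoryPB = 50.0;       // hypothetical: 50 PB of system memory
        double bandwidthTBps = 10.0;  // hypothetical: 10 TB/s to stable storage
        double seconds = (memoryPB * 1024.0) / bandwidthTBps; // PB -> TB, then TB / (TB/s)
        System.out.printf("Writing %.0f PB at %.0f TB/s takes ~%.0f s (~%.1f min)%n",
                memoryPB, bandwidthTBps, seconds, seconds / 60.0);
    }
}

With those placeholder numbers a single full-memory checkpoint takes well over an hour, which is why the speed and density of long-term storage become limiting factors.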

-MORE-

The Battle of Cloud APIs Heats Up.

Rackspace Cloud Computing and Hosting (06/15/10) Angela Bartels

Rackspace, a web-hosting company that made its reputation on excellent technical/customer support, was one of the first legitimate entrants into cloud provisioning. Its VMs are cheaper and more capable than Amazon EC2’s and not as limited as Google App Engine. Last year, Rackspace released a REST-based API to its cloud and provided it as open source. Here’s how the API matches up with Amazon EC2.
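For readers who have not tried it, the listing below sketches what a call against that REST API looked like at the time. The endpoint and header names (X-Auth-User, X-Auth-Key, X-Auth-Token, X-Server-Management-Url) follow the first-generation Cloud Servers 1.0 documentation as best we can reconstruct it; treat them as assumptions and check the current docs before relying on them:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch of the two-step Cloud Servers 1.0 flow: authenticate,
// then hit a resource URL with the returned token. Endpoint and header
// names are assumptions based on the 1.0 docs of the period.
public class ListCloudServers {
    public static void main(String[] args) throws Exception {
        String user = System.getenv("RACKSPACE_USER");      // account user name
        String apiKey = System.getenv("RACKSPACE_API_KEY"); // account API key

        // Step 1: authenticate. In the 1.0 API the token and the service
        // endpoint come back as response headers rather than in a body.
        HttpURLConnection auth = (HttpURLConnection)
                new URL("https://auth.api.rackspacecloud.com/v1.0").openConnection();
        auth.setRequestProperty("X-Auth-User", user);
        auth.setRequestProperty("X-Auth-Key", apiKey);
        String token = auth.getHeaderField("X-Auth-Token");
        String serversEndpoint = auth.getHeaderField("X-Server-Management-Url");

        // Step 2: list servers, passing the token and asking for JSON.
        HttpURLConnection list = (HttpURLConnection)
                new URL(serversEndpoint + "/servers").openConnection();
        list.setRequestProperty("X-Auth-Token", token);
        list.setRequestProperty("Accept", "application/json");

        BufferedReader in = new BufferedReader(new InputStreamReader(list.getInputStream()));
        for (String line; (line = in.readLine()) != null; ) {
            System.out.println(line);
        }
        in.close();
    }
}

The token-based two-step flow is the part most often contrasted with EC2’s signed-request style, and it is what makes the API easy to script from any language with an HTTP client.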

University Gives Java Parallelism a Boost.

InfoWorld (12/17/10) Paul Krill

University of Illinois at Urbana-Champaign (UIUC) computer scientists recently released DPJizer, an interactive tool designed to simplify writing programs in Deterministic Parallel Java (DPJ), a Java-based type-and-effect system developed by the university earlier this year. The developers say the DPJizer Eclipse plug-in is the first interactive, practical type-and-effect inference tool for a modern object-oriented system. DPJizer can save time by automatically analyzing a whole program and inferring its DPJ annotations. “DPJizer increases the productivity of programmers in writing safe and deterministic-by-default parallel programs for multi-core systems,” says UIUC’s Mohsen Vakilian. The goal of the DPJ project is to provide deterministic-by-default semantics for an object-oriented, imperative parallel language using mostly compile-time checking. The system also includes a compiler, a runtime, and other components released as open source software.
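For context, the sketch below shows the kind of region and effect annotations DPJ adds to Java; the writes clauses are exactly the sort of annotations DPJizer infers automatically. This is DPJ, not standard Java, so it needs the DPJ compiler, and the syntax is an approximation modeled on the published DPJ examples rather than code from the article:

// DPJ-style sketch (approximate syntax): region declarations partition the
// heap, 'in' clauses place fields in regions, and 'writes' clauses summarize
// each method's effects so the compiler can prove the two branches of the
// cobegin block do not interfere.
class Point {
    region X, Y;
    double x in X;
    double y in Y;

    void setX(double x) writes X { this.x = x; }
    void setY(double y) writes Y { this.y = y; }

    void setXY(double x, double y) writes X, Y {
        cobegin {
            this.setX(x); // effect: writes X
            this.setY(y); // effect: writes Y
        }
    }
}

Writing those effect summaries by hand across a large codebase is the tedious part; inferring them for a whole program is the gap DPJizer is meant to fill.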

-MORE-