IBM Announces Advances Toward a Computer that Works Like a Human Brain.

San Jose Mercury News (CA) (11/18/09) Bailey, Brandon

Researchers from IBM’s Almaden Research Center and the Lawrence Berkeley National Laboratory have performed a computer simulation that matches the scale and complexity of a cat’s brain, while researchers from IBM and Stanford University say they have developed an algorithm for mapping the human brain in unprecedented detail. The researchers say these efforts could help build a computer that replicates the complexity of the human brain. In the first project, an IBM supercomputer at Lawrence Livermore National Laboratory was used to model the movement of data through a structure with 1 billion neurons and 10 trillion synapses, enabling researchers to observe how information “percolates” through a system similar to a feline cerebral cortex. The research is part of IBM project manager Dharmendra Modha’s effort to design a new computer by first better understanding how the brain works. “The brain has awe-inspiring capabilities,” Modha says. “It can react or interact with complex, real-world environments, in a context-dependent way. And yet it consumes less power than a light bulb and it occupies less space than a two-liter bottle of soda.” Modha says a major difference between the brain and traditional computers is that current computers are designed on a model that separates processing from data storage, which can lead to a lag in updating information. The brain, by contrast, can integrate and react to a constant stream of sights, sounds, and other sensory information. Modha imagines a cognitive computer capable of analyzing a constant stream of information from global trading floors, banking institutions, and real estate markets to identify key trends and their consequences, or a computer capable of evaluating pollution using real-time sensors from around the world.
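
To make the "neurons and synapses" scale concrete, here is a minimal sketch of the kind of update loop such a simulation performs. It is not IBM's cortical simulator; the network size, parameters, and variable names are all illustrative, and the model is a generic leaky integrate-and-fire network.

```python
# Toy sketch only: a leaky integrate-and-fire network at a tiny scale
# (the IBM run modeled roughly 1 billion neurons and 10 trillion synapses).
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000
weights = rng.normal(0.0, 0.02, size=(n_neurons, n_neurons))  # synaptic weights
voltage = np.zeros(n_neurons)                                 # membrane potentials
threshold, decay = 1.0, 0.95

for step in range(100):
    spikes = voltage > threshold              # neurons that fire this step
    voltage[spikes] = 0.0                     # reset fired neurons
    external = rng.random(n_neurons) < 0.01   # sparse random external input
    # leak, then integrate external input plus input from firing neighbors
    voltage = decay * voltage + external + weights @ spikes
    if step % 20 == 0:
        print(f"step {step:3d}: {int(spikes.sum())} neurons fired")
```

Watching how spikes in one part of the weight matrix drive activity elsewhere is, in miniature, the "percolation" of information the researchers describe observing.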

There’s No Business Like Grid Business.

ICT Results (11/16/09)

The European Union-funded GRid enabled access to rich mEDIA content (GREDIA) project has developed a platform that makes the grid’s resources available to business users. “Many business applications need to work fast and need to work with huge amounts of data,” says GREDIA coordinator Nikos Sarris. “The grid is ideal for that, but software developers don’t use it because they don’t know how.” Sarris says the GREDIA platform will help business application developers exploit the grid without requiring them to become grid technology experts. He says the system is reliable because it is distributed across numerous machines, and it optimizes business transactions using algorithms that make the most of the grid’s distributed resources. The project developed and demonstrated two business services: one lets any number of contributors, using almost any kind of device, act as a distributed news-gathering team; the other is designed for the banking industry. The banking application lets users securely provide information from their home computers or handheld devices; the program authenticates the information, combines it into a profile, and calculates credit rankings using a protocol specified by the lender.
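
The GREDIA APIs themselves are not shown in the article, so the sketch below only illustrates the described banking flow: authenticate submitted data, merge it into a profile, and compute a ranking with a lender-supplied protocol. Every function name, field, and the scoring rule here are hypothetical.

```python
# Hypothetical sketch of the flow described above; not GREDIA's real API.
def lender_protocol(profile):
    """A lender-specified ranking rule (purely illustrative numbers)."""
    return round(300
                 + 0.004 * profile["income"]
                 - 0.002 * profile["debts"]
                 + 10 * profile["years_employed"])

def build_profile(submissions):
    """Combine authenticated submissions from several devices into one profile."""
    profile = {}
    for item in submissions:
        if item.pop("signature_ok"):   # stand-in for real authentication
            profile.update(item)
    return profile

submissions = [
    {"income": 55_000, "signature_ok": True},                      # from a PC
    {"debts": 12_000, "years_employed": 4, "signature_ok": True},  # from a phone
]
print("credit ranking:", lender_protocol(build_profile(submissions)))
```

On the real platform, each of these steps could run on whichever grid node the middleware selects, which is exactly the detail GREDIA aims to hide from the application developer.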

Jaguar Supercomputer Races Past Roadrunner in Top500.

CNet (11/15/09) Ogg, Erica

The Jaguar Cray XT5 supercomputer, which posted a performance of 1.75 petaflops, is now the fastest computer in the world. Jaguar switches places with IBM’s Roadrunner, now in second place on the Top500 list of supercomputers after its measured speed declined to 1.04 petaflops from 1.105 petaflops, apparently because of a repartitioning of the system. The list, which is compiled twice a year, will be unveiled Tuesday at the SC09 conference in Portland, Ore. The Kraken, another Cray XT5 system, has risen from fifth to third place by posting a performance of 832 teraflops, while IBM’s BlueGene/P at Forschungszentrum Juelich in Germany is No. 4 with 825.5 teraflops. The Tianhe-1, at No. 5, marks the highest ranking ever for a Chinese supercomputer. Sandia National Laboratories’ Red Sky, a Sun Blade system with a LINPACK performance of 423 teraflops, is the newcomer to the top 10. Hewlett-Packard accounted for 210 of the 500 fastest supercomputers, followed by IBM with 185. Eighty percent of the systems use Intel processors, and 90 percent run the Linux operating system.
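
Because the item mixes petaflops and teraflops, normalizing the quoted figures to a common unit makes the gaps easier to compare. Only the numbers cited above are included (Tianhe-1's figure is not given), so this is not the full Top500 ordering.

```python
# Figures quoted in the item, converted to petaflops for comparison.
TFLOPS_PER_PFLOPS = 1000.0

cited_tflops = {
    "Jaguar (Cray XT5)": 1750.0,                    # 1.75 petaflops
    "Roadrunner (IBM)": 1040.0,                     # 1.04 petaflops
    "Kraken (Cray XT5)": 832.0,
    "BlueGene/P (Forschungszentrum Juelich)": 825.5,
    "Red Sky (Sun Blade)": 423.0,
}

for name, tflops in cited_tflops.items():
    print(f"{name:40s} {tflops / TFLOPS_PER_PFLOPS:6.3f} petaflops")
```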

Tough Choices for Supercomputing’s Legacy Apps.

ZDNet UK (11/12/09) Jones, Andrew

The future of supercomputing holds several significant software challenges, writes Numerical Algorithms Group’s Andrew Jones. The first challenge is the rapidly increasing degree of concurrency required. A complex hierarchy of parallelism, from vector-like parallelism at the local level, through multithreading, to multilevel and massively parallel processing across numerous nodes, also presents unique challenges, Jones says. Additionally, supercomputing will have to handle a new wave of verification, validation, and resilience issues. Although petascale and exascale computing hold much promise, Jones says experts question whether some current applications will still be usable. Experts argue that some legacy applications are coded in ways that make evolution impossible, and that refactoring the code and developing new algorithms would be more difficult than starting from scratch. However, Jones notes that disposing of old code also throws away extremely valuable scientific knowledge. Ultimately, he says, two classes of applications may emerge: programs that will never be able to exploit future high-end supercomputers but are still used while their successors develop comparable scientific maturity, and programs that can operate in the petascale and exascale arena. Jones says that developing and sustaining both classes will require a well-balanced approach among researchers, developers, and funding agencies, who will have to continue investing in scaling, optimization, algorithm evolution, and scientific advancement of existing code while diverting sufficient resources to the development of new code.
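
As a toy illustration of two levels of that hierarchy, the sketch below combines vectorized arithmetic inside each task with a pool of worker processes standing in for separate nodes. Production HPC codes would layer SIMD, threading, and MPI instead; the names and sizes here are illustrative, not taken from any real application.

```python
# Two levels of the parallel hierarchy, in miniature: vector-like math
# inside each worker (NumPy) and parallelism across workers (processes
# standing in for cluster nodes).
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def local_kernel(seed):
    """Vectorized work on one 'node': sum of squares of a random block."""
    block = np.random.default_rng(seed).random(1_000_000)
    return float(np.sum(block * block))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:              # across-'node' level
        partials = list(pool.map(local_kernel, range(8)))
    print("global reduction:", sum(partials))        # final reduction step
```

Legacy codes that hard-wire only one of these levels are exactly the ones Jones suggests may be harder to refactor than to rewrite.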

How Secure Is Cloud Computing?

Technology Review (11/16/09) Talbot, David

The recent ACM Cloud Computing Security Workshop, held Nov. 13 in Chicago, was the first event devoted specifically to the security of cloud computing systems. Speaker Whitfield Diffie, a visiting professor at Royal Holloway, University of London, says that although cryptographic solutions for cloud computing are still far off, much can be done in the short term to make cloud computing more secure. “The effect of the growing dependence on cloud computing is similar to that of our dependence on public transportation, particularly air transportation, which forces us to trust organizations over which we have no control, limits what we can transport, and subjects us to rules and schedules that wouldn’t apply if we were flying our own planes,” Diffie says. “On the other hand, it is so much more economical that we don’t realistically have any alternative.” He notes that computing directly on encrypted data is possible in principle, but with current techniques the overhead would negate the economic benefit of outsourcing the computation. Diffie says a practical near-term solution will require an overall improvement in computer security, including cloud providers choosing more secure operating systems and maintaining careful configurations of those systems. Security-conscious providers would also have to provision each user with their own processors, caches, and memory at any given moment, and would clean systems between users, including reloading the operating system and zeroing all memory.
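
A minimal sketch of that between-users hygiene, assuming a hypothetical Node abstraction (this is not any provider's API): memory is zeroed and the operating-system image is reloaded before the node is handed to the next tenant.

```python
# Hypothetical Node abstraction illustrating "clean systems between users":
# zero all memory and reload the OS image before reassigning the node.
class Node:
    def __init__(self, mem_bytes):
        self.memory = bytearray(mem_bytes)   # stand-in for physical RAM
        self.os_image = None
        self.tenant = None

    def wipe(self):
        self.memory[:] = bytes(len(self.memory))   # zero every byte
        self.os_image = None

    def provision(self, tenant, os_image):
        self.wipe()                  # scrub the previous tenant's state
        self.os_image = os_image     # reload a fresh operating system image
        self.tenant = tenant

node = Node(mem_bytes=1024)
node.provision("tenant-a", os_image="hardened-os-v1")
node.memory[0] = 42                  # tenant-a leaves data behind
node.provision("tenant-b", os_image="hardened-os-v1")
print(node.tenant, sum(node.memory)) # tenant-b 0: nothing leaks across users
```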

Supercomputers With 100 Million Cores Coming By 2018.

Computerworld (11/16/09) Thibodeau, Patrick

A key topic at this week’s SC09 supercomputing conference, which takes place Nov. 14-20 in Portland, Ore., is how to reach the exascale plateau in supercomputing performance. “There are serious exascale-class problems that just cannot be solved in any reasonable amount of time with the computers that we have today,” says Oak Ridge Leadership Computing Facility project director Buddy Bland. Today’s supercomputers are still well short of exascale performance; the world’s fastest system, Oak Ridge National Laboratory’s Jaguar, reaches a peak performance of 2.3 petaflops. Bland says the U.S. Department of Energy (DOE) is holding workshops on building a system 1,000 times more powerful. The DOE, which funds many of the world’s fastest systems, wants two machines of approximately 10 petaflops each in the 2011-to-2013 time frame, says Bland. However, the milestone currently receiving the most attention is the exaflop, a million trillion calculations per second, which is expected to be achieved around 2018, according to predictions largely based on Moore’s Law. The problems involved in reaching exaflop computing go well beyond chip advances. For example, Jaguar uses 7 megawatts of power, but an exascale system built from CPU processing cores alone could require 2 gigawatts, says IBM’s Dave Turek. “That’s roughly the size of a medium-sized nuclear power plant,” he says. “That’s an untenable proposition for the future.” Finding ways to reduce power consumption is therefore key to developing an exascale computer. Turek says future systems also will have to use less memory per core and will require greater memory bandwidth.
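
A back-of-the-envelope scaling of the figures quoted above shows why power dominates the discussion. The calculation assumes naive linear scaling from Jaguar's peak numbers, which ignores any efficiency gains and so serves only as a rough upper bound.

```python
# Naive linear scaling from Jaguar's cited figures to one exaflop.
jaguar_pflops = 2.3        # peak petaflops, as quoted above
jaguar_mw = 7.0            # megawatts, as quoted above
exaflop_in_pflops = 1000.0 # 1 exaflop = 1,000 petaflops

naive_mw = jaguar_mw * exaflop_in_pflops / jaguar_pflops
print(f"naive exascale power: {naive_mw:,.0f} MW (about {naive_mw / 1000:.1f} GW)")
# Roughly 3,000 MW, i.e. about 3 GW, the same order as Turek's 2-gigawatt
# estimate for a CPU-only design, hence the focus on cutting power per core.
```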

Unlimited Compute Capacity Coming, IBM Says!

Computerworld Canada (11/03/09) Ruffolo, Rafael

IBM Canada Lab director Martin Wildberger predicts that unlimited computing capacity will become a reality in the near future, putting the power of modern mainframes in devices such as smartphones. Wildberger, speaking at the recent IBM-sponsored Center for Advanced Studies Conference in Toronto, said the world is becoming increasingly digitized, and sensors and radio-frequency identification technologies are becoming more “abundant, pervasive, and ubiquitous.” Simultaneously, the world is becoming more interconnected through mobile phones and growing online access, which has raised the awareness and expectations of consumers and forced businesses to react faster. These trends have made an unlimited amount of data available to businesses, and the ability to use that data has become an important challenge. Wildberger noted, for example, that automotive companies are studying driving-pattern information to develop real-time systems capable of detecting whether a driver is falling asleep. Despite such possibilities, Wildberger said IBM data shows that 85 percent of computing capacity sits idle and that 70 cents of every dollar spent on information technology goes toward maintaining existing systems rather than taking advantage of new data. He said the companies that invest in becoming smarter and capitalize on the data created in a world of unlimited computing capacity will be the most successful.
