
Google CEO Imagines Era of Mobile Supercomputers

InformationWeek (10/28/09) Claburn, Thomas

Google CEO Eric Schmidt believes the future of computing lies in smart mobile devices and data centers. “A billion people on the planet are carrying supercomputers in their hands,” Schmidt says. “Now you think of them as mobile phones, but that’s not what they really are. They’re video cameras. They’re GPS devices. They’re powerful computers. They have powerful screens. They can do many, many different things.” Schmidt says that over the next few years mobile technology will continue to advance and consumers will be exposed to new applications that are unimaginable now. For example, Google’s Android phone division is working on an application that can take a picture of a bar code, identify the corresponding product, and compare prices online. Another Android application can translate a picture of a menu written in a foreign language. Cloud computing, which Schmidt calls probably the next big wave in computing, will provide the computational muscle for many of these future services. He also believes that computing will continue to bring major changes to society. “We’re going from a model where the information we had was pretty highly controlled by centralized media operatives to a world where most of our information will come from our friends, from our peers, from user-generated content,” Schmidt says. “These changes are profound in the society of America, in the social life, and all the ways we live.”



Professor Working to Advance Computing as a Science

UA News (AZ) (10/28/09) Everett-Haynes, La Monica

University of Arizona professor Richard T. Snodgrass has received a U.S. National Science Foundation grant to promote computation as a true science. Snodgrass, an ACM Fellow, says the process of computational thinking is universal and highly valued in subjects such as physics, biology, and chemistry. “The problem with computer science is that a few people think it equals programming,” he says. “But that doesn’t emphasize the great ideas behind computer science, and that’s what we want to bring out in this grant.” Snodgrass and Peter Denning, director of the Cebrowski Institute at the Naval Postgraduate School in California, will use the three-year, $800,000 grant to elevate the status of computing and to encourage students at the K-12 level, particularly girls and women, to enter the field. The grant will enable them to develop and organize the “Field Guide to the Science of Computation.” The guide will be written at several levels, from beginners to graduate students and professionals, and will provide an organized body of information on computing, including theoretical frameworks and models related to automation, communication, evaluation, design, and other topics. ACM’s education board and the Computer Science Teachers Association will also collaborate on the three-year project. Snodgrass says the grant came just before the U.S. House of Representatives passed a resolution endorsing the need to support computer science education at the K-12 level; the resolution designated the week of Dec. 7 as National Computer Science Education Week.


Science at the Petascale: Roadrunner Results Unveiled

Los Alamos National Laboratory News (10/26/09) Roark, Kevin N.

Roadrunner, housed at Los Alamos National Laboratory (LANL), recently completed its initial shakedown phase while performing accelerated petascale computer modeling and simulations for several unclassified science projects. The completion of the shakedown will allow Roadrunner, the world’s fastest supercomputer, to begin its transition to classified computing. Scientists used the 10 unclassified projects, chosen from academic and research institutions across the United States, to optimize how large codes run on the machine. The projects include research into dark matter and dark energy, the construction of an HIV evolutionary tree to help researchers focus on potential vaccines, nonlinear physics in high-powered lasers, modeling of minuscule nanowires over long time periods, and exploration of how shock waves cause materials to fail. Roadrunner, developed by IBM along with LANL and the National Nuclear Security Administration, uses a hybrid design to achieve its record-setting performance: each compute node in a cluster contains two dual-core AMD Opteron processors and four PowerXCell 8i processors that act as computational accelerators. Roadrunner will now be used to perform classified advanced physics and predictive simulations.
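
The hybrid design pairs conventional host processors with specialized accelerators: the Opterons handle control flow and communication, while the PowerXCell 8i chips run the numerically intensive kernels. A minimal sketch of that division of labor in Python, with a process pool standing in for the accelerators (the function names are illustrative, not part of Roadrunner’s actual software stack):

    from multiprocessing import Pool

    def compute_kernel(chunk):
        # Stand-in for the heavy numerical work offloaded to an accelerator.
        return sum(x * x for x in chunk)

    def host_run(data, n_accelerators=4):
        # "Host" code: partition the problem, dispatch the chunks to the
        # accelerator pool, then combine the partial results.
        chunks = [data[i::n_accelerators] for i in range(n_accelerators)]
        with Pool(n_accelerators) as pool:
            return sum(pool.map(compute_kernel, chunks))

    if __name__ == "__main__":
        print(host_run(list(range(1_000_000))))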


Parallel Course

MIT News (10/23/09) Hardesty, Larry

Researchers at the Massachusetts Institute of Technology’s (MIT’s) Computer Science and Artificial Intelligence Laboratory are working to make programmers’ move to parallel programming less onerous as chip manufacturers turn to multicore designs to improve performance. “Just writing anything parallel doesn’t mean that it’s going to run fast,” says MIT professor Saman Amarasinghe. “A lot of parallel programs will actually run slower, because you parallelize the wrong place.” Amarasinghe also believes computers are capable of automatically determining when to parallelize and which cores to assign which jobs. His group’s multicore computing effort is split along two lines: tools that ease programmers’ switch to parallel programming, and tools that optimize programs’ performance once that switch has been made. Amarasinghe and two graduate students have designed a system that makes multicore programs more predictable by granting a core access to a shared resource based not on the time of its request but on the number of tasks the core has already performed. Amarasinghe’s lab also has several projects on parallel program optimization, one of which helps programs adapt to changing conditions on the fly: the group has devised a language that asks the developer to specify several different techniques for executing a given computational task, and when the program runs, the computer automatically identifies the most efficient method.
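
The article describes the predictability scheme only loosely; the toy arbiter below illustrates the stated rule, granting a shared resource by each core’s count of completed tasks rather than by arrival time, so the grant order is reproducible across runs. All class and method names here are invented for illustration:

    import heapq

    class DeterministicArbiter:
        # Pending requests are served in order of the requester's logical
        # progress (tasks completed), not wall-clock arrival time; ties
        # break on core id, so the grant order is deterministic.
        def __init__(self):
            self._pending = []  # min-heap of (tasks_done, core_id)

        def request(self, core_id, tasks_done):
            heapq.heappush(self._pending, (tasks_done, core_id))

        def grant_next(self):
            tasks_done, core_id = heapq.heappop(self._pending)
            return core_id

    arbiter = DeterministicArbiter()
    arbiter.request(core_id=1, tasks_done=7)
    arbiter.request(core_id=0, tasks_done=3)
    print(arbiter.grant_next())  # core 0: it has performed fewer tasks

The optimization language is likewise sketched here only in spirit: the developer supplies several interchangeable implementations of a task, and the system keeps the fastest. The real system makes the choice automatically; the explicit timing loop below is a stand-in:

    import timeit

    def pick_fastest(candidates, example_input):
        # Time each interchangeable implementation on sample input and
        # return the one with the best measured performance.
        return min(candidates,
                   key=lambda f: timeit.timeit(lambda: f(example_input),
                                               number=100))

    candidates = [lambda xs: sum(x * x for x in xs),
                  lambda xs: sum(map(lambda x: x * x, xs))]
    fastest = pick_fastest(candidates, list(range(10_000)))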


Why Desktop Multiprocessing Has Speed Limits

Computerworld (10/05/09) Vol. 43, No. 30, P. 24; Wood, Lamont

Despite the mainstreaming of multicore processors for desktops, not every desktop application can be rewritten for multicore frameworks, which means some bottlenecks will persist. “If you have a task that cannot be parallelized and you are currently on a plateau of performance in a single-processor environment, you will not see that task getting significantly faster in the future,” says analyst Tom Halfhill. Adobe Systems’ Russell Williams points out that performance does not scale linearly even with parallelization, because of memory bandwidth limits and interprocessor communication delays. Analyst Jim Turley says that, overall, consumer operating systems “don’t do anything smart” with multicore architecture. “We have to reinvent computing, and get away from the fundamental premises we inherited from von Neumann,” says Microsoft technical fellow Burton Smith. “He assumed one instruction would be executed at a time, and we are no longer even maintaining the appearance of one instruction at a time.” Analyst Rob Enderle notes that most applications will operate on only a single core, which means the benefits of a multicore architecture appear only when multiple applications run at once. “What we’d all like is a magic compiler that takes yesterday’s source code and spreads it across multiple cores, and that is just not happening,” says Turley. Despite the performance issues, vendors prefer multicore processors because they can facilitate a higher level of power efficiency. “Using multiple cores will let us get more performance while staying within the power envelope,” says Acer’s Glenn Jystad.
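
Halfhill’s and Williams’ observations are instances of Amdahl’s law, which the article leaves implicit: if only a fraction p of a task can be parallelized, the speedup on n cores is at most 1 / ((1 - p) + p/n), capped at 1 / (1 - p) no matter how many cores are added. A quick worked illustration in Python (the 90% figure is hypothetical):

    def amdahl_speedup(p, n):
        # Upper bound on speedup when a fraction p of the work
        # parallelizes perfectly across n cores.
        return 1.0 / ((1.0 - p) + p / n)

    # Even a task that is 90% parallelizable gains little past a few cores:
    for n in (1, 2, 4, 8, 64):
        print(n, round(amdahl_speedup(0.9, n), 2))
    # prints 1.0, 1.82, 3.08, 4.71, 8.77 -- approaching the 10x cap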


Computers Have Speed Limit as Unbreakable as Speed of Light, Say Physicists

ZDNet (10/15/09) Jablonski, Chris

Boston University physicists Lev Levitin and Tommaso Toffoli have demonstrated that if processors continue to improve in accordance with Moore’s Law, an unbreakable speed barrier will be reached in approximately 75 years. Even with new technologies there will still be an absolute ceiling on computing speed, no matter how small components get, according to Levitin and Toffoli. The two physicists have derived an equation for the minimum time a single computation can take, which establishes a speed limit for all possible computers. Using the equation, Levitin and Toffoli calculated that, for every unit of energy, a perfect quantum computer could perform 10 quadrillion times more operations each second than today’s fastest processors. If Moore’s Law continues to hold, it would take about 75 to 80 years to reach this quantum limit, which no system can overcome. “It doesn’t depend on the physical nature of the system or how it’s implemented, what algorithm you use for computation,” Levitin says. “This bound poses an absolute law of nature, just like the speed of light.” The physicists note that technological barriers may slow Moore’s Law as technology approaches the limit.
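
The article does not state the equation; it appears to build on the Margolus-Levitin bound, under which a system with average energy E needs at least πħ/(2E) seconds per elementary operation, i.e. can perform at most 2E/(πħ) operations per second. A sketch of that calculation in Python, on the assumption that this is the bound in question:

    import math

    HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

    def max_ops_per_second(energy_joules):
        # Margolus-Levitin bound: at most 2E / (pi * hbar) elementary
        # operations per second for a system with average energy E.
        return 2.0 * energy_joules / (math.pi * HBAR)

    # One joule of energy bounds computation at roughly 6e33 ops/second.
    print(f"{max_ops_per_second(1.0):.2e}")  # ~6.04e33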


Quantum Computers Could Tackle Enormous Linear Equations

Science News (10/16/09) Sanders, Laura

Aram Harrow of the University of Bristol in England and the Massachusetts Institute of Technology’s Avinatan Hassidim and Seth Lloyd believe that encoding large datasets of linear equations in quantum form will enable quantum computers to quickly solve problems with billions or even trillions of variables. The team’s new quantum algorithm could widen the range of applications for quantum computers. Complex processes such as image and video processing, genetic analyses, and Internet traffic control require solving enormous systems of linear equations. “Solving these gigantic equations is a really huge problem,” Lloyd says. “Even though there are good algorithms for doing it, it still takes a very long time.” A classical computer might need at least 100 trillion steps to solve a problem with a trillion variables, while the newly proposed algorithm would enable a quantum computer to solve the same problem in a few hundred steps, according to the researchers. They plan to test the algorithm in the lab by having a quantum computer solve a set of linear equations with four variables, among other problems.
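
Those step counts reflect the algorithm’s scaling: classical solvers need time at least proportional to the number of variables N (just reading the input takes N steps), while the quantum algorithm runs in time polylogarithmic in N. A back-of-the-envelope comparison in Python, with illustrative constants that are not from the paper:

    import math

    N = 10**12  # a trillion variables

    classical_steps = 100 * N           # ~1e14, the article's "100 trillion"
    quantum_steps = 10 * math.log2(N)   # ~400 steps, polylogarithmic in N

    print(f"classical ~{classical_steps:.1e}, quantum ~{quantum_steps:.0f}")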
