October 2009

Google CEO Imagines Era of Mobile Supercomputers.

InformationWeek (10/28/09) Claburn, Thomas

Google CEO Eric Schmidt believes the future of computing lies in smart mobile devices and data centers. “A billion people on the planet are carrying supercomputers in their hands,” Schmidt says. “Now you think of them as mobile phones, but that’s not what they really are. They’re video cameras. They’re GPS devices. They’re powerful computers. They have powerful screens. They can do many, many different things.” Schmidt says mobile technology will continue to advance over the next few years, exposing consumers to new applications that are unimaginable now. For example, Google’s Android phone division is working on an application that can take pictures of bar codes, identify the corresponding product, and compare prices online. Another Android application can translate a picture of a menu written in a foreign language. Cloud computing, which Schmidt says is probably the next big wave in computing, will provide the computational muscle for many of these future services. He also believes that computing will continue to bring major changes to our society. “We’re going from a model where the information we had was pretty highly controlled by centralized media operatives to a world where most of our information will come from our friends, from our peers, from user-generated content,” Schmidt says. “These changes are profound in the society of America, in the social life, and all the ways we live.”

-More-

Professor Working to Advance Computing as a Science.

UA News (AZ) (10/28/09) Everett-Haynes, La Monica

University of Arizona professor Richard T. Snodgrass has received a U.S. National Science Foundation grant to promote computation as a true science. Snodgrass, an ACM Fellow, says the process of computational thinking is universal and highly valued in subjects such as physics, biology, and chemistry. “The problem with computer science is that a few people think it equals programming,” he says. “But that doesn’t emphasize the great ideas behind computer science, and that’s what we want to bring out in this grant.” Snodgrass and Peter Denning, director of the Cebrowski Institute at the Naval Postgraduate School in California, will use the three-year, $800,000 grant to elevate the status of computing and to encourage students at the K-12 level, particularly girls and women, to enter the field. The grant will enable them to develop and organize the “Field Guide to the Science of Computation.” The guide will address audiences at various levels, from beginners to graduate students and professionals, and provide an organized body of information on computing, including theoretical frameworks and models related to automation, communication, evaluation, design, and other topics. ACM’s education board and the Computer Science Teachers Association also will collaborate on the three-year project. Snodgrass says the grant came just before the U.S. House of Representatives passed a resolution endorsing the need to support computer science education at the K-12 level. The resolution designated the week of Dec. 7 as National Computer Science Education Week.

-More-

Science at the Petascale: Roadrunner Results Unveiled

Los Alamos National Laboratory News (10/26/09) Roark, Kevin N.

Roadrunner, housed at Los Alamos National Laboratory (LANL), recently completed its initial shakedown phase while performing accelerated petascale computer modeling and simulations for several unclassified science projects. The completion of the shakedown will allow Roadrunner, the world’s fastest supercomputer, to begin its transition to classified computing. Scientists used the 10 unclassified projects, chosen from academic and research institutions across the United States, to optimize how large codes run on the machine. The projects include research into dark matter and dark energy, the construction of an HIV evolutionary tree to help researchers focus on potential vaccines, nonlinear physics in high-powered lasers, the modeling of minuscule nanowires over long time periods, and the exploration of how shock waves cause materials to fail. Roadrunner, developed by IBM along with LANL and the National Nuclear Security Administration, uses a hybrid design to achieve its record-setting performance: each compute node in a cluster contains two dual-core AMD Opteron processors and four PowerXCell 8i processors that act as computational accelerators. Roadrunner will now be used to perform classified advanced physics and predictive simulations.
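
The hybrid design is the noteworthy point: the general-purpose Opteron cores handle coordination and communication, while the flop-heavy inner loops are offloaded to the PowerXCell accelerators. A minimal Python sketch of that division of labor (purely illustrative; Roadrunner's real codes dispatch compiled kernels to the Cell chips over a messaging layer, not Python threads):

    from concurrent.futures import ThreadPoolExecutor

    def accelerator_kernel(chunk):
        """Stand-in for the numeric kernel offloaded to an accelerator."""
        return sum(x * x for x in chunk)

    # "Host" side: split the problem into one chunk per accelerator,
    # dispatch the chunks, then combine the partial results.
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # four accelerators per node
    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = pool.map(accelerator_kernel, chunks)
    print(sum(partials))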

-More-

Parallel Course

MIT News (10/23/09) Hardesty, Larry

Researchers at the Massachusetts Institute of Technology’s (MIT’s) Computer Science and Artificial Intelligence Lab are working to make programmers’ move to parallel programming less onerous as chip manufacturers turn to multicore designs to improve performance. “Just writing anything parallel doesn’t mean that it’s going to run fast,” says MIT professor Saman Amarasinghe. “A lot of parallel programs will actually run slower, because you parallelize the wrong place.” Amarasinghe also believes computers are capable of automatically determining when to parallelize and which jobs to assign to which cores. His group’s multicore computing effort is split along two lines: tools that ease programmers’ switch to parallel programming, and tools that optimize programs’ performance once that switch has been made. Amarasinghe and two graduate students have designed a system that makes multicore programs more predictable by giving a core seeking access to a shared resource a priority based not on when it made the request but on how many tasks it has already completed. Amarasinghe’s lab also has several projects focused on optimizing parallel programs, one of which helps programs adapt to changing conditions on the fly. His group has devised a language that asks the developer to specify several different techniques for executing a given computational job; when the program runs, the computer automatically identifies the most efficient method.
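
That last idea, often called autotuning, is easy to illustrate: the programmer supplies several interchangeable techniques for the same job, and the system times them and keeps the winner. A toy sketch (illustrative only; it is not the MIT group's language, which makes such choices at a much finer grain):

    import random
    import timeit

    def sort_builtin(data):
        """Technique 1: the library sort."""
        return sorted(data)

    def sort_insertion(data):
        """Technique 2: insertion sort, competitive only on tiny inputs."""
        out = []
        for x in data:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out

    # The "autotuner": time each candidate on representative input,
    # then keep the fastest for production use.
    sample = [random.random() for _ in range(2000)]
    best = min((sort_builtin, sort_insertion),
               key=lambda f: timeit.timeit(lambda: f(sample), number=3))
    print("fastest technique:", best.__name__)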

-More-

Why Desktop Multiprocessing Has Speed Limits

Computerworld (10/05/09) Vol. 43, No. 30, P. 24; Wood, Lamont

Despite the mainstreaming of multicore processors for desktops, not every desktop application can be rewritten for multicore frameworks, which means some bottlenecks will persist. “If you have a task that cannot be parallelized and you are currently on a plateau of performance in a single-processor environment, you will not see that task getting significantly faster in the future,” says analyst Tom Halfhill. Adobe Systems’ Russell Williams points out that even with parallelization, performance does not scale linearly, because of memory bandwidth limits and interprocessor communication delays. Analyst Jim Turley says that, overall, consumer operating systems “don’t do anything smart” with multicore architecture. “We have to reinvent computing, and get away from the fundamental premises we inherited from von Neumann,” says Microsoft technical fellow Burton Smith. “He assumed one instruction would be executed at a time, and we are no longer even maintaining the appearance of one instruction at a time.” Analyst Rob Enderle notes that most applications will run on only a single core, which means the benefits of a multicore architecture appear only when multiple applications run at once. “What we’d all like is a magic compiler that takes yesterday’s source code and spreads it across multiple cores, and that is just not happening,” says Turley. Despite the performance issues, vendors prefer multicore processors because they allow greater power efficiency. “Using multiple cores will let us get more performance while staying within the power envelope,” says Acer’s Glenn Jystad.
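
Halfhill's plateau is the classic Amdahl's law ceiling: if a fraction p of a task can be parallelized, n cores give a speedup of 1 / ((1 - p) + p/n), which can never exceed 1 / (1 - p) no matter how many cores are added. A quick illustration (the 90-percent figure is an arbitrary example, not a number from the article):

    def amdahl_speedup(p, n):
        """Overall speedup on n cores when a fraction p of the work parallelizes."""
        return 1.0 / ((1.0 - p) + p / n)

    for n in (1, 2, 4, 8, 16, 64, 1024):
        print(f"{n:4d} cores: {amdahl_speedup(0.9, n):5.2f}x")
    # Even with 90% of the work parallel, the speedup is capped at
    # 1 / (1 - 0.9) = 10x, regardless of core count.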

-More-

Computers Have Speed Limit as Unbreakable as Speed of Light, Say Physicists.

ZDNet (10/15/09) Jablonski, Chris

Boston University physicists Lev Levitin and Tommaso Toffoli have demonstrated that if processors continue to improve in accordance with Moore’s Law, an unbreakable speed barrier will be reached in approximately 75 years. Even with new technologies there will still be an absolute ceiling on computing speed, no matter how small components get, according to Levitin and Toffoli. The two physicists have derived an equation for the minimum amount of time a single computation can take, which establishes the speed limit for all possible computers. Using the equation, Levitin and Toffoli calculated that, for every unit of energy, a perfect quantum computer could perform 10 quadrillion times more operations each second than today’s fastest processors. If Moore’s Law were to hold, it would take about 75 to 80 years to reach this quantum limit, and no system can ever surpass it. “It doesn’t depend on the physical nature of the system or how it’s implemented, what algorithm you use for computation,” Levitin says. “This bound poses an absolute law of nature, just like the speed of light.” The physicists note that technological barriers may slow Moore’s Law as technology approaches the limit.
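
The article does not quote the equation itself. The bound that Levitin and Toffoli's result builds on is the well-known Margolus-Levitin limit, which in standard form reads:

    % Minimum time for one elementary operation (one transition to a
    % distinguishable quantum state) for a system whose average energy
    % above its ground state is E; h is Planck's constant:
    \tau_{\min} \geq \frac{h}{4E}

Because the limit depends only on the system's energy, no choice of hardware or algorithm can evade it; extrapolating Moore's Law doublings until processors reach this energy-limited rate is what yields the 75-to-80-year estimate.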

-More-

Quantum Computers Could Tackle Enormous Linear Equations.

Science News (10/16/09) Sanders, Laura

Aram Harrow of the University of Bristol in England, along with the Massachusetts Institute of Technology’s Avinatan Hassidim and Seth Lloyd, believes that encoding large datasets of linear equations in quantum form will enable quantum computers to quickly solve problems with billions or even trillions of variables. The team’s new quantum algorithm could potentially open quantum computers to a much wider range of applications. Complex processes such as image and video processing, genetic analyses, and Internet traffic control involve enormous systems of linear equations. “Solving these gigantic equations is a really huge problem,” Lloyd says. “Even though there are good algorithms for doing it, it still takes a very long time.” A classical computer might need at least 100 trillion steps to solve a problem with a trillion variables, while the newly proposed algorithm would enable a quantum computer to solve the same problem in a few hundred steps, according to the researchers. They plan to test the algorithm in the lab by having a quantum computer solve a set of linear equations with four variables, among other problems.
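
For a sense of scale, the planned four-variable experiment has a one-line classical counterpart, sketched below with invented matrix values for illustration. A classical solver must at minimum touch all N entries of the input, so its cost grows at least linearly in N; the quantum algorithm (published by Harrow, Hassidim, and Lloyd) instead encodes the input in the amplitudes of roughly log2(N) qubits and produces a quantum state proportional to the solution.

    import numpy as np

    # Classical counterpart of the planned four-variable test: solve A x = b
    # directly. For a trillion variables this approach needs the "at least
    # 100 trillion steps" cited above; the quantum algorithm's step count
    # grows only polylogarithmically with the number of variables.
    A = np.array([[4.0, 1.0, 0.0, 0.0],
                  [1.0, 3.0, 1.0, 0.0],
                  [0.0, 1.0, 3.0, 1.0],
                  [0.0, 0.0, 1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0, 4.0])
    print(np.linalg.solve(A, b))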

-More-

Volunteering Computers for Science.

Wall Street Journal (10/20/09) P. D2; Singer-Vine, Jeremy

To aid in the number-crunching needed to process ever-growing volumes of data in biomedical and other types of scientific research, researchers are recruiting citizen volunteers to contribute the power of their idle household computers. This is possible thanks to a massive network that allows scientists to parcel out the work in small chunks. Volunteers download an application onto their systems that connects to a network of other citizen volunteers and researchers. The network assigns each system a tiny piece of a project’s puzzle; the system solves it and sends the result back to the network’s server. Volunteer computing efforts are usually built on the open source software known as the Berkeley Open Infrastructure for Network Computing (Boinc). University of California, Berkeley scientist David Anderson, who created Boinc, says two key security precautions have been implemented to mitigate the open network’s risk. One uses digital signatures to prevent hackers from hijacking an existing project’s network. The other walls off all Boinc activity from the rest of the host computer, preventing any malicious code from causing significant damage. Recent advances in Internet speeds and personal computer power have helped triple the combined power of volunteer computing efforts over the past two years, according to boincstats.com. Currently, four million computers owned by almost two million users give the roughly 60 projects using Boinc access to about 2,500 teraflops of processing power. Projects that take advantage of such volunteer number-crunching include IBM’s nonprofit World Community Grid, which lends research support to various medical and humanitarian studies.
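
The mechanics described above reduce to a simple cycle: split, distribute, compute, return. A minimal sketch of that cycle (the function names are hypothetical illustrations, not Boinc's actual API; real Boinc layers scheduling, redundant validation, and signed work units on top of this idea):

    def split_into_work_units(dataset, chunk_size):
        """Server side: parcel a big job into small, independent chunks."""
        return [dataset[i:i + chunk_size]
                for i in range(0, len(dataset), chunk_size)]

    def process_work_unit(unit):
        """Client side: the compute kernel run on a volunteer's idle CPU
        (a stand-in numeric computation here)."""
        return sum(x * x for x in unit)

    # Each volunteer machine fetches one unit, solves it, and returns the
    # result to the project server, which aggregates the pieces.
    dataset = list(range(10_000))
    results = [process_work_unit(u) for u in split_into_work_units(dataset, 500)]
    print(f"{len(results)} work units completed; total = {sum(results)}")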

-More-

Crystals Hold Supercomputer Key.

BBC News (10/18/09)

University of Edinburgh researchers have used low-energy lasers to make salt crystals in gel, a technique that could make it possible within the next 10 years to store a terabyte of data in a space the size of a sugar cube. The researchers focused two overlapping low-energy laser beams on a salt solution, providing exactly the right amount of energy to form a temporary crystal. Edinburgh professor Andy Alexander says the process could improve on traditional optical data storage methods such as CDs. Whereas a CD stores data on a two-dimensional surface, three-dimensional (3D) optical data storage uses many layers, with tiny crystals acting as the storage points. Information would be stored by making marks in a pattern and read back using light. Alexander says 3D, crystal-based devices could be available within 10 years and would let users easily store, access, and move massive amounts of data. “This research builds on a discovery that was made by accident many years ago, when it was found that light can be used to trigger crystal formation,” he says. “We have refined this technique and now we can create crystals on demand. There is much work to be done before these crystals can be used in practical applications such as optical storage, but we believe they have significant potential.”

-More-

Vulnerability Seen in Amazon’s Cloud-Computing.

Technology Review (10/23/09) Talbot, David

A new study by researchers from the Massachusetts Institute of Technology (MIT) and the University of California, San Diego (UCSD) suggests that leading cloud-computing services may be vulnerable to eavesdropping and malicious attacks. The study found that it may be possible for attackers to accurately map where a target’s data is physically located within the cloud and use various strategies to collect that data. MIT postdoctoral researcher Eran Tromer says the vulnerabilities uncovered in the study, which tested only Amazon.com’s Elastic Compute Cloud (EC2) service, are likely present in current virtualization technology and will affect other cloud providers as well. The attack described in the study involves first determining which physical servers a victim is using within the cloud, placing a malicious virtual machine on those servers, and then attacking the victim. The researchers demonstrated that once the malicious virtual machine is on the target’s server, it can carefully monitor how access to shared resources fluctuates, potentially giving the attacker a glimpse of sensitive information about the victim. The attack capitalizes on the fact that virtual machines have IP addresses that are visible to anyone within the cloud, and that nearby addresses often share the same physical hardware. An attacker can therefore set up numerous virtual machines, look at their IP addresses, and determine which ones share a server with the target. It may even be possible to recover a victim’s passwords through a keystroke-timing attack, Tromer says. Amazon’s Kay Kinton says Amazon has deployed safeguards that prevent attackers from using the techniques described in the study.
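
The mapping step described above boils down to a simple heuristic: inside the cloud, numerically close internal IP addresses often mean the same physical host. A toy illustration of that check (the addresses and the block-sized window are invented for illustration, not the study's measured values):

    import ipaddress

    def possibly_co_resident(probe_ip: str, target_ip: str) -> bool:
        """Flag probe VMs whose cloud-internal addresses are close enough
        to the target's that they may share physical hardware."""
        gap = abs(int(ipaddress.ip_address(probe_ip)) -
                  int(ipaddress.ip_address(target_ip)))
        return gap < 256  # i.e., within the same small internal block

    # The attacker launches many probe VMs and keeps those that look
    # co-resident with the target for the side-channel phase.
    probes = ["10.251.42.7", "10.251.42.200", "10.250.1.3"]
    target = "10.251.42.99"
    print([ip for ip in probes if possibly_co_resident(ip, target)])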

-More-