‘Nobel Prize in Computing’ Goes to Distributed Computing Wrangler Leslie Lamport

Network World (03/18/14) Bob Brown 

Microsoft Research principal researcher Leslie Lamport will receive the 2013 ACM A.M. Turing Award, widely considered the Nobel Prize in computing, for his breakthrough work in “imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages,” according to ACM. Lamport’s algorithms, models, and verification systems have given distributed computer systems major roles throughout data center, security, and cloud computing environments. “By finding useful ways to write specifications and prove correctness of realistic algorithms, assuring a strong foundation for complex computing operations, [Lamport] helped to move verification from an academic discipline to a practical tool,” says ACM president Vint Cerf. Lamport is being honored for milestones that include the concept of Byzantine failure and his work on temporal logic. Lamport worked at Digital Equipment Corp. and SRI International before joining Microsoft in 2001. His 1978 paper, “Time, Clocks, and the Ordering of Events in a Distributed System,” is one of the most highly cited papers in computer science.
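
The 1978 paper introduced the logical-clock rule that underpins much of this work: each process increments a counter for every local event and, on receiving a message, advances its counter past the sender’s timestamp, so causally related events are consistently ordered. A minimal Python sketch of that rule follows; the class and usage are illustrative, not taken from any particular library.

```python
# Minimal sketch of a Lamport logical clock, the mechanism described in
# "Time, Clocks, and the Ordering of Events in a Distributed System" (1978).
# The class and message handling are illustrative, not from any specific library.

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """Advance the clock for a local event."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp an outgoing message."""
        return self.tick()

    def receive(self, msg_time):
        """On receipt, jump past the sender's timestamp: max(local, msg) + 1."""
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two processes exchanging one message: the receiver's clock is pushed past
# the sender's, so the send event is ordered before the receive event.
a, b = LamportClock(), LamportClock()
t_send = a.send()            # a's clock: 1
t_recv = b.receive(t_send)   # b's clock: 2
assert t_send < t_recv
```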

MORE

New Model Reduces Data Access Delay, Could Increase Speed by Up to 100x

IIT Today (03/11/14)

Illinois Institute of Technology professor Xian-He Sun has developed Concurrent Average Memory Access Time (C-AMAT), a new mathematical model of data access delay that promises to cut the penalty associated with accessing data and increase speed by up to 100 times through parallel memory access. “There’s no question the primary limits on computing performance, from mobile phones to supercomputers, are the costs associated with data movement,” says University of Chicago professor Andrew A. Chien. “Dr. Sun’s work attacks the critical problem of understanding and modeling data movement costs and systems performance and, thus, may enable better performing software [today] and improved hardware designs in the future.” C-AMAT is the first formal mathematical model to promote and evaluate the concept of parallel memory for reducing data access delay via explicit parallel data access. In addition, C-AMAT can mitigate the memory-wall effect and improve memory system performance. “The most profound research is not the design of the fastest algorithm for a given problem; it is revealing a fundamental computing property so hundreds or even thousands of algorithms can be developed upon it,” says Sun, creator of Sun-Ni’s law, one of the three scalable computing laws along with Amdahl’s law and Gustafson’s law.
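
The article does not give the model’s formula, so the sketch below only illustrates the underlying idea: when memory accesses overlap, their costs are shared rather than paid serially. It contrasts the textbook average memory access time (AMAT) with a simple concurrency-adjusted variant; the function names and the exact form of the adjusted expression are assumptions for illustration, not Sun’s published C-AMAT definition.

```python
# Illustration of how access concurrency changes effective memory access time.
# amat() is the textbook formula; concurrent_amat() divides hit and miss costs
# by the number of overlapping accesses. The adjusted form is an illustrative
# assumption, not the published C-AMAT definition.

def amat(hit_time, miss_rate, miss_penalty):
    """Classic average memory access time, in cycles per access."""
    return hit_time + miss_rate * miss_penalty

def concurrent_amat(hit_time, miss_rate, miss_penalty,
                    hit_concurrency=1, miss_concurrency=1):
    """Concurrency-adjusted estimate: overlapping accesses share the cost."""
    return (hit_time / hit_concurrency
            + miss_rate * miss_penalty / miss_concurrency)

# Example: 1-cycle hits, a 5% miss rate, and a 100-cycle miss penalty.
print(amat(1, 0.05, 100))                   # 6.0 cycles per access
print(concurrent_amat(1, 0.05, 100, 4, 8))  # 0.875 cycles per access
```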

MORE

Storage-Happy Petabyte Capacity Supercomputers Due in 2015

IDG News Service (03/13/14) Agam Shah 

The Wrangler supercomputer at the Texas Advanced Computing Center at the University of Texas at Austin and the Comet supercomputer at the San Diego Supercomputer Center at the University of California, San Diego are currently being engineered with a new design that includes high levels of storage relative to the number of processors in the system. The new supercomputers will provide better throughput, in-memory processing, and caching features, which could offer a faster and more efficient way to solve complex problems, according to the U.S. National Science Foundation (NSF). NSF wants more sophisticated supercomputing designs so data travels faster between processing elements, says consultant Dan Olds. “It has to do with the changing nature of high-performance computing,” Olds says. “They want to control massive data streams instead of handling batch [jobs].” The Comet supercomputer is well suited to both high-throughput and data-intensive computing because its heterogeneous configuration will support not only complex simulations, but also advanced analytics and visualization of output. Meanwhile, Wrangler will have 3,000 processing cores dedicated to data analysis, and flash storage layers for analytics.

MORE

What the Internet of 2025 Might Look Like

The Wall Street Journal (03/11/14) Brian R. Fitzgerald 

As the Internet approaches its 25th anniversary, the Pew Research Center has released responses from science and technology experts about what the future Internet might look like. Pew asked a group of experts in various fields what impact they thought the Internet would have in 2025 on social, political, and economic processes. Experts predict the Internet will be thoroughly embedded in homes and integrated into people’s daily lives, with some noting a rise in wearable technology, massive open online courses, and business model changes. “We may literally be able to adjust both medications and lifestyle changes on a day-by-day basis or even an hour-by-hour basis, thus enormously magnifying the effectiveness of an ever more understaffed medical delivery system,” predicts University of California, Berkeley software developer Aron Roberts. Massachusetts Institute of Technology senior research scientist David Clark says devices will become increasingly autonomous. “More and more, humans will be in a world in which decisions are being made by an active set of cooperating devices,” Clark says. Google chief Internet evangelist and ACM president Vint Cerf says business models will need to adapt to the economics of digital communication and storage. He also says, “We may finally get to Internet voting, but only if we have really strong authentication methods available.”

MORE

Stanford Engineers Create a Software Tool to Reduce the Cost of Cloud Computing

Stanford Report (CA) (02/28/14) Tom Abate 

Stanford University professor Christos Kozyrakis and doctoral student Christina Delimitrou have developed Quasar, a cluster management system they say can triple server efficiency and provide reliable service at all times, thereby reducing data center operating expenses. Data centers run applications such as search services for consumers or data analysis for businesses, placing varying demands on the data center and using different amounts of server capacity. Cluster management tools apportion the workload and assign applications to specific servers, based on input from software developers about the capacity the applications will require. Developers often overestimate when reserving server capacity, with the cumulative impact leading to significant excess capacity. Quasar begins by asking about the performance that applications require, then ensures adequate server capacity to meet various requirements. “We want to switch from a reservation-based cluster management to a performance-based allocation of data center resources,” Kozyrakis says. Quasar uses an algorithm similar to popular movie and book recommendation algorithms to suggest the minimum number of servers for each application and which applications can run together most effectively. Boosting data center efficiency will be essential for the growth of cloud computing, because electricity demands that threaten to overwhelm power plants preclude the option of continually adding servers.
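
The recommendation analogy can be made concrete with a small sketch: profile each application on a few server allocations, approximate the sparse applications-by-allocations performance matrix with a low-rank model (the same technique recommendation systems apply to user–item ratings), and pick the smallest allocation whose estimated performance meets the target. The data, names, and rank-1 SVD model below are illustrative assumptions, not Quasar’s actual algorithm or interface.

```python
# Recommendation-style resource allocation sketch: estimate unprofiled
# (application, server-count) performance from a low-rank approximation of a
# partially observed performance matrix, then choose the smallest allocation
# that meets a target. Data and the rank-1 model are illustrative assumptions.
import numpy as np

# Rows: applications; columns: server counts 1, 2, 4, 8.
# NaN marks configurations that were never profiled.
perf = np.array([
    [100.0, 190.0, 360.0, np.nan],
    [ 80.0, np.nan, 300.0, 560.0],
    [ 60.0, 115.0, np.nan, 420.0],
])

# Fill unknowns with column means, then keep only the top singular component.
col_means = np.nanmean(perf, axis=0)
filled = np.where(np.isnan(perf), col_means, perf)
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
estimate = s[0] * np.outer(U[:, 0], Vt[0, :])  # rank-1 reconstruction

servers = np.array([1, 2, 4, 8])
target = 450.0  # required throughput for application 0 (hypothetical units)
enough = servers[estimate[0] >= target]
print("smallest allocation meeting the target:",
      enough.min() if enough.size else "none")
```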

MORE

Ford Invites Open Source Community to Tinker Away

EE Times (02/27/14) Junko Yoshida 

People will be able to customize and tinker with their Ford vehicles using the automaker’s OpenXC open source platform. OpenXC works like an application programming interface for cars, combining open source hardware and software in a way that will enable enthusiasts to extend their vehicles with custom applications and pluggable modules. OpenXC uses standard, well-known tools to open up a wealth of data from the vehicle; the idea is to make the car as easy to program as a smartphone. Ford engineer Zachary Nelson has used OpenXC to repurpose the motor from a Microsoft Xbox 360 game controller to create a shift knob that vibrates to signal gear shifts in a standard-transmission Mustang. The prototype uses the OpenXC research platform to link devices to the car via Bluetooth, and shares vehicle data from the on-board diagnostics port. “We designed the platform such that people can have real-time access to the vehicle data and they can do whatever they want with that data,” Nelson says. He also notes that people with smartphones can use OpenXC to connect with real-time vehicle data.
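
OpenXC exposes vehicle signals as a stream of simple JSON name/value messages, which is what makes applications like the vibrating shift knob straightforward to build. The sketch below shows one way such a stream might be consumed; the canned messages, signal names, and shift-prompt threshold are assumptions for illustration, not Ford’s code.

```python
# Sketch of consuming an OpenXC-style JSON message stream and prompting a
# gear shift at high engine speed. The signal names, threshold, and canned
# stream below are illustrative assumptions, not Ford's implementation.
import json

SHIFT_RPM = 3000  # hypothetical engine speed at which to prompt an upshift

def handle_message(line, on_shift_prompt):
    """Parse one JSON message and prompt a shift when engine speed is high."""
    try:
        msg = json.loads(line)
    except ValueError:
        return  # ignore malformed lines
    if msg.get("name") == "engine_speed" and msg.get("value", 0) >= SHIFT_RPM:
        on_shift_prompt(msg["value"])

# Example with a canned stream instead of a live Bluetooth/OBD-II connection.
sample_stream = [
    '{"name": "vehicle_speed", "value": 52.0}',
    '{"name": "engine_speed", "value": 3150}',
]
for line in sample_stream:
    handle_message(line, lambda rpm: print(f"shift up (engine at {rpm} rpm)"))
```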

MORE

Livermore Joins With Oak Ridge and Argonne to Develop Next Supercomputers

LLNL News Center (02/26/14) Donald B. Johnston 

The collaboration of Oak Ridge, Argonne, and Livermore (CORAL) national laboratories plans to develop next-generation supercomputers capable of performing at up to 200 peak petaflops. CORAL is currently evaluating responses to a joint request for proposals for procurement issued in early January. The new supercomputers would be about 10 times faster than today’s most powerful high-performance computing systems, and would support the research missions at their respective labs. At Livermore, a system called Sierra would serve the National Nuclear Security Administration’s Advanced Simulation and Computing Program for stockpile stewardship. Oak Ridge and Argonne will utilize systems that fulfill the needs of their Department of Energy Office of Science missions under the Advanced Scientific Computing Research program. The technological innovations required for CORAL systems call for a deliberate and strategic investment plan. There are technological challenges to building the systems, such as containing power requirements, making sure they are reliable and resilient, and ensuring memory bandwidth is sufficient. The final decision on developing the supercomputers will be based on small prototype systems built by the selected vendors. CORAL, which wants to produce three systems by 2017-2018, says the project could be key to the development of exascale systems.

MORE