November 2013

The Internet of Things Needs a Lot of Work

IDG News Service (11/12/13) Stephen Lawson

Mobile connected devices present too many challenges for users, said industry leaders during a panel at the recent Open Mobile Summit. Frog Design’s Mark Rolston notes that users have to link devices, enter passwords, manage home Wi-Fi, and deal with corporate IT departments at work, and are near their limit for babysitting devices all day. The panelists say the whole premise of mobile interfaces is wrong: devices should anticipate what users want and learn from prior events rather than forcing users to ask. “There’s just a million use cases you can think of where today there’s [an] interface to try to understand what the user wants, and in the future there should just be action that does the right thing,” says Rick Osterloh of Google’s Motorola Mobility subsidiary. He says a car should connect to the Internet by itself, and the lights should turn on automatically when the driver reaches home. Rolston also notes that rather than using a phone to control devices in the home, the many connected appliances should together form a computer of their own. “The computer is not this box in the corner, or box in your pocket, it’s something you are surrounded by,” Rolston says.


SDSC Uses Meteor Raspberry Pi Cluster to Teach Parallel Computing

UCSD News (CA) (11/12/13) Jan Sverina

University of California, San Diego (UCSD) researchers have built Meteor, a Linux cluster using 16 Raspberry Pi computers, as part of a program to teach children and adults the basics of parallel computing. Meteor is a complement to Comet, a new supercomputer to be deployed in early 2015 thanks to a $12 million U.S. National Science Foundation grant. “The goal of Meteor is to educate kids and adults about parallel computing by providing an easy-to-understand, tangible model of how computers can work together,” says the San Diego Supercomputer Center’s (SDSC) Rick Wagner. The researchers already have started developing a curriculum for high school students as part of SDSC’s education program. The researchers also have worked with UCSD undergraduates on projects supported by Meteor, with students creating games that operate across the cluster. This fall, Wagner started teaching a visualization and computing course using Meteor to help visualize data generated by SDSC’s high-performance computing systems. “This kind of development and learning is what the Raspberry Pi is ideal for: taking a complex problem and allowing someone to solve it in a simple, unconstrained environment, as well as encouraging students to design new hardware that we haven’t yet imagined,” Wagner says.
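The core idea a 16-node cluster like Meteor teaches can be sketched in plain Python: split one job into chunks, hand each chunk to a separate worker, and combine the partial results. This is an illustrative sketch (not the SDSC curriculum), using local processes to stand in for the 16 Raspberry Pi nodes:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """The work one 'node' does: sum the integers in [start, stop)."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=16):
    """Sum 0..n-1 by dividing the range among `workers` processes,
    mimicking how a cluster splits a job across its nodes."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same answer as sum(range(1_000_000))
```

The result matches the serial computation; the teaching point is that each worker touched only its own slice of the data.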


New Supercomputer Uses SSDs Instead of DRAM and Hard Drives

IDG News Service (11/04/13) Agam Shah

Lawrence Livermore National Laboratory (LLNL) this month is deploying Catalyst, a new supercomputer that uses solid-state drive (SSD) storage as an alternative to dynamic random access memory and hard drives, and delivers a peak performance of 150 teraflops. Catalyst has 281 terabytes of total SSD storage and is configured as a cluster broken into 324 computing units, each of which has two 12-core Xeon E5-2695v2 processors, totaling 7,776 central processing unit cores. Catalyst is built around the Lustre file system, which helps break bottlenecks and improves internal throughput in distributed computing systems. “As processors get faster with every generation, the bottleneck gets more acute,” says Intel’s Mark Seager. He notes that Catalyst offers a throughput of 512GB per second, which is the same as LLNL’s Sequoia, the world’s third-fastest supercomputer. Although Catalyst’s peak performance is nowhere close to the world’s fastest high-performance computers, its use of SSD technology is noteworthy. Experts say SSDs are poised for widespread enterprise adoption as they consume less energy and are becoming more reliable. For example, faster SSDs increasingly are replacing hard drives in servers to improve data access rates, and they also are being used in some servers as cache, where data is temporarily stored for quicker processing.
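The cluster figures in the article can be sanity-checked directly; this short calculation uses only the numbers reported above (the per-node SSD figure it derives is an approximation, not stated in the article):

```python
# Figures as reported for Catalyst.
nodes = 324            # computing units in the cluster
cpus_per_node = 2      # 12-core Xeon E5-2695v2 processors per unit
cores_per_cpu = 12

total_cores = nodes * cpus_per_node * cores_per_cpu
print(total_cores)     # 7776, matching the reported total core count

ssd_total_tb = 281     # total SSD storage across the cluster
gb_per_node = ssd_total_tb * 1000 / nodes
print(round(gb_per_node))  # roughly 867 GB of SSD per computing unit
```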


The Status of Moore’s Law: It’s Complicated

IEEE Spectrum (10/28/13) Rachel Courtland

As computer chips grow denser, it becomes increasingly difficult to measure the progression of Moore’s Law. Exacerbating this situation is the mutability of node-name definitions, especially as manufacturers prepare to launch 14nm and 16nm chips. Some analysts suggest that, whatever the next chip generation is called, the migration from old to new no longer guarantees the kind of price or performance improvements that it once did. The divergence between performance and node name began around the mid-1990s, as chipmakers not only continued to use lithography to pattern circuit components and wires on the chip, but also started etching away the ends of the transistor gate to make the devices shorter and faster. Eventually, “there was no one design rule that people could point to and say, ‘That defines the node name,’” says Intel fellow Mark Bohr. Despite this change in the transistor measurement trend, manufacturers persisted in packing the devices closer and closer together, assigning each successive chip generation a number about 70 percent that of the previous one. Today the node names are no longer consistent with the size of any specific chip dimension. No matter what definition is applied, the numbers in node names have steadily declined, as has the distance between transistor gates.
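The 70-percent convention described above has a simple geometric rationale: shrinking linear dimensions by 0.7 shrinks area by 0.7 squared, about 0.49, so each generation roughly doubles transistor density. A quick illustration (the 90nm starting point is an example, not from the article):

```python
# Each node name is ~70 percent of the previous one; since area scales
# with the square of linear dimensions, 0.7**2 = 0.49, so transistor
# density roughly doubles per generation.
node_nm = 90.0  # illustrative starting node, in nanometers
names = []
for _ in range(5):
    node_nm *= 0.7
    names.append(round(node_nm))

print(names)     # [63, 44, 31, 22, 15] -- close to the familiar
                 # 65/45/32/22/16 nm node names
print(0.7 ** 2)  # 0.49: area shrink per generation, i.e. ~2x density
```

As the article notes, the names still follow this numeric progression even though they no longer correspond to any measurable feature on the chip.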