April 2011

IBM Shows Smallest, Fastest Graphene Processor.

IDG News Service (04/07/11) Agam Shah

IBM researchers say they have developed a graphene transistor that can complete 155 billion cycles per second. The researchers say it is the smallest transistor IBM has ever created, with a gate length of just 40 nm. Graphene-based transistors can be produced at low cost using standard semiconductor materials, says IBM researcher Yu-Ming Lin. The transistor was developed through a joint IBM and U.S. Defense Advanced Research Projects Agency initiative to create radio frequency transistors. The graphene transistors use a new kind of substrate called diamond-like carbon, according to IBM. “The performance of these graphene devices exhibited excellent temperature stability … a behavior that largely benefited from the use of a novel substrate of diamond-like carbon,” the company says. IBM fellow Phaedon Avouris says commercialized graphene transistors will improve performance in applications related to wireless communications, networking, radar, and imaging.

MORE

Supercomputers Let Up on Speed.

Chronicle of Higher Education (04/03/11) Jeffrey R. Young

Smarter rather than faster design appears to be coming into vogue as a gauge of a supercomputer’s success. A federal report from the President’s Council of Advisors on Science and Technology urges a more balanced portfolio of U.S. supercomputing development and cautions against excessive emphasis on speed rankings. The study warns that “engaging in such an ‘arms race’ could be very costly, and could divert resources away from basic research aimed at developing the fundamentally new approaches” to supercomputing. One supercomputer that favors smarter design over raw speed is Blue Waters at the University of Illinois at Urbana-Champaign, which features some memory that resides along the pathways between processors. Future supercomputers may employ a system that operates in the manner of a search engine, spreading computational problems among processors distributed across an expansive physical network. The Graph500 supercomputer ranking, unveiled in November, does not measure processing speed but rather how fast supercomputers solve complex problems involving randomly generated graphs.
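
For illustration only, here is a toy Python sketch of the kind of measurement a Graph500-style run makes: build a random graph, time a breadth-first search, and report traversed edges per second. It is not the official Graph 500 reference code and omits its Kronecker graph generator and validation steps.

```python
import random
import time
from collections import deque, defaultdict

def random_graph(n_vertices: int, n_edges: int) -> dict:
    """Build a crude random undirected graph as an adjacency list."""
    adj = defaultdict(list)
    for _ in range(n_edges):
        u, v = random.randrange(n_vertices), random.randrange(n_vertices)
        adj[u].append(v)
        adj[v].append(u)
    return adj

def bfs_teps(adj: dict, root: int) -> float:
    """Run a breadth-first search and return traversed edges per second."""
    start = time.perf_counter()
    visited, frontier, traversed = {root}, deque([root]), 0
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            traversed += 1
            if v not in visited:
                visited.add(v)
                frontier.append(v)
    return traversed / (time.perf_counter() - start)

adj = random_graph(100_000, 1_000_000)
print(f"{bfs_teps(adj, root=0):.2e} traversed edges per second")
```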

MORE

UT Debuts Its Newest Supercomputer.

Austin American-Statesman (TX) (04/04/11) Kirk Ladendorf

The University of Texas at Austin (UT), along with the Texas A&M University, Texas Tech, and the University of Texas System, among others, has built the Lonestar 4 supercomputer, which contains 1,888 Dell blade servers, each with two Intel Xeon 5600 processors. The new supercomputer is expected to support more than 1,000 research projects over the next four years. The Lonestar 4 will perform 8 million trillion computer computations over its projected four-year lifespan. Although Lonestar 4 does not have the total computer power of UT’s Ranger, it could be faster because it uses more advanced processor chips and a faster network. University of Tokyo researchers are using the computer to model the recent earthquake and tidal wave, as well as where radioactive water from the Fukushima Daiichi nuclear plant has dispersed in the ocean and the atmosphere. UT president William Powers Jr. says high-performance computing as “the fuel on which much of the modern research university runs.”

MORE

Cloud Computing, Data Policy on Track to ‘Democratize’ Satellite Mapping.

South Dakota State University (03/24/11)

New U.S. Geological Survey data policies and advances in cloud computing are leading to the democratization of satellite mapping, which could provide wider access to information about the earth through platforms such as Google Earth Engine. “This is an incredible advantage in terms of generating the value-added products that we create for quantifying deforestation, natural hazards, cropland area, urbanization, you name it,” says South Dakota State University (SDSU) professor Matt Hansen. He says free satellite images, coupled with the cloud computing capability offered by Google and similar organizations, are making it possible for ordinary users to analyze satellite imagery without costly hardware. Hansen and SDSU postdoctoral researcher Peter Potapov collaborated with Google to process more than 50,000 images and generate a detailed map of Mexico that demonstrates the technology’s potential. Enhanced publicly available processing tools will further democratize satellite data processing as more people work with the data, although Hansen notes that this will require greater collaboration among academics, government scientists, and private industry in processing and characterizing the satellite data sets.
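
As a minimal sketch of the kind of cloud-side analysis described, the following uses the Earth Engine Python client to compute a vegetation index over Landsat imagery without downloading the scenes. The earthengine-api package, the collection ID, and the band names are assumptions drawn from the current public API, which postdates the 2011 project described above; prior authentication with Google is also assumed.

```python
import ee

ee.Initialize()  # assumes the account has already been authenticated

# Median composite of free USGS Landsat 5 surface-reflectance scenes over a point.
point = ee.Geometry.Point(-99.1, 19.4)
composite = (ee.ImageCollection('LANDSAT/LT05/C02/T1_L2')
             .filterDate('2010-01-01', '2010-12-31')
             .filterBounds(point)
             .median())

# NDVI from the near-infrared and red bands; all computation runs in the cloud.
ndvi = composite.normalizedDifference(['SR_B4', 'SR_B3'])
mean_ndvi = ndvi.reduceRegion(ee.Reducer.mean(), point.buffer(5000), scale=30)
print(mean_ndvi.getInfo())
```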

MORE

Researchers in Taiwan to Use Volunteer Computing to Visualize Earthquakes.

AlphaGalileo: E-Science Talk (03/28/11)

Researchers in Taiwan have set up Shakemovie@home in an attempt to reduce the time it takes to create animations that simulate the ground motion of earthquakes. Shake movies currently take several hours to produce because intensive calculations must be performed on models of both the earthquake and the earth’s structure. With Shakemovie@home, researchers at Academia Sinica’s Institute of Earth Sciences will use volunteers’ computers only for the elements of the calculation that depend on the earth model. These elements, called Green’s functions, will be computed, saved, and stored in advance and retrieved as they are needed, with the retrieval farmed out to volunteer computers. Because the Green’s functions will not have to be recalculated for every event, researchers will be able to make a new shake movie in just minutes. “By distributing this task to volunteers, to computers at home, we can get a better and faster way of making shake movies,” says Academia Sinica professor Li Zhao. “Now we have shake movies in a few hours, but with volunteer computing we could have it in minutes.”
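
A toy Python sketch of the precompute-then-reuse idea: once the earth model’s impulse responses (the Green’s functions) are stored, each new earthquake only requires a cheap convolution with its source-time function. The array shapes, station count, and pulse shape below are placeholders, not values from the Shakemovie@home project.

```python
import numpy as np

# Placeholder Green's functions: the earth model's response from a source
# location to each recording station, computed once and stored in advance.
rng = np.random.default_rng(0)
n_stations, n_samples = 5, 2_000
greens = rng.standard_normal((n_stations, n_samples))

# For a new earthquake, only the source-time function changes (here an
# assumed Gaussian pulse). Convolving it with the stored Green's functions
# yields synthetic seismograms in seconds, instead of rerunning hours of
# full wave-propagation calculations.
t = np.linspace(0, 10, 500)
source_time_function = np.exp(-((t - 2.0) ** 2))

seismograms = np.array([
    np.convolve(g, source_time_function, mode="full")
    for g in greens
])
print(seismograms.shape)  # one synthetic trace per station
```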

MORE

Multicore Coding Standards Aim to Ease Programming.

IDG News Service (03/29/11) Agam Shah

The Multicore Association has established specifications for a programming model designed to make it easier to write software for multicore chips, particularly those used in smartphones, tablets, and embedded systems. The association is developing a set of foundation application programming interfaces (APIs) to standardize communication, resource sharing, and virtualization. It has completed the multicore communication API (MCAPI) and the multicore resource API (MRAPI), and is working on additional tools and APIs involving virtualization. “The primary goal for all parties is to establish portability,” says Multicore Association president Markus Levy, who notes that a consistent programming model will make it easier to reuse applications on different platforms. “By using MCAPI, the embedded applications code does not need to be aware of the inter-core communications method,” says Mentor Graphics’ Colin Walls. MCAPI lets programmers write multicore-enabled application code once and reuse it across multiple products in a product line and in next-generation devices, says PolyCore Software CEO Sven Brehmer. MCAPI will be used in telecom and data communications infrastructure, as well as in medical devices, high-performance computing, and military and aeronautics equipment, Brehmer says.
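
For illustration of the portability idea only, here is a conceptual analogue in Python: application tasks exchange messages through an abstract channel and never touch the platform-specific transport. This is not the Multicore Association’s C-language MCAPI; it simply sketches the same separation using the standard multiprocessing module.

```python
from multiprocessing import Process, Queue

def sensor_task(channel: Queue) -> None:
    # Producer task: sends readings without knowing how the messages travel.
    for reading in (17, 23, 42):
        channel.put(reading)
    channel.put(None)  # end-of-stream marker

def filter_task(channel: Queue) -> None:
    # Consumer task: receives through the same abstract endpoint.
    while (msg := channel.get()) is not None:
        print("received", msg)

if __name__ == "__main__":
    # The channel's underlying transport could be swapped (shared memory,
    # sockets, on-chip interconnect) without changing the task code above,
    # which is the kind of reuse the MCAPI specification aims to enable.
    channel = Queue()
    producer = Process(target=sensor_task, args=(channel,))
    consumer = Process(target=filter_task, args=(channel,))
    producer.start(); consumer.start()
    producer.join(); consumer.join()
```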

MORE