March 2015

One-Atom-Thin Silicon Transistors Hold Promise for Super-Fast Computing

University of Texas at Austin (02/03/15)

A major advance involving the world’s thinnest silicon material could lead to dramatically faster, smaller, and more efficient computer chips. Researchers at the University of Texas at Austin have demonstrated that silicene can be made into transistors. Although the material, a one-atom-thick layer of silicon atoms, holds promise for commercial applications, silicene is difficult to create and work with because of its complexity and its instability when exposed to air. Professor Deji Akinwande and his team worked with Alessandro Molle at Italy’s Institute for Microelectronics and Microsystems to address these issues. The researchers developed a new fabrication method that reduces silicene’s exposure to air: they let a hot vapor of silicon atoms condense onto a crystalline block of silver in a vacuum chamber, forming a silicene sheet on the thin silver layer, and then capped it with a nanometer-thick layer of alumina. This process allowed them to peel off the material and transfer it, silver-side up, onto an oxidized-silicon substrate. They then scraped away some of the silver to leave behind two islands of metal as electrodes, with a strip of silicene between them. Because of silicene’s close chemical affinity to silicon, Akinwande says the material could offer “an opportunity in the road map of the semiconductor industry.”


Building Trustworthy Big Data Algorithms

Northwestern University Newscenter (01/29/15) Emily Ayshford

Northwestern University researchers recently tested latent Dirichlet allocation (LDA), one of the leading big data algorithms for finding related topics within unstructured text, and found it was neither as accurate nor as reproducible as a leading topic modeling algorithm should be. In response, the researchers developed a new topic modeling algorithm, called TopicMapping, which they say showed very high accuracy and reproducibility in tests. TopicMapping begins by preprocessing data to replace words with their stems. It then builds a network of connected words and identifies “communities” of related words, which serve as topics. The researchers found TopicMapping was able to separate documents perfectly according to language and to reproduce its results. Northwestern professor Luis Amaral says the results show the need for more testing of big data algorithms and more research into making them accurate and reproducible. “Companies that make products must show that their products work,” Amaral says. “They must be certified. There is no such case for algorithms. We have a lot of uninformed consumers of big data algorithms that are using tools that haven’t been tested for reproducibility and accuracy.”
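The pipeline the article describes — stem the words, link words that occur together into a network, treat word “communities” as topics, and assign each document to the topic it shares the most words with — can be sketched in a few lines of Python. Everything here is a simplification for illustration, not the authors’ implementation: the crude suffix-stripping stemmer stands in for a real stemmer, co-occurrence within a document stands in for the actual network construction, and connected components stand in for a genuine community-detection algorithm.

```python
from collections import defaultdict
from itertools import combinations

def stem(word):
    # Crude suffix stripping; a stand-in for a real stemmer (e.g. Porter).
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def word_network(docs):
    # Link every pair of stems that co-occur within the same document.
    edges = defaultdict(set)
    for doc in docs:
        stems = {stem(w.lower()) for w in doc.split()}
        for a, b in combinations(sorted(stems), 2):
            edges[a].add(b)
            edges[b].add(a)
    return edges

def communities(edges):
    # Connected components as a naive stand-in for community detection.
    seen, comps = set(), []
    for node in list(edges):
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(edges[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def assign_topic(doc, comps):
    # A document's topic is the community sharing the most stems with it.
    stems = {stem(w.lower()) for w in doc.split()}
    return max(range(len(comps)), key=lambda i: len(stems & comps[i]))
```

Because English and Spanish documents share essentially no word stems, their words land in disjoint components of the network, which is the mechanism behind the perfect separation by language mentioned above.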