July 2010

New Languages, and Why We Need Them.

Technology Review (07/26/10) Pavlus, John

The creators of 24 new programming languages, including hobbyists, academics, and corporate researchers, recently presented their work at the Emerging Languages Camp. “There’s a renaissance in language design at the moment, and the biggest reason for it is that the existing mainstream languages just aren’t solving the problems people want solved,” says Google’s Rob Pike. Google’s Go language was designed to manage the complexity of distributed, multicore computing platforms such as data centers and cloud networks. Go reduces redundancies in the compiling process, which means that “programs can be ready to execute in a matter of seconds,” Pike says. Vrije Universiteit Brussel researcher Tom Van Cutsem presented AmbientTalk, an experimental language based on ambient-oriented programming, which departs from traditional computing by not relying on central infrastructure and by assuming that network connections are volatile and unpredictable. “AmbientTalk is smart enough to buffer messages so that when the connection drops, they’re not lost, and when the connection is restored, it sends the messages through as if nothing happened,” Van Cutsem says. Microsoft’s Matt MacLaurin developed Kodu, a language designed to get young people interested in programming. “Our working theory is that programming is intrinsically fascinating and fun, like crosswords or sudoku,” MacLaurin says.
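The buffering behavior Van Cutsem describes can be sketched in a few lines of Python. This is only an illustration of the pattern, not AmbientTalk's actual semantics or API; all names here (BufferedChannel, on_connect, and so on) are hypothetical:

```python
from collections import deque

class BufferedChannel:
    """Queues outgoing messages while the link is down and flushes them
    transparently once it comes back, mimicking the behavior AmbientTalk
    provides for volatile network connections."""

    def __init__(self, transport):
        self.transport = transport      # callable that actually sends
        self.connected = False
        self.pending = deque()

    def send(self, message):
        if self.connected:
            self.transport(message)
        else:
            self.pending.append(message)   # not lost, just deferred

    def on_connect(self):
        self.connected = True
        while self.pending:                # replay as if nothing happened
            self.transport(self.pending.popleft())

    def on_disconnect(self):
        self.connected = False

# Messages sent while the link is down arrive after reconnection.
sent = []
ch = BufferedChannel(sent.append)
ch.send("hello")     # link down: buffered, not dropped
ch.on_connect()      # link up: "hello" is delivered
ch.send("world")     # delivered immediately
```

In AmbientTalk this bookkeeping is built into the language's asynchronous message sends; the sketch just makes the queue explicit.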


XML Pioneer Pitches Functional Programming for Concurrency.

InfoWorld (07/26/10) Krill, Paul

XML co-inventor Tim Bray says that functional programming, rather than threads, is the best option for programmers developing code for multicore processors. Programming for multicore chips requires developers to deal with concurrency, which presents its own issues, Bray says. “It involves a lot of problems that are very difficult to think about and reason about and understand,” he says. However, functional programming, made possible with languages such as Erlang and Clojure, offers a way to handle concurrency. Erlang was originally designed for programming massive telephone switches with thousands of processors. Bray says that although it has no classes, objects, or variables, it is “bulletproof and reasonably high-performance.” Clojure is a Lisp, runs on the Java Virtual Machine, and compiles to straight Java bytecode, which makes it very fast, Bray notes. “This is a super, super high-performance language,” he says.
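Erlang's approach cannot be reproduced in a short snippet, but its core idea, isolated processes that share no mutable state and communicate only by message passing, can be sketched in Python (a rough illustration of the style Bray advocates, not Erlang itself):

```python
import queue
import threading

def worker(inbox, outbox):
    """An Erlang-style 'process': no shared mutable state, only messages."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down cleanly
            break
        outbox.put(msg * msg)    # reply is a pure function of the message

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in (1, 2, 3):
    inbox.put(n)                 # concurrency via messages, not locks
inbox.put(None)
t.join()

results = [outbox.get() for _ in range(3)]   # [1, 4, 9]
```

Because the worker touches no shared variables, the usual hazards of thread programming (races, deadlocks over locks) never arise, which is the point Bray makes about functional designs.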


Supercomputer Reproduces a Cyclone’s Birth, May Boost Forecasting.

NASA News (07/21/10) Cook-Anderson, Gretchen

University of Maryland (UMD) research scientist Bo-wen Shen used the National Aeronautics and Space Administration’s Pleiades supercomputer and atmospheric data to create the first computer model to replicate the formation of a tropical cyclone five days in advance. Shen’s computer model could improve the understanding of the predictability of tropical cyclones. “To do hurricane forecasting, what’s really needed is a model that can represent the initial weather conditions (air movements and temperatures, and precipitation) and simulate how they evolve and interact globally and locally to set a cyclone in motion,” Shen says. He used actual data from the 2008 tropical cyclone Nargis, along with the new model, to develop insights into the dynamics of weather conditions over time and across different areas that generate storms. “In the last few years, high-resolution global modeling has evolved our understanding of the physics behind storms and their interaction with atmospheric conditions more rapidly than in the past several decades combined,” Shen says.


Advance Made Toward Communication, Computing at “Terahertz” Speeds.

Oregon State University News (07/19/10) Stauth, David

Scientists at Oregon State University (OSU), the University of Iowa, and Philipps University in Germany have developed a method for using a gallium arsenide nanodevice as a signal processor at terahertz speeds, which they say is a key advance for optical communication and computing. The method includes a way for nanoscale devices based on gallium arsenide to respond to strong terahertz pulses in an extremely short period, controlling the electrical signal in a semiconductor. “Electrons and wires are too slow, they’re a bottleneck,” says OSU professor Yun-shik Lee. “The future is in optical switching, in which wires are replaced by emitters and detectors that can function at terahertz speeds.” The scientists found that the gallium arsenide devices used in their research can achieve that goal. “We were able to manipulate and observe the quantum system, basically create a strong response and the first building block of optical signal processing,” Lee says. The first applications of the technology will likely be in optical communications, but the ultimate application could be quantum computing, Lee says.


Building Skills That Count.

University of Texas at Austin (07/16/10) Fidelman, Laura

The Texas Advanced Computing Center (TACC) has created a supercomputing curriculum designed to teach advanced computing skills to undergraduate and graduate students at the University of Texas at Austin. TACC scientists and researchers are teaching students how to make use of special-purpose, high-end computer systems to solve computational problems beyond the capabilities of typical desktop computers. The majority of students have backgrounds in chemistry, biology, computer science, geosciences, mathematics, and physics. The program starts by providing students with the basics of programming in the FORTRAN and C++ computer languages, which dominate supercomputing, and leads to classes in which students write complex programs that run efficiently on supercomputers, including TACC’s Ranger supercomputer. Undergraduate students can complete coursework to earn a Certificate of Scientific Computation, while graduate students complete a Portfolio in Scientific Computation.


Predicting Success With NIWA Supercomputer.

Dominion Post (07/22/10) Chapman, Katie

New Zealand’s National Institute of Water & Atmospheric Research (NIWA) announced that it has launched the most powerful computer in the southern hemisphere. The new supercomputer can perform 34 trillion calculations a second and can store 5 petabytes on tape. It is 100 times faster and has 500 times more disk space than the system it replaces. NIWA says scientists will use the supercomputer to forecast the impact of severe weather events, such as flooding, storm surge, and inundation, and to model climate change, river flow, ocean levels, and wave patterns. In addition, bioengineers at Auckland University will use the supercomputer to create computer models of the human body, which could lead to new approaches to diagnosing and treating patients as well as to developing new medicines. Phase two of the installation will be completed next year, doubling the speed and disk space of the supercomputer.


Protein From Poplar Trees Can Be Used to Greatly Reduce Size of Memory Elements and Increase the Density of Computer Memory.

Hebrew University of Jerusalem (07/21/10)

Genetically engineered poplar-derived protein complexes have the potential to increase the memory capacity of future computers. Scientists from the Hebrew University of Jerusalem have combined protein molecules obtained from the poplar tree with memory units based on silicon nanoparticles. The team genetically engineered the poplar protein so that a silicon nanoparticle attaches to the inner pore of the stable, ring-like protein, and the resulting hybrids can be arranged in large, dense arrays of molecular memory units. Professor Danny Porath and graduate student Izhar Medalsy have successfully demonstrated the approach. They say genetically engineered poplar-derived protein complexes could lead to systems that need much less space for memory and functional logic elements. The researchers say the approach to miniaturizing memory elements is cost-effective and could replace standard fabrication techniques.


Neurons to Power Future Computers.

BBC News (07/23/10)

University of Plymouth computer scientists led by Thomas Wennekers are developing novel computers that mimic the way neurons are built and how they communicate. Neural-based computers could lead to improvements in visual and audio processing. “We want to learn from biology to build future computers,” Wennekers says. “The brain is much more complex than the neural networks that have been implemented so far.” The researchers are collecting data about neurons and how they are connected in one part of the brain. The project is focusing on the laminar microcircuitry of the neocortex, which is involved in higher brain functions such as seeing and hearing. Meanwhile, Manchester University professor Steve Furber is using the neural blueprint to produce new hardware. Furber’s project, called SpiNNaker, is developing a computer optimized to run like biology does. SpiNNaker aims to develop innovative computer processing systems and insights into the way that several computational elements can be connected. “The primary objective is just to understand what’s happening in the biology,” Furber says. “Our understanding of processing in the brain is extremely thin.”
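The neuron models these projects use are far richer than anything that fits here, but the basic unit such hardware simulates, a neuron that integrates input and fires a spike past a threshold, can be sketched with a textbook leaky integrate-and-fire model (a generic illustration, not the Plymouth or Manchester projects' actual code):

```python
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential decays each step,
    accumulates input, and emits a spike (then resets) at threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v = v * leak + i         # leak a little, then integrate input
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes

# Constant sub-threshold input produces a regular spike train.
print(lif_run([0.4] * 6))        # → [0, 0, 1, 0, 0, 1]
```

Unlike the weighted-sum units of conventional artificial neural networks, information here is carried by the timing of discrete spikes, which is the property neuromorphic hardware like SpiNNaker is built to simulate at scale.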


‘Condor’ Brings Genome Assembly Down to Earth.

University of Wisconsin-Madison (07/19/10) Barncard, Chris

Researchers from the University of Wisconsin-Madison (UW-Madison) and the University of Maryland (UMD) have assembled a full human genome from millions of pieces of data using a network of computers instead of a supercomputer. UMD professors Mihai Pop and Michael Schatz combined their Contrail genome assembly software with UW-Madison’s Condor distributed computing program. Condor, developed at UW-Madison’s Center for High Throughput Computing, breaks up long lists of heavy computing tasks and distributes them across networked computer workstations. The UW-Madison team added features from another distributed-computing tool, called Hadoop, to manage both the complex workflow chain and the large data management problems involved with the billions of letters taken from human DNA by a sequencing machine. “By running them together, we’re able to efficiently run this biological application: efficient not just in terms of computer time, but efficient in terms of dollars,” says UW-Madison’s Greg Thain. “Because Condor could efficiently schedule the work, Maryland didn’t have to buy a multimillion-dollar disk cluster.”
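Condor and Hadoop are full cluster systems, but the scheduling idea they apply, splitting a long task list into chunks and farming the chunks out to whatever workers are idle, can be sketched on a single machine with Python's standard library (an illustration of the pattern only; `assemble_chunk` is a hypothetical stand-in for real assembly work, not Condor's or Hadoop's API):

```python
from concurrent.futures import ThreadPoolExecutor

def assemble_chunk(reads):
    """Stand-in for one unit of assembly work on a batch of DNA reads."""
    return sum(len(r) for r in reads)   # e.g., total bases in the chunk

def split(items, n):
    """Break a long task list into roughly n equal chunks."""
    k = max(1, len(items) // n)
    return [items[i:i + k] for i in range(0, len(items), k)]

reads = ["ACGT", "GGC", "TTAGC", "AA", "CGTA", "GC"]
chunks = split(reads, 3)
with ThreadPoolExecutor(max_workers=3) as pool:   # workers ≈ idle workstations
    partials = list(pool.map(assemble_chunk, chunks))
total = sum(partials)                             # combine partial results
```

The economics Thain describes follow from the same split: because each chunk is independent, the scheduler can use whatever machines happen to be free rather than requiring one large dedicated cluster.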


The Trouble with Multicore.

Chipmakers are busy designing microprocessors that most programmers can’t handle

By David Patterson  /  IEEE Spectrum July 2010

The semiconductor industry took a big gamble when it switched from making microprocessors run faster to putting more of them on a chip, doing so without any clear notion of how such devices would generally be programmed. The hope is that someone will be able to figure out how to do that. At the moment, however, the ball is still in the air.