Linux Skills Are Hot in an Improving IT Hiring Front

PC World (01/24/11) Katherine Noyes

The hiring environment for information technology (IT) professionals last year was the best it’s been since 2000, according to a recent Challenger, Gray & Christmas report, indicating that the technology industry has been more resilient than most industries during the weak economy. “These firms are definitely on the leading edge of the recovery, as companies across the country and around the globe begin to upgrade and reinvest in their technology,” says Challenger, Gray & Christmas CEO John A. Challenger. The proliferation of smartphones and tablets has played a large part in keeping the tech industry thriving. Forrester Research predicts that 2011 technology spending will increase 7.5 percent in the U.S. and 7.1 percent globally. Skills in Linux are in particular demand, according to Dice research. “More and more devices and systems and services are built based on Linux, and therefore, more and more manufacturers and vendors are looking for Linux talent,” says Intel’s Dirk Hohndel. Linux professionals also tend to earn as much as 10 percent more in salary than other IT workers, according to Dice.

-MORE-

Supercomputers Increase Research Competitiveness.

University of Arkansas (AR) (01/24/11) Matt McGowan

Research competitiveness for U.S. academic institutions is boosted by consistent investment in high-performance computing, according to a new study by University of Arkansas researchers. “Even at modest levels, such investments, if consistent from year to year, strongly correlate to new [U.S. National Science Foundation (NSF)] funding for science and engineering research, which in turn leads to more published articles,” says Amy Apon, director of the Arkansas High Performance Computing Center. Among the factors the researchers considered to ascertain an investment’s impact on competitiveness were ranking on the Top 500 list, number of published articles, total NSF funding, overall total of federal funding, and total funding from specific federal entities. Using a pair of statistical models to analyze data on these factors, Apon and colleagues found that consistent investment had an economically and statistically significant positive effect on NSF funding and on the number of articles published by researchers at investing institutions. The researchers also determined that an initial or one-time investment in supercomputing loses value quickly if it is not sustained. “Our results suggest that institutions that have attained significant returns from investment in high-performance computing in the past cannot rest on laurels,” Apon says.
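
The study's actual models and data are not reproduced in this summary; as a rough illustration of the kind of relationship it describes, the sketch below fits a straight line to invented yearly investment and funding figures with NumPy (every number is hypothetical).

    import numpy as np

    # Hypothetical figures: annual HPC investment and next-year NSF funding ($M).
    hpc_investment = np.array([0.5, 0.7, 0.9, 1.1, 1.4, 1.6])
    nsf_funding = np.array([4.1, 4.8, 5.9, 6.4, 7.8, 8.5])

    # Fit a straight line and report the correlation coefficient.
    slope, intercept = np.polyfit(hpc_investment, nsf_funding, 1)
    r = np.corrcoef(hpc_investment, nsf_funding)[0, 1]
    print(f"funding ~= {slope:.2f} * investment + {intercept:.2f}, r = {r:.2f}")

A positive slope and a correlation coefficient near 1 would correspond to the pattern the study reports; the real analysis involves more factors and more sophisticated models than this two-variable fit.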

-MORE-

Scientists Squeeze More Than 1,000 Cores on to Computer Chip.

University of Glasgow (United Kingdom) (12/29/10) Stuart Forsyth

A field programmable gate array (FPGA) chip has been used to create an ultra-fast 1,000-core computer processor. Researchers from the University of Glasgow and the University of Massachusetts-Lowell divided the chip’s transistors into small groups and gave each group a task to perform. The creation of more than 1,000 mini-circuits effectively turned the FPGA chip into a 1,000-core processor, with each core working from its own instructions. They used the FPGA chip to process an algorithm that is key to the MPEG movie format at 5 Gbps, or about 20 times faster than current top-end desktop computers. “FPGAs are not used within standard computers because they are fairly difficult to program, but their processing power is huge while their energy consumption is very small because they are so much quicker, so they are also a greener option,” says Glasgow’s Wim Vanderbauwhede. The researchers dedicated memory to each core to make the processor faster. “This is very early proof-of-concept work where we’re trying to demonstrate a convenient way to program FPGAs so that their potential to provide very fast processing power could be used much more widely in future computing and electronics,” Vanderbauwhede says.
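
The FPGA design itself is not detailed in the summary; the sketch below shows the same divide-and-conquer idea in ordinary Python, splitting the 8x8 block transform at the heart of MPEG coding across independent worker processes, each handling its own block of data (the block count and data are invented for illustration).

    import numpy as np
    from multiprocessing import Pool

    def dct_matrix(n=8):
        # Orthonormal DCT-II basis for an n x n block, the transform used in MPEG coding.
        k = np.arange(n)
        m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
        m[0, :] /= np.sqrt(2.0)
        return m

    D = dct_matrix()

    def transform_block(block):
        # 2-D DCT of one 8x8 block: the per-core task in the FPGA analogy.
        return D @ block @ D.T

    if __name__ == "__main__":
        # Stand-in frame data: 1,024 independent 8x8 blocks, one task per worker "core".
        blocks = [np.random.rand(8, 8) for _ in range(1024)]
        with Pool() as pool:
            coefficients = pool.map(transform_block, blocks)
        print(len(coefficients), "blocks transformed")

Each worker here plays the role of one mini-circuit: it receives its own data and its own copy of the instructions and runs independently of the others, which is the property that lets the FPGA approach scale to more than 1,000 cores.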

-MORE-

Computing on Multiple Graphics Cards Accelerates Numerical Simulations by Orders of Magnitude.

Fraunhofer SCAI (01/03/11) Michael Krapp

The Fraunhofer Institute for Algorithms and Scientific Computing (SCAI) and the University of Bonn have been chosen as one of the first German NVIDIA CUDA research centers. The researchers will focus on developing parallel software for numerical simulation on multiple graphics processing units (GPUs). “Our vision is to develop a massively parallel, completely multi-GPU-based high-performance molecular dynamics software package, as well as a massively parallel, completely multi-GPU-based high-performance fluid dynamics code,” says SCAI professor Michael Griebel. Numerical simulations can take days to compute, but SCAI’s research could significantly shorten that time. The CUDA parallel computing architecture harnesses a GPU’s massive computing power to boost application performance. The researchers want to modify the Tremolo-X software package, which simulates the molecular dynamics of atoms and molecules, for use on multiple graphics cards. Tremolo-X simulates materials at the nano scale, making it possible to efficiently design new and innovative materials. GPUs also are much more energy efficient than standard CPUs.
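
Tremolo-X’s multi-GPU implementation is not described in the summary; the sketch below illustrates the general partitioning idea with plain NumPy, splitting a Lennard-Jones force calculation into chunks that, in a multi-GPU code, would each be assigned to a separate CUDA device (particle data and chunk count are invented).

    import numpy as np

    def lj_forces(chunk, positions):
        # Lennard-Jones forces on one chunk of particles due to all particles
        # (reduced units, sigma = epsilon = 1); in a multi-GPU code each chunk
        # would be computed on its own device.
        diff = chunk[:, None, :] - positions[None, :, :]
        r2 = np.sum(diff * diff, axis=-1)
        self_pair = r2 < 1e-10                        # mask i == i terms
        r2 = np.where(self_pair, 1.0, r2)
        inv6 = 1.0 / r2 ** 3
        coef = np.where(self_pair, 0.0, 24.0 * (2.0 * inv6 ** 2 - inv6) / r2)
        return np.sum(coef[..., None] * diff, axis=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        positions = rng.uniform(0.0, 10.0, size=(2000, 3))   # invented particle coordinates
        chunks = np.array_split(positions, 4)                # one chunk per hypothetical GPU
        forces = np.concatenate([lj_forces(c, positions) for c in chunks])
        print("force array shape:", forces.shape)

The per-chunk force calculation is the kind of independent, data-parallel work that maps well onto GPUs; the hard part the SCAI researchers face is coordinating many such devices efficiently, which this sketch does not attempt.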

-MORE-

Fujitsu Accelerates Verification of Java Software Through Parallel Processing.

PhysOrg.com (12/24/10)

Fujitsu Laboratories has developed a way to verify Java software with parallel processing on cloud computing services, shortening the time needed for verification. The technique expands on symbolic execution, which automatically executes tests on Java programs, to make it possible to process character string data. In a recent experiment using multiple processing nodes, the technique outperformed existing technology about tenfold. The technique divides the symbolic execution processing among multiple processing nodes, which accelerates the testing, and uses dynamic load balancing to redistribute the processing of overloaded nodes to idle nodes. If a certain node takes too long to process its data, the program moves some of that data to nodes that have finished processing, thereby achieving faster total processing time through parallelization.
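
Fujitsu’s tool is not described beyond the summary above; the sketch below illustrates the dynamic load-balancing pattern it mentions, using a shared work queue from which idle workers pull the next task so that uneven workloads are redistributed automatically (task costs and worker count are invented).

    import time
    from multiprocessing import Process, Queue, current_process

    def worker(tasks, results):
        # Pull "path exploration" tasks until a sentinel arrives; idle workers
        # naturally pick up work that would otherwise pile up on a busy node.
        while True:
            task = tasks.get()
            if task is None:
                break
            time.sleep(task)                          # stand-in for exploring one program path
            results.put((current_process().name, task))

    if __name__ == "__main__":
        tasks, results = Queue(), Queue()
        path_costs = [0.4, 0.1, 0.1, 0.1, 0.3, 0.1, 0.1, 0.1]   # invented, deliberately uneven
        for cost in path_costs:
            tasks.put(cost)
        workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
        for _ in workers:
            tasks.put(None)                           # one sentinel per worker
        for p in workers:
            p.start()
        for p in workers:
            p.join()
        for _ in path_costs:
            print(results.get())

Because every worker draws from the same queue, a long-running task occupies only one worker while the others keep draining the remaining work, which is the effect the summary attributes to Fujitsu’s redistribution of load from overloaded to idle nodes.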

-MORE-