
New Hikari Supercomputer Starts Solar HVDC

Texas Advanced Computing Center (09/14/16) Jorge Salazar

The Hikari computing system at the Texas Advanced Computing Center (TACC) in Austin, TX, is the first supercomputer in the U.S. to use solar power and high-voltage direct current (HVDC). Launched by Japan's New Energy and Industrial Technology Development Organization, NTT FACILITIES, and the University of Texas at Austin, the project aims to demonstrate the potential of HVDC, which simplifies connecting to renewable energy sources, including solar, wind, and hydrogen fuel cells. During the day, solar panels shading a TACC parking lot provide nearly all of Hikari’s power, up to 208 kilowatts, and at night the microgrid connected to the supercomputer switches back to conventional AC power from the utility grid. The Hikari power feeding system, which is expected to cut energy consumption by 15 percent compared with conventional systems, could change how data centers are powered. The new supercomputer came online in late August, and it consists of 432 Hewlett Packard Enterprise (HPE) Apollo 8000 XL730f servers coupled with HPE DL380 and DL360 nodes interconnected with a first-of-its-kind Mellanox End-to-End EDR InfiniBand system operating at 100 Gbps. More than 10,000 cores from Intel “Haswell” Xeon processors will deliver more than 400 teraflops.
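
Those two figures are mutually consistent. A rough back-of-envelope check, assuming a Haswell core's two 256-bit FMA units retire 16 double-precision flops per cycle and a hypothetical 2.5-GHz clock (the clock rate is an assumption; the article does not give one):

    # Back-of-envelope check of the quoted Hikari figures (clock assumed).
    flops_per_cycle = 16          # two 256-bit FMA units, double precision
    clock_hz = 2.5e9              # assumed 2.5 GHz; not stated in the article
    cores = 10_000                # "more than 10,000 cores"
    peak_tflops = flops_per_cycle * clock_hz * cores / 1e12
    print(peak_tflops, "TFLOPS")  # 400.0, matching "more than 400 teraflops"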


Reconfigurable Chaos-Based Microchips Offer Possible Solution to Moore’s Law

NCSU News (09/20/16) Tracey Peake

Nonlinear, multi-functional integrated circuits could lead to novel computer architectures that do more with fewer transistors, according to researchers at North Carolina State University (NCSU). As the number of transistors on integrated circuits grows to keep up with processing demands, the semiconductor industry is seeking ways to build computer chips without continually shrinking individual transistors. The NCSU researchers drew on chaos theory, exploiting a circuit’s nonlinearity so the same transistors can be programmed to perform different tasks. “In current processors you don’t utilize all the circuitry on the processor all the time, which is wasteful,” says NCSU researcher Behnam Kia. “Our design allows the circuit to be rapidly morphed and reconfigured to perform a desired digital function in each clock cycle.” Kia and NCSU professor William Ditto designed and fabricated the integrated circuit chip, which is compatible with existing technology and uses the same fabrication processes and computer-aided design tools as existing chips. Ditto says the design is approaching commercial chips in size, power consumption, and ease of programming, and could be commercially relevant within a few months.
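
The underlying “chaogate” idea can be sketched in a few lines: a single chaotic nonlinearity acts as different logic gates depending only on its operating point, so one physical element can be remapped from cycle to cycle. The sketch below uses the logistic map as a stand-in for the chip’s actual transistor dynamics, which the article does not detail; the parameter search is purely illustrative, not the NCSU design:

    import itertools

    def logistic(x, r=4.0):
        # One iteration of the logistic map, a textbook chaotic nonlinearity.
        return r * x * (1.0 - x)

    def gate(x0, delta, thresh, a, b):
        # Encode input bits a, b as perturbations of the operating point x0,
        # run the nonlinearity once, and threshold the result.
        return 1 if logistic(x0 + (a + b) * delta) > thresh else 0

    def find_operating_point(truth_table, steps=60):
        # Brute-force search for (x0, delta, thresh) realizing truth_table.
        grid = [i / steps for i in range(steps + 1)]
        for x0, delta, thresh in itertools.product(grid, repeat=3):
            if x0 + 2 * delta > 1.0:    # keep the state inside [0, 1]
                continue
            if all(gate(x0, delta, thresh, a, b) == out
                   for (a, b), out in truth_table.items()):
                return x0, delta, thresh

    AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
    XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    print("AND:", find_operating_point(AND))  # one operating point gives AND...
    print("XOR:", find_operating_point(XOR))  # ...another morphs it into XOR

Changing three analog parameters, rather than the wiring, is what reconfigures the gate; that is the sense in which one nonlinear element can stand in for several dedicated ones.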


Faster Parallel Computing

MIT News (09/13/16) Larry Hardesty

Researchers from the Massachusetts Institute of Technology’s (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting Milk, a new programming language, this week at the 25th International Conference on Parallel Architectures and Compilation Techniques in Haifa, Israel. With Milk, application developers can handle memory more efficiently in programs that operate on scattered data points in large datasets. Tests on several common algorithms showed programs written in Milk ran four times as fast as those written in existing languages, and the CSAIL researchers think additional work will boost speeds further. MIT professor Saman Amarasinghe says existing memory management methods run into problems with big datasets because with big data, the scale of the solution does not necessarily rise in proportion to the scale of the problem. Amarasinghe also notes modern computer chips are not optimized for this “sparse data,” with cores designed to retrieve an entire block of data from main memory based on locality, instead of individually retrieving a single data item. With Milk, a coder inserts a few additional lines of code around any command that iterates through a large dataset looking for a comparatively small number of items. The researchers say Milk’s compiler then determines how to manage memory accordingly.
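
The article does not show Milk’s syntax (reportedly an extension of OpenMP-style annotations in C/C++), but the transformation its compiler performs can be illustrated conceptually: defer the scattered indirect accesses, group them by the memory region they touch, and replay each group together. A toy Python sketch of that batching idea, not of Milk itself:

    from collections import defaultdict

    def histogram_scattered(keys, counts):
        # Baseline: each iteration touches a random location in `counts`,
        # so on large arrays almost every access misses cache.
        for k in keys:
            counts[k] += 1

    def histogram_batched(keys, counts, num_buckets=256):
        # Conceptual sketch of Milk-style batching (not Milk's actual
        # syntax): stage the indirect updates, group them by destination
        # region, and replay one region at a time so accesses cluster.
        region = len(counts) // num_buckets + 1
        buckets = defaultdict(list)
        for k in keys:                      # cheap sequential staging pass
            buckets[k // region].append(k)
        for b in sorted(buckets):           # replay region by region
            for k in buckets[b]:
                counts[k] += 1              # updates now stay in one hot region

Both versions produce identical counts; the second trades a sequential staging pass for far better locality on the scattered updates, which is the kind of win the reported four-fold speedups come from.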


Revealed: Google’s Plan for Quantum Computer Supremacy

New Scientist (08/31/16) Jacob Aron

Google expects to have the world’s largest working quantum computer ready soon, as researchers say the company is on the verge of a breakthrough. Hints were dropped in July, when Google published a study announcing a plan to achieve “quantum supremacy” by building the first quantum computer that can perform a task beyond the capabilities of classical computers. Google has publicly announced a 9-quantum-bit (qubit) system, but its goal is a 50-qubit computer that can model the behavior of a random arrangement of quantum circuits. “They’re doing a quantum version of chaos,” says Simon Devitt at Japan’s RIKEN Center for Emergent Matter Science. To set the benchmark it must beat, Google pushed classical computing to its limit by simulating quantum circuit behavior on the Edison supercomputer; the company also hired University of California, Santa Barbara professor John Martinis to design its superconducting qubits. Devitt thinks quantum supremacy could be achieved by the end of 2017, although meeting the challenge even within the next five years would still be a major accomplishment. Building a 50-qubit quantum device would be the first step toward a fully scalable machine, which Devitt says will indicate the technology is “ready to move out of the labs.”
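
The 50-qubit figure is not arbitrary: brute-force classical simulation stores one complex amplitude per basis state, so memory doubles with every added qubit. A quick calculation (assuming 16-byte double-precision complex amplitudes) shows why 9 qubits is trivial to simulate classically and 50 is not:

    # Memory needed to hold the full state vector of an n-qubit system,
    # assuming one 16-byte complex amplitude per basis state.
    for n in (9, 30, 50):
        amps = 2 ** n
        print(f"{n:2d} qubits: 2^{n} amplitudes = {amps * 16:.2e} bytes")
    #  9 qubits: ~8 KB  (fits anywhere)
    # 30 qubits: ~17 GB (a large workstation)
    # 50 qubits: ~18 PB (beyond any single machine, hence the supercomputer)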


Transistors Will Stop Shrinking in 2021, Moore’s Law Roadmap Predicts

IEEE Spectrum (07/22/16) Rachel Courtland

The 2015 International Technology Roadmap for Semiconductors (ITRS) predicts the transistor could stop shrinking in just five years. The report forecasts that after 2021 it will no longer be economically feasible for companies to keep shrinking the dimensions of transistors in microprocessors. Transistor miniaturization was still part of the long-term forecast as recently as 2014, but three-dimensional (3D) concepts have since gained momentum. A company could continue to make transistors smaller well into the 2020s, but the industry wanted to send the message that it is now more economical to go 3D, says ITRS chair Paolo Gargini. In the years before 3D integration is adopted, ITRS predicts leading-edge chipmakers will boost density by turning the transistor from a horizontal to a vertical geometry and building multiple layers of circuitry, one atop another. The report also predicts the transistor’s traditional silicon channel will give way to alternative materials. These changes will let companies pack more transistors into a given area, but keeping to the spirit of Moore’s Law is another matter.


Texas Goes Big With 18-Petaflop Supercomputer

Computerworld (06/02/16) Patrick Thibodeau

The Stampede 2 supercomputer to be set up at the Texas Advanced Computing Center (TACC) is designed to replace and approximately double the performance of Stampede, its 9-petaflop predecessor. The new system, which is scheduled to be available for research by next June, is being funded by a $30-million grant from the U.S. National Science Foundation. Stampede 2 will utilize Dell servers and Intel chips, and TACC also is upgrading Stampede with the addition of 500 Knights Landing-based Xeon Phi systems, each of which supports as many as 72 cores, raising the existing system’s aggregate performance above 10 petaflops. The new supercomputer will incorporate 3D XPoint non-volatile memory technology, billed as 1,000 times faster than NAND flash. “We anticipate [Stampede 2] will be the biggest machine in a U.S. university by next year,” says TACC executive director Dan Stanzione. He notes that although Stampede has run 7 million jobs since its inception, TACC still gets five times as many requests for time on the system as it can deliver. Stanzione says Stampede 2 will help clear this backlog, while higher resolutions and more accurate modeling for large runs will be among its advantages, along with faster completion times for smaller jobs. TACC says Stampede and Stampede 2 will use about the same number of nodes, and it expects each of the 6,000 nodes to be capable of approximately 3 teraflops.
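
The headline figure follows directly from those per-node numbers:

    # Aggregate performance implied by the article's per-node figures.
    nodes = 6_000               # "each of the 6,000 nodes"
    tflops_per_node = 3         # "approximately 3 teraflops"
    print(nodes * tflops_per_node / 1_000, "petaflops")  # 18.0,
    # roughly double the 9-petaflop Stampede it replaces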


Report Says Computer Science Should Be Treated Like Core Science Subjects in K-12 Classrooms

Education World (06/01/16) Nicole Gorman

Only a fraction of U.S. schools offer computer science, and most lack the ability to teach students the core principles of the subject, according to a new report from the Information Technology & Innovation Foundation (ITIF). The report says current curricula and standards focus on using technology rather than understanding it. The study argues for significant changes to how computer science is taught in grades K-12, given how in demand computer science graduates are now and will be in the future. “In 2011, Code.org projected that the economy would add 1.4 million computing jobs by 2020, but educate just 400,000 computer science students by then,” the study says. ITIF says a curriculum overhaul would better position students for success. “To maintain the field’s current momentum, the perception of computer science needs to shift from its being considered a fringe, elective offering or a skills-based course designed to teach basic computer literacy or coding alone,” the report says. First and foremost, the report recommends the U.S. train and develop 10,000 additional computer science teachers. It also calls for innovative education policy that favors teaching computer science principles in both K-12 and university classrooms.