
China’s Policing Robot: Cattle Prod Meets Supercomputer

Computerworld (10/31/16) Patrick Thibodeau

Chinese researchers have developed AnBot, an “intelligent security robot” deployed in a Shenzhen airport. AnBot’s backend is linked to China’s Tianhe-2 supercomputer, giving it access to cloud services. AnBot uses these technologies to conduct patrols, recognize threats, and identify people using multiple cameras and facial recognition. The cloud services give the robot petascale processing power, far beyond the processing capability onboard the robot itself. The supercomputer connection enhances the intelligent learning capabilities and human-machine interface of the devices, according to a U.S.-China Economic and Security Review Commission report that focuses on China’s autonomous systems development efforts. The report found that the ability of robotics to improve depends on linking artificial intelligence (AI), data science, and computing technologies. In addition, the report notes the simultaneous development of high-performance computing systems and robotic mechanical manipulation gives AI the potential to unleash smarter robotic devices capable of learning as well as integrating inputs from large databases. The report says the U.S. government should increase its own efforts to develop manufacturing technology in critical areas, as well as monitor China’s growing investments in robotics and AI companies in the U.S.

MORE


Faster Parallel Computing

MIT News (09/13/16) Larry Hardesty

Researchers from the Massachusetts Institute of Technology’s (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) this week are presenting Milk, a new programming language, at the 25th International Conference on Parallel Architectures and Compilation Techniques in Haifa, Israel. Milk lets application developers manage memory more efficiently in programs that deal with scattered data points in large datasets. In tests on several common algorithms, programs written in Milk ran four times as fast as versions written in existing languages, and the CSAIL researchers think additional work will boost speeds even higher. MIT professor Saman Amarasinghe says existing memory management methods run into trouble with big datasets because the scale of the solution does not necessarily rise in proportion to the scale of the problem. Amarasinghe also notes modern computer chips are not optimized for such “sparse data”: their cores are designed to fetch an entire block of data from main memory based on locality, rather than retrieving individual data items. With Milk, a coder inserts a few additional lines of code around any command that iterates through a large dataset looking for a comparatively small number of items; Milk’s compiler then determines how to manage memory accordingly.
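Milk’s real syntax extends C/C++ with OpenMP-style annotations; the Python sketch below is only a hedged illustration of the underlying idea (hypothetical names, data, and block size), showing how scattered accesses to a large in-memory array can be collected and grouped by locality so each region of memory is visited once rather than at random.

```python
import random
from collections import defaultdict

# Illustrative sketch of the idea behind Milk: instead of touching a large
# array at random (poor cache/DRAM locality), batch the indices, group them
# by the memory "block" they fall in, and visit each block once.

BLOCK = 4096                       # assumed block size, in elements
data = list(range(1_000_000))      # stand-in for a large in-memory dataset
wanted = [random.randrange(len(data)) for _ in range(50_000)]  # scattered indices

def gather_naive(indices):
    # One random access per index: each lookup may pull in a whole block
    # of memory just to read a single element.
    return sum(data[i] for i in indices)

def gather_batched(indices):
    # Milk-like strategy: cluster indices by block so all elements that
    # share a block are read together, then combine the partial results.
    by_block = defaultdict(list)
    for i in indices:
        by_block[i // BLOCK].append(i)
    total = 0
    for block in sorted(by_block):   # walk memory in order
        for i in by_block[block]:
            total += data[i]
    return total

assert gather_naive(wanted) == gather_batched(wanted)
```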

MORE

Microsoft Forges Ahead With ‘Prajna’ Big-Data Analytics Framework for Cloud Services

ZDNet (09/15/15) Mary Jo Foley

Microsoft Research’s Cloud Computing and Storage (CCS) group is developing Prajna, an open source distributed analytics platform designed for building distributed cloud services that utilize big-data analytics. “Prajna can be considered as a set of [software development kits] on top of .Net that can assist a developer to quickly prototype cloud service, and write his/her own mobile apps against the cloud service,” according to CCS’ Web page. “It also has interactive, in-memory distributed big-data analytical capability similar to [Apache] Spark.” Microsoft researchers say although Prajna is a distributed functional programming platform, it goes further than Spark by “enabling multi-cluster distributed programming, running both managed code and unmanaged code, in-memory data sharing across jobs, push data flow, etc.” The “functional programming” element of Prajna is associated with the F# .Net functional programming language. “Prajna…offers additional capability to allow programmers to easily build and deploy cloud services, and consume the services in mobile apps, and build distributed application with state [e.g., a distributed in-memory key-value store],” notes the Web posting. Prajna head researcher Jin Li believes the platform has greater flexibility and extensibility than Spark, and could revolutionize the construction of high-performance distributed programs.
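Prajna itself exposes an F#/.Net API; purely as a hedged sketch of the Spark-style in-memory map/aggregate pattern the summary describes (the partitions, functions, and process pool here are illustrative and are not Prajna’s API), a driver might keep a dataset partitioned in memory and run parallel passes over it:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

# Illustrative sketch of the Spark-like pattern the article attributes to
# Prajna: keep a dataset partitioned in memory and run map/aggregate passes
# over the partitions in parallel. Names and structure are hypothetical.

def word_lengths(partition):
    # "Map" step applied independently to one in-memory partition.
    return [len(word) for word in partition]

def total(partial_results):
    # "Reduce" step combining per-partition results on the driver.
    return reduce(lambda acc, part: acc + sum(part), partial_results, 0)

if __name__ == "__main__":
    # In a real cluster these partitions would live in the memory of
    # different worker machines and be shared across jobs.
    partitions = [
        ["distributed", "analytics"],
        ["cloud", "service", "prototype"],
        ["big", "data"],
    ]
    with ProcessPoolExecutor() as pool:
        mapped = list(pool.map(word_lengths, partitions))
    print(total(mapped))  # total characters across all partitions
```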

MORE

China Building One of the World’s Fastest Astronomical Computers to Power Giant, Alien-Seeking Telescope

South China Morning Post (Hong Kong) (07/28/15) Stephen Chen

China’s new Sky Eye 1 supercomputer is expected to be the fastest astronomical supercomputer in the world, topping Japan’s Aterui. Sky Eye 1’s peak performance will exceed 1,000 teraflops, according to Ren Jingyang, vice president of the Chinese high-performance computing (HPC) company Sugon. China plans to connect the supercomputer to the largest radio telescope ever built, the Five-hundred-meter Aperture Spherical Telescope (FAST), which will search for alien life and investigate dark matter. FAST’s enormous dish, larger than 30 football fields, will collect a volume of data that would overwhelm an ordinary computer. The supercomputer will be hosted at a facility near the telescope in Guizhou, with a high-speed data link between them capable of transmitting up to 100 gigabytes of data per second. The calculation demands of the FAST telescope are expected to exceed 200 teraflops per day, says Zhang Peiheng, director of the HPC research center at the Chinese Academy of Sciences.

MORE

Reducing Big Data Using Ideas From Quantum Theory Makes It Easier to Interpret

Queen Mary, University of London (04/23/15) Will Hoyles

Researchers from Queen Mary University of London (QMUL) and Rovira i Virgili University have developed a new method that simplifies the way big data is represented and processed. Borrowing ideas from quantum theory, the team implemented techniques used to understand the difference between two quantum states. The researchers applied the quantum mechanical method to several large publicly available datasets, and were better able to understand which relationships in a system are similar enough to be considered redundant. The researchers say their method can significantly reduce the amount of information that has to be displayed and analyzed separately, making it easier to understand. Moreover, the approach reduces the computing power needed to process large amounts of multidimensional relational data. “We’ve been trying to find ways of simplifying the way big data is represented and processed and we were inspired by the way that the complex relationships in quantum theory are understood,” says QMUL’s Vincenzo Nicosia. “With so much data being gathered by companies and governments nowadays, we hope this method will make it easier to analyze and make sense of it, as well as reducing computing costs by cutting down the amount of processing required to extract useful information.”
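The article does not spell out the mathematics, but one standard way to quantify “the difference between two quantum states” is a quantum Jensen-Shannon-style divergence between density matrices. The sketch below (numpy, with an illustrative graph construction that is not necessarily the authors’ exact procedure) shows how such a comparison could flag two relationship layers as redundant.

```python
import numpy as np

# Hedged sketch: compare two "layers" of relational data the way one would
# compare quantum states. Each layer's adjacency matrix is turned into a
# density-matrix-like object (positive semidefinite, unit trace), and a
# quantum Jensen-Shannon-style divergence between the two indicates how
# redundant they are. Illustrative construction, not the authors' procedure.

def density_matrix(adjacency):
    # Use the graph Laplacian, rescaled to unit trace, as a density matrix.
    degrees = np.diag(adjacency.sum(axis=1))
    laplacian = degrees - adjacency
    return laplacian / np.trace(laplacian)

def von_neumann_entropy(rho):
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]   # drop numerical zeros
    return float(-np.sum(eigenvalues * np.log2(eigenvalues)))

def quantum_js_divergence(rho, sigma):
    mix = 0.5 * (rho + sigma)
    return von_neumann_entropy(mix) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))

# Two small example layers over the same 4 nodes: a path graph and an
# identical copy of it.
layer_a = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
layer_b = layer_a.copy()

print(quantum_js_divergence(density_matrix(layer_a), density_matrix(layer_b)))
# ~0.0: the layers carry the same information, so one can be treated as redundant.
```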

MORE

Building Trustworthy Big Data Algorithms

Northwestern University Newscenter (01/29/15) Emily Ayshford

Northwestern University researchers recently tested latent Dirichlet allocation, one of the leading big data algorithms for finding related topics within unstructured text, and found it was neither as accurate nor as reproducible as a leading topic modeling algorithm should be. The researchers therefore developed a new topic modeling algorithm they say has shown very high accuracy and reproducibility in tests. The algorithm, called TopicMapping, begins by preprocessing data to replace words with their stems. It then builds a network of connected words and identifies “communities” of related words. The researchers found TopicMapping was able to perfectly separate the documents according to language and to reproduce its results. Northwestern professor Luis Amaral says the results show the need for more testing of big data algorithms and more research into making them more accurate and reproducible. “Companies that make products must show that their products work,” Amaral says. “They must be certified. There is no such case for algorithms. We have a lot of uninformed consumers of big data algorithms that are using tools that haven’t been tested for reproducibility and accuracy.”
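As a rough sketch of the pipeline described above, assuming a crude suffix-stripping stemmer and networkx’s greedy modularity communities standing in for TopicMapping’s own components:

```python
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Sketch of the pipeline described in the article: stem words, connect words
# that co-occur in the same document, then find communities of related words.
# The crude suffix-stripping "stemmer" and the greedy modularity community
# detection are illustrative choices, not TopicMapping itself.

def stem(word):
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

documents = [
    "genes expressed in cancer cells",
    "gene expression in tumor cells",
    "stars observed by the radio telescope",
    "telescopes observing distant stars",
]

graph = nx.Graph()
for doc in documents:
    stems = {stem(w) for w in doc.lower().split()}
    for a, b in itertools.combinations(sorted(stems), 2):
        # Edge weight counts how many documents the two stems share.
        weight = graph.get_edge_data(a, b, {"weight": 0})["weight"] + 1
        graph.add_edge(a, b, weight=weight)

for community in greedy_modularity_communities(graph, weight="weight"):
    print(sorted(community))   # each community approximates one topic
```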

MORE

Stanford Researchers Use Big Data to Identify Patients at Risk of High-Cholesterol Disorder

Stanford University (01/29/15) Tracie White

Stanford University researchers have launched a project designed to identify hospital patients who may have a genetic disease that causes a deadly buildup of cholesterol in their arteries. The project uses big data and software that can learn to recognize patterns in electronic medical records and identify patients at risk of familial hypercholesterolemia (FH), which often goes undiagnosed until a heart attack strikes. The project is part of a larger initiative called Flag, Identify, Network, Deliver FH, which aims to use innovative technologies to identify individuals with the disorder who are undiagnosed, untreated, or undertreated. For the project, researchers will teach a program how to recognize a pattern in the electronic records of Stanford patients diagnosed with FH. The program then will be directed to analyze Stanford patient records for signs of the pattern, and the researchers will report their findings to the patients’ personal physicians, who can encourage screening and therapy. “These techniques have not been widely applied in medicine, but we believe that they offer the potential to transform healthcare, particularly with the increased reliance on electronic health records,” says Stanford professor Joshua Knowles. If the project is successful at Stanford, it will be tested at other academic medical centers.
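Stanford’s actual model and features are not described here; the sketch below is only a hedged illustration of the pattern-learning step (synthetic records, hypothetical features, and a scikit-learn logistic regression standing in for whatever learner the project uses): train on records of patients already diagnosed with FH, then score other records so physicians can be alerted.

```python
from sklearn.linear_model import LogisticRegression

# Hedged sketch of the pattern described in the article: learn what the
# records of diagnosed FH patients look like, then score other records so
# physicians can be alerted. Features, values, and model are synthetic
# illustrations, not Stanford's actual system.

# Each record: [LDL cholesterol (mg/dL), age at first high reading,
#               family history of early heart disease (0/1)]
records = [
    [320, 28, 1], [290, 35, 1], [310, 22, 1], [280, 40, 1],   # diagnosed FH
    [140, 55, 0], [160, 60, 0], [180, 50, 1], [150, 47, 0],   # no FH
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression(max_iter=1000).fit(records, labels)

# Score an undiagnosed patient's record; a high probability would be
# reported to the patient's physician to encourage screening and therapy.
new_patient = [[300, 30, 1]]
print(model.predict_proba(new_patient)[0][1])
```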

MORE