China’s Policing Robot: Cattle Prod Meets Supercomputer

Computerworld (10/31/16) Patrick Thibodeau

Chinese researchers have developed AnBot, an “intelligent security robot” deployed in a Shenzhen airport. AnBot’s backend is linked to China’s Tianhe-2 supercomputer, giving it access to cloud services. AnBot uses these technologies to conduct patrols, recognize threats, and identify people via multiple cameras and facial recognition. The cloud services give the robot petascale processing power, well beyond the processing capability of the robot itself. The supercomputer connection enhances the devices’ intelligent learning capabilities and human-machine interface, according to a U.S.-China Economic and Security Review Commission report focused on China’s autonomous systems development efforts. The report found that the ability of robotics to improve depends on linking artificial intelligence (AI), data science, and computing technologies. In addition, the report notes the simultaneous development of high-performance computing systems and robotic mechanical manipulation gives AI the potential to unleash smarter robotic devices capable of learning and of integrating inputs from large databases. The report says the U.S. government should increase its own efforts to develop manufacturing technology in critical areas and monitor China’s growing investments in robotics and AI companies in the U.S.
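A rough sense of the robot-to-cloud split described above can be given with a short sketch: the robot captures a camera frame locally and offloads the heavy face-recognition work to a remote backend. The endpoint URL and response fields below are hypothetical placeholders, not AnBot’s or Tianhe-2’s actual interface.

```python
# Sketch only: local capture, remote recognition. Endpoint and response
# format are hypothetical, not AnBot's real API.
import cv2
import requests

CLOUD_ENDPOINT = "https://example.com/face-recognition"  # hypothetical backend

def patrol_step(camera: cv2.VideoCapture) -> None:
    ok, frame = camera.read()
    if not ok:
        return
    # Encode the frame as JPEG and ship it to the cloud for heavy processing.
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return
    resp = requests.post(CLOUD_ENDPOINT, files={"image": jpeg.tobytes()}, timeout=5)
    # Assumed response shape: {"matches": [{"name": ..., "score": ...}, ...]}
    for match in resp.json().get("matches", []):
        print("possible identity:", match["name"], "confidence:", match["score"])

if __name__ == "__main__":
    cam = cv2.VideoCapture(0)  # on-board camera
    patrol_step(cam)
    cam.release()
```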



Future Directions for NSF Advanced Computing Infrastructure Report Now Available

National Science Foundation (05/04/16) Aaron Dubrow

A newly issued report commissioned by the U.S. National Science Foundation (NSF) and conducted by the National Academies of Sciences, Engineering, and Medicine examines priorities and associated trade-offs for advanced computing investments and strategy. The report, Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020, drew on community input from more than 60 individuals, research groups, and organizations. NSF’s 2013 study request prompted the Computer Science and Telecommunications Board to organize a committee to make recommendations on establishing a framework to position the U.S. for continued science and engineering leadership, ensure resources fulfill community needs, help the scientific community keep pace with the revolution in computing, and sustain the advanced computing infrastructure. NSF has funded a cyberinfrastructure ecosystem that combines superfast and secure networks, cutting-edge parallel computing, efficient software, state-of-the-art scientific instruments, and massive datasets with expert staff across the country. The agency requested $227 million for its 2016 advanced cyberinfrastructure budget, up from $211 million in 2014. “[The report’s] timing and content give substance and urgency to NSF’s role and plans in the National Strategic Computing Initiative,” says NSF’s Irene Qualters.


Chameleon: Why Computer Scientists Need a Cloud of Their Own

HPC Wire (05/05/16) Tiffany Trader

In less than a year of operation, the U.S. National Science Foundation-funded Chameleon cloud testbed has contributed to innovative research in high-performance computing (HPC) containerization, exascale operating systems, and cybersecurity. Chameleon principal investigator Kate Keahey, a Computation Institute fellow at the University of Chicago, describes the tool as “a scientific instrument for computer science where computer scientists can prove or disprove hypotheses.” Co-principal investigator Dan Stanzione, executive director of the Texas Advanced Computing Center at the University of Texas at Austin, says Chameleon can meet the oft-denied request from the software and computer science research community to make fundamental changes to the way the machine operates. With Chameleon, users can configure and test distinct cloud architectures on various problems, such as machine learning, adaptive operating systems, climate modeling, and flood prediction. Keahey says support for research at multiple scales was a key design element of the instrument. One project using Chameleon compares the performance of containerization and virtualization for HPC applications; Keahey calls it “a good example of a project that really needs access to scale.” Another major Chameleon user is the Argo Project, an initiative to design and prototype an exascale operating system and runtime.
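As a rough illustration of the container-versus-bare-metal comparison mentioned above, a minimal experiment might time the same workload natively and inside a container. The image name and toy benchmark below are placeholders, not the Chameleon project’s actual setup.

```python
# Sketch: run one benchmark natively and inside a Docker container, then
# compare wall-clock times. Image and workload are placeholders.
import subprocess
import time

BENCHMARK = ["python3", "-c", "print(sum(i * i for i in range(10**7)))"]

def timed(cmd):
    start = time.perf_counter()
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

native = timed(BENCHMARK)
containerized = timed(["docker", "run", "--rm", "python:3.11"] + BENCHMARK)
print(f"native: {native:.2f}s  container: {containerized:.2f}s")
```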


Microsoft Forges Ahead With ‘Prajna’ Big-Data Analytics Framework for Cloud Services

ZDNet (09/15/15) Mary Jo Foley

Microsoft Research’s Cloud Computing and Storage (CCS) group is developing Prajna, an open source distributed analytics platform for building cloud services that use big-data analytics. “Prajna can be considered as a set of [software development kits] on top of .Net that can assist a developer to quickly prototype cloud service, and write his/her own mobile apps against the cloud service,” according to CCS’ Web page. “It also has interactive, in-memory distributed big-data analytical capability similar to [Apache] Spark.” Microsoft researchers say that although Prajna is a distributed functional programming platform, it goes further than Spark by “enabling multi-cluster distributed programming, running both managed code and unmanaged code, in-memory data sharing across jobs, push data flow, etc.” Prajna’s functional programming component is built on the F# .Net language. “Prajna…offers additional capability to allow programmers to easily build and deploy cloud services, and consume the services in mobile apps, and build distributed application with state [e.g., a distributed in-memory key-value store],” notes the Web posting. Prajna lead researcher Jin Li believes the platform has greater flexibility and extensibility than Spark, and could revolutionize the construction of high-performance distributed programs.
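Prajna itself is an F#/.NET SDK; as a rough illustration of the Spark-style, in-memory functional data-parallel model it is compared to, here is a minimal PySpark word count. None of this is Prajna’s actual API; it only sketches the programming pattern.

```python
# Spark-style functional data-parallel word count (illustrative only).
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-sketch")
lines = sc.parallelize(["big data analytics", "distributed cloud services", "big data"])
counts = (lines.flatMap(lambda line: line.split())   # split lines into words
               .map(lambda word: (word, 1))          # emit (word, 1) pairs
               .reduceByKey(lambda a, b: a + b))     # sum counts per word in memory
print(dict(counts.collect()))
sc.stop()
```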


Dew Helps Ground Cloud Computing

EurekAlert (09/15/15)

A “cloud-dew” architecture could enable cloud users to maintain access to their data when they lose their Internet connection, says University of Prince Edward Island professor Yingwei Wang. He notes the architecture follows the conventions of cloud architecture, but in addition to the cloud servers there are “dew” servers held on the local system that act as a buffer between the local user and the cloud servers. Wang says this configuration prevents data from becoming desynchronized, which happens if one reverts to the old-school approach of holding data only on the local server, whether or not it is networked. “The dew server and its related databases have two functions: first, it provides the client with the same services as the cloud server provides; second, it synchronizes dew server databases with cloud server databases,” Wang notes. The dew server retains only a copy of the given user’s data; this lightweight local server makes the data available with or without an Internet connection and synchronizes with the cloud server once a connection is established. Wang says the architecture could also be used to make websites available offline.
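A minimal sketch of the dew-server idea, assuming a hypothetical cloud client that exposes a put(key, value) method: reads and writes are always served from the local store, and pending changes are pushed to the cloud whenever a connection becomes available.

```python
import json
import os

class DewServer:
    """Local 'dew' store: serves data offline, syncs to the cloud when online."""

    def __init__(self, path="dew_store.json", cloud=None):
        self.path = path
        self.cloud = cloud        # hypothetical cloud client exposing put(key, value)
        self.pending = set()      # keys written locally but not yet synchronized
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)
        else:
            self.data = {}

    def put(self, key, value):
        # Writes always land in the local store first, online or not.
        self.data[key] = value
        self.pending.add(key)
        with open(self.path, "w") as f:
            json.dump(self.data, f)

    def get(self, key):
        # Reads are served locally, so they work without an Internet connection.
        return self.data.get(key)

    def sync(self):
        # Push pending changes to the cloud once a connection is available.
        if self.cloud is None:
            return
        for key in list(self.pending):
            self.cloud.put(key, self.data[key])
            self.pending.discard(key)
```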


Building Trustworthy Big Data Algorithms

Northwestern University Newscenter (01/29/15) Emily Ayshford

Northwestern University researchers recently tested latent Dirichlet allocation, one of the leading big data algorithms for finding related topics within unstructured text, and found it was neither as accurate nor as reproducible as a leading topic-modeling algorithm should be. The researchers therefore developed a new topic-modeling algorithm, called TopicMapping, which they say has shown very high accuracy and reproducibility in tests. The algorithm begins by preprocessing data to replace words with their stems. It then builds a network of connected words and identifies “communities” of related words. The researchers found TopicMapping was able to perfectly separate the documents according to language and to reproduce its results. Northwestern professor Luis Amaral says the results show the need for more testing of big data algorithms and more research into making them more accurate and reproducible. “Companies that make products must show that their products work,” Amaral says. “They must be certified. There is no such case for algorithms. We have a lot of uninformed consumers of big data algorithms that are using tools that haven’t been tested for reproducibility and accuracy.”
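The pipeline described above (stem the words, build a word co-occurrence network, then find communities of related words) can be sketched roughly as follows with off-the-shelf tools; this illustrates the general approach, not the authors’ actual TopicMapping implementation or its community-detection method.

```python
# Generic sketch: stem words, build a co-occurrence network, and extract
# communities of related words as candidate topics.
import itertools

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from nltk.stem import PorterStemmer

docs = [
    "genes regulate protein expression",
    "stock markets react to interest rates",
    "protein folding and gene expression",
]

stem = PorterStemmer().stem
G = nx.Graph()
for doc in docs:
    words = {stem(w) for w in doc.lower().split()}
    # Connect every pair of (stemmed) words that co-occur in the same document.
    for a, b in itertools.combinations(sorted(words), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Each detected community is a cluster of related stems, i.e., a candidate topic.
for topic in greedy_modularity_communities(G, weight="weight"):
    print(sorted(topic))
```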


Stanford Researchers Use Big Data to Identify Patients at Risk of High-Cholesterol Disorder

Stanford University (01/29/15) Tracie White

Stanford University researchers have launched a project designed to identify hospital patients who may have a genetic disease that causes a deadly buildup of cholesterol in their arteries. The project uses big data and software that can learn to recognize patterns in electronic medical records and identify patients at risk of familial hypercholesterolemia (FH), which often goes undiagnosed until a heart attack strikes. The project is part of a larger initiative called Flag, Identify, Network, Deliver FH, which aims to use innovative technologies to identify individuals with the disorder who are undiagnosed, untreated, or undertreated. For the project, researchers will teach a program to recognize a pattern in the electronic records of Stanford patients diagnosed with FH. The program will then be directed to analyze Stanford patient records for signs of the pattern, and the researchers will report their findings to the patients’ personal physicians, who can encourage screening and therapy. “These techniques have not been widely applied in medicine, but we believe that they offer the potential to transform healthcare, particularly with the increased reliance on electronic health records,” says Stanford professor Joshua Knowles. If the project is successful at Stanford, it will be tested at other academic medical centers.
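At its core, the approach described above is a supervised pattern-recognition problem: train a model on the records of patients already diagnosed with FH, then flag undiagnosed patients whose records look similar. The sketch below uses scikit-learn with made-up features and numbers purely for illustration; it is not Stanford’s actual model, feature set, or data.

```python
# Sketch: fit a classifier on diagnosed-FH records, then flag similar
# undiagnosed patients. Features and values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [LDL cholesterol (mg/dL), age at first high reading, family history (0/1)]
X_train = np.array([[320, 28, 1], [150, 55, 0], [290, 33, 1], [140, 60, 0]])
y_train = np.array([1, 0, 1, 0])  # 1 = diagnosed FH, 0 = no FH diagnosis

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score records of undiagnosed patients and flag high-risk ones for their physicians.
undiagnosed = np.array([[305, 30, 1], [160, 48, 0]])
risk = model.predict_proba(undiagnosed)[:, 1]
for record, p in zip(undiagnosed, risk):
    if p > 0.5:
        print(f"flag for FH screening: {record.tolist()} (estimated risk {p:.2f})")
```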
