Python Programming Language, AWS Skills Demand Has Exploded

ZDNet (11/20/19) Liam Tung

An analysis of Indeed.com job search engine listings over the last five years found explosive growth in demand for Python skills, suggesting the language either is already the most popular or is on its way to becoming so. The share of job listings mentioning Python climbed from 8% in September 2014 to 18% in September 2019, an upsurge often credited to the growth of data science and to interest in machine learning and artificial intelligence, helped by abundant third-party Python packages and developer tools. Indeed also revealed skyrocketing demand for developers with Amazon Web Services (AWS) skills, with about 14% of current listings calling for AWS knowledge. Said Indeed Hiring Lab economist Andrew Flowers, “A big reason behind the exceptional growth of Python and AWS is that the underlying tech job mix is changing in ways that favor these programming languages.”
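The percentages above boil down to a simple share-of-listings tally. As a rough illustration only (not Indeed's actual methodology, and with made-up listings), a minimal Python sketch of that calculation might look like this:

```python
# Illustrative sketch: count the share of job listings mentioning a skill.
# The listings below are invented examples, not Indeed's dataset.

def share_mentioning(listings: list[str], skill: str) -> float:
    """Return the fraction of listings whose text mentions the skill."""
    hits = sum(1 for text in listings if skill.lower() in text.lower())
    return hits / len(listings) if listings else 0.0

listings = [
    "Backend engineer: Python, Django, PostgreSQL",
    "DevOps engineer: AWS, Terraform, CI/CD",
    "Data scientist: Python, pandas, machine learning",
    "Frontend developer: TypeScript, React",
]

for skill in ("Python", "AWS"):
    print(f"{skill}: {share_mentioning(listings, skill):.0%} of listings")
```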

MORE

Machine Learning Advances Tool to Fight Cybercrime in the Cloud

Purdue University News (11/05/19) Chris Adam

Purdue University researchers used machine learning to develop a cloud forensic model that collects digital evidence associated with illegal activities in cloud storage applications. The system deploys deep learning models to classify images of child exploitation, illegal drug trafficking, and illegal firearms transactions uploaded to cloud storage applications, and to automatically report any such illegal activity it detects via a forensic evidence collection system. The researchers tested the system on more than 1,500 images and found that the model accurately classified an image about 96% of the time. Said Purdue’s Fahad Salamh, “It is important to automate the process of digital forensic and incident response in order to cope with advanced technology and sophisticated hiding techniques and to reduce the mass storage of digital evidence on cases involving cloud storage applications.”
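The pipeline described classifies each uploaded image and automatically files a report when an illegal category is detected. A minimal Python sketch of that flagging loop, with a stub standing in for the deep learning classifier (the categories, names, and interfaces here are illustrative assumptions, not Purdue's actual system), might look like:

```python
# Hypothetical sketch of the classify-then-report loop described above.
# All names are placeholders; a real deployment would run model inference.
from dataclasses import dataclass

CATEGORIES = ["benign", "exploitation", "drug_trafficking", "illegal_firearms"]

@dataclass
class Report:
    image_id: str
    category: str
    confidence: float

def classify(image_bytes: bytes) -> tuple[str, float]:
    """Placeholder for the deep learning classifier; returns (label, confidence)."""
    return "benign", 0.99  # a real model would run inference here

def scan_upload(image_id: str, image_bytes: bytes, evidence_log: list[Report]) -> None:
    label, conf = classify(image_bytes)
    if label != "benign":
        # Automatically preserve a record for the forensic evidence system.
        evidence_log.append(Report(image_id, label, conf))

evidence: list[Report] = []
scan_upload("img_001", b"...", evidence)  # benign uploads produce no report
```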

MORE

China’s Policing Robot: Cattle Prod Meets Supercomputer

Computerworld (10/31/16) Patrick Thibodeau

Chinese researchers have developed AnBot, an “intelligent security robot” deployed in a Shenzhen airport. AnBot’s backend is linked to China’s Tianhe-2 supercomputer, where it has access to cloud services. AnBot uses these technologies to conduct patrols, recognize threats, and identify people via multiple cameras and facial recognition. The cloud services give the robot petascale processing power, well beyond the capability of its onboard hardware. The supercomputer connection enhances the intelligent learning capabilities and human-machine interface of the devices, according to a report from the U.S.-China Economic and Security Review Commission that focuses on China’s autonomous systems development efforts. The report found that robots’ ability to improve depends on linking artificial intelligence (AI), data science, and computing technologies. In addition, the report notes the simultaneous development of high-performance computing systems and robotic mechanical manipulation gives AI the potential to unleash smarter robotic devices capable of learning as well as integrating inputs from large databases. The report says the U.S. government should increase its own efforts to develop manufacturing technology in critical areas, as well as monitor China’s growing investments in U.S. robotics and AI companies.
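The identification step in systems like this typically reduces to comparing a face embedding against a watchlist of known embeddings. As a generic illustration only (nothing here reflects AnBot's actual software; all names and the threshold are hypothetical), a Python sketch of that matching step:

```python
# Generic facial-recognition matching sketch: nearest watchlist entry
# by cosine similarity, accepted only above a confidence threshold.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_face(embedding: list[float],
               watchlist: dict[str, list[float]],
               threshold: float = 0.8) -> str | None:
    """Return the watchlist identity most similar to the embedding, if any."""
    best_id, best_score = None, threshold
    for identity, known in watchlist.items():
        score = cosine_similarity(embedding, known)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```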

MORE

Future Directions for NSF Advanced Computing Infrastructure Report Now Available

National Science Foundation (05/04/16) Aaron Dubrow

A just-issued report commissioned by the U.S. National Science Foundation (NSF) and conducted by the National Academies of Sciences, Engineering, and Medicine examines priorities, and the associated trade-offs, for advanced computing investments and strategy. The report, Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020, drew on community input from more than 60 individuals, research groups, and organizations. NSF’s 2013 study request prompted the Computer Science and Telecommunications Board to organize a committee to recommend a framework that would position the U.S. for continued leadership in science and engineering, guarantee that resources fulfill community needs, help the scientific community keep pace with the revolution in computing, and sustain the advanced computing infrastructure. NSF has funded the development of a cyberinfrastructure ecosystem combining superfast and secure networks, cutting-edge parallel computing, efficient software, state-of-the-art scientific instruments, and massive datasets with expert staff across the country. The agency requested $227 million for its 2016 advanced cyberinfrastructure budget, up from $211 million in 2014. “[The report’s] timing and content give substance and urgency to NSF’s role and plans in the National Strategic Computing Initiative,” says NSF’s Irene Qualters.

MORE

Chameleon: Why Computer Scientists Need a Cloud of Their Own

HPCwire (05/05/16) Tiffany Trader

In less than a year of operation, the U.S. National Science Foundation-funded Chameleon cloud testbed has contributed to innovative research in high-performance computing (HPC) containerization, exascale operating systems, and cybersecurity. Chameleon principal investigator Kate Keahey, a Computation Institute fellow at the University of Chicago, describes the tool as “a scientific instrument for computer science where computer scientists can prove or disprove hypotheses.” Co-principal investigator Dan Stanzione, executive director of the Texas Advanced Computing Center at the University of Texas at Austin, says Chameleon can grant a request the software and computer science research community is often denied: the ability to make fundamental changes to the way the machine operates. With Chameleon, users can configure and test distinct cloud architectures on various problems, such as machine learning and adaptive operating systems, climate modeling, and flood prediction. Keahey says support for research at multiple scales was a key design element of the instrument. One project using Chameleon compares the performance of containerization and virtualization for HPC applications; Keahey calls it “a good example of a project that really needs access to scale.” Another major Chameleon user is the Argo Project, an initiative to design and prototype an exascale operating system and runtime.
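A container-vs-VM comparison of the kind mentioned usually runs an identical workload in each environment and compares wall-clock times. As a toy illustration (the workload and harness are invented, not the actual Chameleon experiment), a minimal Python sketch:

```python
# Toy timing harness: run the same script bare-metal, in a container,
# and in a VM, then compare the reported best times.
import time

def cpu_kernel(n: int = 2_000_000) -> float:
    """A CPU-bound stand-in for an HPC workload."""
    total = 0.0
    for i in range(1, n):
        total += 1.0 / (i * i)
    return total

def benchmark(runs: int = 5) -> float:
    """Return the best wall-clock time over several runs."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        cpu_kernel()
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    print(f"best wall-clock time: {benchmark():.3f}s")
```

Running the identical harness under each environment isolates the virtualization overhead from the workload itself.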

MORE

Microsoft Forges Ahead With ‘Prajna’ Big-Data Analytics Framework for Cloud Services

ZDNet (09/15/15) Mary Jo Foley

Microsoft Research’s Cloud Computing and Storage (CCS) group is developing Prajna, an open source distributed analytics platform designed for building distributed cloud services that utilize big-data analytics. “Prajna can be considered as a set of [software development kits] on top of .Net that can assist a developer to quickly prototype cloud service, and write his/her own mobile apps against the cloud service,” according to CCS’ Web page. “It also has interactive, in-memory distributed big-data analytical capability similar to [Apache] Spark.” Microsoft researchers say although Prajna is a distributed functional programming platform, it goes further than Spark by “enabling multi-cluster distributed programming, running both managed code and unmanaged code, in-memory data sharing across jobs, push data flow, etc.” The “functional programming” element of Prajna is associated with the F# .Net functional programming language. “Prajna…offers additional capability to allow programmers to easily build and deploy cloud services, and consume the services in mobile apps, and build distributed application with state [e.g., a distributed in-memory key-value store],” notes the Web posting. Prajna head researcher Jin Li believes the platform has greater flexibility and extensibility than Spark, and could revolutionize the construction of high-performance distributed programs.
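Prajna's real API is F# on .Net, but the Spark-like "distributed functional" style the researchers describe can be suggested in a rough Python analogy (the dataset and pipeline below are invented for illustration, and a local process pool stands in for cluster nodes):

```python
# Rough Python analogy of a distributed functional map/reduce pipeline;
# this is not Prajna's API, just the general programming model.
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def word_lengths(line: str) -> list[int]:
    return [len(w) for w in line.split()]

def run_pipeline(lines: list[str]) -> int:
    # "map" step fans work out across workers, as a cluster would.
    with ProcessPoolExecutor() as pool:
        mapped = pool.map(word_lengths, lines)
        # "reduce" step folds partial results into a single value.
        return reduce(lambda acc, xs: acc + sum(xs), mapped, 0)

if __name__ == "__main__":
    data = ["distributed analytics on big data", "functional pipelines compose"]
    print(run_pipeline(data))  # total characters across all words
```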

MORE

Dew Helps Ground Cloud Computing

EurekAlert (09/15/15)

A “cloud-dew” architecture could enable cloud users to maintain access to their data when they lose their Internet connection, says University of Prince Edward Island professor Yingwei Wang. He notes the architecture follows the conventions of cloud architecture, but in addition to the cloud servers there are “dew” servers on the local system that act as a buffer between the local user and the cloud servers. Wang says this configuration prevents data from becoming desynchronized, which can happen under the older approach of holding data only on the local server, whether or not it is networked. “The dew server and its related databases have two functions: first, it provides the client with the same services as the cloud server provides; second, it synchronizes dew server databases with cloud server databases,” Wang notes. The dew server retains only a copy of the given user’s data; this lightweight local server makes the data available with or without an Internet connection and synchronizes with the cloud server once a connection is established. Wang says the architecture could be used to make websites available offline.
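The two dew-server functions Wang describes, serving the client locally and synchronizing with the cloud, can be sketched in a few lines of Python. This is a minimal sketch of the idea with invented classes and in-memory storage, not Wang's implementation:

```python
# Minimal cloud-dew sketch: serve reads and writes locally, queue changes
# while offline, and push them to the cloud when a connection returns.

class CloudServer:
    def __init__(self):
        self.store: dict[str, str] = {}

class DewServer:
    def __init__(self, cloud: CloudServer):
        self.cloud = cloud
        self.local: dict[str, str] = {}           # user's copy of their data
        self.pending: list[tuple[str, str]] = []  # writes made while offline
        self.online = False

    def read(self, key: str) -> str | None:
        return self.local.get(key)                # served locally, online or not

    def write(self, key: str, value: str) -> None:
        self.local[key] = value
        self.pending.append((key, value))
        if self.online:
            self.sync()

    def sync(self) -> None:
        """Push queued writes to the cloud once a connection exists."""
        while self.pending:
            key, value = self.pending.pop(0)
            self.cloud.store[key] = value

dew = DewServer(CloudServer())
dew.write("notes.txt", "draft")  # works offline
dew.online = True
dew.sync()                       # reconciles with the cloud
```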

MORE