7 min read • Dec 11, 2020
Susnigdha Tripathy is a passionate freelance writer and editor based in Singapore who specializes in writing about technology, remote work, and productivity.
Machine Learning continues to be a heavy theme during this year's re:Invent conference, as it reflects the advancements of the industry at large and shapes how engineers build on the cloud.
Week two at AWS re:Invent opened with an in-depth Machine Learning keynote by Swami Sivasubramanian, VP of Amazon AI. This was the first time a re:Invent keynote was dedicated to ML, signifying Amazon's emphasis on this leading technology.

Highlights from Swami's Keynote
Sivasubramanian organized his keynote around five key tenets.
He dove right in by talking about ML frameworks, AI Services, and SageMaker. He remarked that SageMaker is the "most complete end-to-end" machine learning platform that organizations choose for their AI needs.
Sivasubramanian said that SageMaker now supports data parallelism, a form of parallel computing that can simplify training on large data sets and help builders get models into production faster than ever before. He pointed out that by using data parallelism, Amazon was able to reduce training times for sizeable deep learning networks by 40 percent.
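To make the idea concrete, here is a minimal, pure-Python sketch of what data-parallel training does conceptually: the batch is split into shards, each worker computes a gradient on its own shard, and the gradients are averaged (an "all-reduce") so every worker applies the same update. This does not use SageMaker's APIs; all names and the toy linear model are illustrative.

```python
# Conceptual sketch of data parallelism for a toy linear model y = w*x.
# Pure Python, no SageMaker APIs; all names here are illustrative.

def gradient(w, shard):
    """Mean gradient of the squared error 0.5*(w*x - y)^2 over one data shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, n_workers, lr=0.01):
    # 1. Split the global batch into one shard per worker.
    shards = [batch[i::n_workers] for i in range(n_workers)]
    # 2. Each worker computes a gradient on its own shard (in parallel in reality).
    grads = [gradient(w, shard) for shard in shards]
    # 3. All-reduce: average the gradients so every worker applies the same update.
    avg_grad = sum(grads) / n_workers
    return w - lr * avg_grad

# With data generated from y = 2*x, repeated steps move w toward 2.
data = [(x, 2.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, data, n_workers=4)
print(round(w, 3))  # → 2.0
```

The point of the averaging step is that the parallel run converges to the same answer as a single worker processing the whole batch, which is why the technique speeds up training without changing the model.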
One question that Sivasubramanian posed during the presentation was, "How can we bring machine learning to this large and growing group of database developers and data analysts?" Amazon believes that when builders have the right tools, they can drive innovation across the organization. Speaking to the pace of innovation in ML, Sivasubramanian said Amazon had added more than 250 features in the last year alone, a pace made possible by making AI and ML accessible to everyone in the organization.
As expected, every re:Invent keynote came jam-packed with announcements of updates and new product launches coming in 2021. The Machine Learning keynote was no exception, including a host of SageMaker updates to look forward to.
Amazon SageMaker Data Wrangler – With over 300 built-in data transformations and an easy-to-use interface, SageMaker Data Wrangler simplifies the process of data preparation, cutting the time it takes for data aggregation and preparation from weeks down to minutes.
Amazon SageMaker Edge Manager - With the edge manager, ML engineers can now optimize, secure, monitor, and maintain machine learning models on a fleet of edge devices.
Amazon SageMaker Feature Store - SageMaker Feature Store organizes features into groups, making it easier for ML engineers to store, retrieve, and share data quickly. Multiple teams can also re-use and share features, reducing the cost of development.
Distributed Training on Amazon SageMaker – SageMaker Distributed Training makes use of partitioned algorithms to train complex and deep learning models in a shorter span of time as compared to other approaches.
Amazon SageMaker JumpStart – SageMaker JumpStart speeds up ML workflows by providing one-click access to a set of pre-built solutions, also known as "model zoos."
Amazon SageMaker Clarify – SageMaker Clarify helps developers detect bias in machine learning models before and after training by providing them with a diverse set of statistical metrics.
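To illustrate the kind of statistical metric involved, here is a minimal pure-Python sketch of one common pre-training bias measure: the difference in positive proportions in labels (DPL) between two groups. This is not Clarify's API; the function names and the loan-approval example are hypothetical.

```python
# A minimal sketch of one bias metric of the kind a tool like Clarify reports:
# the difference in positive proportions in labels (DPL) between two groups.
# Pure Python; illustrative only, not SageMaker Clarify's actual API.

def positive_proportion(labels):
    """Fraction of positive (1) outcomes in a list of 0/1 labels."""
    return sum(labels) / len(labels)

def dpl(group_a_labels, group_b_labels):
    """Difference in positive proportions between group A and group B.
    A value near 0 suggests the training labels treat both groups similarly;
    a large magnitude flags potential pre-training bias worth investigating."""
    return positive_proportion(group_a_labels) - positive_proportion(group_b_labels)

# Hypothetical example: loan-approval labels (1 = approved) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25% approved
print(dpl(group_a, group_b))  # → 0.5
```

Running such metrics both before training (on the labels) and after training (on the model's predictions) is what lets developers catch bias at either stage, as the announcement describes.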
Deep Profiling for Amazon SageMaker Debugger - Deep Profiling for SageMaker Debugger monitors machine learning training performance and helps developers train models faster. For improved efficiency, it visualizes resources used in training and sends alerts when anomalies are detected.
Amazon SageMaker Pipelines - SageMaker Pipelines automates different steps of the ML workflow, enabling developers to build dozens of ML models in a week and also manage massive volumes of data. Workflows can also be re-used or shared between teams.
Amazon Lookout for Metrics – Lookout for Metrics is a service that uses ML to automatically detect anomalies in business metrics, helping businesses monitor their health and diagnose problems quickly.
Amazon Redshift ML – Powered by SageMaker, Redshift ML makes it possible for analysts and developers to create ML models using SQL commands, without having to learn new skills or move data.
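As a sketch of what this looks like in practice, the snippet below composes a `CREATE MODEL` statement of the shape AWS announced for Redshift ML and the query that would use the resulting prediction function. The table, columns, IAM role, and function names are hypothetical, and in practice the statements would be sent through a Redshift connection (e.g. `cursor.execute(...)`).

```python
# Hypothetical sketch of the Redshift ML workflow from the announcement.
# Table, column, role, and function names are made up for illustration.

create_model_sql = """
CREATE MODEL customer_churn
FROM (SELECT age, monthly_charges, tenure, churned
      FROM customer_activity)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');
"""

# Once training finishes (SageMaker handles it behind the scenes), the
# generated SQL function can be called directly from ordinary queries:
predict_sql = """
SELECT customer_id, predict_churn(age, monthly_charges, tenure) AS will_churn
FROM customer_activity;
"""
```

The appeal for analysts is visible in the sketch: both model creation and inference stay inside SQL, so no data leaves the warehouse and no new tooling is required.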
Amazon Neptune ML - Neptune ML uses graph neural networks (GNNs) to improve the speed and accuracy of predictions for graphs.
Matt Wood, VP of Artificial Intelligence, came onstage to unveil Amazon HealthLake, a HIPAA-eligible service that aggregates structured (lab results) and unstructured (doctor's notes) data and adds them to a centralized AWS data lake. With HealthLake, updating and analyzing patients' health information can be managed quickly and efficiently.
Amazon Monitron – Monitron is an industrial ML monitoring service enabling businesses, both big and small, to detect potential machinery failures before they lead to a breakdown. This allows businesses to implement predictive maintenance instead of time-based maintenance, resulting in cost savings.
Panorama Appliance - AWS Panorama appliance is a device that organizations can use to bring computer vision to their on-premises cameras located on the edge.
Peter DeSantis, SVP, AWS Infrastructure & Support, delivered the Infrastructure keynote where he talked about Graviton2, custom chips, AZs, regions, sustainability, and much more.
DeSantis mentioned how Graviton2 has helped customers achieve remarkable results in terms of improving performance and lowering costs. Based on Arm's 64-bit Neoverse microarchitecture, Graviton2 is a custom-built server chip that can deliver up to 7x the performance of A1 instances, with 2x larger caches and 5x faster memory. Compared with other instances, Graviton2-based Elastic Compute Cloud (EC2) instances offer 40% better price-performance for a broad range of workloads.
Referencing COVID-19 and its impact, he said, "to deal with the real world the best protection is engineering your supply chain with as much geographic and supplier diversity as you can." Today, Amazon's global supply chain consists of 86 suppliers spread across seven countries. Because of this diversity, Amazon's customers were able to keep scaling without interruption, despite a challenging year.
DeSantis declared that AWS goes to extreme lengths to protect its infrastructure and data from natural and human disasters. Hence its regions are entirely isolated from each other, which helps the company attain higher fault tolerance. "For most customers, properly designed availability zones provide a powerful tool to cost-effectively achieve a very high availability." Amazon now has 24 independent regions, each with multiple well-designed availability zones, and plans to open new regions in Switzerland, Indonesia, and Spain.
Amazon has dabbled in custom silicon chips since 2014, when it produced its first custom Nitro chips. Nitro chips are the building blocks of the Nitro controller, specialized hardware that can turn any server into an EC2 instance. Running instances through the Nitro controller rather than directly on servers provides greater security and leaves more room for innovation.
Last year Amazon released its first ML chip, AWS Inferentia, which provides up to 45% lower cost per inference compared to GPUs. DeSantis remarked that although Amazon is excited by its customers' interest in migrating to Inferentia, "Our investment in ML chips is just beginning."
AWS has also developed an SDK called Neuron that allows ML developers to use Inferentia as a target for popular frameworks, including PyTorch and TensorFlow. With Neuron, they can take advantage of the cost savings of Inferentia with little to no change to their ML code, all while maintaining support for other ML processors.