Introduction

At its most basic, machine learning (ML) gives digital tools and services the ability to learn from data, identify patterns, make predictions, and then act on those predictions. Almost all artificial intelligence (AI) systems today are created using ML. ML uses large amounts of data to create and validate decision logic, and this decision logic forms the basis of the AI "model".
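The idea of learning decision logic from data can be illustrated with a deliberately tiny, framework-free sketch. The spam scores and the threshold "model" below are invented for illustration only:

```python
# Toy example: "learn" a spam-score threshold from labeled data,
# then use the learned decision logic to classify new messages.

training_data = [  # (spam_score, is_spam) pairs - invented sample data
    (0.1, False), (0.2, False), (0.4, False),
    (0.7, True), (0.8, True), (0.9, True),
]

def train(data):
    """Pick a threshold midway between the highest non-spam score
    and the lowest spam score seen in the training data."""
    max_ham = max(score for score, spam in data if not spam)
    min_spam = min(score for score, spam in data if spam)
    return (max_ham + min_spam) / 2

def predict(threshold, score):
    """The learned decision logic: scores above the threshold are spam."""
    return score > threshold

model = train(training_data)   # roughly 0.55 for this data
print(predict(model, 0.95))    # True  - classified as spam
print(predict(model, 0.05))    # False - classified as not spam
```

Real ML models learn far richer decision logic from far more data, but the shape is the same: training produces a model, and the model turns new inputs into predictions.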

A fast-growing subset of machine learning is generative AI, which is powered by large models that are pretrained on a vast set of data - commonly referred to as foundation models (FMs). AWS services based on generative AI include:

  • Amazon Bedrock (which provides a way for you to build and scale generative AI-based applications using FMs) and,
  • Amazon CodeWhisperer (an AI coding companion that generates code suggestions in real-time based on your comments in natural language and any prior code in your integrated development environment).

In less than two minutes, Dr. Werner Vogels, Amazon CTO, explains how generative AI works and how you might use it. This video is part of a longer discussion between Dr. Vogels and Swami Sivasubramanian, AWS vice president of database, analytics, and ML, about the broad landscape of generative AI, why it’s not hype, and how AWS is democratizing access to large language and foundation models.

Time to read

25 minutes

Purpose

Help determine which AWS ML services are the best fit for your needs.

Level

Beginner

Last updated

July 26, 2023

Understand

As organizations continue to adopt AI and ML technologies, the importance of understanding and choosing among AWS ML services cannot be overstated.
 
AWS provides a range of ML services designed to help organizations to build, train, and deploy ML models quickly and easily. These services can be used to solve a wide range of business problems such as customer churn prediction, fraud detection, and image and speech recognition.
AWS approaches ML as a series of technology layers that build on one another.
 
At the top layer are our AI services. This is where AWS embeds ML into different use cases, such as personalization, forecasting, anomaly detection, and speech transcription.
 
In the middle layers are AWS ML services, including Amazon SageMaker, as well as deep learning (DL) technologies. This is where AWS builds its ML infrastructure, so that customers can focus just on the differentiated work of building ML models.
 
Then come generative AI services. Generative AI can create new content, including conversations, stories, images, videos, and music. Like all AI, generative AI is powered by ML models - very large models that are pre-trained on vast amounts of data and commonly referred to as foundation models.

In addition, AWS offers specialized, accelerated hardware for high performance ML training and inference.
 
Amazon EC2 P4d instances are equipped with NVIDIA A100 Tensor Core GPUs, which are well-suited for both training and inference tasks in machine learning. AWS Trainium is the second-generation ML accelerator that AWS has purpose-built for DL training of 100B+ parameter models.

Meanwhile, AWS Inferentia2-based Amazon EC2 Inf2 instances are designed to deliver high performance at the lowest cost in Amazon EC2 for your DL inference and generative AI applications.

Consider

When solving a business problem with AWS ML services, consideration of several key criteria can help ensure success.

  • The first step in the ML lifecycle is to frame the business problem. Understanding the problem you are trying to solve is essential for choosing the right AWS ML service, as different services are designed to address different problems. It is also important to determine whether ML is an appropriate solution for the business problem at hand.

    Once you have clearly articulated the business problem, you may start by choosing from among a range of purpose-built AWS AI services (in areas such as speech, vision, and documents).

    Amazon SageMaker provides fully managed infrastructure if you need to build and train your own models. AWS also offers an array of advanced ML frameworks and infrastructure choices for the cases where you require highly customized and specialized ML models.

    And finally, AWS offers a broad set of popular foundation models for building new applications with generative AI.

  • AWS offers a spectrum of ML services that cater to varying levels of management overhead, depending on how much control and customization you need.

    At one end of the spectrum are the fully managed AI services. They require minimal management overhead as AWS handles all aspects of the ML process, from data processing to model training and deployment. These services are ideal for incorporating ML into your applications without having to worry about the underlying infrastructure or complex ML algorithms.

    In the middle are the Amazon SageMaker services, which balance customization and management overhead. These services allow you to build, train, and deploy your own custom ML models on a managed infrastructure provided by AWS. While you retain control over the development of your models, you don't have to worry about the underlying infrastructure, as it is managed by AWS.

    At the other end of the spectrum are the lower-level ML frameworks and infrastructure services, which require the most management overhead. You have full control over your ML process, from data preprocessing to model development and deployment.

    These services require you to manage your own infrastructure and handle all aspects of the ML process. They are ideal when you need a high degree of customization and control over your ML models and when you have the technical expertise to manage your own infrastructure.

    Finally, AWS offers tools to help you easily build generative AI applications. Amazon Bedrock gives you access to foundation models from the AI startup model providers, including AI21, Anthropic, and Stability AI, and exclusive access to the Titan family of foundation models developed by AWS. Amazon CodeWhisperer uses generative AI under the hood to provide code suggestions in real time, based on your comments and prior code.

  • Choose the ML algorithm for the problem you are trying to solve. This depends on the type of data you are working with, as well as the desired outcomes. Here is how each of the major AWS AI/ML service categories empowers you to work with its algorithms:

    • Specialized AI services: These services offer a limited ability to customize the ML algorithm, as they are pre-trained models optimized for specific tasks. Customers can typically customize the input data and some parameters, but do not have access to the underlying ML models or the ability to build their own models.
    • Amazon SageMaker: This service provides the most flexibility and control over the ML algorithm. Customers can use SageMaker to build custom models using their own algorithms and frameworks, or use pre-built models and algorithms provided by AWS. This allows for a high degree of customization and control over the ML process.
    • Lower-level ML frameworks and infrastructure: These services offer the most flexibility and control over the ML algorithm. Customers can use these services to build highly customized ML models using their own algorithms and frameworks. However, using these services requires significant ML expertise and may not be feasible for all customers.

       
  • If you need a private endpoint in your VPC, your options depend on the layer of AWS ML services you are using. These include:

    • Specialized AI services: Most specialized AI services do not currently support private endpoints in VPCs. However, Amazon Rekognition Custom Labels and Amazon Comprehend Custom can be accessed using VPC endpoints.
    • Core AI services: Amazon Translate, Amazon Transcribe, and Amazon Comprehend all support VPC endpoints.
    • Amazon SageMaker: SageMaker provides built-in support for VPC endpoints, allowing you to deploy your trained models as endpoints accessible only from within your VPC.
    • Lower-level ML frameworks and infrastructure: You can deploy your models on Amazon EC2 instances or in containers within your VPC, providing complete control over the networking configuration.
  • Higher-level AI services, such as Amazon Rekognition and Amazon Transcribe, are designed to handle a wide variety of use cases. They typically offer high performance in terms of speed but may not meet certain latency requirements.

    Using Amazon SageMaker is generally faster than building custom models on lower-level ML frameworks and infrastructure due to its fully managed service and optimized deployment options. While a highly optimized custom model may outperform SageMaker, it requires significant expertise and resources to build.

  • The accuracy of AWS ML services varies based on the specific use case and level of customization required. Higher-level AI services, such as Amazon Rekognition, are built on pre-trained models that have been optimized for specific tasks and offer high accuracy in many use cases.

    In some cases, you may choose to use Amazon SageMaker, which provides a more flexible and customizable platform for building and training custom ML models. By building your own models, you may be able to achieve even higher accuracy than what is possible with pre-trained models.

    At the lowest level, you can use ML frameworks and infrastructure, such as TensorFlow and Apache MXNet, to build highly customized models that offer the highest possible accuracy for your specific use case.

  • AWS builds foundation models (FMs) with responsible AI in mind at each stage of its development process. Throughout design, development, deployment, and operations, AWS considers a range of factors, including:

    1. Accuracy (how closely a summary matches the underlying document; whether a biography is factually correct);
    2. Fairness (whether outputs treat demographic groups similarly);
    3. Intellectual property and copyright considerations;
    4. Appropriate usage (filtering out user requests for legal advice, medical diagnoses, or illegal activities);
    5. Toxicity (hate speech, profanity, and insults);
    6. Privacy (protecting personal information and customer prompts).

    AWS builds solutions to address these issues into the processes used for acquiring training data, into the FMs themselves, and into the technology used to pre-process user prompts and post-process outputs.

Choose

Now that you know the criteria by which you will be evaluating your ML service options, you are ready to choose which AWS ML service is right for your organizational needs.

The following table highlights which ML services are optimized for which circumstances. Use it to help determine the AWS ML service that is the best fit for your use case.

 

AI/ML services and supporting technologies
When would you use it?
What is it optimized for?

Amazon Comprehend

Amazon Comprehend allows you to do natural language processing tasks, such as sentiment analysis, entity recognition, topic modeling, and language detection, on your text data.
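As a minimal sketch of how you might call Amazon Comprehend with boto3 (the AWS SDK for Python): the helper below builds the request parameters and is runnable on its own, while the live API call is shown in comments because it requires AWS credentials.

```python
def build_sentiment_request(text, language_code="en"):
    """Request parameters for Comprehend's DetectSentiment API."""
    return {"Text": text, "LanguageCode": language_code}

# With AWS credentials configured, the call itself looks like:
#
#   import boto3
#   comprehend = boto3.client("comprehend")
#   resp = comprehend.detect_sentiment(**build_sentiment_request("I loved this hotel!"))
#   resp["Sentiment"]       # POSITIVE, NEGATIVE, NEUTRAL, or MIXED
#   resp["SentimentScore"]  # a confidence score per sentiment label
```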


Amazon Lex

Amazon Lex helps you build chatbots and voice assistants that can interact with users in a natural language interface. It provides pre-built dialog management, language understanding, and speech recognition capabilities.
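A sketch of sending a user utterance to a Lex V2 bot with boto3; the bot and alias IDs below are placeholders, and the live call is shown in comments because it requires a deployed bot and AWS credentials.

```python
def build_recognize_text_request(bot_id, bot_alias_id, session_id, text,
                                 locale="en_US"):
    """Request parameters for Lex V2's RecognizeText API
    (bot_id and bot_alias_id are placeholders for your own bot)."""
    return {
        "botId": bot_id,
        "botAliasId": bot_alias_id,
        "localeId": locale,
        "sessionId": session_id,
        "text": text,
    }

# With a deployed bot and AWS credentials configured:
#
#   import boto3
#   lex = boto3.client("lexv2-runtime")
#   resp = lex.recognize_text(**build_recognize_text_request(
#       "EXAMPLEBOT1", "TSTALIASID", "user-42", "I'd like to book a room"))
#   resp["messages"]  # the bot's reply, driven by its dialog management
```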


Amazon Polly

Use Amazon Polly to convert text into lifelike speech, making it easier to create voice-enabled applications and services.
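A sketch of synthesizing speech with boto3; the helper builds the request parameters, and the live call (which writes an MP3 file) is shown in comments because it requires AWS credentials.

```python
def build_speech_request(text, voice_id="Joanna", output_format="mp3"):
    """Request parameters for Polly's SynthesizeSpeech API."""
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

# With AWS credentials configured:
#
#   import boto3
#   polly = boto3.client("polly")
#   resp = polly.synthesize_speech(**build_speech_request("Hello from Amazon Polly"))
#   with open("hello.mp3", "wb") as f:
#       f.write(resp["AudioStream"].read())  # the synthesized audio
```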


Amazon Rekognition

Amazon Rekognition is designed to allow you to add image and video analysis to your applications. You provide an image or video to the Amazon Rekognition API, and the service can identify objects, people, text, scenes, and activities. It can also detect inappropriate content.
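A sketch of label detection with boto3; the bucket and object names below are placeholders, and the live call is in comments because it requires AWS credentials. The helper, which filters a DetectLabels response by confidence, runs on a sample response.

```python
def confident_labels(detect_labels_response, min_confidence=90.0):
    """Names of detected labels at or above a confidence threshold."""
    return [label["Name"]
            for label in detect_labels_response["Labels"]
            if label["Confidence"] >= min_confidence]

# With AWS credentials configured, analyzing an image stored in S3 looks like:
#
#   import boto3
#   rekognition = boto3.client("rekognition")
#   resp = rekognition.detect_labels(
#       Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
#       MaxLabels=10)
#   confident_labels(resp)

sample = {"Labels": [{"Name": "Person", "Confidence": 99.1},
                     {"Name": "Tree", "Confidence": 55.2}]}
print(confident_labels(sample))  # ['Person']
```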


Amazon Textract

Amazon Textract helps you extract text and data from scanned documents, forms, and tables, making it easier to store, analyze, and manage such data.
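A sketch of text detection with boto3; the file name is a placeholder and the live call is in comments because it requires AWS credentials. The helper, which pulls the detected lines out of a DetectDocumentText response, runs on a sample response.

```python
def detected_lines(textract_response):
    """Text of each LINE block in a DetectDocumentText response."""
    return [block["Text"]
            for block in textract_response["Blocks"]
            if block["BlockType"] == "LINE"]

# With AWS credentials configured:
#
#   import boto3
#   textract = boto3.client("textract")
#   with open("invoice.png", "rb") as f:
#       resp = textract.detect_document_text(Document={"Bytes": f.read()})
#   detected_lines(resp)

sample = {"Blocks": [{"BlockType": "LINE", "Text": "Invoice #123"},
                     {"BlockType": "WORD", "Text": "Invoice"}]}
print(detected_lines(sample))  # ['Invoice #123']
```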


Amazon Transcribe

Amazon Transcribe allows customers to automatically transcribe audio and video recordings into text. This can save time and effort compared to manual transcription.
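A sketch of starting a transcription job with boto3; the job name and S3 URI are placeholders, and the live call is in comments because it requires AWS credentials and an audio file in S3.

```python
def build_transcription_job(job_name, media_uri, language_code="en-US"):
    """Request parameters for Transcribe's StartTranscriptionJob API
    (the S3 URI is a placeholder for your own audio file)."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "LanguageCode": language_code,
    }

# With AWS credentials configured:
#
#   import boto3
#   transcribe = boto3.client("transcribe")
#   transcribe.start_transcription_job(**build_transcription_job(
#       "meeting-notes-job", "s3://my-bucket/meeting.mp3"))
#   # Transcription is asynchronous: poll get_transcription_job until the
#   # status is COMPLETED, then fetch the text from the TranscriptFileUri.
```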


Amazon Translate

Use this service to translate text from one language to another in real-time. This is particularly helpful if your business operates in multiple countries or needs to communicate with non-native speakers.
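A sketch of translating text with boto3; the live call is in comments because it requires AWS credentials. Passing "auto" as the source language asks the service to detect it.

```python
def build_translate_request(text, source="auto", target="es"):
    """Request parameters for Translate's TranslateText API;
    "auto" asks the service to detect the source language."""
    return {
        "Text": text,
        "SourceLanguageCode": source,
        "TargetLanguageCode": target,
    }

# With AWS credentials configured:
#
#   import boto3
#   translate = boto3.client("translate")
#   resp = translate.translate_text(**build_translate_request("Hello, world"))
#   resp["TranslatedText"]  # the Spanish translation in this example
```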

Use the core AI services provided by AWS when you require specific, pre-built functionalities to be integrated into your applications, without the need for extensive customizations or machine learning expertise.
These services are designed to be easy to use and do not require much coding, configuration, or ML expertise.

Amazon Bedrock

Amazon Bedrock is a fully managed service that makes foundation models from leading AI startups and Amazon available via an API, so you can choose from a wide range of foundation models to find the model that is best suited for you.

Use Amazon Bedrock to get access to foundation models from leading AI startups and Amazon via an API.
Amazon Bedrock is optimized for flexibility – letting you choose from a range of FMs to find the best model for your needs.
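As a sketch of invoking a foundation model through Bedrock with boto3: request body shapes vary by model provider and may change, so treat the Claude-style body below as an assumption, not a definitive reference. The live call is in comments because it requires Bedrock access and AWS credentials.

```python
import json

def build_claude_body(prompt, max_tokens=200):
    """Request body for an Anthropic Claude text-completion model on
    Bedrock; body shapes differ per model provider and may change."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })

# With Bedrock model access enabled and AWS credentials configured:
#
#   import boto3
#   bedrock = boto3.client("bedrock-runtime")
#   resp = bedrock.invoke_model(
#       modelId="anthropic.claude-v2",
#       body=build_claude_body("Summarize what a foundation model is."))
#   json.loads(resp["body"].read())["completion"]  # the model's text output
```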

Amazon CodeWhisperer

Amazon CodeWhisperer is a real-time AI coding companion that helps with creating code for routine or time-consuming, undifferentiated tasks, working with unfamiliar APIs or SDKs, making correct and effective use of AWS APIs, and other common coding scenarios such as reading and writing files, image processing, and writing unit tests.

Use Amazon CodeWhisperer when you need ML-powered code recommendations in real time.
Amazon CodeWhisperer is optimized for providing you with real-time, useful suggestions based on your existing code and comments.

SageMaker Autopilot

Amazon SageMaker Autopilot is designed to reduce the heavy lifting of building ML models. You simply provide a tabular dataset and select the target column to predict, and SageMaker Autopilot will automatically explore different solutions to find the best model. You then can directly deploy the model to production with just one click or iterate on the recommended solutions to further improve the model quality.


SageMaker Canvas

Amazon SageMaker Canvas gives you the ability to use machine learning to generate predictions without needing to write any code.


SageMaker Data Wrangler

Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare tabular and image data for ML. With SageMaker Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow (including data selection, cleansing, exploration, visualization, and processing at scale) from a single visual interface. 


SageMaker Ground Truth

SageMaker Ground Truth is a managed service for labeling data to train and improve machine learning models. It provides a highly accurate and efficient way to label large datasets by using a combination of human annotators and machine learning algorithms. SageMaker Ground Truth supports a wide range of data types, including text, image, video, and audio, and integrates seamlessly with other SageMaker services for end-to-end machine learning workflows.


SageMaker JumpStart

SageMaker JumpStart provides pretrained, open-source models for a wide range of problem types to help you get started with machine learning. You can incrementally train and tune these models before deployment. JumpStart also provides solution templates that set up infrastructure for common use cases, and executable example notebooks for machine learning with SageMaker.


SageMaker Pipelines

Using Amazon SageMaker Pipelines, you can create ML workflows with a Python SDK, and then visualize and manage your workflow using Amazon SageMaker Studio. You can also store and reuse the workflow steps you create.
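To make the idea of a pipeline concrete: a pipeline is a set of steps with dependencies, executed in order. The dictionary and step names below are hypothetical, and the SageMaker Python SDK calls are shown in comments as a sketch rather than verified verbatim.

```python
# Conceptual shape of a pipeline: named steps with dependencies.
# With the SageMaker Python SDK, the equivalent (sketch) is roughly:
#
#   from sagemaker.workflow.pipeline import Pipeline
#   pipeline = Pipeline(name="my-pipeline", steps=[process_step, train_step])
#   pipeline.upsert(role_arn=role)  # register or update the workflow
#   pipeline.start()                # run it

pipeline_definition = {
    "name": "churn-pipeline",  # hypothetical pipeline name
    "steps": [
        {"name": "Preprocess", "depends_on": []},
        {"name": "Train", "depends_on": ["Preprocess"]},
        {"name": "Evaluate", "depends_on": ["Train"]},
    ],
}

def execution_order(definition):
    """Order steps so each runs only after its dependencies
    (a simple topological sort; assumes the graph has no cycles)."""
    ordered, done = [], set()
    steps = definition["steps"]
    while len(ordered) < len(steps):
        for step in steps:
            if step["name"] not in done and all(d in done for d in step["depends_on"]):
                ordered.append(step["name"])
                done.add(step["name"])
    return ordered

print(execution_order(pipeline_definition))  # ['Preprocess', 'Train', 'Evaluate']
```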


SageMaker Studio

A fully integrated development environment (IDE) that enables developers to build, train, and deploy machine learning models at scale. It provides a single web-based interface to manage the entire machine learning lifecycle, from data preparation and model training to deployment and monitoring. SageMaker Studio also supports popular tools such as Jupyter notebooks, Git, and TensorFlow, and offers a suite of pre-built algorithms for common use cases.


SageMaker Studio Lab

Amazon SageMaker Studio Lab is a cloud-based IDE for learning and experimenting with machine learning using pre-built Jupyter notebooks. It includes a range of pre-built notebooks covering topics such as image recognition, natural language processing, and anomaly detection.

Use these services when you need more customized machine learning models or workflows that go beyond the pre-built functionalities offered by the core AI services.
These services are optimized for building and training custom machine learning models, large-scale training on multiple instances or GPU clusters, more control over machine learning model deployment, real-time inference, and for building end-to-end workflows.

Apache MXNet

Apache MXNet is an open-source deep learning framework that supports multiple programming languages, including Python, Scala, and R. It is known for its scalability and speed, and offers a range of high-level APIs for building and training neural networks, as well as low-level APIs for advanced users.


Hugging Face on Amazon SageMaker

Hugging Face on Amazon SageMaker lets you use Hugging Face's open-source library for natural language processing (NLP), which provides a wide range of pre-trained models and tools for working with text data. It is known for its ease of use and high performance, and is widely used for tasks such as text classification, sentiment analysis, and language translation.


PyTorch on AWS

PyTorch on AWS is an open-source machine learning framework that offers dynamic computation graphs and automatic differentiation for building and training neural networks. PyTorch is known for its ease of use and flexibility, and has a large and active community of developers contributing to its development.


TensorFlow on AWS

TensorFlow is an open-source machine learning framework developed by Google that is widely used for building and training neural networks. It is known for its scalability, speed, and flexibility, and supports a range of programming languages including Python, C++, and Java. TensorFlow offers a wide range of pre-built models and tools for image and text processing, as well as low-level APIs for advanced users who require greater control over their models.

Use the ML frameworks and infrastructure provided by AWS when you require even greater flexibility and control over your machine learning workflows, and are willing to manage the underlying infrastructure and resources yourself.
These services are optimized to provide specific custom hardware configurations, access to deep learning frameworks not offered by SageMaker, more control over your data storage and processing, and custom algorithms and models.

AWS Inferentia and AWS Inferentia2

The first-generation AWS Inferentia accelerator powers Amazon Elastic Compute Cloud (Amazon EC2) Inf1 instances, which deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable Amazon EC2 instances. The AWS Inferentia2 accelerator improves on first-generation AWS Inferentia, delivering up to 4x higher throughput and up to 10x lower latency compared to Inferentia.


AWS Trainium

AWS Trainium is the second-generation machine learning (ML) accelerator that AWS purpose built for deep learning training of 100B+ parameter models. Each Amazon Elastic Compute Cloud (EC2) Trn1 instance deploys up to 16 AWS Trainium accelerators for use in deep learning (DL) training in the cloud.

Use accelerated hardware when you want to achieve the lowest cost for training models and need to run inference in the cloud.
Accelerated hardware is optimized for supporting the cost-effective deployment of foundation models at scale.

Use

Now that you have learned about the criteria you need to apply in choosing an AWS ML service, we hope you have been able to select which AWS AI/ML service(s) are optimized for your organizational needs.

To explore how to use and learn more about your chosen service, we have provided three sets of pathways to explore how each service works. The first set of pathways provides in-depth documentation, hands-on tutorials, and resources to get started with Amazon Comprehend, Amazon Textract, Amazon Translate, Amazon Lex, Amazon Polly, Amazon Rekognition, and Amazon Transcribe.

  • Amazon Comprehend
  • Get started with Amazon Comprehend

    This exercise uses the Amazon Comprehend console to create and run an asynchronous entity detection job. It assumes that you are familiar with Amazon Simple Storage Service (Amazon S3).

    Do the exercise »

    Analyze insights in text with Amazon Comprehend

    In this step-by-step tutorial, you learn how to use Amazon Comprehend to analyze and derive insights from text. In this scenario, you’re planning a trip and want to find helpful travel books.

    Use the tutorial »

    Amazon Comprehend Pricing

    This short guide provides information on Amazon Comprehend pricing, along with examples.

    Get pricing guidance »

  • Amazon Textract
  • Getting Started with Amazon Textract

    This guide explains how Amazon Textract can be used with formatted text to detect words and lines of words that are located close to each other, as well as analyze a document for items such as related text, tables, key-value pairs, and selection elements.

    Read the user guide »

    Extract text and structured data with Amazon Textract

    In this tutorial, you learn how to use Amazon Textract to extract text and structured data from a document.

    Use the tutorial »

    AWS Power Hour: Machine Learning

    Dive into Amazon Textract in this episode, spend time in the AWS Management Console, and review code samples that will help you understand how to make the most of service APIs.

    Watch the video »

  • Amazon Translate
  • Getting started with Amazon Translate using the console

    The easiest way to get started with Amazon Translate is to use the console to translate some text. You can translate up to 10,000 characters using the console. This guide shows you how.

    Read the guide »

    Translate Text Between Languages in the Cloud

    In this tutorial, you work for an international luggage manufacturing firm and need to understand what customers are saying about your product in reviews written in the local market language - French.

    Use the tutorial »

    Amazon Translate pricing

    Your guide to Amazon Translate pricing, including Free Tier - which provides 2 million characters per month for 12 months.

    Review Amazon Translate pricing »

    Accelerate multilingual workflows with a customizable translation solution

    This blog shows how to build a unified translation solution with customization features using Amazon Translate and other AWS services.

    Read the blog »

  • Amazon Lex
  • Amazon Lex V2 Developer Guide

    This is a guide to using Amazon Lex V2, an AWS service for building conversational interfaces for applications using voice and text. It combines natural language understanding (NLU) and automatic speech recognition (ASR) capabilities to allow you to build more lifelike, conversational user experiences, and create new categories of products.

    Read the guide »

    Introduction to Amazon Lex

    This course introduces you to the Amazon Lex conversational service, including examples that show you how to create a bot and deploy it to different chat services.

    Take the course »

    Exploring Generative AI in conversational experiences: An Introduction with Amazon Lex, Langchain, and SageMaker Jumpstart

    This blog post discusses the use of generative AI in conversation experiences.

    Read the blog post »

  • Amazon Polly
  • What is Amazon Polly?

    This developer guide offers you a complete overview of the cloud service that converts text into lifelike speech, and can be used to develop applications to increase your customer engagement and accessibility.

    Use the guide »

    Highlight text as it’s being spoken using Amazon Polly

    This blog post introduces you to approaches for highlighting text as it’s being spoken to add visual capabilities to audio in books, websites, blogs, and other digital experiences.

    Read the blog »

    Create audio for content in multiple languages with the same TTS voice persona in Amazon Polly

    This blog post explains Neural Text-to-Speech (NTTS) and discusses how a broad portfolio of available voices, providing a range of distinct speakers in supported languages, can work for you.

    Read the blog »

  • Amazon Rekognition
  • What is Amazon Rekognition?

    This developer guide shows you how to use this service to add image and video analysis to your applications.  

    Use the guide »

    Hands-on Rekognition: Automated Image and Video Analysis

    This course shows you how facial recognition works with streaming video, along with code examples and key points at a self-guided pace.  

    Use the tutorial »

    Amazon Rekognition FAQs

    Learn the basics of Amazon Rekognition and how it can help you improve your deep learning and visually analyze your applications.

    Read the FAQs »

  • Amazon Transcribe
  • What is Amazon Transcribe?

    This developer guide explores the AWS automatic speech recognition service using ML to convert audio to text. It shows you how to use this service as a standalone transcription or add speech-to-text capability to any application.

    Read the guide »

    Amazon Transcribe Pricing

    This resource offers an introduction to the AWS pay-as-you-go transcription, including custom language model options. It also covers the Amazon Transcribe Free Tier.

    Review the pricing »

    Create an audio transcript with Amazon Transcribe

    This tutorial shows you how to use Amazon Transcribe to create a text transcript of recorded audio files. It provides a real-world use case scenario for testing against your needs.

    Use the tutorial »

    Build an Amazon Transcribe streaming app

    This developer guide contains code that helps you build an app to record, transcribe, and translate live audio in real-time, with results emailed directly to you.

    Use the guide »

The second set of AI/ML AWS service pathways provide in-depth documentation, hands-on tutorials, and resources to get started with the services in the Amazon SageMaker family.

  • SageMaker
  • How Amazon SageMaker works

    This guide provides an overview of machine learning and explains how SageMaker works.

    Read the guide >>

    Getting started with Amazon SageMaker

    Use this guide to set up SageMaker. It shows you how to join an Amazon SageMaker Domain, giving you access to Amazon SageMaker Studio and RStudio on SageMaker.

    Read the guide >>
     

    Use Apache Spark with Amazon SageMaker

    This guide is for developers who want to use Apache Spark for preprocessing data and SageMaker for model training and hosting. 

    Read the guide >>
     

    Use Docker containers to build models

    Amazon SageMaker makes extensive use of Docker containers for build and runtime tasks. It provides pre-built Docker images for its built-in algorithms and the supported deep learning frameworks used for training and inference. This guide shows how to deploy these containers.

    Read the guide >>
     

    Machine learning frameworks and languages

    You can use Python and R natively in Amazon SageMaker notebook kernels. There are also kernels that support specific frameworks. This guide explores how to get started with SageMaker using the Amazon SageMaker Python SDK.

    Read the guide >>

  • SageMaker Autopilot
  • Create an Amazon SageMaker Autopilot experiment for tabular data

    This guide shows you how to create an Amazon SageMaker Autopilot experiment (that is, start an Autopilot job in SageMaker) to explore, pre-process, and train various model candidates on a tabular dataset.

    Use the guide >>
     

    Automatically create machine learning models

    This tutorial explains how to use Amazon SageMaker Autopilot to automatically build, train, and tune a ML model, and deploy the model to make predictions.

    Use the tutorial >>

    Explore modeling with Amazon SageMaker Autopilot with these example notebooks

    Amazon SageMaker Autopilot provides example notebooks for direct marketing, customer churn prediction, and bringing your own data processing code to Amazon SageMaker Autopilot.

    Explore the notebooks >>
     

  • SageMaker Canvas
  • Get started using Amazon SageMaker Canvas

    This guide tells you how to get started with using SageMaker Canvas.

    Explore the guide >>


     

    Generate machine learning predictions without writing code

    This tutorial explains how to use Amazon SageMaker Canvas to build ML models and generate accurate predictions without writing a single line of code.

    Use the tutorial >>
     

    Dive deeper into SageMaker Canvas 

    This blog post offers an in-depth look at SageMaker Canvas and its visual, no code ML capabilities.

    Go to the blog post >>
     

    Use Amazon SageMaker Canvas to make your first ML Model

    This lab demonstrates how to use Amazon SageMaker Canvas to create an ML model to assess customer retention, based on an email campaign for new products and services.

    Go to the lab >>
     

  • SageMaker Data Wrangler
  • Getting started with Amazon SageMaker Data Wrangler

    This guide explains how to set up SageMaker Data Wrangler and then provides a walkthrough using an existing example dataset.

    Go to the guide >>
     

    Prepare training data for machine learning with minimal code

    This tutorial explains how to prepare data for ML using Amazon SageMaker Data Wrangler.

    Use the tutorial >>

    SageMaker Data Wrangler deep dive workshop

    This workshop shows you how to apply the appropriate analysis types to your dataset to detect anomalies and issues, use the resulting insights to formulate remedial transformations on your dataset, and test the choice and sequence of transformations using the quick modeling options provided by SageMaker Data Wrangler.

    Go to the workshop >>

     

  • SageMaker Ground Truth/Ground Truth Plus
  • Getting Started with Amazon Ground Truth

    This guide explains how to use the console to create a labeling job, assign a public or private workforce, and send the labeling job to your workforce. It also shows how to monitor the progress of a labeling job.

    Go to the guide >>
     

    Label Training Data for Machine Learning

    In this tutorial, learn how to set up a labeling job in Amazon SageMaker Ground Truth to annotate training data for your ML model. A labeled dataset is critical to supervised training of an ML model.

    Use the tutorial »

    Getting started with Amazon Ground Truth Plus

    This guide explores how to complete the necessary steps to start an Amazon SageMaker Ground Truth Plus project, review labels, and satisfy SageMaker Ground Truth Plus prerequisites.

    Use the guide >>
     

    Get started with Amazon Ground Truth

    This short (9:37) video shows you how to get started with labeling your data in minutes through the SageMaker Ground Truth console.

    Watch the video >>
     

    Amazon SageMaker Ground Truth Plus – create training datasets without code or in-house resources

    This blog post introduces Ground Truth Plus, a turnkey service that uses an expert workforce to deliver high-quality training datasets quickly and reduces costs by up to 40 percent.

    Read the blog post »
     

  • SageMaker JumpStart
  • Get started with machine learning with SageMaker JumpStart

    SageMaker JumpStart provides pretrained, open-source models for a wide range of problem types to help you get started with machine learning. You can incrementally train and tune these models before deployment. JumpStart also provides solution templates that set up infrastructure for common use cases, and executable example notebooks for machine learning with SageMaker.

    Explore SageMaker JumpStart »
     

    Get Started with your machine learning project quickly using Amazon SageMaker JumpStart

    This tutorial explains how to fast-track your ML project using pretrained models and prebuilt solutions offered by Amazon SageMaker JumpStart. You can then deploy the selected model through Amazon SageMaker Studio notebooks.

    Use the tutorial »

    Get hands-on with Amazon SageMaker JumpStart with this Immersion Day workshop

    This workshop shows you how the low-code ML capabilities of Amazon SageMaker Data Wrangler, Autopilot, and JumpStart make it easier to experiment faster and bring highly accurate models to production.

    Go to the workshop »
     

  • SageMaker Pipelines
  • Getting Started with Amazon SageMaker Pipelines

    This getting started guide shows you how to create end-to-end workflows that manage and deploy SageMaker jobs. SageMaker Pipelines comes with SageMaker Python SDK integration, so you can build each step of your pipeline using a Python-based interface.

    Go to the guide »
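
    At its core, a pipeline definition is a set of named steps plus the dependencies between them, which the service resolves into an execution order. The stdlib-only sketch below illustrates that idea; SageMaker Pipelines expresses the same thing with step objects from the SageMaker Python SDK, and the step names here are invented for the example.

```python
from graphlib import TopologicalSorter

# Toy illustration of what a pipeline definition captures: named steps
# mapped to the steps they depend on. SageMaker Pipelines builds the
# same kind of graph from SDK step objects; these names are invented.
steps = {
    "preprocess": [],
    "train": ["preprocess"],
    "evaluate": ["train"],
    "register-model": ["evaluate"],
}

# Resolve dependencies into an execution order, as a pipeline engine would.
order = list(TopologicalSorter(steps).static_order())
print(order)
```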
     

    Automate machine learning workflows

    In this tutorial, learn how to create and automate end-to-end machine learning (ML) workflows using Amazon SageMaker Pipelines, Amazon SageMaker Model Registry, and Amazon SageMaker Clarify.

    Use the tutorial »

    How to create fully automated ML workflows with Amazon SageMaker Pipelines

    In this session from re:Invent 2020, learn about Amazon SageMaker Pipelines, the world’s first ML CI/CD service designed to be accessible for every developer and data scientist. SageMaker Pipelines brings CI/CD pipelines to ML, reducing the coding time required.

    Watch the video »
     

  • SageMaker Studio
  • Build and train a machine learning model locally

    In this tutorial, you learn how to build and train an ML model locally within your Amazon SageMaker Studio notebook.

    Use the tutorial »
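
    As a minimal, hypothetical stand-in for the tutorial's notebook (which trains a real model with ML libraries), here is what "train a model locally" means at its smallest: fitting a least-squares line to data in plain Python.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in plain Python.

    A toy stand-in for local training; the tutorial's notebook
    trains a real model with ML libraries instead.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# These points lie exactly on y = 2x + 1, so the fit recovers a=2, b=1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)
```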

    SageMaker Studio integration with EMR workshop

    In this workshop, you'll learn how to use distributed processing at scale to prepare data and then train machine learning models.

    Go to the workshop »

The third set of AWS AI/ML service pathways provides in-depth documentation, hands-on tutorials, and resources to get started with Amazon Bedrock, Amazon CodeWhisperer, AWS Trainium, AWS Inferentia, and Amazon Titan.

  • Amazon Bedrock
  • Overview of Amazon Bedrock

    Amazon Bedrock is a fully managed service that makes foundation models from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model best suited for your use case. The overview provides details on how and when to use it.

    Read the overview »
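
    Bedrock exposes its models through a runtime API: with the AWS SDK for Python you call `invoke_model` with a model ID and a JSON request body. The body's shape differs per model family; the `inputText` field below follows the Amazon Titan text format and is shown as an assumption, not a universal schema. The actual call requires AWS credentials and Bedrock model access, so it is sketched but not executed here.

```python
import json

def build_titan_request(prompt, max_tokens=256):
    """Build a Titan-style text request body (assumed shape; each model
    family on Bedrock defines its own request schema)."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens},
    })

def invoke(prompt):
    """Hedged sketch of the runtime call; needs an AWS environment
    with Bedrock access, so it is defined but not run here."""
    import boto3  # imported lazily so the sketch loads without AWS installed
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",
        contentType="application/json",
        accept="application/json",
        body=build_titan_request(prompt),
    )
    return json.loads(response["body"].read())

# Only the payload builder runs locally; the invoke() call is illustrative.
print(json.loads(build_titan_request("Summarize this document."))["inputText"])
```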

    Announcing new tools for building with generative AI on AWS

    This blog post provides background on the development of Amazon Bedrock, explains how it fits into the broader AWS approach to AI and ML, and surveys potential uses for AWS generative AI services.

    Read the blog »

    Demystifying generative AI

    In this video, Dr. Werner Vogels, Amazon CTO and Swami Sivasubramanian, AWS VP of database, analytics, and ML, sit down to discuss the broad landscape of generative AI, why it’s not hype, and how AWS is democratizing access to large language and foundation models.

    Watch the video »

  • Amazon CodeWhisperer
  • Getting started with Amazon CodeWhisperer

    Learn how to set up CodeWhisperer for use with each of four development environments: AWS Toolkit for JetBrains, AWS Toolkit for Visual Studio Code, the AWS Lambda console, and AWS Cloud9.

    Read the guide »

    What is Amazon CodeWhisperer?

    This blog explains that CodeWhisperer is designed to help with creating code for routine or time-consuming, undifferentiated tasks; working with unfamiliar APIs or SDKs; making correct and effective use of AWS APIs; and other common coding scenarios such as reading and writing files, image processing, and writing unit tests.

    Read the blog »
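
    The comment-driven workflow the post describes looks like this in practice: you write a natural-language comment, and CodeWhisperer proposes an implementation that you accept or edit. The function below is a hand-written example of the kind of suggestion you might see for a file-handling comment, not actual CodeWhisperer output.

```python
import csv
import io

# Prompt comment a developer might write:
# "function to read a CSV string and return the rows as dictionaries"
def read_csv_rows(csv_text):
    """The sort of implementation a developer might accept from an
    AI coding companion for the comment above (hand-written here)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

print(read_csv_rows("name,score\nada,10\n"))
```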

    Amazon CodeWhisperer Workshop

    In this workshop, you build a full-fledged, event-driven, serverless application for image recognition. With the aid of Amazon CodeWhisperer, you'll write code that runs on AWS Lambda and interacts with Amazon Rekognition, Amazon DynamoDB, Amazon SNS, Amazon SQS, Amazon S3, and third-party HTTP APIs.

    Use the workshop »

  • AWS Trainium
  • Scaling distributed training with AWS Trainium and Amazon EKS

    In late 2022, AWS announced the general availability of Amazon EC2 Trn1 instances powered by AWS Trainium, a purpose-built ML accelerator optimized to provide a high-performance, cost-effective, and massively scalable platform for training deep learning models in the cloud. This blog post explores how you can benefit from these instances.

    Read the blog »

    Overview of AWS Trainium

    This is an overview of AWS Trainium, the second-generation machine learning (ML) accelerator that AWS purpose built for deep learning training of 100B+ parameter models. Each Amazon Elastic Compute Cloud (EC2) Trn1 instance deploys up to 16 AWS Trainium accelerators to deliver a high-performance, low-cost solution for deep learning (DL) training in the cloud.

    Read the overview »

    Recommended Trainium Instances

    The developer guide explores how AWS Trainium instances are designed to provide high performance and cost efficiency for deep learning model training workloads.

    Read the guide »

  • AWS Inferentia
  • Overview of AWS Inferentia

    This overview of AWS Inferentia explains how accelerators are designed by AWS to deliver high performance at the lowest cost for your deep learning (DL) inference applications.

    Read the overview »

    AWS Inferentia2 builds on AWS Inferentia1 by delivering 4x higher throughput and 10x lower latency

    This blog post explains what AWS Inferentia2 is optimized for, and explores how it was designed from the ground up to deliver higher performance while lowering the cost of LLM and generative AI inference.

    Read the blog »

    Machine learning inference using AWS Inferentia

    Learn how to create an Amazon EKS cluster with nodes running Amazon EC2 Inf1 instances and (optionally) deploy a sample application. Amazon EC2 Inf1 instances are powered by AWS Inferentia chips, which are custom built by AWS to provide high-performance, low-cost inference in the cloud.

    Read the guide »

     

  • Amazon Titan
  • Overview of Amazon Titan

    This overview discusses how Amazon Titan FMs are pretrained on large datasets, making them powerful, general-purpose models. It also looks at how you can use them as is, or privately customize them with your own data for a particular task, without annotating large volumes of data.

    Read the overview »

Explore

AI/ML architecture diagrams

These reference architecture diagrams show examples of AWS AI and ML services in use.

Explore architecture diagrams »

AI/ML whitepapers

Explore whitepapers to help you get started and learn best practices in choosing and using AI/ML services.

Explore whitepapers »

AI/ML solutions

Explore vetted solutions and architectural guidance for common use cases for AI and ML services.

Explore solutions »
