AI & Machine Learning

Take your ML models from prototype to production with Vertex AI

September 13, 2022
Nikita Namjoshi

Developer Advocate

You’re working on a new machine learning problem, and the first environment you use is a notebook. Your data is stored on your local machine, and you try out different model architectures and configurations, executing the cells of your notebook manually each time. This workflow is great for experimentation, but you quickly hit a wall when it comes time to take your experiments to production scale. Suddenly, you need to worry about much more than just getting the highest accuracy score.

Sound familiar?

Developing production applications or training large models requires additional tooling to help you scale beyond just code in a notebook, and using a cloud service provider can help. But that process can feel a bit daunting. 

To make things a little easier for you, we’ve created the Prototype to Production video series, which covers all the foundational concepts you’ll need in order to build, train, scale, and deploy machine learning models on Google Cloud using Vertex AI.

Let’s jump in and see what it takes to get from prototype to production!

Getting started with Notebooks for machine learning

Episode 1 of this series shows you how to create a managed notebook using Vertex AI Workbench. With your environment set up, you can explore data, test different hardware configurations, train models, and interact with other Google Cloud services.
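
Once the notebook is up, you can talk to Vertex AI directly from Python. As a minimal sketch of initializing the Vertex AI SDK from inside a Workbench notebook (the project ID, region, and bucket below are placeholders, not values from the series):

    from google.cloud import aiplatform

    # Placeholder project, region, and staging bucket; substitute your own.
    aiplatform.init(
        project="your-project-id",
        location="us-central1",
        staging_bucket="gs://your-staging-bucket",
    )

    # The SDK is now configured, so calls like listing models work directly.
    for model in aiplatform.Model.list():
        print(model.display_name)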

Storing data for machine learning

When working on machine learning problems, it’s easy to be laser-focused on model training. But the data is where it all really starts.

If you want to train models on Vertex AI, you first need to get your data into the cloud. In Episode 2, you’ll learn the basics of storing unstructured data for model training and see how to access training data from Vertex AI Workbench.
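
To make that concrete, here’s a minimal sketch of uploading a training file to Cloud Storage with the Python client library; the project ID, bucket, and file paths are placeholders:

    from google.cloud import storage

    # Placeholder project and bucket names; substitute your own.
    client = storage.Client(project="your-project-id")
    bucket = client.bucket("your-training-data-bucket")

    # Upload a local file; training jobs can then read it at
    # gs://your-training-data-bucket/flowers/daisy_001.jpg
    blob = bucket.blob("flowers/daisy_001.jpg")
    blob.upload_from_filename("data/daisy_001.jpg")

    # From a Workbench notebook, download it back, or simply reference
    # the gs:// URI directly in your training code.
    blob.download_to_filename("/tmp/daisy_001.jpg")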

Training custom models on Vertex AI

You might be wondering, why do I need a training service when I can just run model training directly in my notebook? Well, for models that take a long time to train, a notebook isn’t always the most convenient option. And if you’re building an application with ML, it’s unlikely that you’ll only need to train your model once. Over time, you’ll want to retrain your model to make sure it stays fresh and keeps producing valuable results. 

Manually executing the cells of your notebook might be the right option when you’re getting started with a new ML problem. But when you want to automate experimentation at scale, or retrain models for a production application, a managed ML training option will make things much easier.

Episode 3 shows you how to package up your training code with Docker and run a custom container training job on Vertex AI. Don’t worry if you’re new to Docker! This video and the accompanying codelab will cover all the commands you’ll need.
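To give a flavor of what the codelab builds up to, here’s a rough sketch of launching a custom container training job with the Vertex AI Python SDK; the container image URI and machine configuration are placeholder assumptions, not the exact values used in the video:

    from google.cloud import aiplatform

    aiplatform.init(
        project="your-project-id",
        location="us-central1",
        staging_bucket="gs://your-staging-bucket",
    )

    # Point the job at the training image you built with Docker and
    # pushed to a container registry (placeholder URI below).
    job = aiplatform.CustomContainerTrainingJob(
        display_name="custom-training-job",
        container_uri="us-central1-docker.pkg.dev/your-project-id/my-repo/trainer:latest",
    )

    # Vertex AI provisions the hardware, runs the container, and tears
    # everything down when training finishes.
    job.run(
        replica_count=1,
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",
        accelerator_count=1,
    )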

CODELAB: Training custom models with Vertex AI

How to get predictions from an ML model 

Machine learning is not just about training. What’s the point of all this work if we don’t actually use the model to do something? 

Just like with training, you could execute predictions directly from a notebook by calling model.predict. But when you want to get predictions for lots of data, or get low-latency predictions on the fly, you’re going to need something more than a notebook. When you’re ready to use your model to solve a real-world problem, you don’t want to be manually executing notebook cells to get a prediction.

In Episode 4, you’ll learn how to use the Vertex AI prediction service for batch and online predictions.
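
As a rough sketch of both paths with the Vertex AI Python SDK (the model ID, machine types, and Cloud Storage paths are placeholders):

    from google.cloud import aiplatform

    aiplatform.init(project="your-project-id", location="us-central1")
    model = aiplatform.Model("your-model-id")  # placeholder model resource

    # Online predictions: deploy the model to an endpoint, then send
    # individual instances for low-latency responses.
    endpoint = model.deploy(machine_type="n1-standard-4")
    response = endpoint.predict(instances=[[1.0, 2.0, 3.0]])

    # Batch predictions: run asynchronously over a file of instances in
    # Cloud Storage, with no long-lived endpoint required.
    model.batch_predict(
        job_display_name="batch-prediction-job",
        gcs_source="gs://your-bucket/batch_inputs.jsonl",
        gcs_destination_prefix="gs://your-bucket/batch_outputs/",
        machine_type="n1-standard-4",
    )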

CODELAB: Getting predictions from custom trained models

Tuning and scaling your ML models

By this point, you’ve seen how to go from notebook code to a deployed model in the cloud. But in reality, an ML workflow is rarely that linear. A huge part of the machine learning process is experimentation and tuning. You’ll probably need to try out different hyperparameters, different architectures, or even different hardware configurations before you figure out what works best for your use case.

Episode 5 covers the Vertex AI features that can help you tune and scale your ML models. Specifically, you’ll learn about hyperparameter tuning, distributed training, and experiment tracking.
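
For example, hyperparameter tuning on Vertex AI wraps a custom job and searches over the parameters you declare. Here’s a rough sketch with the Python SDK, where the image URI, metric name, and parameter ranges are all placeholder assumptions:

    from google.cloud import aiplatform
    from google.cloud.aiplatform import hyperparameter_tuning as hpt

    aiplatform.init(
        project="your-project-id",
        location="us-central1",
        staging_bucket="gs://your-staging-bucket",
    )

    # The tuning service runs many trials of this underlying job, passing
    # different hyperparameter values to your container as command-line flags.
    custom_job = aiplatform.CustomJob(
        display_name="trial-job",
        worker_pool_specs=[{
            "machine_spec": {"machine_type": "n1-standard-8"},
            "replica_count": 1,
            "container_spec": {
                "image_uri": "us-central1-docker.pkg.dev/your-project-id/my-repo/trainer:latest"
            },
        }],
    )

    tuning_job = aiplatform.HyperparameterTuningJob(
        display_name="tuning-job",
        custom_job=custom_job,
        metric_spec={"accuracy": "maximize"},  # a metric your training code reports
        parameter_spec={
            "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
            "momentum": hpt.DoubleParameterSpec(min=0.0, max=0.99, scale="linear"),
        },
        max_trial_count=15,
        parallel_trial_count=3,
    )
    tuning_job.run()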

CODELAB: Hyperparameter tuning on Vertex AI
CODELAB: Distributed Training on Vertex AI

We hope this series inspires you to create ML applications with Vertex AI! Be sure to leave a comment on the videos if you’d like to see any of the concepts in more detail, or learn how to use the Vertex AI MLOps tools. 

If you’d like to try all the code for yourself, check out the following codelabs:

CODELAB: Training custom models with Vertex AI
CODELAB: Getting predictions from custom trained models
CODELAB: Hyperparameter tuning on Vertex AI
CODELAB: Distributed Training on Vertex AI
