Getting the Most From Your Google Vertex ML Architecture

Wallaroo.AI
4 min read · May 19, 2022


Google Vertex AI Workbench is a machine learning platform meant to make it easier for data scientists to develop, deploy, and maintain their AI models. Vertex AI makes it easier to use Google Cloud services for building ML inside one UI and API. Google Vertex is a great tool for the initial steps of the ML model lifecycle, such as data loading/preparation and model development. You can train and compare your models using a standard framework or through custom code, and you can also leverage native Google APIs for image, video, and text processing.

Enterprises are choosing Vertex as their ML training platform for several reasons, including:

  • Ability to train models specific to business needs with minimal ML expertise
  • Support for standard frameworks (scikit-learn, PyTorch, TensorFlow, XGBoost, R) as well as custom frameworks for model development
  • Accelerated training with integrated APIs for image, video, and text processing
  • Integrated Jupyter notebooks
  • Easy feature engineering and management
  • Easy experiment tracking and tuning

However, the true return on ML model training is how your model performs on real-world production data. When it comes to deploying models live and managing them in production, Google Vertex has several shortcomings:

  • Limited model observability in production so data scientists and ML engineers have difficulty tracking the ongoing performance and accuracy of live models
  • Runtime is compute-intensive, particularly for complex models
  • Can’t deploy ML into non-GCP clouds, on-prem or at the edge without significant re-engineering

Advantages to using Wallaroo to complement your Vertex architecture

Wallaroo is a purpose-built enterprise platform for deploying and managing your ML models in production. We take the ML models you developed in Google Vertex (or any tool you’re using for model development) and help deploy them easily in any enterprise production environment, whether on-prem, at the edge, or in any cloud.

As a result, you can continue using Vertex for what it does well (making it easy for feature development on your data and building and training models) and then use Wallaroo for that last mile to industrialize ML for your enterprise.

  • Observability: An interface that allows your team to define metrics and analytics to measure, track, and improve your ML’s performance. Model observability insights provide visibility into how your model’s behavior might be affecting your business outcomes. Automated notifications and alerts for drifts and anomalies allow data science teams to supervise models at scale with no operational constraints.
  • Deploy Anywhere: Whether you’re planning to deploy in an environment that’s mostly connected to the cloud, at the edge, or on-prem, Wallaroo can handle the work. Wallaroo is designed to connect seamlessly into your existing ecosystem, with a standardized process for deploying your models across platforms, clouds, and environments. In addition, Wallaroo’s variety of data connectors links the platform to your production data, so you’re up and running in minutes.
  • High Performance Inferencing: The success of your ML deployment is based on the performance of your ML models when generating inferences on production data. When measuring model performance, you should take into account the latency, throughput and computational efficiency of your inferences. For running ML models live in production, Wallaroo generates 10x more inferences per second than Vertex while requiring 90% less compute (see below).
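As a rough sketch of how latency and throughput can be measured for any deployed model, the snippet below times a stand-in `predict` function over a batch of requests. The `predict` function and the input data here are illustrative placeholders, not part of Vertex or Wallaroo:

```python
import time
import statistics

def predict(features):
    # Stand-in for any model's inference call; swap in your own model here.
    return sum(features)

def benchmark(predict_fn, batches, warmup=10):
    """Measure per-inference latency (ms) and overall throughput (inferences/sec)."""
    for features in batches[:warmup]:
        predict_fn(features)  # warm-up runs, excluded from timing
    latencies = []
    start = time.perf_counter()
    for features in batches:
        t0 = time.perf_counter()
        predict_fn(features)
        latencies.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - start
    return {
        "p50_ms": statistics.median(latencies),
        "p99_ms": sorted(latencies)[int(len(latencies) * 0.99) - 1],
        "throughput_per_sec": len(batches) / elapsed,
    }

stats = benchmark(predict, [[1.0, 2.0, 3.0]] * 1000)
print(stats)
```

Tracking percentile latency (not just the average) matters because tail latency is usually what breaks production SLAs.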

Metrics for Selecting the Right Deployment Platform

Wallaroo supports ML industrialization and production AI, which is the basis of its strong model-deployment performance. For model management and observability, the platform provides a production-grade experimentation framework (A/B testing, canary deploys, deploy in dark, blue/green deploys, shadow deploys) along with batch and live inference serving. For integration and security, you can leverage native MLOps APIs that work with any training platform or model registry and install in any cloud. All of this runs on an infrastructure with industry-low compute costs. Combining the ease of model development in Vertex AI with the strength of Wallaroo’s model deployment allows you to get the most value out of your ML models and drive key strategic outcomes for your business.
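To illustrate one of those experimentation patterns, here is a minimal, hypothetical sketch of deterministic A/B traffic splitting between a champion and a challenger model. The function names and routing scheme are assumptions for illustration only, not Wallaroo’s actual API:

```python
import hashlib

def ab_route(request_id, champion_fn, challenger_fn, challenger_share=0.1):
    """Deterministically send a fixed share of traffic to the challenger model.

    Hashing the request ID (instead of random sampling) means the same
    request always hits the same model, which keeps experiments reproducible.
    """
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    if bucket < challenger_share * 100:
        return "challenger", challenger_fn()
    return "champion", challenger_fn() if False else champion_fn()

# Illustrative usage with placeholder models:
label, result = ab_route("req-42", lambda: "old-model-output", lambda: "new-model-output")
print(label, result)
```

A canary or blue/green deploy follows the same idea, but with the split ramped up over time (e.g. 1% → 10% → 100%) as the challenger proves itself.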
