Machine Learning on Google Cloud

GCP offers a range of tools for developing and deploying ML applications. Most notable is Vertex AI, Google’s PaaS for building ML applications.

Vertex AI

Vertex AI is Google’s flagship PaaS for developing ML products. It bundles fully managed ML tools that simplify model creation, deployment, and evaluation. For example, Vertex AI includes AutoML, a no-code solution for automatically training ML models and serving predictions. It also includes Vertex AI Vision, which simplifies building computer vision applications by combining ingestion of real-time video and image streams, storage, and low-code ML model creation for analysis.
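
As a rough illustration of how AutoML can be driven programmatically, here is a minimal sketch using the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, bucket path, column names, and display names are placeholder assumptions, not values from an actual project.

    # Minimal AutoML tabular training sketch with the Vertex AI Python SDK.
    # Project, region, GCS path, and column names are illustrative placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Create a managed tabular dataset from a CSV in Cloud Storage.
    dataset = aiplatform.TabularDataset.create(
        display_name="churn-dataset",
        gcs_source=["gs://my-bucket/churn.csv"],
    )

    # Configure and run an AutoML training job (classification in this example).
    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="churn-automl",
        optimization_prediction_type="classification",
    )
    model = job.run(
        dataset=dataset,
        target_column="churned",
        budget_milli_node_hours=1000,  # one node hour of training
    )

    # Deploy the trained model to an endpoint and request an online prediction.
    endpoint = model.deploy(machine_type="n1-standard-4")
    prediction = endpoint.predict(instances=[{"tenure": "12", "plan": "basic"}])
    print(prediction.predictions)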

However, my favorite feature of Vertex AI is the fully fledged containerized development environment, Vertex AI Workbench. Workbench provides Jupyter notebooks with integrated libraries – for example, TensorFlow, PyTorch, and scikit-learn. Moreover, it makes it easy to provision the compute capacity and libraries required for the task at hand. This is great for data science workflows: working environments are easy to set up and configure, you don’t have to worry about compute capacity (unless you can’t afford to spend the $$), and environments don’t conflict with one another, since each runs independently. In my opinion, using Vertex AI is the best way to take advantage of cloud computing power when running ML applications on GCP.
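
To give a feel for the Workbench experience, here is a sketch of a cell you might run in a Workbench notebook. The CSV path and column names are placeholder assumptions; the point is that the common data science libraries are already installed.

    # A typical cell in a Vertex AI Workbench notebook. TensorFlow, PyTorch,
    # scikit-learn, and pandas come pre-installed, so there is nothing to set up
    # before getting to work. The CSV path and column names are placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # pandas can read straight from Cloud Storage when gcsfs is available,
    # as it typically is in Workbench environments.
    df = pd.read_csv("gs://my-bucket/churn.csv")

    # Keep numeric features only for this simple sketch; "churned" is the label.
    X = df.drop(columns=["churned"]).select_dtypes("number")
    y = df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))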

Placeholder

Application    Complexity
AutoML         No-code

Vertex AI Applications

Evaluation Tools

  • What-If Tool (WIT): No-code analysis of ML models. It lets you visually probe the behaviour of ML models to test their performance in hypothetical situations. You can set the size of the input data and compare behaviour across multiple models. It takes the output scores from classification or regression models as input and lets you inspect the performance. It is also useful for evaluating ML model fairness. A minimal usage sketch follows this list.
  • Continuous evaluation: A way to regularly label prediction inputs and outputs from trained ML models, e.g. to establish ground-truth labels. This process requires human input. When you create an evaluation job with continuous evaluation, you can either use the Data Labeling Service to assign human reviewers or provide the ground-truth labels yourself. This is helpful for getting feedback on how your model performs over time, by comparing its predictions with the assigned ground-truth labels.
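
Here is a minimal sketch of embedding the What-If Tool in a notebook via the witwidget package, reusing the clf classifier and X_test split from the Workbench sketch above. The prediction wrapper and layout are illustrative assumptions, not the only way to wire it up.

    # Minimal What-If Tool sketch inside a notebook (e.g. Vertex AI Workbench).
    # Assumes the trained scikit-learn classifier `clf` and the pandas test
    # split `X_test` from the Workbench sketch above.
    from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

    # WIT takes plain lists of feature values plus the feature names.
    test_examples = X_test.values.tolist()
    feature_names = X_test.columns.tolist()

    def predict_fn(examples):
        # Return class probabilities for each example so WIT can plot scores.
        return clf.predict_proba(examples).tolist()

    config_builder = (
        WitConfigBuilder(test_examples, feature_names)
        .set_custom_predict_fn(predict_fn)
    )
    WitWidget(config_builder, height=800)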

Read More: https://cloud.google.com/vertex-ai

Recommendations AI

Placeholder

TensorFlow

See https://www.ansol.se/2023/01/13/tensorflow-on-google-cloud/

ML Workflow

The ML step of the workflow consists of two parts: training and inference. During training you fit a model to historical data and export the resulting model artifact; during inference the trained model is loaded, for example behind an endpoint, to serve predictions on new data.
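
To sketch how the two parts can map onto Vertex AI, the example below exports a locally trained scikit-learn model (reusing clf from the Workbench sketch), registers it as a Vertex AI Model, and deploys it for online inference. The project, bucket, and serving container URI are placeholder assumptions.

    # Training vs. inference split, sketched with scikit-learn and the Vertex AI SDK.
    # Project, bucket, and container URI are illustrative placeholders.
    import joblib
    from google.cloud import aiplatform

    # Training: fit a model (clf from the Workbench sketch) and export the artifact.
    joblib.dump(clf, "model.joblib")
    # Copy model.joblib to gs://my-bucket/model/ (e.g. with gsutil) before uploading.

    # Inference: register the artifact and deploy it behind an online endpoint.
    aiplatform.init(project="my-project", location="us-central1")
    model = aiplatform.Model.upload(
        display_name="churn-sklearn",
        artifact_uri="gs://my-bucket/model/",
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
    )
    endpoint = model.deploy(machine_type="n1-standard-4")

    # Feature order must match the columns the model was trained on.
    print(endpoint.predict(instances=[[12, 0, 1]]).predictions)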

See https://cloud.google.com/ai-platform/docs/ml-solutions-overview
