Monitoring Machine Learning

Educational information and resources on machine learning monitoring topics

Model Drift | Model Performance | Model Outliers | ML Data Quality

Join the ML Ops Community

The ML Ops community is a great resource for information and discussion on all things related to ML Ops. Check out their website and Slack channel.

Data and Prediction Drift

The distribution of a model's predictions can drift drastically from what was seen at training time.

Prediction Drift

In cases where no ground truth exists, prediction drift can serve as the main proxy metric for performance degradation.
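One common way to quantify prediction drift is the Population Stability Index (PSI) between the training-time and production score distributions. The sketch below is a minimal implementation; the bin count and the Beta-distributed example scores are illustrative assumptions, and the often-cited thresholds (0.1 for moderate, 0.25 for significant shift) are heuristics rather than hard rules.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between two score distributions.

    Bin edges are taken from the expected (training) distribution,
    and both histograms are floored to avoid log(0).
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Illustrative data: production scores shifted relative to training.
rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, 10_000)
prod_scores = rng.beta(5, 2, 10_000)
print(psi(train_scores, prod_scores))  # large value signals drift
```

In practice the PSI would be computed on a schedule (e.g. daily) against a fixed training baseline, with an alert when it crosses the chosen threshold.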

Data Drift

Features should be monitored on a periodic basis to ensure they have not drifted from the training or validation distributions.

Concept Drift

The relationship between a system's inputs and outputs can change over time, causing concept drift.
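When labels eventually arrive, concept drift often shows up as a gradual or sudden drop in a rolling performance metric even while the input distribution looks stable. The sketch below is a minimal illustration with synthetic labels: the input-output relationship "breaks" halfway through the stream, and a rolling accuracy window exposes it. The window size and drift simulation are assumptions for demonstration.

```python
import numpy as np

def rolling_accuracy(preds, labels, window=500):
    """Accuracy over a sliding window of recent predictions."""
    correct = (preds == labels).astype(float)
    kernel = np.ones(window) / window
    return np.convolve(correct, kernel, mode="valid")

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, 4_000)
preds = labels.copy()

# Simulate concept drift: after the midpoint, the learned relationship
# no longer holds and 40% of predictions flip to the wrong class.
flip = rng.random(2_000) < 0.4
preds[2_000:] = np.where(flip, 1 - labels[2_000:], labels[2_000:])

acc = rolling_accuracy(preds, labels)
print(acc[0], acc[-1])  # near-perfect early, degraded late
```

An alert on the rolling metric falling below a baseline band is a common first line of defense; the same pattern applies to AUC, F1, or business KPIs.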

Model Performance

In general, model performance analysis will depend on receiving ground truth or a proxy back in production.

Ground Truth

The linkage of predictions and ground truth allows you to analyze performance and determine if predictions match the actual outcomes.
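In practice this linkage is a join on a prediction identifier, since ground truth typically arrives later and only for a subset of predictions. The sketch below uses pandas with a hypothetical prediction_id key and toy values to show the pattern.

```python
import pandas as pd

# Predictions logged at serving time (illustrative data).
preds = pd.DataFrame({
    "prediction_id": [1, 2, 3, 4],
    "predicted": [1, 0, 1, 1],
})

# Ground truth arrives later, and only for some predictions.
actuals = pd.DataFrame({
    "prediction_id": [1, 2, 4],
    "actual": [1, 1, 1],
})

# Inner join keeps only predictions whose outcome is known.
joined = preds.merge(actuals, on="prediction_id", how="inner")
accuracy = (joined["predicted"] == joined["actual"]).mean()
print(len(joined), accuracy)
```

Keeping track of the unmatched fraction matters too: a join rate that suddenly drops can itself indicate a pipeline problem or a biased ground-truth feedback loop.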

Model Outliers

Multivariate changes across features that are distinctly different from the training distribution should be tracked as outliers.

Accounting for Outliers

Out of distribution points can be useful to understand how your model handles unexpected inputs. However, outliers should be tracked, grouped and analyzed distinctly.
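One simple way to flag such multivariate outliers is the Mahalanobis distance from the training distribution, which catches points that violate the correlation structure even when each feature alone looks unremarkable. The sketch below uses NumPy with illustrative correlated training data; the detection threshold would be an assumption calibrated per model.

```python
import numpy as np

# Illustrative training data: two strongly correlated features.
rng = np.random.default_rng(3)
train = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], 5_000)

mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def mahalanobis(x):
    """Mahalanobis distance of each row of x from the training cloud."""
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# (2, -2) is extreme *jointly* (it breaks the positive correlation)
# even though each coordinate alone is within a normal range.
points = np.array([[0.0, 0.0], [2.0, -2.0]])
dists = mahalanobis(points)
print(dists)
```

Flagged points can then be grouped and analyzed separately, as suggested above, rather than silently mixed into aggregate performance metrics.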

ML Data Quality

Machine learning models are only as good as the data feeding into them.
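Basic data-quality monitoring can be expressed as per-feature rules checked on every batch: null rates, type conformity, and value ranges. The sketch below is a minimal pandas version; the column names, the toy bad values, and the rules themselves (e.g. age between 0 and 120) are hypothetical stand-ins for a real schema.

```python
import pandas as pd

# Illustrative batch with a null and an impossible age value.
df = pd.DataFrame({
    "age": [34, None, 29, 410],
    "income": [52_000, 61_000, None, 48_000],
})

# Hypothetical per-feature validity rules; real rules come from your schema.
checks = {
    "age": lambda s: s.between(0, 120),
    "income": lambda s: s >= 0,
}

report = {}
for col, rule in checks.items():
    s = df[col]
    report[col] = {
        "null_rate": float(s.isna().mean()),
        # Violations counted only among non-null values.
        "violation_rate": float((~rule(s) & s.notna()).mean()),
    }
print(report)
```

Tracking these rates over time, rather than as one-off checks, is what turns validation into monitoring: a sudden jump in a null rate often precedes a silent drop in model performance.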

