Monitoring Machine Learning
Educational information and resources on machine learning monitoring topics
Model Drift | Model Performance | Model Outliers | ML Data Quality
The MLOps community is a great resource for information and discussion on all things related to MLOps. Check out their website and Slack channel.
Once a model is trained, there are a number of ways the data and the model can drift from their intended purposes.
The distribution of a model's predictions can drift drastically from what was observed during training.
In cases where no ground truth exists, prediction drift can be the main proxy metric for performance degradation.
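With no ground truth available, one common way to quantify prediction drift is the Population Stability Index (PSI) between the training-time score distribution and a recent production window. Below is a minimal sketch assuming binary-classifier scores; the synthetic data and the 0.2 alert threshold in the comment are illustrative rules of thumb, not standards:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between two score distributions.

    Bin edges come from the expected (training) scores; clipping with a
    small epsilon avoids division by zero in sparsely populated bins.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    eps = 1e-6
    e_pct = np.clip(e_pct, eps, None)
    a_pct = np.clip(a_pct, eps, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical score arrays: training-time scores vs. a production window.
train_scores = np.random.beta(2, 5, size=10_000)
prod_scores = np.random.beta(3, 4, size=2_000)

drift = psi(train_scores, prod_scores)
print(f"PSI = {drift:.3f}")  # values above ~0.2 are often treated as significant drift
```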
Features should be monitored on a periodic basis to ensure they have not drifted from training or validation data.
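A minimal sketch of such a periodic check, assuming training and production data arrive as pandas DataFrames with matching numeric columns; the two-sample Kolmogorov-Smirnov test used here is one of several reasonable choices:

```python
from scipy.stats import ks_2samp

def drifted_features(train_df, prod_df, alpha=0.01):
    """Flag numeric features whose production distribution differs from
    training according to a two-sample Kolmogorov-Smirnov test."""
    flagged = []
    for col in train_df.columns:
        stat, p_value = ks_2samp(train_df[col].dropna(), prod_df[col].dropna())
        if p_value < alpha:
            flagged.append((col, stat, p_value))
    return flagged

# Usage (train_df and prod_df are hypothetical DataFrames):
# for name, stat, p in drifted_features(train_df, prod_df):
#     print(f"{name}: KS={stat:.3f}, p={p:.2e}")
```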
The relationship between a system's inputs and outputs can change over time, causing concept drift.
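Concept drift is hard to detect from inputs alone; one simple proxy, sketched below with hypothetical window and tolerance values, is to compare a rolling window of live error rates against the error rate observed at validation time:

```python
from collections import deque

class RollingErrorMonitor:
    """Compare a rolling window of live errors to a baseline error rate.

    `baseline` is the error rate observed at validation time; `tolerance`
    is an assumed alert threshold, not a universal standard.
    """
    def __init__(self, baseline, window=500, tolerance=0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.errors = deque(maxlen=window)

    def update(self, prediction, actual):
        """Record one resolved prediction; return True on possible drift."""
        self.errors.append(int(prediction != actual))
        rate = sum(self.errors) / len(self.errors)
        return rate - self.baseline > self.tolerance

# monitor = RollingErrorMonitor(baseline=0.12)
# if monitor.update(pred, label):
#     alert("possible concept drift")  # alert() is a hypothetical hook
```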
In general, model performance analysis depends on receiving ground truth, or a proxy for it, back in production.
Linking predictions to ground truth lets you analyze performance and determine whether predictions match the actual outcomes.
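A sketch of that linkage, assuming predictions are logged with an ID at serving time and ground truth arrives later keyed on the same ID; the column names and values are illustrative:

```python
import pandas as pd

# Hypothetical logs: predictions written at serving time, outcomes arriving later.
predictions = pd.DataFrame({
    "prediction_id": [1, 2, 3, 4],
    "predicted": [1, 0, 1, 1],
    "served_at": pd.to_datetime(["2024-01-01"] * 4),
})
outcomes = pd.DataFrame({
    "prediction_id": [1, 2, 3],
    "actual": [1, 0, 0],
})

# A left join keeps predictions that are still awaiting ground truth.
joined = predictions.merge(outcomes, on="prediction_id", how="left")
resolved = joined.dropna(subset=["actual"])

accuracy = (resolved["predicted"] == resolved["actual"]).mean()
print(f"accuracy on {len(resolved)} resolved predictions: {accuracy:.2%}")
```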
Multivariate changes across features that are distinctly different from training should be tracked as outliers.
Out-of-distribution points can be useful for understanding how your model handles unexpected inputs. However, outliers should be tracked, grouped, and analyzed separately.
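One way to track such points, sketched below with synthetic data, is to fit an unsupervised detector such as scikit-learn's IsolationForest on the training features and score production traffic against it:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
train_features = rng.normal(size=(5_000, 8))  # stand-in for training data
prod_features = rng.normal(size=(200, 8))
prod_features[:5] += 6                        # inject a few multivariate outliers

detector = IsolationForest(contamination="auto", random_state=0)
detector.fit(train_features)

# predict() returns -1 for points the forest considers out of distribution
# relative to training; those rows can be grouped and reviewed separately.
labels = detector.predict(prod_features)
outlier_rows = np.where(labels == -1)[0]
print(f"{len(outlier_rows)} production rows flagged for separate review")
```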
Machine learning models are only as good as the data feeding into them.
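A minimal sketch of batch-level data-quality checks, assuming expectations (required columns, null tolerances, value ranges) were captured at training time; the specific columns and thresholds here are illustrative:

```python
import pandas as pd

# Expectations captured at training time (columns and limits are assumptions).
EXPECTATIONS = {
    "age": {"min": 0, "max": 120, "max_null_frac": 0.01},
    "income": {"min": 0, "max": 1e7, "max_null_frac": 0.05},
}

def check_batch(batch: pd.DataFrame) -> list[str]:
    """Return human-readable violations: missing columns, excess nulls,
    and out-of-range values for an incoming batch."""
    problems = []
    for col, rules in EXPECTATIONS.items():
        if col not in batch.columns:
            problems.append(f"missing column: {col}")
            continue
        null_frac = batch[col].isna().mean()
        if null_frac > rules["max_null_frac"]:
            problems.append(f"{col}: {null_frac:.1%} nulls")
        values = batch[col].dropna()
        if ((values < rules["min"]) | (values > rules["max"])).any():
            problems.append(f"{col}: values outside [{rules['min']}, {rules['max']}]")
    return problems
```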