Nuno Chicória

Data Scientist - Xpand IT

Guide for monitoring machine learning models

5-SECOND SUMMARY:
  • This content is a continuation of the article: “Data Science Assessment: how to create machine learning models”.
  • Continuous model monitoring is essential for sustained success and optimal performance: it means observing a model’s behaviour over time and tracking key metrics so that predictions stay accurate and reliable.
  • Various open-source platforms simplify the machine learning lifecycle with experiment tracking, model versioning through registries, and deployment with integrated monitoring, helping data scientists manage the complexities of model management for sustained success.

In the dynamic landscape of data science, building and deploying machine learning models is just the beginning. To ensure sustained success and optimal performance, continuous monitoring of these models is crucial. Model monitoring in the data science pipeline involves tracking, evaluating, and managing the performance of both experimental models and those deployed in production.

In this blog post, we’ll delve into the significance of model monitoring and explore how tools like MLflow can empower data scientists to keep a close eye on their experiments and deployed models.

Monitoring machine learning models

Model monitoring refers to the ongoing process of observing a machine learning model’s behaviour over time, both during the development phase and after deployment. It involves tracking various metrics to ensure that the model continues to deliver accurate and reliable predictions as data distributions evolve.

Key aspects of model monitoring

Performance Metrics

Monitoring the performance of your models involves tracking key metrics such as accuracy, precision, recall, F1 score, and more. These metrics provide insights into how well the model is generalizing to new data and whether any degradation in performance has occurred.
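
As a quick illustration (the article doesn’t prescribe a library; scikit-learn is assumed here), these metrics can be computed for each batch of labelled predictions:

```python
# Minimal sketch: computing standard monitoring metrics with scikit-learn
# (an assumption; any metrics library works similarly).
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 1]   # hypothetical ground-truth labels
y_pred = [0, 1, 0, 0, 1, 1]   # hypothetical model predictions

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
}
print(metrics)  # track these per scoring batch to spot degradation over time
```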

Data Drift Detection

Data distributions in real-world scenarios are rarely static. Monitoring for data drift involves comparing the distribution of incoming data with the data the model was trained on. Monitoring tools allow you to set up automated processes to detect and alert when significant drift occurs.
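
For example, a simple drift check on a single numeric feature might use a two-sample Kolmogorov–Smirnov test (a sketch with SciPy; the threshold is an illustrative assumption, not a recommendation from the article):

```python
# Sketch: detect data drift on one numeric feature with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)  # training-time data
live_feature = rng.normal(loc=0.3, scale=1.0, size=1_000)    # shifted incoming data

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # the significance level is a project-specific choice
    print(f"Data drift suspected (KS statistic={statistic:.3f}, p={p_value:.4f})")
```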

Model Drift Detection

Similar to data drift, model drift involves tracking changes in the model’s predictions over time. Monitoring tools enable you to log and compare model performance, helping you identify if the model’s effectiveness has degraded.
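
A minimal sketch of such a check, assuming a baseline score was recorded at validation time (the numbers and tolerance below are illustrative):

```python
# Sketch: flag model drift when live performance drops below a baseline.
BASELINE_F1 = 0.91       # F1 recorded when the model was validated
DRIFT_TOLERANCE = 0.05   # acceptable absolute drop before alerting

def model_has_drifted(live_f1: float) -> bool:
    """Return True when live performance has degraded beyond tolerance."""
    return (BASELINE_F1 - live_f1) > DRIFT_TOLERANCE

if model_has_drifted(live_f1=0.83):
    print("Model drift detected: investigate or retrain")
```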

How Tools Facilitate Model Monitoring

Various open-source platforms simplify the machine learning lifecycle, and a key capability they share is tracking and managing experiments. Here’s how these tools help keep your models in check:

Experiment Tracking

These platforms allow you to log and organize experiments, making it easy to compare different runs and identify the most successful models. They record parameters, metrics, and artefacts, providing a comprehensive overview of your model development process.
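
With MLflow, for instance, a run’s parameters, metrics, and artefacts can be logged in a few lines (the experiment name and values below are hypothetical):

```python
# Sketch of experiment tracking with MLflow (pip install mlflow).
import mlflow

mlflow.set_experiment("churn-model")  # groups related runs together

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("n_estimators", 200)        # hyperparameters of this run
    mlflow.log_param("max_depth", 8)
    mlflow.log_metric("f1", 0.87)                # evaluation results
    mlflow.log_artifact("confusion_matrix.png")  # any file worth keeping (must exist)
```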

Model Registry

Model Registries act as central hubs for managing and versioning models. This ensures that every deployment is based on a specific version of the model, facilitating easy rollback in case issues arise.
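
With MLflow’s registry, for example, registering a model and pinning a deployment to one version looks roughly like this (the model name and run URI are assumptions):

```python
# Sketch: register a logged model, then load an explicit version later.
import mlflow

mlflow.register_model(
    model_uri="runs:/<run_id>/model",  # a model logged in an earlier run
    name="churn-classifier",
)

# Deployments reference an explicit version, so rolling back just means
# loading the previous one:
model = mlflow.pyfunc.load_model("models:/churn-classifier/1")
```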

Model Deployment and Monitoring

These platforms simplify the deployment process, making it seamless to transition from experimenting with models to deploying them in production. Additionally, they provide integrations with monitoring tools, allowing you to keep a close eye on the deployed model’s performance.
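
As a sketch of that hand-off, a registered model can be loaded and scored in production, with a live metric logged alongside it (names and data are hypothetical):

```python
# Sketch: score with a registered model and log a live metric next to it.
import mlflow
import numpy as np
import pandas as pd

model = mlflow.pyfunc.load_model("models:/churn-classifier/Production")
batch = pd.DataFrame({"tenure": [3, 48], "monthly_charges": [70.5, 20.0]})

predictions = model.predict(batch)
with mlflow.start_run(run_name="production-scoring"):
    mlflow.log_metric("positive_rate", float(np.mean(predictions == 1)))
```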

Final Thoughts

Model monitoring is an integral part of the data science pipeline that ensures the continued effectiveness of machine learning models. Tools such as MLflow emerge as powerful allies, offering features that streamline experiment tracking, model versioning, and deployment monitoring. By leveraging these tools, data scientists can confidently navigate the complexities of model management and monitoring, contributing to the sustained success of their machine learning endeavours.


Five everyday problems MLflow solves

At Xpand, we take pride in our XP4DS workflow and like to surround ourselves with the best tools to make our work easier and our results better. Among those technologies and tools, there is a special place reserved for MLflow.

If you haven’t heard about MLflow, turn off your phone and connect your modem, because it’s time to catch up with the technological world!

MLflow is an open-source platform that helps you manage your machine learning lifecycle, from the first model you train to that amazing model you will deploy to solve all your problems.

It tackles your problems under three main topics:

  • Tracking: Record and query experiments (code, data, config and results).
  • Projects: Packaging format for reproducible runs on any platform.
  • Models: General format for sending models to diverse deployment tools.

MLflow is library-agnostic. You can use it with any machine learning library and in any programming language, since all functions are accessible through a REST API and CLI. For convenience, the project also includes a Python API, R API, and Java API.

1. Do you recall with precision the ROC AUC? (Metrics + Parameters Logging)

We’ve all been there. It’s your first iteration and you train a model with good accuracy values. You continue iterating in the hope of finding a better set of hyperparameters, only to discover that your best model was an earlier one, and you can no longer remember that combination of hyperparameters. With MLflow, you don’t have this problem! With model logging, you get information on all your models in one place, from metrics to hyperparameters, and you can even add your own tags. In the MLflow UI, you can compare all the trained models, sort them by any metric or tag, and select the model of your choice.
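
A sketch of that lookup with MLflow’s search API (the experiment name and metric are assumptions):

```python
# Sketch: fetch past runs sorted by a metric so the best one is never lost.
import mlflow

runs = mlflow.search_runs(
    experiment_names=["churn-model"],
    order_by=["metrics.f1 DESC"],  # best run first
)
best = runs.iloc[0]
print(best["params.n_estimators"], best["metrics.f1"])
```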

2. It works on my machine ¯\_(ツ)_/¯ (Model + Environment Logging)

Once again, as an incredible data scientist, you create an amazing model that solves the problem at hand. Nevertheless, when you hand it over to your colleagues, it does not work: perhaps a library needs to be updated, or some sorcery in the background fails. With MLflow, this is no longer a problem. Alongside metrics logging, you can save your trained model, conda environment and any other file you deem important. This way, your colleagues can seamlessly replicate your conda environment and execute your trained model without issues.
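
For instance, logging a scikit-learn model with MLflow captures an environment specification alongside the model itself (a minimal sketch on a toy dataset):

```python
# Sketch: log a trained model together with its environment spec.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50).fit(X, y)

with mlflow.start_run():
    # log_model stores the fitted model plus a conda/pip environment file,
    # so a colleague can recreate the environment and load the model as-is.
    mlflow.sklearn.log_model(model, artifact_path="model")
```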

3. Logging beyond experiments (Model Registry)

Just as you can save models per experiment, every model that has ever been in production is also saved. Through the MLflow UI, you can access all previous versions of the deployed model. More importantly, when you decide on the best model, you can register it so everyone in the team knows that’s the model that will move through staging to production.
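
Promoting a registered version through stages can also be done from code (a sketch; the model name and version are assumptions):

```python
# Sketch: move a registered model version into the Staging stage.
from mlflow.tracking import MlflowClient

client = MlflowClient()
client.transition_model_version_stage(
    name="churn-classifier",
    version="3",
    stage="Staging",  # later "Production", once approved
)
```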

4. There is no “I” in MLflow (Teamwork)

MLflow takes teamwork to the next level by improving collaboration between teams. Within your DS team, you can all submit and see each other’s models, compare them with yours, and even import them so you can work on them too. Then, as a team, you can push certain models for staging and deployment, pending approval by the team responsible for those tasks. Thus, the whole DS pipeline is visible in the MLflow UI.

5. The model is ready for delivery (Deployment to Production)

You’re nearing the end of the project; you can see light at the end of the tunnel, and all your hard work is paying off. All that’s left is to deploy the model and, you guessed it, MLflow has you covered. With MLflow Models, you are ready to send your trained model for deployment on a vast array of platforms. This, combined with its logging tools, makes it perfect for continuous monitoring of the model’s performance over time, so you can improve it if needed!
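
As a final sketch, logging a production metric with a step index makes its evolution visible as a chart in the MLflow UI (the values here are illustrative):

```python
# Sketch: log a live metric over time so degradation shows up as a trend.
import mlflow

with mlflow.start_run(run_name="weekly-monitoring"):
    weekly_accuracy = [0.91, 0.90, 0.88, 0.84]  # hypothetical live scores
    for week, accuracy in enumerate(weekly_accuracy):
        mlflow.log_metric("live_accuracy", accuracy, step=week)
```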

Conclusion

By now you will have realised that MLflow succeeds in solving many of the problems a data scientist faces along the data science pipeline. From the moment you start training your first model to the one you deploy into production, you can always rely on MLflow to track your progress and make the data science process much easier. An open-source, ever-evolving tool, MLflow is a must-have for the 21st-century data scientist.
