
Enhancing and Automating Machine Learning Projects in Production with MLFlow

Abhishek Sheth

Discover how to streamline your machine learning projects with MLOps. Our guide covers the details of automating the ML lifecycle with MLFlow.


Automated ML Lifecycle

In the world of data science and machine learning, managing and tracking experiments efficiently is crucial for success. MLFlow emerges as a game-changer, offering a comprehensive platform to streamline the entire ML project lifecycle.


From tracking experiments to managing models and deploying them into production, MLFlow provides a unified solution to simplify complex workflows and drive better outcomes.

Learning Objectives:


We'll explore the key capabilities of MLFlow and how it can revolutionize your machine learning projects:


MLFlow Tracking: Leveraging built-in functions to track and evaluate experiment performance.

Model Registry: Managing ML model versions, modifying lifecycle stages, and deploying models to production seamlessly.

HyperOpt: Tuning hyperparameters efficiently for MLFlow experiments to enhance model performance.



Automated MLOps Lifecycle


CAPABILITIES OF MLFLOW



MLFlow offers a handful of features to streamline the ML lifecycle, including:


  • Open Source Platform: Providing a standardized approach for managing and tracking experiments.

  • Experiment Tracking: Keeping tabs on experiment progress and comparing multiple models effortlessly.

  • Model Packaging and Deployment: Standardizing the packaging and deployment of models for seamless integration into production systems.


MLFLOW TRACKING

MLFlow's tracking functionality captures crucial information such as parameters, metrics, artifacts, and source code from each run.


What Gets Tracked?

1. Parameters: Key-value pairs of parameters (e.g., hyperparameters)

2. Metrics: Evaluation metrics (e.g., RMSE, Root Mean Square Error)

3. Artifacts: Arbitrary output files (e.g., images, pickled models, data files)

4. Source: The source code from the run


MLFlow is open source and gaining momentum:


1. Logging plugins for common ML frameworks such as Scikit-learn, XGBoost, and LightGBM. Example: mlflow.lightgbm, mlflow.xgboost

2. Log params and metrics. Example: mlflow.log_params, mlflow.log_metrics

3. Log models and various artifacts. Example: mlflow.sklearn.log_model(), mlflow.lightgbm.log_model(), mlflow.log_artifact() (see the sketch after this list)
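
To make this concrete, the following sketch logs a tracked run end to end; the experiment name, dataset, and parameter values are illustrative assumptions rather than part of the original setup:

    # Minimal MLFlow tracking sketch; the experiment name, dataset, and
    # parameter values are illustrative assumptions.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("demo-experiment")  # hypothetical experiment name
    params = {"n_estimators": 100, "max_depth": 6}

    with mlflow.start_run():
        mlflow.log_params(params)                        # 1. parameters
        model = RandomForestRegressor(**params).fit(X_train, y_train)
        rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
        mlflow.log_metric("rmse", rmse)                  # 2. metrics
        mlflow.sklearn.log_model(model, "model")         # 3. model artifact

Every run logged this way shows up in the MLFlow UI, where parameters and metrics can be compared side by side.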



ARTIFACTS FROM IN-BUILT FUNCTIONS
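
The in-built framework plugins can also produce such artifacts automatically. A minimal sketch, assuming scikit-learn autologging via mlflow.sklearn.autolog(), which records parameters, training metrics, and the model file from each fit() call:

    # Autologging sketch: autolog() records parameters, metrics, and
    # model artifacts from fit() calls without explicit log statements.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import Ridge

    mlflow.sklearn.autolog()

    X, y = load_diabetes(return_X_y=True)
    with mlflow.start_run():
        Ridge(alpha=1.0).fit(X, y)  # params, metrics, and model are logged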


MODEL REGISTRY

  • The model registry feature in MLFlow serves as a centralized repository for storing and versioning trained ML models.

  • It simplifies the process of tracking models throughout their lifecycle, from training to production deployment.

  • With lineage tracking, engineering teams can seamlessly hand off models for deployment, reducing time and effort significantly; a short sketch of this workflow follows.
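
A minimal sketch, assuming a run has already logged a model under the artifact path "model" and using the classic stage-based registry workflow (newer MLFlow versions favor aliases); the run ID and model name are placeholders:

    # Register a logged model and promote it to Production; the run ID
    # and model name below are placeholders, not values from this post.
    import mlflow
    from mlflow.tracking import MlflowClient

    run_id = "<your-run-id>"  # placeholder: a run that logged a model
    result = mlflow.register_model(f"runs:/{run_id}/model", "DemoModel")

    client = MlflowClient()
    client.transition_model_version_stage(
        name="DemoModel",
        version=result.version,
        stage="Production",
    )

Each call to register_model on the same name creates a new version, which is what gives the registry its lineage from training run to deployed model.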


AUTOMATE HYPERPARAMETER OPTIMIZATION WITH HYPEROPT


  • Grid Search is exhaustive: it evaluates every combination of hyperparameters for the model, so it takes a long time to run when there are many combinations to compare.


  • Random Search picks a fixed number of hyperparameter combinations at random, so not every combination is evaluated. The downside is that the random selection may miss the top-performing combinations.


  • Grid and random search are completely uninformed by past evaluations and, as a result, often spend a significant amount of time evaluating "bad" hyperparameters.


  • HyperOpt performs hyperparameter tuning based on Bayesian optimization: rather than merely searching randomly or over a grid, it keeps track of the results of previous evaluations and uses them to choose the next set of hyperparameter values to evaluate.


  • Bayesian optimization provides a probabilistically principled method for optimization and requires fewer iterations to reach the optimal set of hyperparameter values, most notably because it disregards areas of the parameter space that it believes won't bring anything to the table (see the sketch after this list).
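
Here is a minimal sketch of a Hyperopt search with each trial logged to MLFlow; the search space, dataset, and cross-validation setup are illustrative assumptions:

    # Hyperopt + MLFlow sketch; the search space, dataset, and CV setup
    # are illustrative. Each trial is logged as a nested MLFlow run.
    import mlflow
    from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    X, y = load_diabetes(return_X_y=True)

    space = {
        "n_estimators": hp.choice("n_estimators", [50, 100, 200]),
        "max_depth": hp.choice("max_depth", [3, 6, 9]),
    }

    def objective(params):
        with mlflow.start_run(nested=True):  # one nested run per trial
            mlflow.log_params(params)
            rmse = -cross_val_score(
                RandomForestRegressor(**params), X, y,
                scoring="neg_root_mean_squared_error", cv=3,
            ).mean()
            mlflow.log_metric("rmse", rmse)
            return {"loss": rmse, "status": STATUS_OK}  # Hyperopt minimizes loss

    with mlflow.start_run():  # parent run groups all trials
        best = fmin(fn=objective, space=space, algo=tpe.suggest,
                    max_evals=20, trials=Trials())

tpe.suggest is Hyperopt's Tree-structured Parzen Estimator, the Bayesian strategy described above: each new trial is proposed based on the results of previous ones rather than drawn blindly.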



PROS AND CONS OF USING MLFLOW



Advantages:

  1. Inbuilt features - Easy to compare performance metrics or model parameters across runs in the UI.

  2. Dataset and artifact lineage - Keeping track of which artifacts (files) came from which training job can be difficult, and without a record of which datasets were used for what, teams have no idea which can be deleted and which cannot. MLFlow ties each artifact to the run that produced it.

  3. Source code versioning - Even good models will at times produce surprising or erroneous results. Without care, it is easy to lose track of the source code, or of which version was used to train the model, leading to duplicated effort as the previous model is invalidated and a new one is trained to help understand the issue. MLFlow records the source for each run.

  4. Documented model performance - While tuning, we may end up with many versions of a model for a particular task. If performance results are stored across different locations, it is hard to compare the different versions; MLFlow keeps them all in one place.


Disadvantages:

  1. Tracking limitations: MLFlow is great for running experiments but does not cover some parts of the ML lifecycle, such as exploratory data analysis or results exploration.

  2. Manual cleanup: Unsuccessful runs are recorded alongside the successful ones and have to be cleared from the list manually.


CONCLUSION


MLFlow emerges as a powerful tool for managing and monitoring ML experiments, enhancing collaboration, and driving better decision-making.


By leveraging its robust tracking, model registry, and hyperparameter tuning capabilities, data science teams can streamline workflows, improve model performance, and accelerate time-to-value, making MLFlow a valuable asset for data-driven organizations.


