
Model documentation and model cards in MLOps - Commands & Configuration

Introduction
When you create a machine learning model, you need to explain what it does and how to use it safely. Model documentation and model cards help share this information clearly so others can understand and trust your model.
When you want to share your model with teammates or other teams so they know its purpose and limits
When you need to record how your model was trained and tested for future reference
When you want to be transparent about your model’s strengths and weaknesses to avoid misuse
When you prepare your model for deployment and want to include usage instructions
When you want to comply with company or legal requirements for AI transparency
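A model card is often just a short, structured document that lives next to the model. As a minimal sketch (the field names here are illustrative, not a formal standard), you could generate one in Python and save it alongside your run artifacts:

```python
from pathlib import Path

# Illustrative model card fields -- adapt these to your team's template.
card = {
    "Model": "iris-decision-tree",
    "Intended use": "Classify iris species from four flower measurements.",
    "Training data": "UCI Iris dataset (150 samples, 3 classes).",
    "Performance": "accuracy = 0.87 on a held-out test split",
    "Limitations": "Not validated on measurements outside the training range.",
}

# Render the card as Markdown: one section per field.
lines = ["# Model Card"]
for field, value in card.items():
    lines.append(f"## {field}")
    lines.append(value)
card_md = "\n\n".join(lines)

# Write it next to the model so it ships with the artifact.
Path("MODEL_CARD.md").write_text(card_md)
print(card_md.splitlines()[0])  # → # Model Card
```

With MLflow you could also attach the card to a run, for example with `mlflow.log_artifact("MODEL_CARD.md")`, so it shows up in the tracking UI next to the model's parameters and metrics.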
Commands
This command starts a local server to serve the ML model saved in MLflow. It lets you test the model and document its behavior by sending requests.
Terminal
mlflow models serve -m runs:/1234567890abcdef/model --no-conda -p 1234
Expected Output
2024/06/01 12:00:00 INFO mlflow.models.cli: Starting MLflow model server...
2024/06/01 12:00:00 INFO mlflow.models.cli: Listening on port 1234
-m - Specifies the model path to serve
--no-conda - Skips creating a new conda environment for faster startup (newer MLflow versions deprecate this flag in favor of --env-manager local)
-p - Sets the port number for the server
This command sends a test input to the running model server to get a prediction. It helps verify the model’s output matches expectations for documentation.
Terminal
curl -X POST -H 'Content-Type: application/json' --data '{"inputs": [[5.1, 3.5, 1.4, 0.2]]}' http://127.0.0.1:1234/invocations
Expected Output
{"predictions": [0]}
-X POST - Sends data to the server
-H 'Content-Type: application/json' - Sets the request body format to JSON
--data - Provides the input features for prediction
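The same request can be sent from Python, which is handy for scripted smoke tests before sharing a model. A minimal sketch (it assumes a model server from the serve command above is running on port 1234, so the actual request is wrapped in a function you call once the server is up):

```python
import json

# Input payload for the MLflow scoring server ("inputs" style).
payload = json.dumps({"inputs": [[5.1, 3.5, 1.4, 0.2]]})

def query_model(url="http://127.0.0.1:1234/invocations"):
    """POST the payload to a running MLflow model server and return its JSON response."""
    import urllib.request
    req = urllib.request.Request(
        url,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # e.g. {"predictions": [0]}

print(payload)
```

Recording a request/response pair like this in your model card gives readers a concrete, copy-pasteable usage example.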
This command launches the MLflow tracking UI in your browser. You can view model runs, parameters, metrics, and add notes to document your model clearly.
Terminal
mlflow ui
Expected Output
2024/06/01 12:05:00 INFO mlflow.server: Starting MLflow UI at http://127.0.0.1:5000
This command builds a Docker image for your ML model. It packages the model and its environment so others can run it easily with documentation included. Once built, the image can typically be started with docker run -p 5001:8080 my-model-image, since the containerized scoring server listens on port 8080 by default.
Terminal
mlflow models build-docker -m runs:/1234567890abcdef/model -n my-model-image
Expected Output
Successfully built image my-model-image
-m - Specifies the model path to package
-n - Names the Docker image
Key Concept

If you remember nothing else, remember: clear model documentation and cards make your model easy to understand, trust, and reuse.

Code Example
MLOps
import mlflow

# Log a simple parameter and metric inside a managed run;
# the context manager ends the run even if an error occurs.
with mlflow.start_run():
    mlflow.log_param("model_type", "decision_tree")
    mlflow.log_metric("accuracy", 0.87)

print("Model run logged with parameters and metrics.")
Output
Model run logged with parameters and metrics.
Common Mistakes
Skipping model documentation and only sharing the model file
Others won’t know how to use the model correctly or understand its limits, causing errors or misuse
Always create a model card or documentation that explains the model’s purpose, data, performance, and usage
Not testing the model server with real inputs before sharing
You might share a broken or misconfigured model that fails in production
Run the model server locally and send test requests to verify outputs before deployment
Ignoring the MLflow UI and tracking features for documentation
You lose valuable history and context about model training and evaluation
Use MLflow UI to log and view model runs, parameters, and notes for clear documentation
Summary
Use mlflow models serve to start a local server for your model to test and document it.
Send test inputs with curl to verify the model’s predictions before sharing.
Launch mlflow ui to view and document model runs, parameters, and metrics.
Build a Docker image with mlflow models build-docker to package your model for easy sharing.