ml-python · How-To · Beginner · 4 min read

How to Use Seldon Core for Deploying Machine Learning Models

To use Seldon Core, package your model as a Docker container (or use one of Seldon's pre-packaged model servers), define a SeldonDeployment YAML manifest, and apply it to a Kubernetes cluster with kubectl apply. Seldon Core then handles serving, scaling, and monitoring automatically.

Syntax

The main syntax for using Seldon Core involves creating a SeldonDeployment YAML file that describes your model deployment. Key parts include:

  • apiVersion: Defines the Seldon Core API version.
  • kind: Always SeldonDeployment for model deployments.
  • metadata: Contains the deployment name and namespace.
  • spec: Describes the model graph, replicas, and container image.

You then apply this YAML to your Kubernetes cluster using kubectl apply -f deployment.yaml.

yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: my-model
  namespace: seldon
spec:
  predictors:
  - name: default
    replicas: 1
    graph:
      name: model
      implementation: SKLEARN_SERVER
      modelUri: gs://my-bucket/my-model/
    componentSpecs:
    - spec:
        containers:
        - name: model
          image: seldonio/sklearnserver:1.14.0
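
Because kubectl apply also accepts JSON manifests, the same structure can be built programmatically. A minimal sketch, reusing the placeholder bucket path and names from the manifest above:

```python
import json

# Build the SeldonDeployment manifest as a plain dict. The bucket path,
# names, and namespace are placeholders from the example manifest.
manifest = {
    "apiVersion": "machinelearning.seldon.io/v1",
    "kind": "SeldonDeployment",
    "metadata": {"name": "my-model", "namespace": "seldon"},
    "spec": {
        "predictors": [
            {
                "name": "default",
                "replicas": 1,
                "graph": {
                    "name": "model",
                    "implementation": "SKLEARN_SERVER",
                    "modelUri": "gs://my-bucket/my-model/",
                },
            }
        ]
    },
}

# Write it out, then deploy with: kubectl apply -f deployment.json
with open("deployment.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

This is handy when manifests are generated from a model registry or CI pipeline rather than written by hand.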

Example

This example shows how to deploy a simple sklearn model using Seldon Core on Kubernetes. It uses the official sklearnserver image and points to a model stored in Google Cloud Storage.

yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: sklearn-iris
  namespace: seldon
spec:
  predictors:
  - name: default
    replicas: 1
    graph:
      name: classifier
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/iris
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: seldonio/sklearnserver:1.14.0
Output
seldondeployment.machinelearning.seldon.io/sklearn-iris created
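
Once the deployment is ready, Seldon Core exposes a REST prediction endpoint through your ingress following the pattern /seldon/&lt;namespace&gt;/&lt;deployment&gt;/api/v1.0/predictions. A sketch of building the request; the host and port are assumptions (e.g. a port-forwarded Istio gateway), so adjust them to your setup:

```python
import json

# Assumed ingress address -- e.g. an Istio gateway port-forwarded locally.
host = "localhost:8003"
namespace = "seldon"
deployment = "sklearn-iris"

# Seldon Core v1 REST prediction endpoint path pattern.
url = f"http://{host}/seldon/{namespace}/{deployment}/api/v1.0/predictions"

# One iris sample: sepal length/width, petal length/width (cm).
payload = {"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}

print(url)
print(json.dumps(payload))
# Against a live cluster, send it with curl or requests.post(url, json=payload).
```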

Common Pitfalls

Common mistakes when using Seldon Core include:

  • Not packaging the model correctly as a Docker image or using an unsupported model server.
  • Incorrect modelUri paths, which prevent the model from loading.
  • Missing Kubernetes namespace or permissions for deploying Seldon resources.
  • Forgetting to install Seldon Core operator in the cluster before deploying models.

Always verify your Kubernetes context and that the Seldon operator is running.

yaml
### Wrong: Missing modelUri
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: bad-deploy
spec:
  predictors:
  - name: default
    replicas: 1
    graph:
      name: model
      implementation: SKLEARN_SERVER

### Right: Include modelUri
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: good-deploy
spec:
  predictors:
  - name: default
    replicas: 1
    graph:
      name: model
      implementation: SKLEARN_SERVER
      modelUri: gs://my-bucket/my-model/
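
The missing-modelUri mistake can be caught before applying the manifest. A hypothetical preflight check (the missing_model_uris helper is illustrative, not part of Seldon Core) that walks a predictor graph, including nested children:

```python
# Hypothetical preflight check, not part of Seldon Core: pre-packaged
# servers load their model from modelUri, so flag any node that names
# a *_SERVER implementation but omits the field.
def missing_model_uris(graph):
    missing = []
    if "SERVER" in graph.get("implementation", "") and not graph.get("modelUri"):
        missing.append(graph.get("name", "<unnamed>"))
    # Seldon graphs can nest nodes under "children"; check those too.
    for child in graph.get("children", []):
        missing.extend(missing_model_uris(child))
    return missing

bad_graph = {"name": "model", "implementation": "SKLEARN_SERVER"}
good_graph = {
    "name": "model",
    "implementation": "SKLEARN_SERVER",
    "modelUri": "gs://my-bucket/my-model/",
}

print(missing_model_uris(bad_graph))   # flags the node
print(missing_model_uris(good_graph))  # empty list, nothing to fix
```

Run against the spec.predictors[].graph section of your manifest before kubectl apply to fail fast instead of debugging a crashed model pod.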

Quick Reference

Key tips for using Seldon Core:

  • Install the Seldon Core operator on your Kubernetes cluster first.
  • Package your model as a Docker container or use supported pre-built servers.
  • Write a SeldonDeployment YAML manifest describing your model graph and replicas.
  • Deploy with kubectl apply -f deployment.yaml.
  • Use kubectl get seldondeployments to check deployment status.

Key Takeaways

  • Seldon Core deploys ML models on Kubernetes using a SeldonDeployment YAML manifest.
  • Always install the Seldon Core operator before deploying models.
  • Ensure your model is packaged correctly and the modelUri path is valid.
  • Use kubectl commands to apply deployments and monitor status.
  • Common errors include a missing modelUri and incorrect Kubernetes setup.