MLOps · DevOps · ~10 mins

Kubeflow Pipelines overview in MLOps - Step-by-Step Execution

Process Flow - Kubeflow Pipelines overview
1. Define pipeline components
2. Assemble components into pipeline
3. Compile pipeline to YAML
4. Upload pipeline to Kubeflow UI
5. Run pipeline experiment
6. Pipeline executes steps in order
7. Monitor step status and logs
8. View results and outputs
9. Pipeline completes
Kubeflow Pipelines lets you build, run, and monitor machine learning workflows step-by-step using components assembled into a pipeline.
Execution Sample
from kfp import dsl

@dsl.component
def preprocess():
    # Placeholder for data preparation logic
    pass

@dsl.component
def train():
    # Placeholder for model training logic
    pass

@dsl.pipeline(name='simple-pipeline')
def pipeline():
    preprocess_task = preprocess()
    # .after() makes train run only once preprocess has finished
    train_task = train().after(preprocess_task)
Defines two steps and assembles them into a simple Kubeflow pipeline.
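The next stage in the flow, compiling the pipeline to YAML, can be done with the KFP v2 SDK's compiler. A minimal sketch, assuming `kfp` is installed (`pip install kfp`); it falls back to a message when it is not:

```python
# Sketch: compiling a pipeline definition to YAML (assumes the KFP v2 SDK).
# If kfp is not installed, the script only reports what it would do.
try:
    from kfp import dsl, compiler
    HAVE_KFP = True
except ImportError:
    HAVE_KFP = False

if HAVE_KFP:
    try:
        @dsl.component
        def preprocess():
            pass

        @dsl.pipeline(name='simple-pipeline')
        def pipeline():
            preprocess()

        # Converts the Python definition into pipeline.yaml,
        # the file uploaded to the Kubeflow UI in step 7.
        compiler.Compiler().compile(pipeline_func=pipeline,
                                    package_path='pipeline.yaml')
        print('wrote pipeline.yaml')
    except Exception as exc:  # e.g. an older KFP v1 SDK without dsl.component
        print('compile failed:', exc)
else:
    print('kfp not installed; skipping compilation')
```

The generated `pipeline.yaml` is what the Kubeflow UI accepts at upload time.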
Process Table
| Step | Action | Evaluation | Result |
| --- | --- | --- | --- |
| 1 | Define preprocess component | Function created | preprocess component ready |
| 2 | Define train component | Function created | train component ready |
| 3 | Create pipeline function | Pipeline function defined | pipeline structure ready |
| 4 | Instantiate preprocess_task | Call preprocess() | preprocess step added |
| 5 | Instantiate train_task | Call train() | train step added |
| 6 | Compile pipeline | Convert to YAML | pipeline.yaml generated |
| 7 | Upload pipeline.yaml | Upload to Kubeflow UI | Pipeline available in UI |
| 8 | Run pipeline | Start experiment | Pipeline execution started |
| 9 | Execute preprocess step | Run preprocess | Preprocess step completed |
| 10 | Execute train step | Run train | Train step completed |
| 11 | Pipeline completes | All steps done | Pipeline run successful |
💡 All pipeline steps executed successfully; the pipeline run ends.
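Steps 7 and 8 in the table use the Kubeflow UI, but the same upload-and-run flow can be driven programmatically with the SDK client. A hedged sketch, assuming `kfp` is installed and a Kubeflow Pipelines endpoint is reachable; the host URL is a placeholder:

```python
# Sketch: submitting a compiled pipeline as a run via the KFP SDK client.
# Assumes `pip install kfp`; the endpoint URL below is a placeholder.
try:
    import kfp
    HAVE_KFP = True
except ImportError:
    HAVE_KFP = False

if HAVE_KFP:
    try:
        client = kfp.Client(host='http://localhost:8080')  # placeholder endpoint
        # Uploads pipeline.yaml and starts a run, mirroring steps 7-8 in the table.
        run = client.create_run_from_pipeline_package(
            pipeline_file='pipeline.yaml',
            arguments={},
        )
        print('started run', run.run_id)
    except Exception as exc:  # no endpoint reachable in this sketch
        print('could not reach KFP endpoint:', exc)
else:
    print('kfp not installed; would submit pipeline.yaml as a run')
```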
Status Tracker
| Variable | Start | After Step 4 | After Step 5 | After Step 8 | After Step 11 |
| --- | --- | --- | --- | --- | --- |
| preprocess_task | None | preprocess component instance | preprocess component instance | running | completed |
| train_task | None | None | train component instance | pending | completed |
| pipeline_status | Not started | Not started | Not started | Running | Succeeded |
Key Moments - 3 Insights
Why do we define components before assembling the pipeline?
Components are the building blocks; defining them first (see steps 1 and 2 in the process table) allows us to reuse and organize tasks before creating the pipeline.
What happens when we compile the pipeline?
Compiling (step 6) converts the Python pipeline code into a YAML file that Kubeflow understands to run the workflow.
How does Kubeflow execute the pipeline steps?
Kubeflow runs steps in order respecting dependencies (steps 9 and 10), monitoring each step's status until completion.
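The ordering behaviour described above can be illustrated without Kubeflow at all. A minimal pure-Python sketch (a hypothetical helper, not the KFP API) that runs each step only after its dependencies have completed:

```python
# Minimal sketch of dependency-ordered execution, mimicking how Kubeflow
# runs 'train' only after 'preprocess' succeeds. Not the KFP API.

def run_pipeline(steps, deps):
    """steps: name -> callable; deps: name -> list of prerequisite names."""
    status = {name: 'pending' for name in steps}
    order = []
    remaining = set(steps)
    while remaining:
        # Pick every step whose prerequisites have all completed.
        ready = [s for s in remaining
                 if all(status[d] == 'completed' for d in deps.get(s, []))]
        if not ready:
            raise RuntimeError('cycle or unmet dependency')
        for step in ready:
            status[step] = 'running'
            steps[step]()            # execute the step
            status[step] = 'completed'
            order.append(step)
            remaining.discard(step)
    return order, status

order, status = run_pipeline(
    steps={'preprocess': lambda: None, 'train': lambda: None},
    deps={'train': ['preprocess']},  # train runs after preprocess
)
print(order)  # preprocess first, then train
```

Here `deps` plays the role of `.after()` in the KFP DSL: 'train' stays pending until 'preprocess' reports completion.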
Visual Quiz - 3 Questions
Test your understanding
Looking at the status tracker, what is the state of 'train_task' after step 5?
A. running
B. None
C. train component instance
D. completed
💡 Hint
Check the 'train_task' row in the status tracker after Step 5.
At which step does the pipeline start running in Kubeflow?
A. Step 6
B. Step 8
C. Step 9
D. Step 11
💡 Hint
Look for 'Pipeline execution started' in the process table.
If the 'preprocess' step fails, what happens to the pipeline execution?
A. Pipeline stops and marks failure
B. Pipeline continues to 'train' step
C. Pipeline retries 'train' step
D. Pipeline skips 'preprocess' and completes
💡 Hint
Refer to the execution flow where steps run in order and depend on previous success.
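The failure behaviour this question probes can also be sketched in plain Python (an illustration, not the KFP API): when a step raises, its dependents never run and the pipeline as a whole is marked failed:

```python
# Sketch: a failing step stops the pipeline; downstream steps never run.
# Plain Python illustration, not the KFP API.

def run_with_failure(steps, deps):
    """steps: name -> callable, listed in dependency order; deps: name -> prerequisites."""
    status = {name: 'pending' for name in steps}
    for name in steps:
        if any(status[d] != 'completed' for d in deps.get(name, [])):
            status[name] = 'skipped'   # a prerequisite did not succeed
            continue
        try:
            status[name] = 'running'
            steps[name]()
            status[name] = 'completed'
        except Exception:
            status[name] = 'failed'
    pipeline_status = ('Succeeded'
                       if all(v == 'completed' for v in status.values())
                       else 'Failed')
    return status, pipeline_status

def broken_preprocess():
    raise RuntimeError('bad input data')

status, pipeline_status = run_with_failure(
    steps={'preprocess': broken_preprocess, 'train': lambda: None},
    deps={'train': ['preprocess']},
)
print(status, pipeline_status)
```

This mirrors answer A above: a failed 'preprocess' leaves 'train' unexecuted and the run marked as failed.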
Concept Snapshot
Kubeflow Pipelines let you build ML workflows by:
- Defining reusable components (steps)
- Assembling them into a pipeline function
- Compiling to YAML for Kubeflow
- Uploading and running via Kubeflow UI
- Monitoring step status and outputs
Steps run in order respecting dependencies.
Full Transcript
Kubeflow Pipelines help you create machine learning workflows by defining small tasks called components. You write these components as functions, then combine them into a pipeline function. This pipeline is compiled into a YAML file that Kubeflow understands. You upload this YAML to the Kubeflow UI and start an experiment to run the pipeline. Kubeflow runs each step in order, showing status and logs. When all steps finish, the pipeline run completes successfully.