Practice - 5 Tasks
Answer the questions below
Task 1 (fill in the blank, easy): Complete the code to save the model with reproducible results.
MLOps
model.save('model_[1].h5')
Common mistake: using generic names like 'temp' or 'test', which don't track versions.
Explanation: a version suffix like 'v1' keeps track of reproducible model builds.
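The versioned-filename convention can be sketched with a plain string; the model object and Keras's .save() call are assumed from the task and only referenced in a comment:

```python
# Minimal sketch of the versioned naming convention from this task.
# The model object and Keras's .save() are assumed; only naming is shown.
version = "v1"
filename = f"model_{version}.h5"

# With an actual Keras model this would be: model.save(filename)
print(filename)  # model_v1.h5
```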
Task 2 (fill in the blank, medium): Complete the code to set a fixed random seed for reproducibility.
MLOps
import numpy as np
np.random.seed([1])
Common mistake: using None or 'random', which do not fix the seed.
Explanation: setting a fixed seed such as 42 makes the generated random numbers identical on every run.
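A quick way to check the fix: seed the generator, draw some numbers, reset to the same seed, and draw again:

```python
import numpy as np

# Fix the seed and draw three random numbers.
np.random.seed(42)
first = np.random.rand(3)

# Reset to the same seed: the generator replays the same sequence.
np.random.seed(42)
second = np.random.rand(3)

assert np.allclose(first, second)  # identical draws on every run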
Task 3 (fill in the blank, hard): Fix the error in the code to log parameters for reproducibility.
MLOps
mlflow.log_param('learning_rate', [1])
Common mistake: passing the learning rate as a string or variable name instead of a number.
Explanation: logging the parameter as a number (0.01) is correct for reproducibility tracking.
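Running the real call requires an active MLflow run, so here is a tiny hypothetical stand-in (the log_param function and params dict below are illustrative, not MLflow's API) showing why the value should be numeric:

```python
# Hypothetical stand-in for mlflow.log_param, for illustration only:
# parameters are recorded as name/value pairs, and storing the value as
# the number 0.01 (not the string '0.01') keeps runs comparable.
params = {}

def log_param(name, value):
    params[name] = value

log_param('learning_rate', 0.01)
assert isinstance(params['learning_rate'], float)
```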
Task 4 (fill in the blank, hard): Fill both blanks to create a reproducible data split.
MLOps
train_data, test_data = train_test_split(data, test_size=[1], random_state=[2])
Common mistake: using None for random_state or an incorrect test size.
Explanation: test_size=0.2 and random_state=42 ensure the split is reproducible.
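Assuming scikit-learn is installed, the reproducibility of the filled-in split can be verified by running it twice and comparing:

```python
from sklearn.model_selection import train_test_split

data = list(range(10))

# The same test_size and random_state give the same split on every run.
train_a, test_a = train_test_split(data, test_size=0.2, random_state=42)
train_b, test_b = train_test_split(data, test_size=0.2, random_state=42)

assert train_a == train_b and test_a == test_b
assert len(test_a) == 2  # 20% of 10 samples
```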
Task 5 (fill in the blank, hard): Fill all three blanks to log model metrics reproducibly.
MLOps
mlflow.log_metric('[1]', [2], step=[3])
Common mistake: mixing up metric names and values, or omitting the step parameter.
Explanation: logging 'accuracy' with value 0.95 at step 1 tracks metrics reproducibly.
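As with the parameter task, a tiny hypothetical stand-in (not MLflow's API) shows why all three pieces matter: the step keys each value, so a training curve can be reconstructed exactly from a logged run:

```python
# Hypothetical stand-in for mlflow.log_metric, for illustration only:
# metrics are keyed by name and step, so the full metric history of a
# run can be replayed in order.
metrics = {}

def log_metric(name, value, step):
    metrics.setdefault(name, {})[step] = value

log_metric('accuracy', 0.95, step=1)
log_metric('accuracy', 0.97, step=2)

assert metrics['accuracy'][1] == 0.95
```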