Which of the following is the main reason to track hardware and framework versions in an MLOps pipeline?
Think about why knowing exact versions helps when running the same model again.
Tracking hardware and framework versions helps reproduce results exactly and avoid unexpected differences in model behavior.
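To make the reproducibility point concrete, here is a minimal sketch (stdlib only; the fingerprint keys are illustrative, and a real pipeline would also record framework versions such as `tensorflow.__version__`) of recording an environment fingerprint and diffing it against a later run:

```python
import platform
import sys

def environment_fingerprint():
    """Minimal fingerprint of the runtime environment.

    Framework versions (e.g. tensorflow.__version__) would be added
    the same way when the library is installed.
    """
    return {
        "python": sys.version.split()[0],
        "os": platform.system(),
        "machine": platform.machine(),
    }

def check_reproducibility(recorded, current):
    """Return the keys whose values differ between a recorded run and
    the current environment -- each mismatch is a potential source of
    unexpected differences in model behavior."""
    return sorted(k for k in recorded if current.get(k) != recorded[k])
```

Comparing the recorded fingerprint before re-running an experiment turns "the results look different" into a concrete list of changed versions.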
What is the output of the following Linux command used to check GPU info?
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
The command queries GPU name and total memory in CSV format without header.
For each GPU, the command prints one line containing the GPU name and its total memory (reported in MiB), separated by a comma, with no header row.
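A short sketch of parsing that output in Python (the sample string is hypothetical output for a two-GPU machine, not captured from real hardware):

```python
def parse_gpu_csv(output: str):
    """Parse the output of:
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader

    Each line has the form `<name>, <memory> MiB`; returns a list of
    (name, memory) tuples, one per GPU.
    """
    gpus = []
    for line in output.strip().splitlines():
        name, mem = (field.strip() for field in line.split(",", 1))
        gpus.append((name, mem))
    return gpus

# Hypothetical output for a machine with two identical GPUs:
sample = "NVIDIA A100-SXM4-40GB, 40960 MiB\nNVIDIA A100-SXM4-40GB, 40960 MiB"
print(parse_gpu_csv(sample))
```

In a pipeline, `subprocess.check_output` would supply the string instead of the hard-coded sample.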
Which YAML snippet correctly specifies TensorFlow version 2.12.0 and CUDA version 11.8 for an MLOps environment config?
Look for correct keys and exact version strings.
Option A uses unambiguous keys and pins exact version strings (2.12.0 and 11.8), matching common environment-config conventions; vague keys or unpinned versions make the environment irreproducible.
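A sketch of what such a snippet might look like (key names vary by tooling; the layout below is an assumption, not a fixed standard — the essential part is pinning exact versions as quoted strings):

```yaml
# Hypothetical environment config; key names depend on your tooling.
environment:
  framework:
    name: tensorflow
    version: "2.12.0"
  cuda:
    version: "11.8"
```

Quoting the versions prevents YAML from parsing a value like 11.8 as a float.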
You upgraded TensorFlow from 2.11 to 2.12 on your training server. Suddenly, your model training script fails with an error about missing attributes. What is the most likely cause?
Think about what changes between framework versions can break code.
Framework upgrades can change or remove public APIs; code that references attributes removed or renamed in the new version fails with errors about missing attributes.
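One defensive pattern is to resolve the attributes your script depends on up front, so a renamed or removed API fails with a clear message instead of deep inside training. A minimal sketch (`resolve_attr` is a hypothetical helper, demonstrated here with the stdlib `math` module rather than TensorFlow):

```python
import math

def resolve_attr(module, dotted_name):
    """Walk a dotted attribute path on a module, raising a clear
    AttributeError if any segment is missing -- the typical failure
    mode after a framework upgrade."""
    obj = module
    for part in dotted_name.split("."):
        if not hasattr(obj, part):
            raise AttributeError(
                f"{module.__name__} has no '{dotted_name}'; "
                "the API may have moved or been removed in this version"
            )
        obj = getattr(obj, part)
    return obj

print(resolve_attr(math, "sqrt")(9.0))  # → 3.0
```

Checking these paths at import time (or asserting the framework version itself) surfaces the incompatibility before any compute is spent.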
Which workflow best automates tracking hardware and framework versions during model training in an MLOps pipeline?
Automation and metadata logging are key for reliable tracking.
Automated scripts that log hardware and framework info into experiment metadata ensure accurate and consistent tracking.
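The workflow above can be sketched with the stdlib alone (the `run_metadata.json` filename is an assumption; real pipelines typically hand this dictionary to an experiment tracker instead of a flat file):

```python
import json
import platform
import subprocess
import sys
from datetime import datetime, timezone

def log_run_metadata(path="run_metadata.json"):
    """Capture interpreter, OS, and (if available) GPU info at the
    start of a training run and write it next to the run's artifacts."""
    meta = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }
    try:
        gpus = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            text=True,
        )
        meta["gpus"] = [line.strip() for line in gpus.splitlines() if line.strip()]
    except (OSError, subprocess.CalledProcessError):
        meta["gpus"] = []  # no GPU or driver on this machine
    with open(path, "w") as f:
        json.dump(meta, f, indent=2)
    return meta
```

Calling `log_run_metadata()` at the top of every training script makes version capture automatic rather than something engineers must remember to do.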