
Mobile deployment (TFLite, Core ML) in Computer Vision - Practice Problems & Coding Challenges

Challenge: 5 Problems
🧠 Conceptual (intermediate)
Understanding TFLite Model Conversion

Which of the following statements about converting a TensorFlow model to TensorFlow Lite (TFLite) format is correct?

A. TFLite models can only run on Android devices, not iOS.
B. TFLite conversion always requires retraining the model from scratch.
C. TFLite conversion can include optimizations like quantization to reduce model size and improve speed.
D. TFLite conversion automatically converts all Python code in the model to C++.
💡 Hint

Think about what optimizations help models run faster and smaller on mobile devices.
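As a sketch of the conversion flow in practice: the standard `tf.lite.TFLiteConverter` serializes an existing model graph (no retraining) and can apply post-training optimizations such as weight quantization. The tiny `model_fn` below is a hypothetical placeholder; normally you would convert a trained model.

```python
import tensorflow as tf

# Hypothetical stand-in model; in practice you convert a trained model.
@tf.function(input_signature=[tf.TensorSpec([1, 224, 224, 3], tf.float32)])
def model_fn(x):
    return tf.reduce_mean(x, axis=[1, 2])  # placeholder computation

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model_fn.get_concrete_function()])
# Enables post-training optimizations, including weight quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()  # a FlatBuffer, ready to ship to a device
print(f"{len(tflite_bytes)} bytes")
```

Note that the converter never touches Python source code: it serializes the computation graph and weights, which the TFLite runtime (available on both Android and iOS) then executes.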

Predict Output (intermediate)
Output Shape After TFLite Model Inference

Given a TFLite model that takes an input image of shape (1, 224, 224, 3) and outputs a tensor of shape (1, 1000), what does the output represent?

A. A batch of 1 image with 1000 class probabilities.
B. 1000 images, each of size 1x224x224x3.
C. A single scalar value representing the predicted class index.
D. A 4D tensor representing feature maps for 1000 layers.
💡 Hint

Consider typical classification model outputs and their shapes.
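A minimal numpy sketch of what a `(1, 1000)` output looks like, using random logits as a hypothetical stand-in for a real model's output tensor:

```python
import numpy as np

# Hypothetical logits standing in for a TFLite classifier's output tensor.
rng = np.random.default_rng(0)
output = rng.normal(size=(1, 1000)).astype(np.float32)  # shape (1, 1000)

# Batch dimension of 1, one score per class (e.g. the 1000 ImageNet classes).
probs = np.exp(output) / np.exp(output).sum(axis=1, keepdims=True)  # softmax
top_class = int(np.argmax(probs, axis=1)[0])

print(output.shape)  # (1, 1000): one image, 1000 class scores
```

The first axis matches the batch size of the input `(1, 224, 224, 3)`; the second axis holds one score per class.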

Model Choice (advanced)
Choosing a Model for Core ML Deployment

You want to deploy a computer vision model on iOS using Core ML. Which model architecture is best suited for mobile deployment considering speed and size?

A. MobileNetV2 with depthwise separable convolutions.
B. VGG-19 with large fully connected layers.
C. ResNet-152 with 60 million parameters.
D. DenseNet-201 with dense connections.
💡 Hint

Think about models designed specifically for mobile and embedded devices.
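To see why depthwise separable convolutions matter for mobile, compare parameter counts. The layer sizes below are illustrative, not a specific MobileNetV2 layer:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (ignoring bias)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k conv per input channel, plus a 1x1 pointwise conv."""
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 128 input channels, 256 output channels.
standard = conv_params(3, 128, 256)                   # 294,912 weights
separable = depthwise_separable_params(3, 128, 256)   # 33,920 weights
print(f"{standard / separable:.1f}x fewer parameters")  # ~8.7x
```

Multiplied across a whole network, this is what keeps MobileNet-family models small and fast enough for on-device inference.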

Hyperparameter (advanced)
Optimizing Quantization Parameters for TFLite

When applying post-training quantization to a TensorFlow model before converting to TFLite, which hyperparameter setting most directly affects the model's size reduction?

A. Choosing the number of training epochs.
B. Changing the batch size during inference.
C. Adjusting the learning rate during training.
D. Selecting the bit-width for weights and activations (e.g., 8-bit vs 16-bit).
💡 Hint

Quantization reduces precision to shrink model size.
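A numpy sketch of the size arithmetic behind bit-width selection, using a hypothetical weight matrix and simple affine (asymmetric) 8-bit quantization, where w ≈ scale * (q - zero_point):

```python
import numpy as np

# Hypothetical weight tensor; real models have millions of such values.
weights = np.random.default_rng(0).normal(size=(1000, 1000)).astype(np.float32)

# Affine quantization to 8 bits: map [min, max] onto the integers 0..255.
scale = (weights.max() - weights.min()) / 255.0
zero_point = np.round(-weights.min() / scale).astype(np.int32)
q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)

print(weights.nbytes)  # 4,000,000 bytes at 32-bit float
print(q.nbytes)        # 1,000,000 bytes at 8-bit -> 4x smaller
```

Halving the bit-width again (e.g. 16-bit to 8-bit) halves storage again, which is why the chosen bit-width, not training-time settings like epochs or learning rate, most directly determines the size reduction.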

🔧 Debug (expert)
Debugging Core ML Model Input Shape Mismatch

You converted a TensorFlow model to Core ML but get an error when running inference: "Input shape mismatch: expected (1, 224, 224, 3) but got (3, 224, 224)". What is the most likely cause?

A. The Core ML model expects channels-last format, but the input is channels-first.
B. The batch dimension is missing in the input tensor provided to Core ML.
C. The input image size is incorrect; it should be 224x224 pixels.
D. The model was converted without specifying the output layer.
💡 Hint

Check if the input tensor includes the batch size dimension.
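A numpy sketch of reshaping the offending `(3, 224, 224)` input into the layout the error message expects. The array here is a hypothetical channels-first image, as a PyTorch-style preprocessing pipeline might produce:

```python
import numpy as np

# Hypothetical channels-first image: (C, H, W) -> triggers the mismatch.
img_chw = np.zeros((3, 224, 224), dtype=np.float32)

# The model expects (1, 224, 224, 3): move channels last, then add
# the batch dimension in front.
img_nhwc = np.expand_dims(np.transpose(img_chw, (1, 2, 0)), axis=0)
print(img_nhwc.shape)  # (1, 224, 224, 3)
```

Comparing `(3, 224, 224)` against `(1, 224, 224, 3)` dimension by dimension shows exactly which of the candidate causes the error message supports.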