
Model serving for NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
What is the primary purpose of model serving in NLP?

Model serving is a key step after training an NLP model. What is its main goal?

A) To visualize the training loss and accuracy curves
B) To clean and preprocess the input text before training
C) To deploy the trained model so it can make predictions on new text data, in real time or in batches
D) To train the model on a larger dataset for better accuracy
💡 Hint

Think about what happens after a model is ready and you want to use it.
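To make the idea concrete, here is a minimal sketch of the train-then-serve split: a "trained" model artifact is saved once, and the serving side only loads it and answers new requests. The class name and keyword rule are illustrative, not from any real library.

```python
import io
import pickle

# A stand-in for a trained NLP model (illustrative; real serving would
# load a model produced by an actual training run).
class KeywordSentimentModel:
    def predict(self, text: str) -> str:
        return 'positive' if 'love' in text else 'negative'

# "Training" side: serialize the finished model once.
buffer = io.BytesIO()
pickle.dump(KeywordSentimentModel(), buffer)

# "Serving" side: load the artifact and answer a new request --
# no training happens here, only inference.
buffer.seek(0)
model = pickle.load(buffer)
print(model.predict('I love this!'))   # -> positive
```

The key point is that serving never touches the training loop: it only loads the finished model and runs inference on incoming text.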

Predict Output · intermediate
What output does this Flask model serving code produce?

Given this simple Flask app serving an NLP sentiment model, what JSON response is returned for a POST request with text 'I love this!'?

from flask import Flask, request, jsonify
app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    data = request.json
    text = data.get('text', '')
    # Dummy sentiment prediction
    sentiment = 'positive' if 'love' in text else 'negative'
    return jsonify({'sentiment': sentiment})

# Assume app.run() is called elsewhere
A) {"error": "No text provided"}
B) {"sentiment": "negative"}
C) 500 Internal Server Error
D) {"sentiment": "positive"}
💡 Hint

Check if the word 'love' is in the input text.
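To check the answer without spinning up a server, the handler's logic can be reproduced as a plain function (a standalone re-creation of the quiz's /predict route, no Flask required):

```python
import json

# Same logic as the quiz's predict() view: read 'text' with a default,
# then apply the dummy keyword rule and serialize the result as JSON.
def predict(payload: dict) -> str:
    text = payload.get('text', '')
    sentiment = 'positive' if 'love' in text else 'negative'
    return json.dumps({'sentiment': sentiment})

print(predict({'text': 'I love this!'}))   # -> {"sentiment": "positive"}
print(predict({}))                         # missing key falls back to ''
```

Because 'love' appears in 'I love this!', the handler returns the positive branch.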

Hyperparameter · advanced
Which batch size is best for serving an NLP model with low latency?

You want to serve an NLP model that responds quickly to single user requests. Which batch size should you choose?

A) Batch size of 1 to minimize waiting time per request
B) Batch size of 64 to maximize GPU throughput
C) Batch size of 128 to reduce memory usage
D) Batch size of 256 to increase model accuracy
💡 Hint

Think about how batch size affects response time for individual requests.
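A toy latency model makes the trade-off visible. Assume (purely for illustration) that requests arrive every 10 ms and a batch runs in a flat 50 ms regardless of size; then the first request in a batch must wait for the rest of the batch to arrive before anything runs:

```python
# Illustrative numbers, not measurements from any real system.
ARRIVAL_MS = 10    # one request arrives every 10 ms
BATCH_RUN_MS = 50  # assumed flat batch execution time

def worst_case_latency(batch_size: int) -> int:
    # The first request queued into a batch waits for (batch_size - 1)
    # more arrivals before the batch executes.
    wait = (batch_size - 1) * ARRIVAL_MS
    return wait + BATCH_RUN_MS

print(worst_case_latency(1))    # 50 ms: no queueing wait
print(worst_case_latency(64))   # 680 ms: better throughput, much worse latency
```

Larger batches raise GPU utilization and throughput, but single interactive requests pay the queueing wait, which is why batch size 1 is the low-latency choice here.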

Metrics · advanced
Which metric best measures NLP model serving quality in production?

To monitor an NLP model serving system, which metric directly reflects user experience quality?

A) Loss value during training
B) Average response latency (time to get a prediction)
C) Number of model parameters
D) Training accuracy on the original dataset
💡 Hint

Think about what users notice when using a model service.
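In practice, latency is monitored from per-request timings. A minimal sketch (the sample values are made up, and the percentile index here is a simple nearest-rank approximation):

```python
# Given per-request response times in ms, report the average latency and
# an approximate 95th percentile -- the numbers users actually feel.
def latency_stats(samples_ms):
    ordered = sorted(samples_ms)
    avg = sum(ordered) / len(ordered)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return avg, p95

samples = [42, 40, 45, 41, 300, 43, 44, 39, 46, 41]  # one slow outlier
avg, p95 = latency_stats(samples)
print(f"avg={avg:.1f} ms, p95={p95} ms")
```

Training loss, parameter count, and training accuracy say nothing about the running service; response latency is what degrades the user experience when serving goes wrong.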

🔧 Debug · expert
Why does this NLP model serving code raise a KeyError?

Examine this snippet from a model serving function. Why does it raise a KeyError?

def serve_model(request_json):
    text = request_json['input_text']
    # Model prediction code here
    return {'prediction': 'positive'}

# Called with: serve_model({'text': 'Hello world'})
A) The key 'input_text' does not exist in the input dictionary, causing KeyError
B) The function is missing a return statement
C) The model prediction code is incomplete and causes a runtime error
D) The input dictionary is empty, causing KeyError
💡 Hint

Check the keys used to access the input dictionary.
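One way to harden the snippet (a sketch, with the error message chosen for illustration): use dict.get with a fallback instead of the bare ['input_text'] lookup, and accept the 'text' key the caller actually sends.

```python
# Hardened version of the quiz's serve_model: no bare key lookup, so a
# mismatched or missing key returns an error payload instead of raising.
def serve_model(request_json: dict) -> dict:
    text = request_json.get('input_text') or request_json.get('text', '')
    if not text:
        return {'error': 'No text provided'}
    # Dummy prediction, as in the quiz snippet
    return {'prediction': 'positive'}

print(serve_model({'text': 'Hello world'}))   # no KeyError now
```

The original fails because the caller passes the key 'text' while the function indexes 'input_text'; bracket indexing on a missing key raises KeyError, whereas .get returns a default.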