
API-based deployment in Prompt Engineering / GenAI - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is API-based deployment in machine learning?
API-based deployment means making a machine learning model available through an Application Programming Interface (API) so other programs or users can send data and get predictions easily.
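The idea can be sketched with Python's standard library alone. The `predict` function below is a hypothetical stand-in for a real model, and the route and port are assumptions, not a fixed convention:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for a real model: here, just the sum of the input numbers.
    return {"prediction": sum(features)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body the client sent, run the model, reply with JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = HTTPServer(("127.0.0.1", 0), PredictHandler)  # port 0 = any free port
# server.serve_forever()  # blocks; run this to actually serve requests
```

Once the server is running, any program that can speak HTTP can get predictions without ever importing the model's code.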
beginner
Why use an API for deploying machine learning models?
Using an API allows many users or applications to access the model remotely, making it easy to integrate predictions into websites, apps, or other software without sharing the model code.
intermediate
Name a common protocol used for API-based deployment.
HTTP (HyperText Transfer Protocol) is commonly used, often with REST (Representational State Transfer) style APIs to send requests and receive responses from the model server.
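As an illustration, a REST-style POST can be built with Python's standard library; the endpoint URL and payload fields below are hypothetical placeholders:

```python
import json
import urllib.request

# JSON payload for the model (hypothetical feature values).
payload = json.dumps({"features": [1.0, 2.0]}).encode()

# POST request to an assumed /predict endpoint on a local model server.
req = urllib.request.Request(
    "http://localhost:8000/predict",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the request over HTTP and
# return the server's JSON response once a model server is listening.
```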
beginner
What is a typical input and output in an API-based ML model deployment?
Input is usually data in JSON format sent in a request, like text or numbers. Output is the model's prediction or result, also in JSON, sent back in the response.
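A typical exchange, with hypothetical field names, looks like this in Python:

```python
import json

# What a client might send in the request body:
request_body = json.dumps({"text": "great product, fast shipping"})

# What the server might send back after running the model:
response_body = '{"label": "positive", "confidence": 0.93}'

result = json.loads(response_body)
print(result["label"], result["confidence"])  # → positive 0.93
```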
intermediate
How do you ensure your API-based ML deployment is scalable?
You can use cloud services, load balancers, and container orchestration tools to handle many requests at once and keep the service fast and reliable.
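One concrete knob, as a sketch: a Gunicorn configuration file (assuming the model is wrapped in a WSGI app such as Flask) that runs several worker processes on one host. The values are illustrative, not production-tuned:

```python
# gunicorn.conf.py -- illustrative settings only
import multiprocessing

bind = "0.0.0.0:8000"                          # listen on all interfaces
workers = multiprocessing.cpu_count() * 2 + 1  # common starting rule of thumb
timeout = 30                                   # restart workers stuck this long (seconds)
```

Beyond one machine, the same idea extends to multiple containers behind a load balancer, e.g. replicated pods in a container orchestrator.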
What does API stand for in API-based deployment?
A. Artificial Processing Input
B. Automated Prediction Integration
C. Application Programming Interface
D. Advanced Programming Interaction
Answer: C
Which data format is commonly used to send input data to an ML model via API?
A. XML
B. JSON
C. CSV
D. YAML
Answer: B
What is a key benefit of deploying ML models via API?
A. The model runs faster on local machines
B. It requires no internet connection
C. Only one user can access the model
D. Multiple applications can use the model remotely
Answer: D
Which protocol is most often used for API communication in ML deployment?
A. HTTP
B. SMTP
C. SSH
D. FTP
Answer: A
How can you handle many API requests to your ML model at the same time?
A. Use cloud services and load balancing
B. Ignore extra requests
C. Use a single powerful computer only
D. Limit the model to one request per hour
Answer: A
Explain how API-based deployment makes machine learning models accessible to other applications.
Hint: Think about how apps talk to each other over the internet.
Describe the steps to deploy a machine learning model using an API.
Hint: Consider what happens from the moment the model is ready to the user receiving predictions.