When you send a prompt to a GenAI API, what is the typical structure of the response you receive?
Think about what extra information might be useful besides just the generated text.
GenAI APIs usually return a JSON object that includes the generated text, usage statistics such as token counts, and metadata such as the model version. This helps users interpret the output and manage their usage.
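As a minimal sketch, assuming an OpenAI-style response shape (field names vary by provider, and the model identifier here is hypothetical), the pieces of a response can be pulled apart like this:

```python
# A typical chat-style GenAI API response, represented as a Python dict.
# The exact keys are an assumption based on the OpenAI-style layout.
response = {
    'choices': [{'message': {'content': 'GenAI APIs return structured JSON.'}}],
    'usage': {'prompt_tokens': 8, 'completion_tokens': 6, 'total_tokens': 14},
    'model': 'example-model-v1',  # hypothetical model-version metadata
}

text = response['choices'][0]['message']['content']  # the generated text
tokens_used = response['usage']['total_tokens']      # usage statistics
model = response['model']                            # metadata

print(text)
print(f"{tokens_used} tokens, model {model}")
```

Keeping these three parts separate (text, usage, metadata) is what lets callers both display the answer and track consumption.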
What will be the output of this Python code snippet that calls a GenAI API with a prompt?
response = {
    'choices': [{'message': {'content': 'Hello, how can I help you today?'}}],
    'usage': {'prompt_tokens': 5, 'completion_tokens': 7, 'total_tokens': 12}
}
print(response['choices'][0]['message']['content'])
Look at how the dictionary keys are accessed to get the message content.
The code accesses the first choice's message content and prints it, so the output is the string inside 'content': Hello, how can I help you today?
You want to use a GenAI API to summarize long articles quickly and accurately. Which model type should you choose?
Think about which model is trained for text summarization.
Models fine-tuned for summarization understand how to condense text while keeping key points, making them best for this task.
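As a hedged sketch of what a summarization request might look like, assuming a chat-style payload (the model name and payload shape here are illustrative assumptions, not a specific provider's API):

```python
# Hypothetical request payload for summarizing a long article.
# 'summarization-tuned-model' is a placeholder, not a real model name.
payload = {
    'model': 'summarization-tuned-model',
    'messages': [
        # The system message constrains the task to condensing the text.
        {'role': 'system', 'content': 'Summarize the article in 3 sentences.'},
        {'role': 'user', 'content': 'Long article text goes here...'},
    ],
    'max_tokens': 120,  # cap the summary length
}
```

The key design choice is pairing a model suited to condensing text with an instruction that states the desired summary length.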
What happens if you increase the temperature parameter in a GenAI API call?
Temperature controls randomness in text generation.
Higher temperature values make the model pick less likely words, increasing creativity and randomness in the output.
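The effect can be illustrated with a small softmax demo: dividing the next-word scores (logits) by the temperature before normalizing flattens the distribution, so less likely words gain probability. The logit values below are made up for illustration.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate words
low = softmax_with_temperature(logits, 0.5)   # low temperature: peaked
high = softmax_with_temperature(logits, 2.0)  # high temperature: flatter

# The most likely word dominates less at high temperature,
# so sampling picks less likely words more often.
print(low[0] > high[0])  # True
```

At temperature 0.5 the top word takes most of the probability mass; at 2.0 the three options are much closer, which is why outputs feel more creative and random.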
After a GenAI API call, you receive usage metrics: prompt_tokens=50, completion_tokens=150, total_tokens=200. What does this tell you?
Think about what prompt and completion tokens represent.
Prompt tokens count the input text tokens, completion tokens count the generated output tokens, and total tokens is their sum.
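A short sketch of how these metrics are typically used, with the exact usage numbers from the question; the per-token prices are purely illustrative assumptions, since real pricing varies by provider and model:

```python
usage = {'prompt_tokens': 50, 'completion_tokens': 150, 'total_tokens': 200}

# Sanity check: total_tokens is the sum of prompt and completion tokens.
assert usage['total_tokens'] == usage['prompt_tokens'] + usage['completion_tokens']

# Billing is often quoted per 1,000 tokens, with input and output
# priced differently. These rates are made up for illustration.
price_per_1k_prompt = 0.001
price_per_1k_completion = 0.002
cost = (usage['prompt_tokens'] / 1000) * price_per_1k_prompt \
     + (usage['completion_tokens'] / 1000) * price_per_1k_completion
print(f"Estimated cost: ${cost:.6f}")
```

Tracking prompt and completion tokens separately matters because output tokens often cost more than input tokens.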