Challenge - 5 Problems
FastAPI LangChain Mastery
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Component Behavior
intermediate · 2:00 remaining
What is the output of this FastAPI endpoint with LangChain?
Consider a FastAPI endpoint that uses LangChain to process a prompt and return a response. What will the endpoint return when called with the prompt 'Hello'?
from fastapi import FastAPI
from langchain.llms import OpenAI

app = FastAPI()
llm = OpenAI(temperature=0)

@app.get('/generate')
async def generate(prompt: str):
    response = llm(prompt)
    return {'result': response}

# Assume llm returns 'Hello, how can I help you?' for prompt 'Hello'
💡 Hint
Think about what the LangChain OpenAI model returns for the given prompt.
The OpenAI model in LangChain processes the prompt 'Hello' and returns a generated response. The endpoint wraps that string in a JSON object under the key 'result', so the call returns {'result': 'Hello, how can I help you?'}.
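The round trip can be sketched without an API key by substituting a stub for the OpenAI wrapper (FakeLLM below is hypothetical, hard-coded to the reply assumed in the question):

```python
# Minimal sketch: a stub LLM stands in for LangChain's OpenAI wrapper,
# so the endpoint logic can be exercised without a live model.
class FakeLLM:
    def __call__(self, prompt: str) -> str:
        # Mirrors the assumption in the question: 'Hello' yields this reply.
        if prompt == 'Hello':
            return 'Hello, how can I help you?'
        return '...'

llm = FakeLLM()

def generate(prompt: str) -> dict:
    # Same shape as the FastAPI endpoint: wrap the LLM output under 'result'.
    response = llm(prompt)
    return {'result': response}

print(generate('Hello'))  # {'result': 'Hello, how can I help you?'}
```

In the real endpoint, FastAPI serializes the returned dict to a JSON body, so the HTTP response carries the same structure.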
❓ Lifecycle
intermediate · 2:00 remaining
When is the LangChain model instantiated in this FastAPI app?
Given this FastAPI app code, when is the LangChain OpenAI model created?
from fastapi import FastAPI
from langchain.llms import OpenAI

app = FastAPI()
llm = OpenAI(temperature=0)

@app.get('/generate')
async def generate(prompt: str):
    return {'result': llm(prompt)}
💡 Hint
Look at where the llm variable is defined in the code.
The llm variable is created at the module level, so it is instantiated once when the app starts, not per request.
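The lifecycle can be made visible by counting constructor calls on a stand-in class (FakeLLM is hypothetical):

```python
# Sketch: the module-level instance is built once at import time;
# only the handler body runs on each request.
class FakeLLM:
    instances = 0

    def __init__(self):
        FakeLLM.instances += 1

    def __call__(self, prompt: str) -> str:
        return f'echo: {prompt}'

llm = FakeLLM()  # constructed exactly once, when the module loads

def generate(prompt: str) -> dict:
    return {'result': llm(prompt)}  # reuses the same instance

for _ in range(3):  # simulate three requests
    generate('Hello')

print(FakeLLM.instances)  # 1
```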
📝 Syntax
advanced · 2:00 remaining
Which option correctly integrates LangChain with FastAPI for async calls?
You want to call a LangChain LLM asynchronously inside a FastAPI endpoint. Which code snippet correctly does this?
💡 Hint
Check which methods support async and how to use await properly.
LangChain exposes async counterparts of its call methods (acall on chains, and depending on your version agenerate/ainvoke on LLMs). The FastAPI endpoint must itself be declared async def and await that async method.
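A minimal sketch of the pattern, using a hypothetical stub whose acall coroutine stands in for the real async LLM call:

```python
import asyncio

class FakeAsyncLLM:
    async def acall(self, prompt: str) -> str:
        await asyncio.sleep(0)  # stand-in for non-blocking I/O to the model API
        return f'echo: {prompt}'

llm = FakeAsyncLLM()

# The endpoint must itself be a coroutine so it can await the LLM call.
async def generate(prompt: str) -> dict:
    response = await llm.acall(prompt)
    return {'result': response}

result = asyncio.run(generate('Hello'))
print(result)  # {'result': 'echo: Hello'}
```

Because the handler awaits instead of blocking, the event loop can serve other requests while the model call is in flight.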
🔧 Debug
advanced · 2:00 remaining
Why does this FastAPI endpoint code raise an error?
This FastAPI endpoint code fails as soon as the app module is loaded. What is the cause?
from fastapi import FastAPI
from langchain.llms import OpenAI

app = FastAPI()
llm = OpenAI(temperature=0)

@app.get('/generate')
def generate(prompt: str):
    response = await llm.acall(prompt)
    return {'result': response}
💡 Hint
Check the function definition and usage of await.
await is only valid inside an async def function. Because generate is declared with a plain def, Python raises a SyntaxError ('await' outside async function) when the module is imported, before any request is handled. Declaring the endpoint as async def fixes it.
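The failure can be reproduced with compile(): Python rejects await inside a plain def while parsing the source, before any request is handled. A sketch:

```python
# `await` outside `async def` fails at parse time, not per request.
bad_source = (
    'def generate(prompt):\n'
    '    response = await llm.acall(prompt)\n'
    "    return {'result': response}\n"
)

caught = None
try:
    compile(bad_source, '<endpoint>', 'exec')
except SyntaxError as exc:
    caught = exc

print(type(caught).__name__)  # SyntaxError

# Declaring the handler as a coroutine makes the same body valid:
good_source = bad_source.replace('def generate', 'async def generate', 1)
compile(good_source, '<endpoint>', 'exec')  # parses cleanly
```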
🧠 Conceptual
expert · 3:00 remaining
What is the best pattern to share a LangChain LLM instance across multiple FastAPI endpoints?
You want to use a single LangChain LLM instance efficiently across many FastAPI endpoints without recreating it each time. Which pattern is best?
💡 Hint
Consider FastAPI's recommended way to share resources safely and efficiently.
FastAPI dependency injection with a singleton provider allows controlled, efficient sharing of the LLM instance with proper lifecycle management.
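One common way to build that singleton provider is functools.lru_cache on the dependency function; the sketch below (FakeLLM and get_llm are hypothetical names) shows the pattern in plain Python, with the real wiring done via Depends(get_llm) in FastAPI:

```python
from functools import lru_cache

class FakeLLM:  # stand-in for an expensive-to-construct LangChain LLM
    def __call__(self, prompt: str) -> str:
        return f'echo: {prompt}'

@lru_cache(maxsize=1)
def get_llm() -> FakeLLM:
    # Built on first use, then cached: every caller shares one instance.
    return FakeLLM()

# In FastAPI this would be:
#   async def generate(prompt: str, llm: FakeLLM = Depends(get_llm)): ...
def generate(prompt: str) -> dict:
    llm = get_llm()
    return {'result': llm(prompt)}

print(get_llm() is get_llm())  # True: one shared instance across endpoints
```

Routing the instance through a dependency (rather than a bare module global) also makes it easy to override in tests via app.dependency_overrides.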