LangChain - LLM and Chat Model Integration
If you set `temperature` to 1.5 and `max_tokens` to 10, what is the expected behavior of the LangChain model? In short: a temperature of 1.5 makes sampling noticeably more random (the token probability distribution is flattened, so less likely tokens are chosen more often), while `max_tokens=10` hard-caps the completion at 10 tokens, so the output will be very short and will often be cut off mid-sentence. In LangChain these are typically passed when constructing the model, e.g. `ChatOpenAI(temperature=1.5, max_tokens=10)` from the `langchain_openai` integration.
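The effect of these two parameters can be illustrated with a toy sketch in plain Python (this simulates the sampling math, it is not LangChain's or the provider's actual implementation): `temperature` divides the logits before the softmax, flattening the distribution as it grows, and `max_tokens` simply truncates the generated sequence.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; higher temperature flattens the
    distribution, making less likely tokens relatively more probable."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]

# Near-greedy sampling: the top token dominates.
low_temp = softmax_with_temperature(logits, 0.2)

# temperature=1.5: probabilities move much closer together -> more randomness.
high_temp = softmax_with_temperature(logits, 1.5)

# max_tokens caps the completion length, regardless of where the model
# "wanted" to stop. Here a hypothetical 25-token completion is cut at 10.
generated = [f"tok{i}" for i in range(25)]
completion = generated[:10]  # max_tokens=10 -> truncated, often mid-sentence
```

Running this, `high_temp[0]` is smaller than `low_temp[0]` (the top token is less dominant at temperature 1.5), and `completion` always has exactly 10 tokens, which is why combining a high temperature with a small `max_tokens` tends to produce short, erratic, abruptly truncated output.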