Complete the code to load a pre-trained question answering model using Hugging Face Transformers.
from transformers import pipeline

qa_pipeline = pipeline('[1]')
The pipeline function with the task string 'question-answering' loads a model specialized for extractive question answering.
Complete the code to get the answer from the question answering pipeline.
context = "The Eiffel Tower is in Paris."
question = "Where is the Eiffel Tower located?"
result = qa_pipeline({'question': question, 'context': [1]})
The 'context' key should be assigned the text in which the answer can be found.
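The input dict the pipeline expects can be sketched as follows; the 'context' value (blank [1] above) is the passage that should contain the answer:

```python
# Shape of the input dict an extractive QA pipeline expects.
qa_input = {
    'question': 'Where is the Eiffel Tower located?',
    'context': 'The Eiffel Tower is in Paris.',  # passage holding the answer
}
```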
Fix the error in extracting the answer text from the result dictionary.
answer_text = result[[1]]

The result dictionary returned by the pipeline contains the key 'answer', which holds the predicted answer text.
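For illustration, here is a hypothetical result dict of the shape a QA pipeline returns (the score is made up; the keys are the standard ones, with 'start'/'end' giving the character offsets of the answer span in the context):

```python
# Hypothetical pipeline output for the Eiffel Tower example above.
result = {'score': 0.98, 'start': 23, 'end': 28, 'answer': 'Paris'}

answer_text = result['answer']
print(answer_text)  # Paris
```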
Fill both blanks to create a dictionary comprehension that filters answers longer than 5 characters.
filtered_answers = {q: a for q, a in answers.items() if len(a) [1] [2]}

The comprehension keeps only the answers whose length is greater than 5 characters.
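Filling the blanks with > and 5 gives a comprehension like this (the answers dict here is a made-up example):

```python
answers = {
    'Where is the Eiffel Tower?': 'Paris',
    'What is the tower made of?': 'wrought iron',
    'When was it built?': '1889',
}

# Keep only the answers longer than 5 characters.
filtered_answers = {q: a for q, a in answers.items() if len(a) > 5}
print(filtered_answers)  # {'What is the tower made of?': 'wrought iron'}
```

'Paris' (exactly 5 characters) and '1889' are dropped; only 'wrought iron' passes the length filter.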
Fill all three blanks to create a function that returns the answer from the QA pipeline given question and context.
def get_answer([1], [2]):
    result = qa_pipeline({'question': [1], 'context': [2]})
    return result[[3]]
The function takes question and context as inputs, calls the pipeline, and returns the 'answer' from the result.
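An end-to-end sketch of the completed function; to keep it runnable without downloading a model, qa_pipeline is replaced here by a stub that returns a dict of the same shape as a real QA pipeline's output, with a made-up answer:

```python
def qa_pipeline(inputs):
    """Stub standing in for pipeline('question-answering'); a real
    pipeline would extract the answer span from inputs['context']."""
    return {'score': 1.0, 'start': 23, 'end': 28, 'answer': 'Paris'}

def get_answer(question, context):
    result = qa_pipeline({'question': question, 'context': context})
    return result['answer']

print(get_answer('Where is the Eiffel Tower located?',
                 'The Eiffel Tower is in Paris.'))  # Paris
```

With a real pipeline object in place of the stub, the body of get_answer is unchanged.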