Complete the code to load a pre-trained language model using Hugging Face Transformers.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained([1])
The correct model for language modeling here is "gpt2", which is a causal language model. BERT is not a causal LM, and ResNet/VGG are image models.
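Filled in with that answer, the completed snippet would look like this (downloading the checkpoint from the Hugging Face Hub on first use):

```python
from transformers import AutoModelForCausalLM

# [1] = "gpt2": a causal (autoregressive) language model checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")
```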
Complete the code to tokenize input text for the language model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer([1], return_tensors="pt")
The tokenizer expects a string for a single text input, so passing a string like "Hello, how are you?" is correct. A number is invalid input; a list of strings is used for batching, which is not what this example calls for.
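With the blank filled in, the tokenization step would look like this (the sample sentence is just an illustration):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
# [1] = a single string; return_tensors="pt" returns PyTorch tensors
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
```

The resulting `inputs` is a dict-like object with `input_ids` and `attention_mask` tensors of shape `(1, sequence_length)`.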
Fix the error in the code to generate text from the model.
outputs = model.generate([1])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
The generate method requires the input IDs tensor, accessed as inputs['input_ids']. Passing the whole inputs dict positionally, or one of the other keys such as 'attention_mask', causes errors.
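Putting the three questions together, a corrected end-to-end sketch might look like this (the prompt and `max_new_tokens` value are illustrative choices, not part of the exercise):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello, how are you?", return_tensors="pt")

# [1] = inputs["input_ids"]: generate() takes the token-ID tensor,
# not the whole BatchEncoding dict
outputs = model.generate(inputs["input_ids"], max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```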
Fill both blanks to create a dictionary comprehension that maps words to their lengths only if length is greater than 3.
word_lengths = {word: [1] for word in words if [2]}

The dictionary comprehension maps each word to its length (len(word)) only if the length is greater than 3 (len(word) > 3).
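A filled-in version, with a small sample `words` list (an assumption for illustration) to check the behavior:

```python
words = ["a", "cat", "house", "tree", "hippopotamus"]

# [1] = len(word), [2] = len(word) > 3
word_lengths = {word: len(word) for word in words if len(word) > 3}
print(word_lengths)  # "a" and "cat" are filtered out (length <= 3)
```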
Fill all three blanks to create a filtered dictionary of words with length greater than 4 and convert keys to uppercase.
filtered = {[1]: [2] for [3] in words if len([3]) > 4}
The dictionary comprehension uses the uppercase word as key (word.upper()), the original word as value, and iterates over 'word' in words.
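With all three blanks filled in, and reusing a small sample `words` list (an illustrative assumption):

```python
words = ["a", "cat", "house", "tree", "hippopotamus"]

# [1] = word.upper(), [2] = word, [3] = word
filtered = {word.upper(): word for word in words if len(word) > 4}
print(filtered)  # only words longer than 4 characters survive
```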