Complete the code to find duplicate rows based on the 'Name' column.
duplicates = df.duplicated(subset=['Name'])
The duplicated() method returns a boolean Series marking duplicate rows. Passing subset=['Name'] tells pandas to check for duplicates only in the 'Name' column.
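A minimal sketch of this behavior, using made-up sample data (the column values here are assumptions for illustration):

```python
import pandas as pd

# Hypothetical sample data: 'Alice' appears twice in 'Name'
df = pd.DataFrame({
    'Name': ['Alice', 'Bob', 'Alice'],
    'City': ['NY', 'LA', 'SF'],
})

# True for every row whose 'Name' has already appeared above it
duplicates = df.duplicated(subset=['Name'])
print(duplicates.tolist())  # → [False, False, True]
```

Note that the second 'Alice' row is flagged even though its 'City' differs, because only 'Name' is compared.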
Complete the code to drop duplicate rows based on 'Name' and 'City' columns, keeping the first occurrence.
df_unique = df.drop_duplicates(subset=['Name', 'City'], keep='first')
The drop_duplicates() method removes duplicate rows. Specifying subset=['Name', 'City'] means duplicates are checked only on these two columns.
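A short illustration with hypothetical data, where two rows share the same ('Name', 'City') pair:

```python
import pandas as pd

# Hypothetical data: rows 0 and 2 match on both 'Name' and 'City'
df = pd.DataFrame({
    'Name': ['Alice', 'Alice', 'Alice'],
    'City': ['NY', 'LA', 'NY'],
})

# keep='first' retains the earliest occurrence of each ('Name', 'City') pair
df_unique = df.drop_duplicates(subset=['Name', 'City'], keep='first')
print(df_unique['City'].tolist())  # → ['NY', 'LA']
```

All three rows share the same 'Name', but only the exact ('Alice', 'NY') repeat is dropped.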
Fix the error in the code to find duplicates based on 'Age' column.
duplicates = df.duplicated(subset=['Age'])
The subset parameter expects column labels, conventionally given as a list of column names, so ['Age'] is correct. Writing Age without quotes references an undefined variable and raises a NameError.
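A sketch of the fix in action, again with assumed sample values:

```python
import pandas as pd

# Hypothetical data with a repeated age value
df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Carol'], 'Age': [30, 25, 30]})

# Wrong: df.duplicated(subset=[Age]) — NameError, Age is not a defined variable
# Right: quote the column name and pass it in a list
duplicates = df.duplicated(subset=['Age'])
print(duplicates.tolist())  # → [False, False, True]
```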
Fill both blanks to create a dictionary with words as keys and their lengths as values, but only for words longer than 4 characters.
lengths = {word: len(word) for word in words if len(word) > 4}
The dictionary comprehension uses len(word) to compute each value. The condition len(word) > 4 keeps only words longer than 4 characters.
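The completed comprehension can be checked with a hypothetical word list (the words below are assumptions for illustration):

```python
# Hypothetical input list
words = ['tree', 'python', 'ox', 'planet']

# Keep only words longer than 4 characters; map each to its length
lengths = {word: len(word) for word in words if len(word) > 4}
print(lengths)  # → {'python': 6, 'planet': 6}
```

'tree' (4 characters) is excluded because the condition is strictly greater than 4.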
Fill all three blanks to create a dictionary with uppercase words as keys and their lengths as values, only for words longer than 3 characters.
result = {word.upper(): len(word) for word in words if len(word) > 3}
A common mistake is using word.lower() instead of word.upper() for the keys. The keys are uppercase words via word.upper(), the values are lengths via len(word), and the condition len(word) > 3 filters for words longer than 3 characters.
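A quick check of the full comprehension, using an assumed word list:

```python
# Hypothetical input list
words = ['cat', 'tiger', 'ox', 'zebra']

# Uppercase keys, lengths as values, only words longer than 3 characters
result = {word.upper(): len(word) for word in words if len(word) > 3}
print(result)  # → {'TIGER': 5, 'ZEBRA': 5}
```

'cat' (exactly 3 characters) fails the strict len(word) > 3 test and is left out.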