Complete the code to read a large CSV file in chunks using pandas.
chunk_iter = pd.read_csv('large_data.csv', chunksize=[1])
Using a chunksize of 10000 allows pandas to read the file in manageable pieces, which helps with memory efficiency.
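A filled-in version might look like the sketch below. A small in-memory CSV stands in for the hypothetical 'large_data.csv', and the chunk size is shrunk so the toy data actually splits into pieces; with a real file you would pass the filename and chunksize=10000 as the explanation suggests.

```python
import io
import pandas as pd

# Toy stand-in for 'large_data.csv' (hypothetical file): 25 rows, 2 columns.
csv_data = io.StringIO("a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(25)))

# chunksize makes read_csv return an iterator of DataFrames instead of
# loading the whole file at once; 10 here, 10000 in the exercise.
chunk_iter = pd.read_csv(csv_data, chunksize=10)

total_rows = 0
for chunk in chunk_iter:  # each chunk is a DataFrame of at most 10 rows
    total_rows += len(chunk)

print(total_rows)  # 25
```

Only one chunk is in memory at a time, which is what keeps peak memory bounded regardless of file size.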
Complete the code to select only specific columns while reading a large CSV file.
df = pd.read_csv('large_data.csv', usecols=[1])
Selecting only needed columns reduces memory use and speeds up processing.
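A completed version might look like this; the column names and in-memory CSV are illustrative placeholders for the real file's schema.

```python
import io
import pandas as pd

# Toy stand-in for 'large_data.csv' with four columns.
csv_data = io.StringIO("id,name,price,notes\n1,apple,0.5,x\n2,pear,0.7,y\n")

# usecols tells the parser to keep only the listed columns,
# so the other columns are never materialized in memory.
df = pd.read_csv(csv_data, usecols=["name", "price"])

print(list(df.columns))  # ['name', 'price']
```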
Complete the code to convert a large DataFrame column to the category type to save memory.
df['category_col'] = df['category_col'].[1]('category')
The astype method converts the column to the specified type, here 'category' to save memory.
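The filled-in blank is astype. A minimal sketch, using a made-up DataFrame with a low-cardinality string column (the kind that benefits most from category dtype):

```python
import pandas as pd

# Hypothetical data: few distinct values repeated many times.
df = pd.DataFrame({"category_col": ["red", "blue", "red", "red", "blue"] * 20})

# astype('category') replaces the repeated strings with small integer
# codes plus a single lookup table of the distinct values.
df["category_col"] = df["category_col"].astype("category")

print(df["category_col"].dtype)  # category
```

The saving grows with repetition: memory_usage(deep=True) on the column before and after the conversion makes the difference concrete.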
Fill both blanks to create a dictionary comprehension that maps words to their lengths only if length is greater than 3.
{word: [1] for word in words if [2] > 3}
The dictionary comprehension maps each word to its length using len(word) and filters words with length greater than 3.
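With the blanks filled in as len(word), the comprehension reads as below; the words list is an illustrative example.

```python
words = ["hi", "tree", "a", "python", "cat"]

# Blank [1] = len(word), blank [2] = len(word):
# map each word to its length, keeping only words longer than 3 characters.
lengths = {word: len(word) for word in words if len(word) > 3}

print(lengths)  # {'tree': 4, 'python': 6}
```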
Fill all three blanks to create a filtered dictionary from data where keys are uppercase and values are positive.
result = {[1]: [2] for [3], v in data.items() if v > 0}
This comprehension creates a dictionary with keys converted to uppercase and values kept as is, only including positive values.
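A filled-in version, with a made-up data dictionary for illustration:

```python
data = {"apples": 3, "pears": -1, "plums": 5}

# Blank [1] = k.upper(), blank [2] = v, blank [3] = k:
# uppercase the keys, keep the values, and drop non-positive entries.
result = {k.upper(): v for k, v in data.items() if v > 0}

print(result)  # {'APPLES': 3, 'PLUMS': 5}
```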