Complete the code to read a large CSV file in chunks using pandas.
import pandas as pd

chunks = pd.read_csv('large_file.csv', chunksize=1000)
for chunk in chunks:
    print(chunk.head())
The chunksize parameter sets how many rows to read at a time. Using 1000 reads the file in chunks of 1000 rows.
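As a runnable sketch of the idea, the snippet below reads a small in-memory CSV (illustrative data standing in for 'large_file.csv') with a small chunksize, so the chunk boundaries are easy to see:

```python
import io
import pandas as pd

# Five rows of illustrative data; io.StringIO stands in for a file on disk.
csv_data = io.StringIO("id,sales\n1,10\n2,20\n3,30\n4,40\n5,50\n")

# chunksize=2 makes read_csv yield DataFrames of at most 2 rows each.
chunk_sizes = [len(chunk) for chunk in pd.read_csv(csv_data, chunksize=2)]
print(chunk_sizes)  # five rows split into chunks of 2, 2, and 1 rows
```

With chunksize set, read_csv returns an iterator of DataFrames instead of one DataFrame, so memory use stays bounded by the chunk size rather than the file size.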
Complete the code to sum a column named 'sales' from each chunk.
import pandas as pd

chunks = pd.read_csv('large_file.csv', chunksize=1000)
total_sales = 0
for chunk in chunks:
    total_sales += chunk['sales'].sum()
print(total_sales)
We sum the values in the 'sales' column from each chunk to get the total sales.
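A quick check of this pattern, using a small in-memory CSV (illustrative data): the chunk-by-chunk sum should equal the sum from reading the whole file at once.

```python
import io
import pandas as pd

csv_text = "sales\n5\n10\n15\n20\n"

# Accumulate the 'sales' column chunk by chunk.
total_sales = 0
for chunk in pd.read_csv(io.StringIO(csv_text), chunksize=2):
    total_sales += chunk['sales'].sum()

# Compare against a single full read.
full_total = pd.read_csv(io.StringIO(csv_text))['sales'].sum()
print(total_sales, full_total)  # both are 50
```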
Fix the error in the code to correctly read chunks and count rows.
import pandas as pd

chunks = pd.read_csv('large_file.csv', chunksize=500)
total_rows = 0
for chunk in chunks:
    total_rows += len(chunk)
print(total_rows)
Inside the loop, each chunk is a DataFrame. We count its rows with len(chunk).
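To see that len() really counts DataFrame rows, here is a small sketch with seven rows of illustrative data and a chunksize that does not divide the row count evenly:

```python
import io
import pandas as pd

# Seven data rows (0 through 6) under a single column header.
csv_text = "x\n" + "\n".join(str(i) for i in range(7))

total_rows = 0
for chunk in pd.read_csv(io.StringIO(csv_text), chunksize=3):
    total_rows += len(chunk)  # len() on a DataFrame is its row count
print(total_rows)  # 3 + 3 + 1 = 7
```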
Fill both blanks to create a dictionary with word lengths for words longer than 3 letters.
words = ['apple', 'bat', 'carrot', 'dog', 'elephant']
lengths = {word: len(word) for word in words if len(word) > 3}
The dictionary comprehension maps each word to its length if the word length is greater than 3.
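The comprehension above is shorthand for an explicit loop; this sketch spells out the same logic with the same word list:

```python
words = ['apple', 'bat', 'carrot', 'dog', 'elephant']

lengths = {}
for word in words:
    if len(word) > 3:              # the comprehension's filter clause
        lengths[word] = len(word)  # the comprehension's key: value expression
print(lengths)  # {'apple': 5, 'carrot': 6, 'elephant': 8}
```

'bat' and 'dog' are skipped because their length is exactly 3, which fails the strictly-greater-than test.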
Fill all three blanks to create a filtered dictionary with uppercase keys and values greater than 0.
data = {'a': 1, 'b': -2, 'c': 3, 'd': 0}
result = {k.upper(): v for k, v in data.items() if v > 0}
This dictionary comprehension uppercases each key and keeps only the entries whose values are greater than zero.
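As with the previous exercise, the comprehension can be unrolled into an explicit loop; this sketch uses the same data dictionary:

```python
data = {'a': 1, 'b': -2, 'c': 3, 'd': 0}

result = {}
for k, v in data.items():
    if v > 0:                # filter: keep only positive values
        result[k.upper()] = v  # transform: uppercase the key
print(result)  # {'A': 1, 'C': 3}
```

Note that the key expression can be any transformation of k, not just k itself, and the filter runs before the key/value expressions are evaluated.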