Complete the code to apply a function to each row of the DataFrame.
import pandas as pd
df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
result = df.apply(lambda row: row['A'] + row['B'], axis=[1])
print(result)
Setting axis=1 applies the function to each row.
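The same row-wise sum can also be computed without apply at all, using vectorized column arithmetic, which is usually much faster on large frames. A minimal sketch using the same toy DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})

# Vectorized addition: pandas aligns the two columns element-wise,
# so no Python-level loop over rows is needed.
result = df['A'] + df['B']
print(result)
```

This produces the same Series of row sums (4 and 6) as the apply version.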
Complete the code to create a new column 'Sum' by applying a function on each row.
import pandas as pd
df = pd.DataFrame({'X': [5, 6], 'Y': [7, 8]})
df['Sum'] = df.apply(lambda row: row['X'] + row['Y'], axis=[1])
print(df)
Use axis=1 to apply the function row-wise and create the new column.
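An alternative way to add the column is DataFrame.assign, which returns a new DataFrame instead of mutating the original in place. A sketch with the same data:

```python
import pandas as pd

df = pd.DataFrame({'X': [5, 6], 'Y': [7, 8]})

# assign returns a new DataFrame with the extra 'Sum' column;
# the original df is left unchanged.
df2 = df.assign(Sum=df['X'] + df['Y'])
print(df2)
```

This non-mutating style is handy in method chains, where each step produces a fresh DataFrame.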
Fill in the blank so the code correctly applies a function to each row and returns the max value per row.
import pandas as pd
df = pd.DataFrame({'P': [10, 20], 'Q': [30, 15]})
max_values = df.apply(lambda row: max(row), axis=[1])
print(max_values)
Using axis=1 applies the function to each row, allowing max(row) to find the max per row.
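For this particular reduction, pandas also offers a built-in row-wise max, DataFrame.max(axis=1), which skips apply entirely:

```python
import pandas as pd

df = pd.DataFrame({'P': [10, 20], 'Q': [30, 15]})

# Built-in reduction: take the max across columns for each row.
max_values = df.max(axis=1)
print(max_values)
```

Built-in reductions like max, min, sum, and mean all accept axis=1 and are faster than the equivalent apply call.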
Fill both blanks to create a new column 'Difference' that subtracts column 'B' from 'A' for each row.
import pandas as pd
df = pd.DataFrame({'A': [9, 4], 'B': [3, 7]})
df['Difference'] = df.apply(lambda [1]: [2]['A'] - [2]['B'], axis=1)
print(df)
The lambda function takes one argument (commonly named row) representing each row. Use this argument to access columns.
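One way the blanks can be filled (the argument name row is just a convention; any identifier works as long as both blanks use the same name):

```python
import pandas as pd

df = pd.DataFrame({'A': [9, 4], 'B': [3, 7]})

# The lambda receives each row as a Series; index into it by column label.
df['Difference'] = df.apply(lambda row: row['A'] - row['B'], axis=1)
print(df)
```

For the data above, 'Difference' comes out as 6 and -3.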
Fill all three blanks to build a dictionary mapping row indices to the values in column 'b', keeping only rows where column 'a' is greater than 2.
import pandas as pd
df = pd.DataFrame({'a': [1, 3], 'b': [4, 5]})
result = {
    [1]: row[[2]]
    for index, row in df.iterrows()
    if row[[3]] > 2
}
print(result)
The dictionary uses the row index as key, the value in column 'b' as value, and filters rows where column 'a' is greater than 2.
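The same dictionary can be built without iterrows, using a boolean mask plus Series.to_dict, which is the more idiomatic (and faster) pandas approach. A sketch with the same data:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 3], 'b': [4, 5]})

# Boolean mask keeps rows where 'a' > 2; .loc selects column 'b' for those rows,
# and to_dict() maps each remaining row index to its 'b' value.
result = df.loc[df['a'] > 2, 'b'].to_dict()
print(result)  # {1: 5}
```

iterrows is fine for small frames, but it materializes each row as a Series, so vectorized selection scales much better.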