Complete the code to load the popular MNIST dataset using TensorFlow.
import tensorflow as tf
mnist = tf.keras.datasets.[1].load_data()
The MNIST dataset is loaded using tf.keras.datasets.mnist.load_data().
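A minimal completed version for reference (assumes TensorFlow is installed; load_data() downloads and caches the data on first use):

```python
import tensorflow as tf

# Blank [1] is `mnist`; load_data() returns two (images, labels) tuples.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

print(x_train.shape)  # 60,000 grayscale 28x28 training images
print(x_test.shape)   # 10,000 grayscale 28x28 test images
```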
Complete the code to split the dataset into training and testing sets using scikit-learn.
from sklearn.model_selection import [1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
The function train_test_split splits data into training and testing sets.
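A self-contained sketch of the completed answer using synthetic toy data (the arrays X and y here are hypothetical stand-ins; assumes scikit-learn is available):

```python
from sklearn.model_selection import train_test_split  # blank [1]
import numpy as np

# Hypothetical toy data: 100 samples with 4 features each.
X = np.arange(400).reshape(100, 4)
y = np.arange(100)

# test_size=0.2 reserves 20% of the samples for testing;
# random_state makes the split reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

print(len(X_train), len(X_test))  # 80 20
```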
Fix the error in the code to load the CIFAR-10 dataset correctly using PyTorch.
import torchvision.datasets as datasets
cifar10 = datasets.CIFAR10(root='./data', train=True, download=True, transform=[1])
The transform argument requires a callable like transforms.ToTensor() to convert images to tensors.
Fill both blanks to create a dictionary comprehension that maps dataset names to their sample counts.
dataset_sizes = {name: len([1]) for name, [2] in datasets.items()}
In the dictionary datasets, each value is a dataset object or list. Binding it to a loop variable such as data (or value) and calling len(data) gives the sample count, so blanks [1] and [2] take the same variable name.
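With both blanks filled in (the loop variable here is data, and the datasets dict is a hypothetical example with lists standing in for real datasets):

```python
# Hypothetical registry mapping dataset names to their samples.
datasets = {"mnist": list(range(5)), "cifar10": list(range(3))}

# Blanks [1] and [2] are the same loop variable, here `data`.
dataset_sizes = {name: len(data) for name, data in datasets.items()}

print(dataset_sizes)  # {'mnist': 5, 'cifar10': 3}
```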
Fill all three blanks to filter datasets with more than 10,000 samples and create a new dictionary.
large_datasets = {name: [1] for name, [2] in datasets.items() if len([3]) > 10000}
The comprehension iterates over datasets.items() with loop variables name and value. The new dictionary stores value as its values, and the filter condition len(value) > 10000 keeps only datasets with more than 10,000 samples, so all three blanks are value.
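All three blanks filled in, with a hypothetical datasets dict of lists standing in for real datasets:

```python
# Hypothetical sample collections; only one exceeds 10,000 samples.
datasets = {"small": list(range(100)), "large": list(range(20000))}

# Blanks [1], [2], and [3] are all the loop variable `value`.
large_datasets = {
    name: value for name, value in datasets.items() if len(value) > 10000
}

print(list(large_datasets))  # ['large']
```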