ML Python programming · ~20 mins

Dimensionality reduction visualization in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate · 2:00
Purpose of Dimensionality Reduction Visualization
Why do we use dimensionality reduction techniques like t-SNE or PCA for visualization in machine learning?
A. To reduce the number of features so the model trains faster without losing any information
B. To increase the number of features for better model accuracy
C. To transform high-dimensional data into 2D or 3D so humans can visually understand patterns
D. To remove noisy data points from the dataset automatically
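As a sketch of the idea behind visualization-oriented dimensionality reduction, a dataset with more than three features can be projected down to two components so each sample becomes a plottable (x, y) point. This example uses the Iris dataset purely as an illustration; it is not part of the question:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Iris has 4 features per sample -- too many axes to plot directly.
X, y = load_iris(return_X_y=True)

# Project to 2 components so each sample becomes a 2D point.
X_2d = PCA(n_components=2).fit_transform(X)
print(X_2d.shape)  # 150 samples, each reduced to 2 coordinates
```

The resulting `X_2d` could then be passed to a scatter plot, colored by class label, to inspect whether the classes form visible groupings.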
Predict Output · intermediate · 2:00
Output of PCA Dimensionality Reduction Code
What is the shape of the transformed data after applying PCA with 2 components on a dataset with shape (100, 5)?
ML Python
from sklearn.decomposition import PCA
import numpy as np
X = np.random.rand(100, 5)
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print(X_pca.shape)
A. (100, 2)
B. (2, 100)
C. (5, 2)
D. (100, 5)
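After making your prediction, you can verify it by running a variant of the snippet yourself (the seeded generator here is just to make the run reproducible; it is not part of the original code):

```python
from sklearn.decomposition import PCA
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 5))  # 100 samples, 5 features

# n_components changes only the feature dimension;
# the number of samples (rows) is preserved.
X_pca = PCA(n_components=2).fit_transform(X)
print(X_pca.shape)
```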
Model Choice · advanced · 2:00
Choosing a Dimensionality Reduction Method for Visualization
You have a dataset with 1000 features and want to visualize clusters in 2D. Which method is best for preserving local structure and showing clusters clearly?
A. PCA because it preserves global variance and is fast
B. Linear Regression because it reduces features by fitting a line
C. K-Means clustering because it reduces dimensions by grouping data
D. t-SNE because it preserves local neighbor distances and reveals clusters well
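For intuition, here is a minimal sketch of running t-SNE on a high-dimensional dataset with known cluster structure. The synthetic data, sample counts, and `perplexity=30` are illustrative choices, not values from the question:

```python
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE

# 300 samples in 50 dimensions with 3 ground-truth clusters.
X, y = make_blobs(n_samples=300, n_features=50, centers=3, random_state=0)

# perplexity must stay below n_samples; 30 is the sklearn default.
X_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(X_2d.shape)  # one 2D coordinate per sample, ready for a scatter plot
```

Plotting `X_2d` colored by `y` would typically show three well-separated blobs, because t-SNE emphasizes preserving each point's local neighborhood.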
Metrics · advanced · 2:00
Interpreting Explained Variance Ratio in PCA
After applying PCA with 3 components, the explained variance ratios are [0.5, 0.3, 0.1]. What does this tell you about the data?
A. The components explain 50%, 30%, and 10% of the samples respectively
B. The first component explains 50% of the variance, the second 30%, and the third 10%, totaling 90% variance explained
C. The PCA model is overfitting because variance ratios should sum to 1
D. The data has 3 features only because PCA returned 3 components
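To see where such ratios come from, you can inspect `explained_variance_ratio_` on a fitted PCA model. The Iris dataset below is an illustrative stand-in; its actual ratios differ from the ones in the question:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=3).fit(X)

# Each entry is the fraction of total variance captured by one component,
# sorted in decreasing order.
ratios = pca.explained_variance_ratio_
print(ratios)
print(ratios.sum())  # total fraction of variance retained by the 3 components
```

Note the ratios need not sum to 1 when `n_components` is smaller than the feature count; the shortfall is the variance discarded by the reduction.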
🔧 Debug · expert · 3:00
Debugging t-SNE Visualization Code
You run this code to visualize data with t-SNE but get a ValueError: 'perplexity must be less than n_samples'. What is the cause?
ML Python
from sklearn.manifold import TSNE
import numpy as np
X = np.random.rand(10, 50)
tsne = TSNE(n_components=2, perplexity=30)
X_embedded = tsne.fit_transform(X)
print(X_embedded.shape)
A. The perplexity value 30 is too high for only 10 samples; it must be less than the number of samples
B. The number of components must be equal to the number of features
C. The input data X must be 2D but here it is 3D
D. t-SNE requires the input data to be normalized between 0 and 1
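One way the snippet could be repaired, assuming we keep the same 10-sample input, is to lower `perplexity` below the sample count (the value 5 here is an illustrative choice, not prescribed by the question):

```python
from sklearn.manifold import TSNE
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 50))  # only 10 samples

# Fix: perplexity must be strictly less than n_samples (5 < 10).
tsne = TSNE(n_components=2, perplexity=5, random_state=0)
X_embedded = tsne.fit_transform(X)
print(X_embedded.shape)
```

In practice, t-SNE results on only 10 samples are rarely meaningful; collecting more samples is usually the better fix than shrinking perplexity.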