ML Python · ~10 mins

LightGBM in ML Python - Interactive Code Practice

Practice: 5 Tasks
Answer the questions below.
1. Fill in the blank (easy)

Complete the code to import the LightGBM library.

import [1] as lgb
A. tensorflow
B. sklearn
C. xgboost
D. lightgbm
Common Mistakes
Using 'sklearn' instead of 'lightgbm'.
Trying to import 'xgboost' which is a different library.
2. Fill in the blank (medium)

Complete the code to create a LightGBM dataset from features X and labels y.

train_data = lgb.Dataset([1], label=y)
A. X
B. y
C. train
D. data
Common Mistakes
Passing labels y as the first argument instead of features X.
Using an undefined variable like 'train'.
3. Fill in the blank (hard)

Fix the error in the code to train a LightGBM model with 100 boosting rounds.

model = lgb.train(params, train_data, num_boost_round=[1])
A. 10
B. 1000
C. 100
D. 1
Common Mistakes
Using too few boosting rounds like 1 or 10.
Using an excessively large number like 1000 without reason.
4. Fill in the blank (hard)

Fill both blanks to set LightGBM parameters for binary classification with learning rate 0.05.

params = {'objective': [1], 'learning_rate': [2]}
A. 'binary'
B. 'multiclass'
C. 0.05
D. 0.1
Common Mistakes
Using 'multiclass' for binary classification.
Setting the learning rate too high, e.g. 0.1 instead of the requested 0.05.
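For reference, the completed parameter dict (extra keys such as 'metric' or 'verbose' are optional additions, not required by this exercise):

```python
# Parameters for binary classification with a learning rate of 0.05.
params = {
    'objective': 'binary',   # two classes, so 'binary', not 'multiclass'
    'learning_rate': 0.05,   # smaller steps; often generalizes better than 0.1
}
```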
5. Fill in the blank (hard)

Fill all three blanks to predict with the model and calculate accuracy score.

y_pred = model.predict([1])
y_pred_labels = (y_pred > [2]).astype(int)
accuracy = sum(y_pred_labels == [3]) / len(y_pred_labels)
A. X_test
B. 0.5
C. y_test
D. X_train
Common Mistakes
Predicting on training data instead of test data.
Using wrong threshold values.
Comparing predictions with features instead of true labels.
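The threshold-and-compare step can be sketched without a trained model, using hypothetical predicted probabilities and true labels (the specific numbers below are illustrative only):

```python
import numpy as np

# Hypothetical model outputs (probabilities) and true test labels.
y_pred = np.array([0.9, 0.2, 0.7, 0.4])
y_test = np.array([1, 0, 1, 1])

# Probabilities above the 0.5 threshold become class 1, otherwise class 0.
y_pred_labels = (y_pred > 0.5).astype(int)

# Accuracy compares predicted labels with true labels (y_test),
# never with the feature matrix.
accuracy = np.sum(y_pred_labels == y_test) / len(y_pred_labels)
```

Here three of the four thresholded predictions match the true labels, giving an accuracy of 0.75.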