
Random forest in depth in ML Python

Introduction

Random forest makes better predictions by combining many simple decision trees. Averaging their votes reduces the errors of any single tree, and the method works well on many types of data.

Common situations where a random forest is a good fit:

When you want to predict if an email is spam or not based on many features.
When you need to estimate house prices using various details like size, location, and age.
When you want to classify types of flowers based on petal and sepal measurements.
When you have a mix of numbers and categories in your data and want a strong model.
When you want to reduce errors caused by a single decision tree's mistakes.
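The house-price case above is a regression task, which uses RandomForestRegressor instead of RandomForestClassifier. Here is a minimal sketch on synthetic data; the size/age features and the price formula are made up for illustration.

```python
from sklearn.ensemble import RandomForestRegressor
import numpy as np

# Synthetic house data: columns are [size_sqft, age_years] (illustrative only)
rng = np.random.RandomState(0)
X = rng.uniform([500, 0], [3000, 50], size=(200, 2))
# Made-up pricing rule: bigger is pricier, older is cheaper, plus noise
y = 100 * X[:, 0] - 1000 * X[:, 1] + rng.normal(0, 5000, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict the price of a 1500 sq ft, 10-year-old house
print(model.predict([[1500, 10]]))
```

The API mirrors the classifier: the same n_estimators, max_depth, and random_state parameters apply, but predict returns continuous values instead of class labels.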
Syntax
ML Python
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(
    n_estimators=100,      # number of trees
    max_depth=None,        # max depth of each tree
    random_state=42        # for reproducible results
)

model.fit(X_train, y_train)
predictions = model.predict(X_test)

n_estimators controls how many trees the forest has. More trees usually mean better results but slower training.

max_depth limits how deep each tree can grow. Smaller depth can prevent overfitting.

Examples
This creates a random forest with 50 trees, using default settings for other parameters.
ML Python
model = RandomForestClassifier(n_estimators=50)
This creates a forest with 200 trees, each tree limited to 10 levels deep to avoid overfitting.
ML Python
model = RandomForestClassifier(n_estimators=200, max_depth=10)
Setting random_state ensures the results are the same every time you run the code.
ML Python
model = RandomForestClassifier(n_estimators=100, random_state=0)
Sample Model

This program trains a random forest on the iris flower dataset. It splits the data, trains the model, predicts flower types, and shows accuracy and predictions.

ML Python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load example data
iris = load_iris()
X, y = iris.data, iris.target

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Create random forest model
model = RandomForestClassifier(n_estimators=100, random_state=42)

# Train the model
model.fit(X_train, y_train)

# Predict on test data
predictions = model.predict(X_test)

# Calculate accuracy
accuracy = accuracy_score(y_test, predictions)

print(f"Accuracy: {accuracy:.2f}")
print(f"Predictions: {predictions}")
Important Notes

Random forests reduce overfitting by averaging many trees, each trained on random parts of data and features.

They need little tuning to perform well. Note that scikit-learn's implementation expects missing values to be imputed before training; native NaN support was only added to its tree-based estimators in recent versions.

Feature importance can be extracted to understand which inputs matter most.
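A minimal sketch of reading feature importances, reusing the iris data from the sample model above. The feature_importances_ attribute holds one score per input column, and the scores sum to 1.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(iris.data, iris.target)

# One importance score per feature; higher means more influential
for name, importance in zip(iris.feature_names, model.feature_importances_):
    print(f"{name}: {importance:.3f}")
```

On iris, the petal measurements typically dominate the sepal measurements, which matches what the trees split on most often.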

Summary

Random forest builds many decision trees and combines their results for better accuracy.

It works well on different data types and reduces errors from single trees.

Adjusting number of trees and tree depth helps balance speed and accuracy.
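One way to explore that speed/accuracy trade-off is to score a few settings with cross-validation. This is a sketch, not a recommended grid; the specific values of n_estimators and max_depth below are arbitrary choices for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Compare a few settings with 5-fold cross-validation
for n_estimators in (10, 100):
    for max_depth in (2, None):
        model = RandomForestClassifier(
            n_estimators=n_estimators, max_depth=max_depth, random_state=0
        )
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"n_estimators={n_estimators}, max_depth={max_depth}: {score:.3f}")
```

For a more systematic search, scikit-learn's GridSearchCV automates the same loop over a parameter grid.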