
Semi-supervised learning basics in ML Python

Introduction
Semi-supervised learning trains a model on a small amount of labeled data together with a large amount of unlabeled data, making learning cheaper and often more accurate.
When you have a few labeled photos but many unlabeled ones and want to teach a computer to recognize objects.
When labeling data is expensive or slow, like medical images, but you have many unlabeled examples.
When you want to improve a model's accuracy by using extra unlabeled data alongside labeled data.
When you want to build a spam filter but only have a small set of emails marked as spam or not.
When you want to cluster or group data but also have some known examples to guide the process.
Syntax
ML Python
1. Start with a small labeled dataset and a large unlabeled dataset.
2. Train a model on the labeled data.
3. Use the model to predict labels for the unlabeled data.
4. Add the most confident predictions to the labeled set.
5. Retrain the model with the expanded labeled set.
6. Repeat steps 3-5 until accuracy stops improving or no confident predictions remain.
This process is often called self-training or pseudo-labeling.
Confidence means how sure the model is about its prediction.
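Confidence can be read directly from a classifier's predicted class probabilities. A minimal sketch using scikit-learn's predict_proba (the iris dataset and logistic regression here are just illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

probs = clf.predict_proba(X[:3])   # class probabilities for three samples
confidence = probs.max(axis=1)     # highest probability = the model's confidence
print(confidence)                  # values near 1.0 mean the model is very sure
```

Predictions whose confidence exceeds a chosen threshold are the ones worth adding as pseudo-labels.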
Examples
This shows the basic loop of training, predicting, selecting confident predictions, and retraining (X_labeled, y_labeled, and X_unlabeled are assumed to be existing NumPy arrays).
ML Python
# Pseudo-labeling: train, predict, keep confident predictions, retrain
import numpy as np
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(max_iter=1000)
model.fit(X_labeled, y_labeled)                 # train on the small labeled set
probs = model.predict_proba(X_unlabeled)        # predict labels for unlabeled data
confident = probs.max(axis=1) > 0.95            # keep only confident predictions
X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
y_labeled = np.concatenate([y_labeled, probs.argmax(axis=1)[confident]])
model.fit(X_labeled, y_labeled)                 # retrain on the expanded set
LabelSpreading is a built-in semi-supervised method that spreads label information to unlabeled points.
ML Python
# Using scikit-learn LabelSpreading
from sklearn.semi_supervised import LabelSpreading
model = LabelSpreading()
model.fit(X_train, y_train_with_missing_labels)
predicted_labels = model.transduction_
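A minimal runnable sketch of the same idea, using the iris dataset as a stand-in for X_train and marking hidden labels with -1:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

X, y = load_iris(return_X_y=True)
y_partial = y.copy()
y_partial[::2] = -1              # hide every other label; -1 marks "unlabeled"

model = LabelSpreading()
model.fit(X, y_partial)
print(model.transduction_[:10])  # inferred labels for the first ten points
```

After fitting, transduction_ holds a label for every point, including the ones that were marked -1.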
Sample Model
This program uses a real dataset of handwritten digits. It hides half of the labels to simulate unlabeled data, trains a semi-supervised model to infer the missing labels, and then checks accuracy on the points whose labels were kept.
ML Python
from sklearn import datasets
from sklearn.semi_supervised import LabelSpreading
from sklearn.metrics import accuracy_score
import numpy as np

# Load digits dataset
digits = datasets.load_digits()
X = digits.data
y = digits.target

# Remove labels for 50% of data to simulate unlabeled data
rng = np.random.RandomState(42)
random_unlabeled_points = rng.rand(len(y)) < 0.5
y_missing = np.copy(y)
y_missing[random_unlabeled_points] = -1  # -1 means unlabeled

# Create and train LabelSpreading model
model = LabelSpreading(kernel='knn', alpha=0.8)
model.fit(X, y_missing)

# Predict labels for all data
y_pred = model.transduction_

# Calculate accuracy only on originally labeled points
accuracy = accuracy_score(y[~random_unlabeled_points], y_pred[~random_unlabeled_points])
print(f"Accuracy on labeled data: {accuracy:.2f}")
Important Notes
Semi-supervised learning works best when unlabeled data is similar to labeled data.
Be careful: adding wrong pseudo-labels can confuse the model.
Choosing a good confidence threshold for adding pseudo-labels is important: too low lets noisy labels in, too high adds almost no new data.
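The trade-off can be seen by counting pseudo-labels at different thresholds. A sketch under illustrative assumptions (logistic regression on the digits dataset, with the first 200 samples treated as the labeled set):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
clf = LogisticRegression(max_iter=2000).fit(X[:200], y[:200])  # small labeled set
probs = clf.predict_proba(X[200:])                             # "unlabeled" pool

for threshold in (0.7, 0.9, 0.99):
    mask = probs.max(axis=1) > threshold                       # confident subset
    correct = (probs.argmax(axis=1)[mask] == y[200:][mask]).mean()
    print(f"threshold={threshold}: {mask.sum()} pseudo-labels, {correct:.0%} correct")
```

Raising the threshold shrinks the pool of pseudo-labels while typically making them more reliable.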
Summary
Semi-supervised learning uses both labeled and unlabeled data to improve learning.
It is useful when labeling data is costly or time-consuming.
Common methods include self-training and label spreading.