
LiDAR data processing basics in Computer Vision - Model Metrics & Evaluation

Which metric matters for LiDAR data processing and WHY

In LiDAR data processing, common tasks include object detection, segmentation, and classification. The key metrics depend on the task:

  • For classification: Precision and Recall matter to balance false alarms and missed objects.
  • For segmentation: Intersection over Union (IoU) is important to measure how well predicted shapes match true shapes.
  • For detection: Average Precision (AP) summarizes precision-recall tradeoff across thresholds.

These metrics help us understand if the model correctly identifies objects and their shapes from 3D point clouds, which is crucial for safe and accurate applications like self-driving cars.

Confusion matrix example for LiDAR object classification
    Actual \ Predicted | Car | Pedestrian | Background
    -------------------|-----|------------|-----------
    Car                | 50  | 5          | 10
    Pedestrian         | 3   | 40         | 7
    Background         | 8   | 4          | 200
    

From this matrix:

  • True Positives (TP) for Car = 50
  • False Positives (FP) for Car = 3 + 8 = 11 (Pedestrian and Background predicted as Car)
  • False Negatives (FN) for Car = 5 + 10 = 15 (Car predicted as Pedestrian or Background)

Precision for Car = TP / (TP + FP) = 50 / (50 + 11) ≈ 0.82

Recall for Car = TP / (TP + FN) = 50 / (50 + 15) ≈ 0.77
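The calculation above can be sketched in a few lines of Python, with the confusion matrix stored as nested dicts (rows = actual class, columns = predicted class):

```python
# Precision and recall for one class, computed from the confusion
# matrix above (rows = actual class, columns = predicted class).
matrix = {
    "Car":        {"Car": 50, "Pedestrian": 5,  "Background": 10},
    "Pedestrian": {"Car": 3,  "Pedestrian": 40, "Background": 7},
    "Background": {"Car": 8,  "Pedestrian": 4,  "Background": 200},
}

def precision_recall(matrix, cls):
    tp = matrix[cls][cls]
    # False positives: other classes wrongly predicted as `cls`.
    fp = sum(row[cls] for actual, row in matrix.items() if actual != cls)
    # False negatives: `cls` wrongly predicted as something else.
    fn = sum(count for pred, count in matrix[cls].items() if pred != cls)
    return tp / (tp + fp), tp / (tp + fn)

p, r = precision_recall(matrix, "Car")
print(f"Precision: {p:.2f}, Recall: {r:.2f}")  # Precision: 0.82, Recall: 0.77
```

The same function works for any class in the matrix, e.g. `precision_recall(matrix, "Pedestrian")`.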

Precision vs Recall tradeoff in LiDAR data tasks

Imagine a self-driving car detecting pedestrians:

  • High Precision: Few false alarms. The car rarely thinks something is a pedestrian when it is not. This avoids unnecessary stops.
  • High Recall: Few missed detections. The car catches almost every pedestrian, which is safer.

But improving one often lowers the other. If the model is too cautious, it may miss pedestrians (low recall). If it is too sensitive, it may stop for harmless objects (low precision).

Choosing the right balance depends on the application's safety needs.
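The tradeoff can be made concrete by sweeping the detection threshold over a list of scored detections; the scores and labels below are made up purely for illustration:

```python
# Hypothetical detections as (confidence score, is_really_pedestrian).
detections = [(0.95, True), (0.9, True), (0.8, False), (0.7, True),
              (0.6, False), (0.5, True), (0.4, False), (0.3, False),
              (0.2, True), (0.1, False)]
total_positives = sum(1 for _, is_ped in detections if is_ped)

results = []
for threshold in (0.9, 0.6, 0.0):
    # Keep only detections the model is confident enough about.
    kept = [is_ped for score, is_ped in detections if score >= threshold]
    tp = sum(kept)
    precision = tp / len(kept)
    recall = tp / total_positives
    results.append((threshold, precision, recall))
    print(f"threshold={threshold}: precision={precision:.2f}, recall={recall:.2f}")
```

A high threshold (cautious model) gives perfect precision but misses most pedestrians; lowering it raises recall while precision falls. Average Precision (AP) summarizes exactly this sweep into a single number.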

What good vs bad metric values look like for LiDAR classification
  • Good: Precision and Recall above 0.85 mean the model correctly finds most objects and rarely mistakes background for objects.
  • Bad: Precision or Recall below 0.5 means many false alarms or many missed objects, which can be dangerous in real-world use.
  • IoU: Values above 0.7 show good overlap between predicted and true object shapes; below 0.4 means poor segmentation.
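As a toy illustration of segmentation IoU on point clouds, we can compare the sets of point indices that the prediction and the ground truth label as an object (both sets below are invented):

```python
# Indices of points labelled "car" by the model vs. ground truth.
predicted = {1, 2, 3, 4, 5, 6, 7, 8}
actual    = {3, 4, 5, 6, 7, 8, 9, 10}

# IoU = |intersection| / |union|
iou = len(predicted & actual) / len(predicted | actual)
print(f"IoU = {iou:.2f}")  # 6 shared points / 10 total -> 0.60
```

By the rule of thumb above, 0.60 falls between good (above 0.7) and poor (below 0.4): the predicted region mostly overlaps the true one, but with noticeable error at the boundaries.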
Common pitfalls in LiDAR metrics
  • Accuracy paradox: High accuracy can be misleading if most points are background. The model may predict background well but fail on objects.
  • Data leakage: Using the same scenes in training and testing inflates metrics falsely.
  • Overfitting: Very high training metrics but low test metrics show the model memorizes data instead of learning general patterns.
  • Ignoring class imbalance: Many more background points than objects can bias metrics. Use balanced metrics like F1-score.
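The F1-score mentioned above is the harmonic mean of precision and recall, so it only scores well when both are high; a small sketch using the Car-class values computed earlier:

```python
# F1 combines precision and recall into one balanced number,
# which is less misleading than raw accuracy under class imbalance.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Car class from the confusion matrix above: P ~ 0.82, R ~ 0.77.
f1 = f1_score(0.82, 0.77)
print(f"F1 = {f1:.2f}")  # F1 = 0.79
```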
Self-check question

Your LiDAR object detection model has 98% accuracy but only 12% recall on pedestrians. Is it good for production? Why or why not?

Answer: No, it is not good. The high accuracy is likely because most points are background and predicted correctly. But 12% recall means the model misses 88% of pedestrians, which is unsafe for real-world use.
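The self-check numbers can be reproduced with a back-of-the-envelope calculation; the point counts below are illustrative, not from a real dataset:

```python
# Accuracy paradox: 98% background + a model that barely detects
# pedestrians still yields high accuracy but terrible recall.
total = 10_000
pedestrians = 200                         # only 2% of points
detected = 24                             # model finds just 24 of them
background_correct = total - pedestrians  # assume background is all correct

accuracy = (background_correct + detected) / total
recall = detected / pedestrians
print(f"accuracy={accuracy:.2%}, recall={recall:.0%}")  # accuracy=98.24%, recall=12%
```

Accuracy stays above 98% even though 88% of pedestrians are missed, which is why recall, not accuracy, is the metric to watch here.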

Key Result
Precision and recall are key to evaluate LiDAR models, ensuring objects are detected accurately without many misses or false alarms.