What does a LiDAR point cloud primarily represent in 3D space?
Think about what LiDAR sensors measure when they send laser pulses.
LiDAR sensors emit laser pulses and measure the round-trip time for each pulse to return after hitting a surface. Combining each return time with the pulse's direction yields a 3D point; the collection of points forms a point cloud representing the shape and position of objects in the scene.
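The time-of-flight-to-distance step can be sketched as follows (a minimal illustration; the constant and function name are our own, not from any specific sensor SDK):

```python
# Sketch: converting a LiDAR pulse's round-trip time to a range measurement.
# Assumes the speed of light is ~3.0e8 m/s; names are illustrative.
C = 3.0e8  # speed of light, m/s

def time_of_flight_to_range(round_trip_seconds):
    """Distance to the surface: the pulse travels out AND back, so halve the path."""
    return C * round_trip_seconds / 2.0

# A pulse returning after 400 ns corresponds to a surface ~60 m away.
print(time_of_flight_to_range(400e-9))  # → 60.0
```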
Given the following Python code that filters LiDAR points based on height, what is the output?
import numpy as np

points = np.array([[1, 2, 3], [4, 5, 1], [7, 8, 9], [10, 11, 0]])
filtered = points[points[:, 2] > 2]
print(filtered)
Look at the condition points[:,2] > 2 and which points satisfy it.
The boolean condition points[:, 2] > 2 selects rows whose third coordinate (height) exceeds 2. Only [1, 2, 3] and [7, 8, 9] qualify, so the printed output is [[1 2 3] [7 8 9]] (as a 2x3 array).
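To see why exactly those two rows survive, it helps to inspect the intermediate boolean mask (same array as in the question):

```python
import numpy as np

points = np.array([[1, 2, 3], [4, 5, 1], [7, 8, 9], [10, 11, 0]])
mask = points[:, 2] > 2   # elementwise comparison on the height column
print(mask)               # → [ True False  True False]
print(points[mask])       # rows 0 and 2 survive: [[1 2 3] [7 8 9]]
```

Indexing with the mask keeps only the rows where the comparison is True.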
You want to classify each point in a LiDAR point cloud into categories like ground, vegetation, and buildings. Which model type is best suited for this task?
Consider models that directly handle unordered 3D point sets.
PointNet and PointNet++ are specialized neural networks designed to process raw point clouds directly for tasks like semantic segmentation.
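The core PointNet idea, a shared per-point transform followed by a symmetric pooling so the result is independent of point order, can be sketched in NumPy (random weights, toy dimensions; a real PointNet learns the weights and stacks more layers plus T-Nets):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(points, W, b):
    """Toy PointNet-style encoder: shared linear layer + ReLU per point,
    then a max-pool over points, which is invariant to point ordering."""
    features = np.maximum(points @ W + b, 0.0)  # same "MLP" applied to every point
    return features.max(axis=0)                 # order-invariant global feature

points = rng.normal(size=(100, 3))              # an unordered set of 100 XYZ points
W, b = rng.normal(size=(3, 16)), rng.normal(size=16)

g1 = encode(points, W, b)
g2 = encode(points[::-1], W, b)                 # same points, reversed order
print(np.allclose(g1, g2))                      # → True: permutation invariant
```

The max over the point axis is what makes the network insensitive to the (arbitrary) ordering of points in the cloud.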
When converting a LiDAR point cloud into a voxel grid for processing, what is the effect of choosing a very small voxel size?
Think about how voxel size relates to detail and data volume.
Smaller voxels capture finer details but create more voxels, increasing memory use and computation time.
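The growth in voxel count is easy to quantify for a fixed scene extent (the bounding-box dimensions below are illustrative, not from the question):

```python
import numpy as np

# Sketch: voxel count for a fixed scene at two voxel sizes (illustrative extent).
extent = np.array([100.0, 100.0, 10.0])  # scene bounding box in metres

def voxel_count(voxel_size):
    """Number of voxels needed to cover the extent at the given edge length."""
    return int(np.prod(np.ceil(extent / voxel_size)))

print(voxel_count(1.0))   # → 100000   (100 x 100 x 10)
print(voxel_count(0.25))  # → 6400000  (400 x 400 x 40): 64x more voxels
```

Halving the voxel edge length multiplies the voxel count by 8, so memory and compute grow cubically as the voxel size shrinks.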
You have a LiDAR point cloud classification model. After testing, you get the following confusion matrix for three classes (Ground, Vegetation, Building):
Ground: TP=90, FP=10, FN=15
Vegetation: TP=80, FP=20, FN=10
Building: TP=70, FP=15, FN=20
What is the overall precision of the model?
For overall (micro-averaged) precision, first sum TP and FP across all classes, then compute precision = total TP / (total TP + total FP).
Total TP = 90 + 80 + 70 = 240; total FP = 10 + 20 + 15 = 45. Micro-averaged precision = 240 / (240 + 45) = 240/285 ≈ 0.842, i.e. about 84.2%.
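The same computation in code, using the per-class counts from the confusion matrix above:

```python
# Micro-averaged precision from the per-class counts in the question.
tp = {"ground": 90, "vegetation": 80, "building": 70}
fp = {"ground": 10, "vegetation": 20, "building": 15}

total_tp = sum(tp.values())                   # 240
total_fp = sum(fp.values())                   # 45
precision = total_tp / (total_tp + total_fp)  # 240 / 285
print(round(precision, 3))                    # → 0.842
```

Note that the FN counts are not needed for precision; they would enter a recall or F1 computation instead.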