Why Monitoring Detects Threats Early in Cybersecurity - Performance Analysis
We want to understand how the time it takes to detect threats grows as the amount of data to monitor increases.
How does monitoring keep up with more data to catch threats early?
Analyze the time complexity of the following monitoring process.
```python
for log_entry in system_logs:
    if check_for_threat(log_entry):
        alert_security_team()
```
This code checks each log entry one by one to find any signs of threats and alerts the team immediately.
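To make the idea concrete, here is a minimal runnable sketch of that loop. The helper `check_for_threat` below is a hypothetical keyword matcher invented for illustration, not a real detection engine, and alerts are collected in a list as a stand-in for `alert_security_team()`.

```python
# Hypothetical indicators for this sketch only.
SUSPICIOUS_KEYWORDS = ("failed login", "malware", "unauthorized")

def check_for_threat(log_entry):
    """Return True if the entry contains a suspicious keyword."""
    entry = log_entry.lower()
    return any(keyword in entry for keyword in SUSPICIOUS_KEYWORDS)

def scan_logs(system_logs):
    """Check every entry exactly once; collect entries that need an alert."""
    alerts = []
    for log_entry in system_logs:      # one check per entry -> O(n)
        if check_for_threat(log_entry):
            alerts.append(log_entry)   # stand-in for alert_security_team()
    return alerts

logs = [
    "user alice logged in",
    "Failed login for user bob",
    "scheduled backup completed",
    "MALWARE signature detected in upload",
]
print(scan_logs(logs))
```

Every entry passes through `check_for_threat` exactly once, which is the source of the linear cost analyzed below.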
- Primary operation: Looping through each log entry to check for threats.
- How many times: Once for every log entry in the system logs.
As the number of log entries grows, the time to check all of them grows at the same rate.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 checks |
| 100 | 100 checks |
| 1000 | 1000 checks |
Pattern observation: The number of operations grows directly with the number of log entries.
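The table's pattern can be confirmed with a small counter (a sketch; counting checks stands in for measuring wall-clock time, which is noisier but grows the same way):

```python
def count_checks(n):
    """Count how many threat checks a full scan of n log entries performs."""
    checks = 0
    for _ in range(n):  # one check per entry
        checks += 1
    return checks

for n in (10, 100, 1000):
    print(n, count_checks(n))  # 10 -> 10, 100 -> 100, 1000 -> 1000
```

Ten times the entries means ten times the checks, exactly the direct proportionality the table shows.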
Time Complexity: O(n)
This means detection time grows linearly: doubling the volume of incoming log data roughly doubles the time needed to scan it.
[X] Wrong: "Monitoring time stays the same no matter how much data there is."
[OK] Correct: Each new log entry needs to be checked, so more data means more work and more time.
Understanding how monitoring scales with data volume is a first step toward reasoning about real systems that must process large amounts of information efficiently.
"What if the monitoring system used sampling and only checked some log entries? How would the time complexity change?"
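One way to explore that question (a hypothetical sketch, not part of the original exercise): checking only every k-th entry cuts the work to roughly n/k checks. For a fixed k that is still O(n), just with a smaller constant, while a fixed-size sample would make the cost independent of n at the price of missed threats.

```python
def sampled_scan_checks(n, k):
    """Check only every k-th of n log entries; return the number of checks."""
    checks = 0
    for _ in range(0, n, k):  # skip k-1 entries between checks
        checks += 1
    return checks

# Every 10th entry: the work still grows linearly with n,
# but with a 10x smaller slope (O(n/k) is still O(n) for fixed k).
print(sampled_scan_checks(1000, 10))   # 100
print(sampled_scan_checks(10000, 10))  # 1000
```

Sampling trades detection coverage for speed: a threat in a skipped entry is never seen, so the complexity class only truly changes if the number of checks stops depending on n.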