
Audit logging in Hadoop - Time & Space Complexity

Time Complexity: Audit logging
O(n)
Understanding Time Complexity

Audit logging in Hadoop tracks user actions and system events. Understanding time complexity helps us see how logging affects system speed as data grows.

We want to know how the time spent logging changes as more events arrive.

Scenario Under Consideration

Analyze the time complexity of the following audit logging snippet.


// Audit logging in Hadoop (Java-style sketch)
for (Event event : eventStream) {
  LogEntry logEntry = createLogEntry(event);  // constant-time work per event
  writeToAuditLog(logEntry);                  // constant-time append to the log
}

This code processes each event by creating a log entry and writing it to the audit log.
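To see the linear cost concretely, here is a runnable Python sketch of the same loop. The event fields and the in-memory list standing in for the audit log are illustrative, not Hadoop APIs:

```python
def create_log_entry(event):
    # Constant-time work per event: format a single audit record
    return f"AUDIT user={event['user']} action={event['action']}"

def write_audit_log(event_stream):
    audit_log = []                       # stands in for the audit log file
    for event in event_stream:           # runs once per event -> O(n)
        entry = create_log_entry(event)  # O(1)
        audit_log.append(entry)          # O(1) append
    return audit_log

events = [{"user": "alice", "action": "read"},
          {"user": "bob", "action": "write"}]
log = write_audit_log(events)
print(len(log))  # 2 entries for 2 events
```

Every event passes through the loop body exactly once, which is why the total work scales directly with the size of the event stream.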

Identify Repeating Operations

Look at what repeats as input grows.

  • Primary operation: Loop over each event in the event stream.
  • How many times: Once for every event received.

How Execution Grows With Input

As the number of events increases, the logging work grows too.

Input Size (n) | Approx. Operations
10             | 10 log entries created and written
100            | 100 log entries created and written
1000           | 1000 log entries created and written

Pattern observation: The work grows directly with the number of events; double the events, double the work.
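You can reproduce this pattern by counting operations directly. This small Python sketch (the function name is illustrative) tallies one create and one write per event, so it counts 2n operations for n events:

```python
def logging_operations(num_events):
    """Count the create + write operations for num_events audited events."""
    operations = 0
    for _ in range(num_events):
        operations += 1  # createLogEntry
        operations += 1  # writeToAuditLog
    return operations

for n in (10, 100, 1000):
    print(f"{n} events -> {logging_operations(n)} operations")
# Doubling the events doubles the work: 100 events -> 200 ops, 200 -> 400, ...
```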

Final Time Complexity

Time Complexity: O(n)

This means the time to log grows in a straight line with the number of events.

Common Mistake

[X] Wrong: "Audit logging happens instantly and does not slow down as events increase."

[OK] Correct: Each event requires work to log, so more events mean more time spent logging.

Interview Connect

Understanding how logging scales helps you design systems that stay fast even as they track more data, and discussing it in an interview shows you can reason about real system behavior.

Self-Check

"What if we batch multiple events before writing to the audit log? How would the time complexity change?"