Deepfakes and misinformation in AI for Everyone - Time & Space Complexity
We want to understand how the effort to create or detect deepfakes changes as the amount of data grows.
How does the time needed grow when more videos or images are involved?
Analyze the time complexity of the following simplified deepfake detection process.
```
for each video in videos:
    for each frame in video:
        analyze frame for deepfake signs
    combine frame results for video decision
```
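The pseudocode above can be sketched in Python. The `analyze_frame` function here is a hypothetical stand-in for a real detector; in this toy version each "frame" is just a precomputed suspicion score.

```python
def analyze_frame(frame):
    # Stand-in for a real deepfake detector; here each frame is
    # already a suspicion score between 0 and 1.
    return frame

def detect_deepfake(video_frames):
    # Inner loop: runs once per frame (m times per video).
    scores = [analyze_frame(frame) for frame in video_frames]
    # Combine frame results: flag the video if the average score is high.
    return sum(scores) / len(scores) > 0.5

# Outer loop: runs once per video (n times).
videos = [[0.9, 0.8, 0.7], [0.1, 0.2, 0.3]]   # toy per-frame scores
flags = [detect_deepfake(v) for v in videos]
```

Because every frame of every video passes through `analyze_frame` exactly once, the total number of analyses is the product of the two loop counts.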
This process checks every frame of every video to decide whether each video is a deepfake.
Look at what repeats most in this process.
- Primary operation: Analyzing each frame for deepfake signs.
- How many times: Once for every frame in every video.
As the number of videos or frames grows, the total work grows too.
| Input Size (videos x frames) | Approx. Operations |
|---|---|
| 10 videos x 30 frames | 300 analyses |
| 100 videos x 30 frames | 3,000 analyses |
| 1,000 videos x 30 frames | 30,000 analyses |
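The numbers in the table can be reproduced with a quick calculation, holding the frame count fixed at 30 per video:

```python
FRAMES_PER_VIDEO = 30

for n_videos in (10, 100, 1000):
    # One analysis per frame, so total work = videos * frames per video.
    ops = n_videos * FRAMES_PER_VIDEO
    print(f"{n_videos} videos -> {ops} analyses")
```

Multiplying the number of videos by 10 multiplies the number of analyses by 10, which is exactly the linear growth the table shows.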
Pattern observation: The work grows directly with the total number of frames checked.
Time Complexity: O(n * m)
This means the time grows proportionally to the number of videos (n) times the number of frames per video (m).
[X] Wrong: "Checking one frame means the whole video is checked instantly."
[OK] Correct: Each frame needs separate analysis, so time adds up with more frames.
Understanding how work grows with data size helps you explain system limits and design better solutions.
What if we only analyzed every other frame instead of all frames? How would the time complexity change?
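One way to explore that question is to sample every second frame, which Python's slice syntax makes easy. The work is cut in half, but the complexity class is unchanged: O(n * m / 2) is still O(n * m), because constant factors are dropped in Big-O notation.

```python
# 10 toy videos with 30 frames each (frame contents don't matter here).
videos = [list(range(30)) for _ in range(10)]

full = sum(len(v) for v in videos)          # analyze every frame
sampled = sum(len(v[::2]) for v in videos)  # analyze every other frame

print(full, sampled)  # half the analyses, but the same growth pattern
```

Doubling the number of videos still doubles `sampled`, so skipping frames speeds things up by a constant factor without changing how the time scales.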