
Time complexity (Big O notation) in Data Structures Theory - Full Explanation

Introduction
Imagine you want to know how long a task will take as the size of the input grows. Time complexity helps us understand how the time needed changes when we handle bigger problems.
Explanation
What is Time Complexity
Time complexity measures how the number of steps an algorithm takes grows as the input size increases. It focuses on the main factor that affects running time, ignoring small details.
Time complexity shows how an algorithm's running time changes with input size.
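The idea of "counting steps rather than seconds" can be made concrete with a small sketch (the function name and step counter are illustrative, not from any standard library):

```python
def sum_all(arr):
    # count loop iterations alongside the result; the iteration
    # count, not wall-clock time, is what time complexity tracks
    steps = 0
    total = 0
    for x in arr:
        total += x
        steps += 1  # one step per element
    return total, steps

for n in [10, 100, 1000]:
    _, steps = sum_all(list(range(n)))
    print(n, steps)  # the step count grows in direct proportion to n
```

Doubling the input doubles the steps here, which is exactly what "grows with input size" means.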
Big O Notation
Big O notation is a way to describe the upper limit of time complexity. It tells us the worst-case scenario for how long an algorithm might take, using simple expressions like O(n) or O(n²).
Big O notation expresses the worst-case growth rate of an algorithm's running time.
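The "worst case" part can be seen by counting comparisons in a simple scan; this is a hedged sketch with an illustrative helper, not a standard function:

```python
def steps_to_find(arr, target):
    # count the comparisons a left-to-right scan makes before stopping
    for steps, value in enumerate(arr, start=1):
        if value == target:
            return steps
    return len(arr)  # target absent: every element was checked

data = list(range(100))
print(steps_to_find(data, 0))   # best case: 1 comparison
print(steps_to_find(data, 99))  # worst case: 100 comparisons
```

O(n) describes the second number: for a list of size n, the scan may need up to n comparisons.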
Common Big O Classes
Some common time complexities are constant time O(1), linear time O(n), quadratic time O(n²), and logarithmic time O(log n). Each describes how the time grows as input size n increases.
Different Big O classes describe different growth rates of running time.
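One minimal sketch of each class mentioned above (all function names are illustrative):

```python
def constant(arr):
    # O(1): one step no matter how big arr is
    return arr[0]

def linear(arr):
    # O(n): touches every element once
    return sum(arr)

def quadratic(arr):
    # O(n²): visits every pair of elements
    count = 0
    for a in arr:
        for b in arr:
            count += 1
    return count

def logarithmic(n):
    # O(log n): halves the problem each step
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

arr = list(range(16))
print(quadratic(arr))    # 256 pair visits = 16²
print(logarithmic(16))   # 4 halvings, since 2⁴ = 16
```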
Why Time Complexity Matters
Knowing time complexity helps us choose efficient algorithms that run faster on large inputs. It guides us to avoid slow methods that become impractical as data grows.
Time complexity helps pick algorithms that stay efficient as input size grows.
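A rough comparison of worst-case step counts shows why this matters at scale, assuming an O(n) scan versus an O(log n) halving search on sorted data:

```python
import math

for n in [1_000, 1_000_000, 1_000_000_000]:
    linear_steps = n                     # check every element
    log_steps = math.ceil(math.log2(n))  # halve the range each time
    print(f"n={n:>13}: linear={linear_steps:>13}, halving={log_steps}")
```

At a billion elements the halving search needs only about 30 steps, while the scan may need a billion.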
Real World Analogy

Imagine sorting a deck of cards. If you have just a few cards, sorting is quick. But if you have hundreds, some ways of sorting take much longer than others. Time complexity is like knowing how your sorting time grows as you add more cards.

What is Time Complexity → How the time to sort cards increases as you add more cards
Big O Notation → A simple label like 'fast' or 'slow' that tells you the worst time sorting might take
Common Big O Classes → Different sorting methods that take different amounts of time, like sorting by hand (slow) or using a machine (fast)
Why Time Complexity Matters → Choosing the best way to sort so you don't waste time when the deck is big
Diagram
┌───────────────┐
│ Input Size n  │
└──────┬────────┘
       │
       ▼
┌─────────────────────────┐
│ Time Complexity Growth  │
│                         │
│ O(1)     ──┐            │
│ O(log n)   │            │
│ O(n)       │            │
│ O(n²)      │            │
└────────────┴────────────┘
Diagram showing how time complexity grows with input size for different Big O classes.
Key Facts
Time Complexity: A measure of how the running time of an algorithm changes as input size grows.
Big O Notation: A notation that describes the upper bound of an algorithm's running time.
O(1) - Constant Time: An algorithm that takes the same time regardless of input size.
O(n) - Linear Time: An algorithm whose running time grows directly with input size.
O(n²) - Quadratic Time: An algorithm whose running time grows proportionally to the square of input size.
O(log n) - Logarithmic Time: An algorithm whose running time grows slowly as input size increases.
Code Example
def linear_search(arr, target):
    # O(n): in the worst case, every element is checked once
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1

print(linear_search([1, 3, 5, 7, 9], 7))
Output: 3
Common Confusions
Big O shows exact running time: Actually, Big O only shows the growth trend, not the exact time or number of steps.
Lower Big O always means faster in practice: Actually, algorithms with better Big O can be slower for small inputs due to constant-factor overhead.
Big O includes all details of running time: Actually, Big O ignores constants and lower-order terms to focus on the dominant growth factor.
Summary
Time complexity helps us understand how an algorithm's running time grows with input size.
Big O notation describes the worst-case growth rate using simple expressions like O(n) or O(n²).
Knowing time complexity guides us to choose efficient algorithms for large inputs.