Imagine you have two ways to find a book in a library:
- Method A: You check every book one by one from the first shelf to the last.
- Method B: You use the library's index to go directly to the shelf where the book is.
Which method is faster and why?
Think about how using a map or index helps you reach your destination faster.
Method B is faster because it uses an index (like a map) to jump directly to the right place, avoiding checking every book.
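The same contrast can be sketched in code: a linear scan versus a dictionary that acts like the library's index. The book titles below are made up purely for illustration.

```python
# Shelf of books in order; titles are invented for this example.
books = ["Dune", "Emma", "Hamlet", "Ithaca", "Ulysses"]

# Method A: check every book one by one (linear scan).
def find_by_scanning(title):
    for position, book in enumerate(books):
        if book == title:
            return position
    return -1

# Method B: build an index once, then jump straight to the position.
index = {book: position for position, book in enumerate(books)}

def find_by_index(title):
    return index.get(title, -1)

print(find_by_scanning("Ulysses"))  # 4 -- had to check all 5 books
print(find_by_index("Ulysses"))     # 4 -- one dictionary lookup
```

Both calls return the same position; the difference is how much work each does to get there.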
Given a list of 5 numbers: [3, 7, 2, 9, 5]
Algorithm 1: Check each number to see if it is 9.
Algorithm 2: Sort the list first, then check if 9 is the last number.
How many steps does each algorithm take to find the number 9?
```python
lst = [3, 7, 2, 9, 5]

# Algorithm 1: scan until 9 is found
for num in lst:
    if num == 9:
        break

# Algorithm 2: sort first, then check the last element
sorted_list = sorted(lst)
if sorted_list[-1] == 9:
    pass
```
Count how many numbers are checked before finding 9 in Algorithm 1. For Algorithm 2, consider sorting steps plus the check.
Algorithm 1 checks numbers one by one and finds 9 on the 4th check (3, 7, 2, then 9). Algorithm 2 first sorts the list to [2, 3, 5, 7, 9] (sorting itself takes multiple steps, simplified here as 5) and then uses 1 step to check the last number, totaling about 6 steps. For a single search, sorting first costs more than it saves.
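The counts above can be verified by instrumenting the two algorithms with an explicit step counter:

```python
lst = [3, 7, 2, 9, 5]

# Algorithm 1: linear scan, counting every comparison made.
checks = 0
for num in lst:
    checks += 1
    if num == 9:
        break
print(checks)  # 4 -- the values 3, 7, 2, 9 are each checked once

# Algorithm 2: sort first (many steps on its own), then one final check.
sorted_list = sorted(lst)      # [2, 3, 5, 7, 9]
print(sorted_list[-1] == 9)    # True -- a single extra check after sorting
```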
Which algorithm will be faster for very large input sizes?
Algorithm X: Runs in time proportional to n (linear time).
Algorithm Y: Runs in time proportional to n squared (quadratic time).
Think about how doubling the input size affects each algorithm's steps.
Algorithm X grows linearly, so doubling the input doubles the time. Algorithm Y grows quadratically, so doubling the input quadruples the time. For large inputs, Algorithm X is faster.
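This doubling effect is easy to see numerically, using n and n*n as stand-ins for the step counts of the two algorithms:

```python
# Step counts for a linear (X) and a quadratic (Y) algorithm as n doubles.
for n in (1_000, 2_000, 4_000):
    linear = n         # Algorithm X: steps proportional to n
    quadratic = n * n  # Algorithm Y: steps proportional to n squared
    print(f"n={n}: X takes {linear} steps, Y takes {quadratic} steps")

# Doubling n doubles X's steps but quadruples Y's.
```

At n = 4,000 the quadratic algorithm already needs 16 million steps versus 4,000 for the linear one, and the gap keeps widening.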
An algorithm checks every pair of items in a list to find duplicates. If the list has n items, it compares each item with every other item.
What is the time complexity of this algorithm?
Think about how many pairs are in a list of n items.
Checking every pair means each of the n items is compared with the items after it, giving n*(n-1)/2 pairs, which is roughly n squared. The time complexity is O(n^2), quadratic time.
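A minimal sketch of this pairwise duplicate check, assuming the straightforward nested-loop version, with a counter to confirm the number of comparisons:

```python
# Find duplicates by comparing every pair of items in the list.
items = [3, 7, 2, 9, 5]
n = len(items)

comparisons = 0
duplicates = set()
for i in range(n):
    for j in range(i + 1, n):  # each pair is compared exactly once
        comparisons += 1
        if items[i] == items[j]:
            duplicates.add(items[i])

print(comparisons)  # 10 comparisons for n = 5, i.e. n*(n-1)/2
print(duplicates)   # empty set -- this list has no duplicates
```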
You need to search for a name in a phone book with 1 million entries.
Which method is the most efficient?
Think about how dividing the search space helps find the target faster.
Binary search is the most efficient, provided the data is sorted (a phone book is sorted by name). It cuts the search space in half at each step, so 1 million entries need only about 20 comparisons, since 2^20 is just over 1 million.
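A sketch of binary search over a sorted "phone book", with the integers 0 to 999,999 standing in for names and a counter to show how few steps are needed:

```python
# Sorted stand-in for a phone book with 1,000,000 entries.
entries = list(range(1_000_000))
target = 765_432  # arbitrary entry to search for

lo, hi = 0, len(entries) - 1
steps = 0
while lo <= hi:
    steps += 1
    mid = (lo + hi) // 2      # look at the middle of the remaining range
    if entries[mid] == target:
        break                 # found it
    elif entries[mid] < target:
        lo = mid + 1          # discard the lower half
    else:
        hi = mid - 1          # discard the upper half

print(steps)  # at most about 20 steps, since 2**20 > 1,000,000
```

Compare that with a linear scan, which would check up to 1,000,000 entries in the worst case.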