# scipy.constants Module: Time & Space Complexity
We want to understand how the time to access physical constants grows as we look up more of them through the scipy.constants module.
Specifically, how does the total cost change as the number of lookups increases?
Analyze the time complexity of the following code snippet.
```python
from scipy import constants

# Access a single constant by name
speed_of_light = constants.value('speed of light in vacuum')

# Access multiple constants
names = ['electron mass', 'proton mass', 'Avogadro constant']
values = [constants.value(name) for name in names]
```
This code looks up physical constants by name using the scipy.constants.value function.
- Primary operation: looking up a constant by name in a dictionary.
- How many times: once for the single access; once per name in the list comprehension, so n times for n names.
Each lookup is a quick dictionary search. Doing more lookups means doing more searches.
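To see why each lookup is a dictionary search, here is a minimal sketch of how scipy.constants stores its data: a dictionary mapping each constant's name to a (value, unit, uncertainty) tuple, with value(name) returning just the numeric part. The two entries below use real CODATA values, but the dictionary itself is an illustrative stand-in for scipy.constants.physical_constants, not scipy's actual source.

```python
# Illustrative stand-in for scipy.constants.physical_constants:
# name -> (value, unit, uncertainty)
physical_constants = {
    'speed of light in vacuum': (299792458.0, 'm s^-1', 0.0),
    'Avogadro constant': (6.02214076e23, 'mol^-1', 0.0),
}

def value(name):
    """Return just the numeric value: a single O(1) dictionary lookup."""
    return physical_constants[name][0]

print(value('speed of light in vacuum'))  # 299792458.0
```

Because a Python dict is a hash table, each such lookup takes (on average) constant time regardless of how many constants are stored.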
| Number of Lookups (n) | Approx. Operations |
|---|---|
| 1 | 1 dictionary lookup |
| 10 | 10 dictionary lookups |
| 100 | 100 dictionary lookups |
Pattern observation: The total work grows directly with the number of lookups.
Time Complexity: O(n)
This means the time grows linearly with how many constants you look up.
[X] Wrong: "Looking up constants is instant no matter how many times I do it."
[OK] Correct: Each lookup takes a small amount of time, so doing many lookups adds up.
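The "adds up" point can be made concrete by counting lookups directly. The sketch below wraps a plain dict (a stand-in for the constants table, using real CODATA values) so that it counts every access: looking up n names performs exactly n dictionary lookups.

```python
class CountingDict(dict):
    """A dict that counts how many lookups it has served."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.lookups = 0

    def __getitem__(self, key):
        self.lookups += 1
        return super().__getitem__(key)

# Stand-in constants table (real CODATA values)
constants_table = CountingDict({
    'electron mass': 9.1093837015e-31,
    'proton mass': 1.67262192369e-27,
    'Avogadro constant': 6.02214076e23,
})

names = ['electron mass', 'proton mass', 'Avogadro constant']
values = [constants_table[name] for name in names]
print(constants_table.lookups)  # 3 -- one lookup per name, hence O(n)
```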
Understanding how dictionary lookups scale helps you reason about performance in data science tasks that use lookups or mappings.
"What if we stored constants in a list and searched linearly instead of a dictionary? How would the time complexity change?"
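One way to explore that question: store the same (name, value) pairs in a list and search sequentially. A single lookup then scans up to m stored entries, so one lookup is O(m) and n lookups cost O(n * m), compared with O(n) total for the dictionary. The sketch below is illustrative, again using real CODATA values.

```python
# Constants stored as a list of (name, value) pairs instead of a dict
constants_list = [
    ('electron mass', 9.1093837015e-31),
    ('proton mass', 1.67262192369e-27),
    ('Avogadro constant', 6.02214076e23),
]

def value_linear(name):
    """Linear search: compares every entry until a match -- O(m) per lookup."""
    for stored_name, stored_value in constants_list:
        if stored_name == name:
            return stored_value
    raise KeyError(name)

print(value_linear('Avogadro constant'))  # 6.02214076e+23
```

With only a few hundred physical constants the difference is small in practice, but the asymptotic behavior changes: the list pays for its size on every lookup, while the dictionary does not.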