# Shared state overview in Rust - Time & Space Complexity
When different parts of a program share data, coordinating access to that data affects how long tasks take. We want to see how running time changes as more threads share and access the same data.
Analyze the time complexity of the following code snippet.
```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: Arc allows shared ownership, Mutex guards mutation.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}
```
This code creates 10 threads that share and update a single counter safely using a lock.
Identify the repeated operations: loops, recursion, or array traversals.
- Primary operation: Loop creating 10 threads, each locking and updating shared data once.
- How many times: 10 times, once per thread.
The snippet hard-codes 10 threads; to analyze complexity, generalize that count to n. As n increases, the program performs proportionally more lock-and-update steps.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 lock and update steps |
| 100 | About 100 lock and update steps |
| 1000 | About 1000 lock and update steps |
Pattern observation: The work grows directly with the number of threads sharing the state.
Time Complexity: O(n)
This means the total time grows linearly: doubling the number of threads that share and update the state roughly doubles the work.
[X] Wrong: "Adding more threads won't affect time much because they run at the same time."
[OK] Correct: Even if threads run together, they must wait to lock the shared data, so more threads mean more waiting and longer total time.
Understanding how shared data affects time helps you write safe and efficient programs that work well when many parts run together.
"What if we replaced the Mutex with a lock-free atomic counter? How would the time complexity change?"