Concurrency safety guarantees in Rust - Time & Space Complexity
When working with concurrency in Rust, it is important to understand how safety mechanisms such as mutexes affect performance, and how the program's running time scales as more threads are added.
Analyze the time complexity of the following Rust code using concurrency safety features.
```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    let n = 100usize;
    for _ in 0..n {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }
    for handle in handles {
        handle.join().unwrap();
    }
}
```
This code creates n threads, each safely incrementing a shared counter using a mutex.
Look for loops and repeated actions that affect time.
- Primary operation: spawning and joining n threads, each locking and unlocking the mutex once.
- How many times: exactly n times, once per thread.
As n grows, the number of threads and mutex locks grows linearly.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 thread spawns and 10 mutex locks |
| 100 | About 100 thread spawns and 100 mutex locks |
| 1000 | About 1000 thread spawns and 1000 mutex locks |
Pattern observation: The work grows directly with the number of threads created.
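The table's pattern can be checked directly. Below is a minimal sketch that wraps the counter code in a hypothetical helper, `run_counter` (not part of the original code), and runs it for each input size from the table; the final count always equals n, confirming one mutex lock per spawned thread:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical helper: runs the counter example for a given n and
// returns the final count. Each of the n threads locks the mutex once.
fn run_counter(n: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = vec![];
    for _ in 0..n {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    // Mirrors the table: the number of increments (and mutex locks)
    // equals n for each input size.
    for n in [10, 100, 1000] {
        assert_eq!(run_counter(n), n);
    }
    println!("all sizes verified");
}
```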
Time Complexity: O(n)
This means the time to complete grows in a straight line as you add more threads.
[X] Wrong: "Using a mutex makes the program run in constant time regardless of threads."
[OK] Correct: Each thread must wait to lock the mutex, so time grows with the number of threads.
Understanding how concurrency safety affects performance shows you can write code that is both safe and efficient.
"What if we replaced the mutex with an atomic type? How would the time complexity change?"