Local-exec provisioner in Terraform - Time & Space Complexity
We want to understand how the time to run local commands grows when using Terraform's local-exec provisioner.
Specifically, how does running commands on multiple resources affect total execution time?
Analyze the time complexity of the following operation sequence.
```hcl
resource "null_resource" "example" {
  count = var.resource_count

  provisioner "local-exec" {
    command = "echo Resource ${count.index} created"
  }
}
```
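The snippet above references `var.resource_count` without declaring it. A minimal declaration (a sketch assuming a plain number variable; the name comes from the snippet, the default value is arbitrary) would be:

```hcl
variable "resource_count" {
  description = "How many null_resource instances (and local-exec runs) to create"
  type        = number
  default     = 10
}
```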
This creates multiple resources, each running a local command after creation.
Identify the API calls, resource provisioning steps, and data transfers that repeat.
- Primary operation: Running the local-exec command for each resource.
- How many times: Once per resource, so equal to the number of resources created.
Each resource triggers one local command execution, so total commands grow directly with resource count.
| Input Size (n) | Local Commands Executed |
|---|---|
| 10 | 10 local commands run |
| 100 | 100 local commands run |
| 1000 | 1000 local commands run |
Pattern observation: The number of local commands grows linearly as the number of resources increases.
Time Complexity: O(n)
This means the total time to run local commands grows in direct proportion to the number of resources: doubling the resource count roughly doubles the total command time.
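One way to observe this linear growth empirically is to replace `echo` with a command that takes a fixed amount of time and then time `terraform apply` at different resource counts. A hedged sketch (assumes a Unix shell with `sleep` available; `timing_test` is a made-up resource name):

```hcl
resource "null_resource" "timing_test" {
  count = var.resource_count

  provisioner "local-exec" {
    # Each instance pauses for one second. With Terraform's default
    # concurrency bound of 10 (-parallelism), the apply takes roughly
    # ceil(n / 10) seconds -- a constant factor faster, but still linear in n.
    command = "sleep 1"
  }
}
```

Comparing apply times at, say, n = 10 and n = 100 should show roughly a tenfold increase, matching the table above.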
[X] Wrong: "Running local-exec commands happens all at once, so time stays the same no matter how many resources."
[OK] Correct: Each local-exec invocation is a separate process, one per resource instance. Terraform does run independent instances concurrently, but only up to a fixed bound (the -parallelism flag, 10 by default), so more resources still mean more commands and more total time.
Understanding how local-exec scales helps you design Terraform configurations that run efficiently and predictably as your infrastructure grows.
"What if we changed local-exec to run commands in parallel? How would the time complexity change?"
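As a starting point for that question: Terraform already runs independent resource instances concurrently up to the `-parallelism` bound (default 10), so raising that bound only divides wall-clock time by a constant factor; total work remains O(n). Pushing further, the provisioner command itself could detach and return immediately. A hedged sketch (assumes a Unix shell; `./long_task.sh` is a hypothetical script standing in for real work):

```hcl
resource "null_resource" "detached_example" {
  count = var.resource_count

  provisioner "local-exec" {
    # nohup + & launches the task in the background and returns at once,
    # so apply time becomes nearly constant -- but Terraform can no longer
    # see whether the task succeeds, which trades away error reporting.
    command = "nohup ./long_task.sh > /dev/null 2>&1 &"
  }
}
```

In complexity terms, the apply itself drops toward O(1) per resource, but the machine still performs O(n) total work in the background.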