Linear programming (linprog) in SciPy - Time & Space Complexity
We want to understand how the time it takes to solve a linear programming problem grows as the problem size increases.
Specifically, how does the solver's work change when we add more variables or constraints?
Analyze the time complexity of the following code snippet.
```python
from scipy.optimize import linprog

c = [1, 2, 3]                    # objective coefficients: minimize x1 + 2*x2 + 3*x3
A = [[-1, 1, 0], [0, -1, 1]]     # inequality constraint matrix (A @ x <= b)
b = [1, 1]                       # right-hand sides of the inequalities
result = linprog(c, A_ub=A, b_ub=b, method='highs')  # default bounds are x >= 0
```
This code solves a linear programming problem minimizing a cost with inequality constraints using SciPy's linprog.
Identify the repeated work: the loops, recursion, and array traversals that the solver performs.
- Primary operation: The solver repeatedly performs matrix operations and pivot steps to find the optimal solution.
- How many times: the iteration count depends on both the number of variables and the number of constraints; the solver keeps iterating until its optimality conditions are satisfied.
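You can observe the iteration count directly: the `OptimizeResult` returned by `linprog` exposes it as `result.nit`. A minimal sketch, using randomly generated dense problems (the sizes, seed, and problem construction here are arbitrary choices for illustration):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def solver_iterations(n_vars, n_cons):
    """Solve a random dense LP and return the solver's iteration count."""
    c = -rng.random(n_vars)           # negative costs so the optimum is not trivially x = 0
    A = rng.random((n_cons, n_vars))  # positive entries keep the feasible region bounded
    b = A.sum(axis=1)                 # positive b, so x = 0 is feasible (region is nonempty)
    res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method='highs')
    return res.nit

for size in (5, 20, 80):
    print(f"{size} vars and constraints -> {solver_iterations(size, size)} iterations")
```

Larger problems generally require more iterations, and each iteration itself becomes more expensive because the matrices involved are bigger.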
As the number of variables and constraints grows, the solver does more work to check and update possible solutions.
| Input Size (variables + constraints) | Approx. Operations |
|---|---|
| 10 | ~1,000 (10^3) |
| 100 | ~1,000,000 (10^6) |
| 1000 | ~1,000,000,000 (10^9) |
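The growth in the table can be checked empirically. A rough timing sketch, assuming random dense square instances (the sizes and seed are arbitrary, and wall-clock time also reflects constant factors and sparsity, so expect only the trend, not exact cubic ratios):

```python
import time

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

def time_lp(n):
    """Time one solve of a dense random LP with n variables and n constraints."""
    c = -rng.random(n)      # negate so the optimum is not trivially x = 0
    A = rng.random((n, n))
    b = A.sum(axis=1)       # positive b keeps the region nonempty; positive A keeps it bounded
    start = time.perf_counter()
    linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method='highs')
    return time.perf_counter() - start

for n in (50, 100, 200):
    print(f"n={n}: {time_lp(n):.4f} s")
```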
Pattern observation: the work grows much faster than linearly. Each solver iteration factors or updates a matrix, and for dense problems that linear algebra costs roughly cubic time in the problem dimensions.
Time Complexity: approximately O(n^3) in practice, where n combines the number of variables and constraints. (Strictly, the simplex method has exponential worst-case behavior and interior-point methods are provably polynomial, but for typical dense problems the cubic matrix work per iteration dominates.)
This means the time to solve the problem grows roughly with the cube of the number of variables and constraints combined.
[X] Wrong: "Solving a linear program always takes time proportional to the number of variables only."
[OK] Correct: The number of constraints also affects the time, and the solver's matrix operations depend on both variables and constraints together.
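To see that constraints matter on their own, a small sketch that fixes the variable count and grows only the constraint count (the specific sizes and random instances are arbitrary choices for illustration):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n_vars = 30  # held fixed; only the number of constraints changes below

for n_cons in (30, 300):
    c = -rng.random(n_vars)           # negative costs so the optimum is not trivially x = 0
    A = rng.random((n_cons, n_vars))  # more rows = more constraints, same variables
    b = A.sum(axis=1)
    res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method='highs')
    print(f"{n_cons} constraints: status={res.status}, iterations={res.nit}")
```

Even with the variable count unchanged, the larger constraint matrix means more work per iteration.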
Understanding how the solver's time grows helps you explain algorithm efficiency clearly and shows you can think about real-world problem sizes.
"What if we changed the solver method from 'highs' to a simpler algorithm? How would the time complexity change?"