SciPy · ~10 min

Linear programming (linprog) in SciPy - Step-by-Step Execution

Concept Flow - Linear programming (linprog)
Define objective function coefficients c
Define inequality constraints A_ub and b_ub
Define equality constraints A_eq and b_eq (optional)
Call linprog solver with inputs
Solver finds optimal solution x
Check solver success and output results
Linear programming defines a linear objective and linear constraints, then linprog finds the best feasible solution.
Execution Sample
SciPy
from scipy.optimize import linprog

c = [-1, -2]            # negated [1, 2]: minimizing c·x maximizes x1 + 2*x2
A = [[2, 1], [1, 1]]    # 2*x1 + x2 <= 20 and x1 + x2 <= 16
b = [20, 16]
res = linprog(c, A_ub=A, b_ub=b)
print(res.x)            # [0.0, 16.0]
This code finds the values of x1 and x2 that maximize x1 + 2*x2 under the given constraints; because linprog only minimizes, the coefficients in c are negated.
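Since c was negated to turn maximization into minimization, the maximized value of the original objective x1 + 2*x2 can be recovered by negating res.fun. A minimal check using the same data as above:

```python
from scipy.optimize import linprog

# Same problem as above: maximize x1 + 2*x2 by minimizing its negation.
c = [-1, -2]
A = [[2, 1], [1, 1]]
b = [20, 16]
res = linprog(c, A_ub=A, b_ub=b)

max_value = -res.fun  # undo the sign flip to get the true maximum
print(max_value)      # 32.0, attained at x = [0.0, 16.0]
```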
Execution Table
Step | Action | Input/Condition | Result/Output
1 | Define objective coefficients c | c = [-1, -2] | Objective: minimize -1*x1 - 2*x2
2 | Define inequality constraints A_ub and b_ub | A = [[2,1],[1,1]], b = [20,16] | Constraints: 2*x1 + x2 <= 20, x1 + x2 <= 16
3 | Call linprog solver | linprog(c, A_ub=A, b_ub=b) | Solver starts optimization
4 | Solver iterates to find feasible solution | Checking constraints and objective | Intermediate solutions tested
5 | Solver finds optimal solution | Optimal x found | x = [0.0, 16.0]
6 | Check solver success | res.success == True | Optimization successful
7 | Print solution | print(res.x) | [0.0, 16.0]
8 | Exit | Optimization complete | Best solution found within constraints
💡 Solver stops after finding optimal solution that satisfies all constraints.
Variable Tracker
Variable | Start | After Step 3 | After Step 5 | Final
c | [-1, -2] | [-1, -2] | [-1, -2] | [-1, -2]
A | [[2,1],[1,1]] | [[2,1],[1,1]] | [[2,1],[1,1]] | [[2,1],[1,1]]
b | [20,16] | [20,16] | [20,16] | [20,16]
res.x | None | None | [0.0, 16.0] | [0.0, 16.0]
res.success | None | None | True | True
Key Moments - 3 Insights
Why are the coefficients in c negative when we want to maximize?
linprog only minimizes, so to maximize an objective we minimize its negation. See Steps 1 and 3 in the Execution Table.
What do A_ub and b_ub represent in the problem?
They represent inequality constraints of the form A_ub * x <= b_ub. This is shown in Step 2 of the Execution Table, where the constraints are defined.
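Equality constraints follow the same pattern with A_eq and b_eq. As an illustrative extension of the sample problem (the added requirement x1 + x2 = 10 is our own assumption, not part of the original example):

```python
from scipy.optimize import linprog

c = [-1, -2]               # maximize x1 + 2*x2 via negation
A_ub = [[2, 1], [1, 1]]    # 2*x1 + x2 <= 20 and x1 + x2 <= 16
b_ub = [20, 16]
A_eq = [[1, 1]]            # hypothetical extra requirement: x1 + x2 = 10
b_eq = [10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x)  # with x1 + x2 pinned to 10, the best split is x = [0.0, 10.0]
```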
How do we know the solver found a valid solution?
res.success is True as shown in Step 6, indicating the solver found a solution meeting all constraints.
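In practice the success flag is worth checking before using res.x; when the solver fails (for example on an infeasible or unbounded problem), res.message describes why. A defensive sketch using the sample problem:

```python
from scipy.optimize import linprog

c = [-1, -2]
A = [[2, 1], [1, 1]]
b = [20, 16]
res = linprog(c, A_ub=A, b_ub=b)

if res.success:
    print("optimal x:", res.x)            # x = [0.0, 16.0]
else:
    # res.message explains failures such as infeasibility or unboundedness
    print("solver failed:", res.message)
```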
Visual Quiz - 3 Questions
Test your understanding
Looking at the Execution Table at Step 5, what is the value of res.x?
A. [10.0, 5.0]
B. [0.0, 16.0]
C. [8.0, 8.0]
D. [5.0, 10.0]
💡 Hint
Check the 'Result/Output' column at Step 5 in the Execution Table.
At which step does the solver confirm the optimization was successful?
A. Step 6
B. Step 4
C. Step 3
D. Step 7
💡 Hint
Look for 'res.success == True' in the Execution Table.
If we change c to [1, 2] instead of [-1, -2], what happens to the optimization goal?
A. It still maximizes the original objective
B. It maximizes the negative objective
C. It minimizes the original objective
D. It minimizes the negative objective
💡 Hint
Recall linprog minimizes the objective; changing sign changes maximize to minimize.
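The sign flip can be verified directly: with c = [1, 2] and linprog's default nonnegativity bounds, minimizing the original objective drives both variables to zero, while c = [-1, -2] yields the maximizing solution.

```python
from scipy.optimize import linprog

A = [[2, 1], [1, 1]]
b = [20, 16]

res_max = linprog([-1, -2], A_ub=A, b_ub=b)  # maximize x1 + 2*x2
res_min = linprog([1, 2], A_ub=A, b_ub=b)    # minimize x1 + 2*x2

print(res_max.x)  # x = [0.0, 16.0]
print(res_min.x)  # x = [0.0, 0.0]: minimization pushes to the lower bounds
```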
Concept Snapshot
Linear programming with linprog:
- Define objective coefficients c (minimize c·x)
- Define inequality constraints A_ub·x <= b_ub
- Optionally define equality constraints A_eq·x = b_eq
- Call linprog(c, A_ub=A_ub, b_ub=b_ub)
- Check res.success and res.x for solution
- To maximize, minimize negative objective
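The steps in the snapshot above can be combined in one call; bounds is a further optional argument for per-variable limits (the cap x2 <= 12 below is our own illustrative assumption):

```python
from scipy.optimize import linprog

c = [-1, -2]                   # maximize x1 + 2*x2 via negation
A_ub = [[2, 1], [1, 1]]        # 2*x1 + x2 <= 20 and x1 + x2 <= 16
b_ub = [20, 16]
bounds = [(0, None), (0, 12)]  # hypothetical extra cap: x2 <= 12

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
if res.success:
    print(res.x)  # capping x2 moves the optimum to x = [4.0, 12.0]
```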
Full Transcript
Linear programming uses linprog to find the values of the variables that minimize an objective function while respecting constraints. We start by defining the objective coefficients c and the inequality constraints A_ub and b_ub, then call linprog. The solver searches the feasible region for the minimum of c·x that satisfies all constraints. Since linprog only minimizes, to maximize we negate the coefficients. The solver returns res.x with the best solution and res.success to confirm it succeeded. This step-by-step process shows how linprog works.