
Commit 022c575

Merge pull request #786 from JuliaOpt/bl/doc_todos
Address TODOs
2 parents a2bb7bc + fc2dc7e commit 022c575

2 files changed: +101 -51 lines changed


docs/Project.toml

Lines changed: 1 addition & 1 deletion
@@ -3,4 +3,4 @@ Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
 
 [compat]
-Documenter = "~0.21"
+Documenter = "~0.22"

docs/src/apimanual.md

Lines changed: 100 additions & 50 deletions
@@ -1,5 +1,9 @@
 ```@meta
 CurrentModule = MathOptInterface
+DocTestSetup = quote
+    using MathOptInterface
+    const MOI = MathOptInterface
+end
 ```
 
 # Manual
@@ -138,15 +142,25 @@ from the [`ModelLike`](@ref) abstract type.
 Notably missing from the model API is the method to solve an optimization problem.
 `ModelLike` objects may store an instance (e.g., in memory or backed by a file format)
 without being linked to a particular solver. In addition to the model API, MOI
-defines [`AbstractOptimizer`](@ref). *Optimizers* (or solvers) implement the model API (inheriting from `ModelLike`) and additionally
-provide methods to solve the model.
+defines [`AbstractOptimizer`](@ref). *Optimizers* (or solvers) implement the
+model API (inheriting from `ModelLike`) and additionally provide methods to
+solve the model.
 
 Through the rest of the manual, `model` is used as a generic `ModelLike`, and
 `optimizer` is used as a generic `AbstractOptimizer`.
 
-[Discuss how models are constructed, optimizer attributes.]
+Models are constructed by
+* adding variables using [`add_variable`](@ref) (or [`add_variables`](@ref)),
+  see [Adding variables](@ref);
+* setting an objective sense and function using [`set`](@ref),
+  see [Setting an objective](@ref);
+* and adding constraints using [`add_constraint`](@ref) (or
+  [`add_constraints`](@ref)), see [Sets and Constraints](@ref).
+
+The way the problem is solved by the optimizer is controlled by
+[`AbstractOptimizerAttribute`](@ref)s, see [Solver-specific attributes](@ref).
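
A rough sketch of that construction sequence, using the cache `MOI.Utilities.Model{Float64}` that also appears in the doctest setup below (any `ModelLike` supporting these functions and sets would work; the data are made up for illustration):

```julia
using MathOptInterface
const MOI = MathOptInterface

# Any ModelLike works here; a solver's Optimizer could be used instead.
model = MOI.Utilities.Model{Float64}()

# Adding variables.
x = MOI.add_variables(model, 2)

# Setting an objective sense and function: maximize 1.0 * x[1] + 2.0 * x[2].
f = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 2.0], x), 0.0)
MOI.set(model, MOI.ObjectiveFunction{typeof(f)}(), f)
MOI.set(model, MOI.ObjectiveSense(), MOI.MAX_SENSE)

# Adding a constraint: x[1] + x[2] <= 1.
g = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 1.0], x), 0.0)
MOI.add_constraint(model, g, MOI.LessThan(1.0))
```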
 
-## Variables
+## Adding variables
 
 All variables in MOI are scalar variables.
 New scalar variables are created with [`add_variable`](@ref) or
@@ -210,6 +224,8 @@ the function ``5x_1 - 2.3x_2 + 1``.
 `[ScalarAffineTerm(5.0, x[1]), ScalarAffineTerm(-2.3, x[2])]`. This is
 Julia's broadcast syntax and is used quite often.
 
+### Setting an objective
+
 Objective functions are assigned to a model by setting the
 [`ObjectiveFunction`](@ref) attribute. The [`ObjectiveSense`](@ref) attribute is
 used for setting the optimization sense.
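
For instance, a sketch of setting both attributes for the function ``5x_1 - 2.3x_2 + 1`` mentioned above; it assumes `model` and a variable vector `x` already exist, and `MOI.MIN_SENSE` is the minimization counterpart of the `MOI.MAX_SENSE` used later in this manual:

```julia
# Build 5.0 * x[1] - 2.3 * x[2] + 1.0 with the broadcast syntax shown above.
objective = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([5.0, -2.3], [x[1], x[2]]), 1.0)

# The objective function and the optimization sense are two separate attributes.
MOI.set(model, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), objective)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
```
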
@@ -289,10 +305,11 @@ add_constraint(model, VectorOfVariables([x,y,z]), SecondOrderCone(3))
 ### Constraints by function-set pairs
 
 Below is a list of common constraint types and how they are represented
-as function-set pairs in MOI. In the notation below, ``x`` is a vector of decision variables,
-``x_i`` is a scalar decision variable, and all other terms are fixed constants.
-
-[Define notation more precisely. ``a`` vector; ``A`` matrix; don't reuse ``u,l,b`` as scalar and vector]
+as function-set pairs in MOI. In the notation below, ``x`` is a vector of
+decision variables, ``x_i`` is a scalar decision variable, ``\alpha, \beta`` are
+scalar constants, ``a, b`` are constant vectors, ``A`` is a constant matrix and
+``\mathbb{R}_+`` (resp. ``\mathbb{R}_-``) is the set of nonnegative (resp.
+nonpositive) real numbers.
 
 #### Linear constraints
 
@@ -301,11 +318,11 @@ as function-set pairs in MOI. In the notation below, ``x`` is a vector of decisi
 | ``a^Tx \le u`` | `ScalarAffineFunction` | `LessThan` |
 | ``a^Tx \ge l`` | `ScalarAffineFunction` | `GreaterThan` |
 | ``a^Tx = b`` | `ScalarAffineFunction` | `EqualTo` |
-| ``l \le a^Tx \le u`` | `ScalarAffineFunction` | `Interval` |
-| ``x_i \le u`` | `SingleVariable` | `LessThan` |
-| ``x_i \ge l`` | `SingleVariable` | `GreaterThan` |
-| ``x_i = b`` | `SingleVariable` | `EqualTo` |
-| ``l \le x_i \le u`` | `SingleVariable` | `Interval` |
+| ``\alpha \le a^Tx \le \beta`` | `ScalarAffineFunction` | `Interval` |
+| ``x_i \le \beta`` | `SingleVariable` | `LessThan` |
+| ``x_i \ge \alpha`` | `SingleVariable` | `GreaterThan` |
+| ``x_i = \beta`` | `SingleVariable` | `EqualTo` |
+| ``\alpha \le x_i \le \beta`` | `SingleVariable` | `Interval` |
 | ``Ax + b \in \mathbb{R}_+^n`` | `VectorAffineFunction` | `Nonnegatives` |
 | ``Ax + b \in \mathbb{R}_-^n`` | `VectorAffineFunction` | `Nonpositives` |
 | ``Ax + b = 0`` | `VectorAffineFunction` | `Zeros` |
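
As a concrete sketch of two of these rows (assuming `model` and a variable vector `x` with at least two entries; the coefficients are arbitrary):

```julia
# a'x <= β as ScalarAffineFunction-in-LessThan, with a = [2.0, 3.0] and β = 1.0.
f = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([2.0, 3.0], [x[1], x[2]]), 0.0)
MOI.add_constraint(model, f, MOI.LessThan(1.0))

# α <= x_i <= β as SingleVariable-in-Interval, with α = 0.0 and β = 2.0.
MOI.add_constraint(model, MOI.SingleVariable(x[1]), MOI.Interval(0.0, 2.0))
```
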
@@ -314,8 +331,6 @@ By convention, solvers are not expected to support nonzero constant terms in the
 
 Constraints with `SingleVariable` in `LessThan`, `GreaterThan`, `EqualTo`, or `Interval` sets have a natural interpretation as variable bounds. As such, it is typically not natural to impose multiple lower or upper bounds on the same variable, and by convention we do not ask solver interfaces to support this. It is natural, however, to impose upper and lower bounds separately as two different constraints on a single variable. The difference between imposing bounds by using a single `Interval` constraint and by using separate `LessThan` and `GreaterThan` constraints is that the latter will allow the solver to return separate dual multipliers for the two bounds, while the former will allow the solver to return only a single dual for the interval constraint.
 
-[Define ``\mathbb{R}_+, \mathbb{R}_-``]
-
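
A sketch contrasting the two bounding styles described above (assuming `model` supports `SingleVariable` constraints in these sets):

```julia
x = MOI.add_variable(model)
y = MOI.add_variable(model)

# Separate bound constraints: the solver can return a dual multiplier for each.
MOI.add_constraint(model, MOI.SingleVariable(x), MOI.GreaterThan(0.0))
MOI.add_constraint(model, MOI.SingleVariable(x), MOI.LessThan(1.0))

# A single Interval constraint: only one dual is available for both bounds.
MOI.add_constraint(model, MOI.SingleVariable(y), MOI.Interval(0.0, 1.0))
```
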
 #### Conic constraints
 
 
@@ -326,12 +341,14 @@ Constraints with `SingleVariable` in `LessThan`, `GreaterThan`, `EqualTo`, or `I
 | ``2yz \ge \lVert x \rVert_2^2, y,z \ge 0`` | `VectorOfVariables` | `RotatedSecondOrderCone` |
 | ``(a_1^Tx + b_1,a_2^Tx + b_2,a_3^Tx + b_3) \in \mathcal{E}`` | `VectorAffineFunction` | `ExponentialCone` |
 | ``A(x) \in \mathcal{S}_+`` | `VectorAffineFunction` | `PositiveSemidefiniteConeTriangle` |
-| ``A(x) \in \mathcal{S}'_+`` | `VectorAffineFunction` | `PositiveSemidefiniteConeSquare` |
+| ``B(x) \in \mathcal{S}_+`` | `VectorAffineFunction` | `PositiveSemidefiniteConeSquare` |
 | ``x \in \mathcal{S}_+`` | `VectorOfVariables` | `PositiveSemidefiniteConeTriangle` |
-| ``x \in \mathcal{S}'_+`` | `VectorOfVariables` | `PositiveSemidefiniteConeSquare` |
+| ``x \in \mathcal{S}_+`` | `VectorOfVariables` | `PositiveSemidefiniteConeSquare` |
 
-
-[Define ``\mathcal{E}`` (exponential cone), ``\mathcal{S}_+`` (smat), ``\mathcal{S}'_+`` (svec). ``A(x)`` is an affine function of ``x`` that outputs a matrix.]
+where ``\mathcal{E}`` is the exponential cone (see [`ExponentialCone`](@ref)),
+``\mathcal{S}_+`` is the set of positive semidefinite symmetric matrices,
+``A`` is an affine map that outputs symmetric matrices and
+``B`` is an affine map that outputs square matrices.
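
As an illustration of the `VectorOfVariables`-in-`PositiveSemidefiniteConeTriangle` row, here is a sketch constraining a ``2 \times 2`` symmetric matrix of variables to be positive semidefinite (the triangle of a side-2 matrix has 3 entries; `model` is assumed to support this constraint):

```julia
# Variables for the upper triangle of a 2x2 symmetric matrix.
x = MOI.add_variables(model, 3)
MOI.add_constraint(model, MOI.VectorOfVariables(x),
                   MOI.PositiveSemidefiniteConeTriangle(2))
```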

 #### Quadratic constraints

@@ -469,59 +486,92 @@ non-global tree search solvers like
 
 ## A complete example: solving a knapsack problem
 
-[ needs formatting help, doc tests ]
-
+We first need to select a solver supporting the given problem (see
+[`supports`](@ref) and [`supports_constraint`](@ref)). In this example, we
+want to solve a binary-constrained knapsack problem:
+`max c'x: w'x <= C, x binary`. Suppose we choose GLPK:
 ```julia
-using MathOptInterface
-const MOI = MathOptInterface
 using GLPK
-
-# Solves the binary-constrained knapsack problem:
-# max c'x: w'x <= C, x binary using GLPK.
-
+optimizer = GLPK.Optimizer()
+```
+We first define the constants of the problem:
+```jldoctest knapsack; setup = :(optimizer = MOI.Utilities.MockOptimizer(MOI.Utilities.Model{Float64}()); MOI.Utilities.set_mock_optimize!(optimizer, mock -> MOI.Utilities.mock_optimize!(mock, ones(3))))
 c = [1.0, 2.0, 3.0]
 w = [0.3, 0.5, 1.0]
 C = 3.2
 
-num_variables = length(c)
+# output
 
-optimizer = GLPK.Optimizer()
-
-# Create the variables in the problem.
-x = MOI.add_variables(optimizer, num_variables)
-
-# Set the objective function.
+3.2
+```
+We create the variables of the problem and set the objective function:
+```jldoctest knapsack
+x = MOI.add_variables(optimizer, length(c))
 objective_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, x), 0.0)
 MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
         objective_function)
 MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
 
-# Add the knapsack constraint.
+# output
+
+MAX_SENSE::OptimizationSense = 1
+```
+We add the knapsack constraint and integrality constraints:
+```jldoctest knapsack
 knapsack_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(w, x), 0.0)
 MOI.add_constraint(optimizer, knapsack_function, MOI.LessThan(C))
-
-# Add integrality constraints.
-for i in 1:num_variables
-    MOI.add_constraint(optimizer, MOI.SingleVariable(x[i]), MOI.ZeroOne())
+for x_i in x
+    MOI.add_constraint(optimizer, MOI.SingleVariable(x_i), MOI.ZeroOne())
 end
 
-# All set!
+# output
+
+```
+We are all set! We can now call [`optimize!`](@ref) and wait for the solver to
+find the solution:
+```jldoctest knapsack
 MOI.optimize!(optimizer)
 
-termination_status = MOI.get(optimizer, MOI.TerminationStatus())
-obj_value = MOI.get(optimizer, MOI.ObjectiveValue())
-if termination_status != MOI.OPTIMAL
-    error("Solver terminated with status $termination_status")
-end
+# output
+
+```
+The first thing to check after optimization is why the solver stopped, e.g.,
+did it stop because of a time limit or did it stop because it found the optimal
+solution?
+```jldoctest knapsack
+MOI.get(optimizer, MOI.TerminationStatus())
 
-@assert MOI.get(optimizer, MOI.ResultCount()) > 0
+# output
 
-@assert MOI.get(optimizer, MOI.PrimalStatus()) == MOI.FEASIBLE_POINT
 
-primal_variable_result = MOI.get(optimizer, MOI.VariablePrimal(), x)
+OPTIMAL::TerminationStatusCode = 1
+```
+It found the optimal solution! Now let's see what that solution is.
+```jldoctest knapsack
+MOI.get(optimizer, MOI.PrimalStatus())
+
+# output
+
+FEASIBLE_POINT::ResultStatusCode = 1
+```
+What is its objective value?
+```jldoctest knapsack
+MOI.get(optimizer, MOI.ObjectiveValue())
+
+# output
+
+6.0
+```
+And what is the value of the variables `x`?
+```jldoctest knapsack
+MOI.get(optimizer, MOI.VariablePrimal(), x)
+
+# output
 
-@show obj_value
-@show primal_variable_result
+3-element Array{Float64,1}:
+ 1.0
+ 1.0
+ 1.0
 ```
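
The example starts from the assumption that the chosen solver supports the problem class; a minimal sketch of checking that with [`supports_constraint`](@ref), using GLPK as above:

```julia
using GLPK
using MathOptInterface
const MOI = MathOptInterface

optimizer = GLPK.Optimizer()

# The knapsack model uses ScalarAffineFunction-in-LessThan for the capacity
# constraint and SingleVariable-in-ZeroOne for integrality; check both.
MOI.supports_constraint(optimizer, MOI.ScalarAffineFunction{Float64}, MOI.LessThan{Float64})
MOI.supports_constraint(optimizer, MOI.SingleVariable, MOI.ZeroOne)
```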

## Problem modification
