diff --git a/.save/example-double-integrator-energy.md b/.save/example-double-integrator-energy.md
new file mode 100644
index 00000000..3fa79742
--- /dev/null
+++ b/.save/example-double-integrator-energy.md
@@ -0,0 +1,324 @@
+# [Double integrator: energy minimisation](@id example-double-integrator-energy)
+
+Let us consider a wagon moving along a rail, whose acceleration can be controlled by a force $u$.
+We denote by $x = (q, v)$ the state of the wagon, where $q$ is the position and $v$ the velocity.
+
+We assume that the mass is constant and equal to one, and that there is no friction. The dynamics are given by
+
+```math
+ \dot q(t) = v(t), \quad \dot v(t) = u(t), \quad u(t) \in \mathbb{R},
+```
+
+which is simply the [double integrator](https://en.wikipedia.org/w/index.php?title=Double_integrator&oldid=1071399674) system. Let us consider a transfer starting at time $t_0 = 0$ and ending at time $t_f = 1$, for which we want to minimise the transfer energy
+
+```math
+ \frac{1}{2}\int_{0}^{1} u^2(t) \, \mathrm{d}t
+```
+
+starting from $x(0) = (-1, 0)$ and aiming to reach the target $x(1) = (0, 0)$.
+
+First, we need to import the [OptimalControl.jl](https://control-toolbox.org/OptimalControl.jl) package to define the optimal control problem, [NLPModelsIpopt.jl](https://jso.dev/NLPModelsIpopt.jl) to solve it, and [Plots.jl](https://docs.juliaplots.org) to visualise the solution.
+
+```@example main
+using OptimalControl
+using NLPModelsIpopt
+using Plots
+```
+
+## Optimal control problem
+
+Let us define the problem with the [`@def`](@ref) macro:
+
+```@example main
+t0 = 0; tf = 1; x0 = [-1, 0]; xf = [0, 0]
+
+ocp = @def begin
+ t ∈ [t0, tf], time
+ x = (q, v) ∈ R², state
+ u ∈ R, control
+
+ x(t0) == x0
+ x(tf) == xf
+
+ ∂(q)(t) == v(t)
+ ∂(v)(t) == u(t)
+
+ 0.5∫( u(t)^2 ) → min
+end
+nothing # hide
+```
+
+### Mathematical formulation
+
+```math
+ \begin{aligned}
+ & \text{Minimise} && \frac{1}{2}\int_0^1 u^2(t) \,\mathrm{d}t \\
+ & \text{subject to} \\
+ & && \dot{q}(t) = v(t), \\[0.5em]
+ & && \dot{v}(t) = u(t), \\[1.0em]
+ & && x(0) = (-1,0), \\[0.5em]
+ & && x(1) = (0,0).
+ \end{aligned}
+```
+
+!!! note "Nota bene"
+
+ For a comprehensive introduction to the syntax used above to define the optimal control problem, see [this abstract syntax tutorial](@ref manual-abstract-syntax). In particular, non-Unicode alternatives are available for derivatives, integrals, *etc.*
+
+## [Solve and plot](@id example-double-integrator-energy-solve-plot)
+
+### Direct method
+
+We can [`solve`](@ref) it simply with:
+
+```@example main
+direct_sol = solve(ocp)
+nothing # hide
+```
+
+And [`plot`](@ref) the solution with:
+
+```@example main
+plot(direct_sol)
+```
+
+!!! note "Nota bene"
+
+ The `solve` function has options, see the [solve tutorial](@ref manual-solve). You can customise the plot, see the [plot tutorial](@ref manual-plot).
+
+### Indirect method
+
+The first solution was obtained using the so-called direct method.[^1] Another approach is to use an [indirect simple shooting](@extref tutorial-indirect-simple-shooting) method. We begin by importing the necessary packages.
+
+```@example main
+using OrdinaryDiffEq # Ordinary Differential Equations (ODE) solver
+using NonlinearSolve # Nonlinear Equations (NLE) solver
+```
+
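+By Pontryagin's maximum principle, the optimal control maximises the pseudo-Hamiltonian $H(x, p, u) = p_1 v + p_2 u - u^2/2$, which is strictly concave in $u$:
+
+```math
+ \frac{\partial H}{\partial u}(x, p, u) = p_2 - u = 0
+ \quad \Longrightarrow \quad
+ u(x, p) = p_2.
+```
+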
+To define the shooting function, we must provide the maximising control in feedback form:
+
+```@example main
+# maximising control, H(x, p, u) = p₁v + p₂u - u²/2
+u(x, p) = p[2]
+
+# Hamiltonian flow
+f = Flow(ocp, u)
+
+# state projection, p being the costate
+π((x, p)) = x
+
+# shooting function
+S(p0) = π( f(t0, x0, p0, tf) ) - xf
+nothing # hide
+```
+
+We are now ready to solve the shooting equations.
+
+```@example main
+# auxiliary in-place NLE function
+nle!(s, p0, _) = s[:] = S(p0)
+
+# initial guess for the Newton solver
+p0_guess = [1, 1]
+
+# NLE problem with initial guess
+prob = NonlinearProblem(nle!, p0_guess)
+
+# resolution of S(p0) = 0
+shooting_sol = solve(prob; show_trace=Val(true))
+p0_sol = shooting_sol.u # costate solution
+
+# print the costate solution and the shooting function evaluation
+println("\ncostate: p0 = ", p0_sol)
+println("shoot: S(p0) = ", S(p0_sol), "\n")
+```
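+
+For this linear–quadratic problem, the shooting equation can also be solved by hand, which provides a sanity check. Since $\dot{p}_1 = 0$ and $\dot{p}_2 = -p_1$, the control $u(t) = p_2(t) = p_2(0) - p_1(0)\, t$ is affine in $t$; integrating the dynamics and imposing $x(1) = (0, 0)$ yields
+
+```math
+ p_0 = (12, 6), \qquad u(t) = 6 - 12t,
+```
+
+in agreement with the numerical solution, and the optimal cost is $\frac{1}{2}\int_0^1 (6 - 12t)^2 \, \mathrm{d}t = 6$.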
+
+To plot the solution obtained by the indirect method, we need to build the solution of the optimal control problem. This is done using the costate solution and the flow function.
+
+```@example main
+indirect_sol = f((t0, tf), x0, p0_sol; saveat=range(t0, tf, 100))
+plot(indirect_sol)
+```
+
+[^1]: J. T. Betts. Practical methods for optimal control using nonlinear programming. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2001.
+
+!!! note
+
+ - You can use [MINPACK.jl](@extref Tutorials Resolution-of-the-shooting-equation) instead of [NonlinearSolve.jl](https://docs.sciml.ai/NonlinearSolve).
+ - For more details about the flow construction, visit the [Compute flows from optimal control problems](@ref manual-flow-ocp) page.
+ - In this simple example, we have set an arbitrary initial guess. It can be helpful to use the solution of the direct method to initialise the shooting method. See the [Goddard tutorial](@extref Tutorials tutorial-goddard) for such a concrete application.
+
+## State constraint
+
+The following example illustrates both the direct and indirect approaches for the energy minimisation problem with a state constraint on the maximal velocity. The workflow demonstrates a practical strategy: a direct method on a coarse grid first identifies the structure of the solution and provides an initial guess for the indirect method, which then computes a precise solution by shooting, based on Pontryagin's Maximum Principle.
+
+!!! note
+
+    The direct solution can be refined using a finer discretisation grid for higher accuracy.
+
+### Direct method: constrained case
+
+We add the path constraint
+
+```math
+ v(t) \le 1.2.
+```
+
+Let us model, solve and plot the optimal control problem with this constraint.
+
+```@example main
+# the upper bound for v
+v_max = 1.2
+
+# the optimal control problem
+ocp = @def begin
+ t ∈ [t0, tf], time
+ x = (q, v) ∈ R², state
+ u ∈ R, control
+
+ v(t) ≤ v_max # state constraint
+
+ x(t0) == x0
+ x(tf) == xf
+
+ ∂(q)(t) == v(t)
+ ∂(v)(t) == u(t)
+
+ 0.5∫( u(t)^2 ) → min
+end
+
+# solve with a direct method
+direct_sol = solve(ocp; grid_size=50)
+
+# plot the solution
+plt = plot(direct_sol; label="Direct", size=(800, 600))
+```
+
+The solution exhibits three phases (unconstrained–constrained–unconstrained arcs): the indirect method therefore requires a Hamiltonian flow for each arc and a shooting function enforcing the boundary and switching conditions.
+
+### Indirect method: constrained case
+
+In the normal case, the pseudo-Hamiltonian reads:
+
+```math
+H(x, p, u, \mu) = p_1 v + p_2 u - \frac{u^2}{2} + \mu\, g(x),
+```
+
+where $g(x) = v_{\max} - v$. Along a boundary arc we have $g(x(t)) = 0$; differentiating gives:
+
+```math
+ \frac{\mathrm{d}}{\mathrm{d}t}g(x(t)) = -\dot{v}(t) = -u(t) = 0.
+```
+
+On this arc the control $u = 0$ must maximise the Hamiltonian; since the maximiser is $u = p_2$, we get $p_2(t) = 0$ along the arc. From the adjoint equation we then have
+
+```math
+ \dot{p}_2(t) = -p_1(t) + \mu(t) = 0 \quad \Rightarrow \quad \mu(t) = p_1(t).
+```
+
+Because the adjoint vector is continuous at both the entry time $t_1$ and the exit time $t_2$, the unknowns are $p_0 \in \mathbb{R}^2$ together with $t_1$ and $t_2$. The target condition supplies two equations, $g(x(t_1)) = 0$ enforces entry onto the boundary arc, and $p_2(t_1) = 0$ encodes the switching condition.
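+
+Collecting these conditions, the shooting function associated with the three-arc structure reads
+
+```math
+ S(p_0, t_1, t_2) =
+ \begin{pmatrix}
+ x(t_f) - x_f \\
+ g(x(t_1)) \\
+ p_2(t_1)
+ \end{pmatrix} \in \mathbb{R}^4,
+```
+
+where $(x, p)$ denotes the extremal obtained by concatenating the interior, boundary and interior flows from $(x_0, p_0)$.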
+
+```@example main
+# flow for unconstrained extremals
+f_interior = Flow(ocp, (x, p) -> p[2])
+
+ub = 0 # boundary control
+g(x) = v_max - x[2] # constraint: g(x) ≥ 0
+μ(p) = p[1] # dual variable
+
+# flow for boundary extremals
+f_boundary = Flow(ocp, (x, p) -> ub, (x, u) -> g(x), (x, p) -> μ(p))
+
+# shooting function
+function shoot!(s, p0, t1, t2)
+ x_t0, p_t0 = x0, p0
+ x_t1, p_t1 = f_interior(t0, x_t0, p_t0, t1)
+ x_t2, p_t2 = f_boundary(t1, x_t1, p_t1, t2)
+ x_tf, p_tf = f_interior(t2, x_t2, p_t2, tf)
+ s[1:2] = x_tf - xf
+ s[3] = g(x_t1)
+ s[4] = p_t1[2]
+end
+nothing # hide
+```
+
+We can derive an initial guess for the costate and the entry/exit times from the direct solution:
+
+```@example main
+t = time_grid(direct_sol) # the time grid as a vector
+x = state(direct_sol) # the state as a function of time
+p = costate(direct_sol) # the costate as a function of time
+
+# initial costate
+p0 = p(t0)
+
+# times where constraint is active
+t12 = t[ 0 .≤ (g ∘ x).(t) .≤ 1e-3 ]
+
+# entry and exit times
+t1 = minimum(t12) # entry time
+t2 = maximum(t12) # exit time
+nothing # hide
+```
+
+We can now solve the shooting equations.
+
+```@example main
+# auxiliary in-place NLE function
+nle!(s, ξ, _) = shoot!(s, ξ[1:2], ξ[3], ξ[4])
+
+# initial guess for the Newton solver
+ξ_guess = [p0..., t1, t2]
+
+# NLE problem with initial guess
+prob = NonlinearProblem(nle!, ξ_guess)
+
+# resolution of the shooting equations
+shooting_sol = solve(prob; show_trace=Val(true))
+p0, t1, t2 = shooting_sol.u[1:2], shooting_sol.u[3], shooting_sol.u[4]
+
+# print the costate solution and the entry and exit times
+println("\np0 = ", p0, "\nt1 = ", t1, "\nt2 = ", t2)
+```
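+
+These values can be checked by hand: on the first arc $u(t) = p_2(t) = p_2(0) - p_1(0)\, t$, and the entry conditions $p_2(t_1) = 0$ and $v(t_1) = v_{\max}$, combined with the symmetry of the problem, give
+
+```math
+ p_0 = (38.4,\ 9.6), \qquad t_1 = \frac{1}{4}, \qquad t_2 = \frac{3}{4}.
+```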
+
+To reconstruct the constrained trajectory, concatenate the flows as follows: an unconstrained arc until $t_1$, a boundary arc from $t_1$ to $t_2$, and a final unconstrained arc from $t_2$ to $t_f$.
+This composition yields the full solution (state, costate, and control), which we then plot alongside the direct method for comparison.
+
+```@example main
+# concatenation of the flows
+φ = f_interior * (t1, f_boundary) * (t2, f_interior)
+
+# compute the solution: state, costate, control...
+indirect_sol = φ((t0, tf), x0, p0; saveat=range(t0, tf, 100))
+
+# plot the solution on the previous plot
+plot!(plt, indirect_sol; label="Indirect", color=2, linestyle=:dash)
+```
diff --git a/.save/example-double-integrator-time.md b/.save/example-double-integrator-time.md
new file mode 100644
index 00000000..cb90c83b
--- /dev/null
+++ b/.save/example-double-integrator-time.md
@@ -0,0 +1,211 @@
+# [Double integrator: time minimisation](@id example-double-integrator-time)
+
+The problem consists in minimising the final time $t_f$ for the double integrator system
+
+```math
+ \dot q(t) = v(t), \quad \dot v(t) = u(t), \quad u(t) \in [-1,1],
+```
+
+where $x = (q, v)$ denotes the state, subject to the boundary conditions
+
+```math
+ x(0) = (-1,0), \quad x(t_f) = (0,0).
+```
+
+This problem can be interpreted as a simple model for a wagon with constant mass moving along a line without friction.
+
+First, we need to import the [OptimalControl.jl](https://control-toolbox.org/OptimalControl.jl) package to define the
+optimal control problem and [NLPModelsIpopt.jl](https://jso.dev/NLPModelsIpopt.jl) to solve it.
+We also need to import the [Plots.jl](https://docs.juliaplots.org) package to plot the solution.
+
+```@example main
+using OptimalControl
+using NLPModelsIpopt
+using Plots
+```
+
+## Optimal control problem
+
+Let us define the problem:
+
+```@example main
+ocp = @def begin
+
+ tf ∈ R, variable
+ t ∈ [0, tf], time
+ x = (q, v) ∈ R², state
+ u ∈ R, control
+
+ -1 ≤ u(t) ≤ 1
+
+ q(0) == -1
+ v(0) == 0
+ q(tf) == 0
+ v(tf) == 0
+
+ ẋ(t) == [v(t), u(t)]
+
+ tf → min
+
+end
+nothing # hide
+```
+
+### Mathematical formulation
+
+```math
+ \begin{aligned}
+ & \text{Minimise} && t_f \\[0.5em]
+ & \text{subject to} \\[0.5em]
+ & && \dot q(t) = v(t), \\
+ & && \dot v(t) = u(t), \\[0.5em]
+ & && -1 \le u(t) \le 1, \\[0.5em]
+ & && q(0) = -1, \\[0.5em]
+ & && v(0) = 0, \\[0.5em]
+ & && q(t_f) = 0, \\[0.5em]
+ & && v(t_f) = 0.
+ \end{aligned}
+```
+
+!!! note "Nota bene"
+
+ For a comprehensive introduction to the syntax used above to define the optimal control problem, see [this abstract syntax tutorial](@ref manual-abstract-syntax). In particular, non-Unicode alternatives are available for derivatives, integrals, *etc.*
+
+## Solve and plot
+
+### Direct method
+
+Let us solve it with a direct method (we set the number of time steps to 200):
+
+```@example main
+sol = solve(ocp; grid_size=200)
+nothing # hide
+```
+
+and plot the solution:
+
+```@example main
+plt = plot(sol; label="Direct", size=(800, 600))
+```
+
+!!! note "Nota bene"
+
+ The `solve` function has options, see the [solve tutorial](@ref manual-solve). You can customise the plot, see the [plot tutorial](@ref manual-plot).
+
+### Indirect method
+
+We now turn to the indirect method, which relies on Pontryagin’s Maximum Principle. The pseudo-Hamiltonian is given by
+
+```math
+H(x, p, u) = p_1 v + p_2 u - 1,
+```
+
+where $p = (p_1, p_2)$ is the costate vector. The optimal control is of bang–bang type:
+
+```math
+u(t) = \mathrm{sign}(p_2(t)),
+```
+
+with a single switch from $u = +1$ to $u = -1$ at a time denoted $t_1$. Let us implement this approach. First, we import the necessary packages:
+
+```@example main
+using OrdinaryDiffEq
+using NonlinearSolve
+```
+
+Define the bang–bang control and Hamiltonian flow:
+
+```@example main
+# pseudo-Hamiltonian
+H(x, p, u) = p[1]*x[2] + p[2]*u - 1
+
+# bang–bang control
+u_max = +1
+u_min = -1
+
+# Hamiltonian flow
+f_max = Flow(ocp, (x, p, tf) -> u_max)
+f_min = Flow(ocp, (x, p, tf) -> u_min)
+nothing # hide
+```
+
+The shooting function enforces the conditions:
+
+```@example main
+t0 = 0
+x0 = [-1, 0]
+xf = [ 0, 0]
+function shoot!(s, p0, t1, tf)
+ x_t0, p_t0 = x0, p0
+ x_t1, p_t1 = f_max(t0, x_t0, p_t0, t1)
+ x_tf, p_tf = f_min(t1, x_t1, p_t1, tf)
+ s[1:2] = x_tf - xf # target conditions
+ s[3] = p_t1[2] # switching condition
+    s[4] = H(x_tf, p_tf, u_min) # free final time: H(t_f) = 0
+end
+nothing # hide
+```
+
+We are now ready to solve the shooting equations:
+
+```@example main
+# in-place shooting function
+nle!(s, ξ, λ) = shoot!(s, ξ[1:2], ξ[3], ξ[4])
+
+# initial guess: costate and final time
+ξ_guess = [0.1, 0.1, 0.5, 1]
+
+# NLE problem
+prob = NonlinearProblem(nle!, ξ_guess)
+
+# resolution of the shooting equations
+sol = solve(prob; show_trace=Val(true))
+p0, t1, tf = sol.u[1:2], sol.u[3], sol.u[4]
+
+# print the solution
+println("\np0 = ", p0, "\nt1 = ", t1, "\ntf = ", tf)
+```
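+
+These values agree with the analytic solution. Integrating the dynamics with $u = +1$ on $[0, t_1]$ and $u = -1$ on $[t_1, t_f]$, the conditions $v(t_f) = 0$ and $q(t_f) = 0$ give
+
+```math
+ t_f = 2\, t_1, \qquad q(t_f) = -1 + t_1^2 = 0
+ \quad \Longrightarrow \quad
+ t_1 = 1, \quad t_f = 2,
+```
+
+and the switching condition $p_2(t_1) = 0$ together with $H(x(t_f), p(t_f), u(t_f)) = 0$ then yield $p_0 = (1, 1)$.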
+
+Finally, we reconstruct and plot the solution obtained by the indirect method:
+
+```@example main
+# concatenation of the flows
+φ = f_max * (t1, f_min)
+
+# compute the solution: state, costate, control...
+flow_sol = φ((t0, tf), x0, p0; saveat=range(t0, tf, 200))
+
+# plot the solution on the previous plot
+plot!(plt, flow_sol; label="Indirect", color=2, linestyle=:dash)
+```
+
+!!! note
+
+ - You can use [MINPACK.jl](@extref Tutorials Resolution-of-the-shooting-equation) instead of [NonlinearSolve.jl](https://docs.sciml.ai/NonlinearSolve).
+ - For more details about the flow construction, visit the [Compute flows from optimal control problems](@ref manual-flow-ocp) page.
+ - In this simple example, we have set an arbitrary initial guess. It can be helpful to use the solution of the direct method to initialise the shooting method. See the [Goddard tutorial](@extref Tutorials tutorial-goddard) for such a concrete application.
diff --git a/test/problems/TestProblems.jl b/test/problems/TestProblems.jl
index 678bf18d..59d360c4 100644
--- a/test/problems/TestProblems.jl
+++ b/test/problems/TestProblems.jl
@@ -4,7 +4,9 @@ using OptimalControl
include("beam.jl")
include("goddard.jl")
+include("double_integrator.jl")
export Beam, Goddard
+export DoubleIntegratorTime, DoubleIntegratorEnergy, DoubleIntegratorEnergyConstrained
end
diff --git a/test/problems/double_integrator.jl b/test/problems/double_integrator.jl
new file mode 100644
index 00000000..e3e196d6
--- /dev/null
+++ b/test/problems/double_integrator.jl
@@ -0,0 +1,170 @@
+# Double integrator optimal control problems used for indirect method tests.
+
+using OptimalControl
+
+"""
+ DoubleIntegratorTime()
+
+Return data for the double integrator time minimization problem.
+
+The problem consists in minimising the final time `tf` for the system:
+- ẋ₁(t) = x₂(t)
+- ẋ₂(t) = u(t)
+- u(t) ∈ [-1, 1]
+
+with boundary conditions:
+- x(0) = (-1, 0)
+- x(tf) = (0, 0)
+
+The function returns a NamedTuple with fields:
+ * `ocp` – CTParser/@def optimal control problem
+ * `obj` – reference optimal objective value (tf = 2.0)
+ * `name` – short problem name
+ * `x0` – initial state
+ * `xf` – final state
+ * `t0` – initial time
+ * `tf` – final time (reference value)
+ * `u_max` – maximum control value
+ * `u_min` – minimum control value
+"""
+function DoubleIntegratorTime()
+ t0 = 0.0
+ x0 = [-1.0, 0.0]
+ xf = [0.0, 0.0]
+ u_max = 1.0
+ u_min = -1.0
+
+ @def ocp begin
+ tf ∈ R, variable
+ t ∈ [0, tf], time
+ x = (q, v) ∈ R², state
+ u ∈ R, control
+
+ -1 ≤ u(t) ≤ 1
+
+ q(0) == -1
+ v(0) == 0
+ q(tf) == 0
+ v(tf) == 0
+
+ ẋ(t) == [v(t), u(t)]
+
+ tf → min
+ end
+
+ return (
+ ocp=ocp,
+ obj=2.0,
+ name="double_integrator_time",
+ x0=x0,
+ xf=xf,
+ t0=t0,
+ tf=2.0,
+ u_max=u_max,
+ u_min=u_min
+ )
+end
+
+"""
+ DoubleIntegratorEnergy()
+
+Return data for the double integrator energy minimization problem (unconstrained).
+
+The problem consists in minimising ∫₀¹ u²(t)/2 dt for the system:
+- ẋ₁(t) = x₂(t)
+- ẋ₂(t) = u(t)
+
+with boundary conditions:
+- x(0) = (-1, 0)
+- x(1) = (0, 0)
+
+The function returns a NamedTuple with fields:
+ * `ocp` – CTParser/@def optimal control problem
+ * `obj` – reference optimal objective value (6.0)
+ * `name` – short problem name
+ * `x0` – initial state
+ * `xf` – final state
+ * `t0` – initial time
+ * `tf` – final time
+"""
+function DoubleIntegratorEnergy()
+ t0 = 0.0
+ tf = 1.0
+ x0 = [-1.0, 0.0]
+ xf = [0.0, 0.0]
+
+ @def ocp begin
+ t ∈ [0, 1], time
+ x = (q, v) ∈ R², state
+ u ∈ R, control
+
+ x(0) == [-1, 0]
+ x(1) == [0, 0]
+
+ ∂(q)(t) == v(t)
+ ∂(v)(t) == u(t)
+
+ 0.5∫(u(t)^2) → min
+ end
+
+ return (ocp=ocp, obj=6.0, name="double_integrator_energy", x0=x0, xf=xf, t0=t0, tf=tf)
+end
+
+"""
+ DoubleIntegratorEnergyConstrained()
+
+Return data for the double integrator energy minimization problem with state constraint.
+
+The problem consists in minimising ∫₀¹ u²(t)/2 dt for the system:
+- ẋ₁(t) = x₂(t)
+- ẋ₂(t) = u(t)
+- v(t) ≤ v_max = 1.2
+
+with boundary conditions:
+- x(0) = (-1, 0)
+- x(1) = (0, 0)
+
+The function returns a NamedTuple with fields:
+ * `ocp` – CTParser/@def optimal control problem
+ * `obj` – reference optimal objective value (nothing - no reference available)
+ * `name` – short problem name
+ * `x0` – initial state
+ * `xf` – final state
+ * `t0` – initial time
+ * `tf` – final time
+ * `v_max` – maximum velocity constraint
+"""
+function DoubleIntegratorEnergyConstrained()
+ t0 = 0.0
+ tf = 1.0
+ x0 = [-1.0, 0.0]
+ xf = [0.0, 0.0]
+ v_max = 1.2
+
+ @def ocp begin
+ t ∈ [0, 1], time
+ x = (q, v) ∈ R², state
+ u ∈ R, control
+
+ v(t) ≤ 1.2
+
+ x(0) == [-1, 0]
+ x(1) == [0, 0]
+
+ ∂(q)(t) == v(t)
+ ∂(v)(t) == u(t)
+
+ 0.5∫(u(t)^2) → min
+ end
+
+ return (
+ ocp=ocp,
+ obj=nothing,
+ name="double_integrator_energy_constrained",
+ x0=x0,
+ xf=xf,
+ t0=t0,
+ tf=tf,
+ v_max=v_max
+ )
+end
diff --git a/test/problems/goddard.jl b/test/problems/goddard.jl
index 310adcdb..96ab1722 100644
--- a/test/problems/goddard.jl
+++ b/test/problems/goddard.jl
@@ -17,7 +17,7 @@ The function returns a NamedTuple with fields:
function Goddard(; vmax=0.1, Tmax=3.5)
# constants
Cd = 310
- beta = 500
+ β = 500
b = 2
r0 = 1
v0 = 0
@@ -46,7 +46,7 @@ function Goddard(; vmax=0.1, Tmax=3.5)
0 ≤ u(t) ≤ 1
# Component-wise dynamics (Goddard rocket)
- D = Cd * v(t)^2 * exp(-beta * (r(t) - r0))
+ D = Cd * v(t)^2 * exp(-β * (r(t) - r0))
g = 1 / r(t)^2
T = Tmax * u(t)
@@ -60,5 +60,17 @@ function Goddard(; vmax=0.1, Tmax=3.5)
# Components for a reasonable initial guess around a feasible trajectory.
init = (state=[1.01, 0.05, 0.8], control=0.5, variable=[0.1])
- return (ocp=goddard, obj=1.01257, name="goddard", init=init)
+ # Dynamics functions for indirect methods
+ function F0(x)
+ r, v, m = x
+ D = Cd * v^2 * exp(-β * (r - r0))
+ return [v, -D / m - 1 / r^2, 0]
+ end
+
+ function F1(x)
+ r, v, m = x
+ return [0, Tmax / m, -b * Tmax]
+ end
+
+ return (ocp=goddard, obj=1.01257, name="goddard", init=init, F0=F0, F1=F1)
end
diff --git a/test/suite/indirect/test_double_integrator_energy.jl b/test/suite/indirect/test_double_integrator_energy.jl
new file mode 100644
index 00000000..1e3b8251
--- /dev/null
+++ b/test/suite/indirect/test_double_integrator_energy.jl
@@ -0,0 +1,120 @@
+# ============================================================================
+# Double Integrator Energy Minimization - Indirect Method Tests
+# ============================================================================
+# This file tests the indirect shooting method for the double integrator
+# energy minimization problem, both unconstrained and with state constraint.
+
+module TestDoubleIntegratorEnergy
+
+import Test
+import OptimalControl
+import NonlinearSolve: NonlinearProblem, solve
+import LinearAlgebra: norm
+import OrdinaryDiffEq: OrdinaryDiffEq
+
+# Include shared test problems via TestProblems module
+include(joinpath(@__DIR__, "..", "..", "problems", "TestProblems.jl"))
+using .TestProblems
+
+const VERBOSE = isdefined(Main, :TestOptions) ? Main.TestOptions.VERBOSE : true
+const SHOWTIMING = isdefined(Main, :TestOptions) ? Main.TestOptions.SHOWTIMING : true
+
+function test_double_integrator_energy()
+ Test.@testset "Double Integrator Energy Minimization" verbose=VERBOSE showtiming=SHOWTIMING begin
+
+ # ====================================================================
+ # INTEGRATION TEST - Unconstrained Energy Minimization
+ # ====================================================================
+
+ Test.@testset "Unconstrained singular control" begin
+ # Get problem from TestProblems
+ prob_data = TestProblems.DoubleIntegratorEnergy()
+ ocp = prob_data.ocp
+ x0 = prob_data.x0
+ xf = prob_data.xf
+ t0 = prob_data.t0
+ tf = prob_data.tf
+ obj_ref = prob_data.obj
+
+ # Singular control: u(x, p) = p₂
+ u(x, p) = p[2]
+
+ # Hamiltonian flow
+ f = OptimalControl.Flow(ocp, u)
+
+ # State projection
+ π((x, p)) = x
+
+ # Shooting function
+ S(p0) = π(f(t0, x0, p0, tf)) - xf
+
+ # Known solution (from documentation)
+ p0_ref = [12.0, 6.0]
+
+ # Test shooting function with known solution
+ s = S(p0_ref)
+
+ # Verify solution (should be close to zero)
+ Test.@test norm(s) < 1e-6
+
+ # Note: We don't test the objective value directly here since
+ # we're testing the shooting method, not the full solution
+ end
+
+ # ====================================================================
+ # INTEGRATION TEST - Constrained Energy Minimization
+ # ====================================================================
+
+ Test.@testset "Constrained three-arc structure" begin
+ # Get problem from TestProblems
+ prob_data = TestProblems.DoubleIntegratorEnergyConstrained()
+ ocp = prob_data.ocp
+ x0 = prob_data.x0
+ xf = prob_data.xf
+ t0 = prob_data.t0
+ tf = prob_data.tf
+ v_max = prob_data.v_max
+
+ # Flow for unconstrained extremals (singular control u = p₂)
+ f_interior = OptimalControl.Flow(ocp, (x, p) -> p[2])
+
+ # Boundary control and constraint
+ ub = 0.0 # boundary control
+ g(x) = v_max - x[2] # constraint: g(x) ≥ 0
+ μ(p) = p[1] # dual variable
+
+ # Flow for boundary extremals
+ f_boundary = OptimalControl.Flow(ocp, (x, p) -> ub, (x, u) -> g(x), (x, p) -> μ(p))
+
+ # Shooting function
+ function shoot!(s, p0, t1, t2)
+ x_t0, p_t0 = x0, p0
+ x_t1, p_t1 = f_interior(t0, x_t0, p_t0, t1)
+ x_t2, p_t2 = f_boundary(t1, x_t1, p_t1, t2)
+ x_tf, p_tf = f_interior(t2, x_t2, p_t2, tf)
+ s[1:2] = x_tf - xf # target conditions
+ s[3] = g(x_t1) # constraint activation at entry
+ s[4] = p_t1[2] # switching condition
+ end
+
+ # Known solution (from documentation)
+ p0_ref = [38.4, 9.6]
+ t1_ref = 0.25
+ t2_ref = 0.75
+
+ # Test shooting function with known solution
+ s = zeros(4)
+ shoot!(s, p0_ref, t1_ref, t2_ref)
+
+ # Verify solution (should be close to zero)
+ Test.@test norm(s) < 1e-6
+
+ # Note: obj_ref is nothing for this problem (no reference value available)
+ end
+ end
+end
+
+end # module
+
+# CRITICAL: Redefine in outer scope for TestRunner
+test_double_integrator_energy() = TestDoubleIntegratorEnergy.test_double_integrator_energy()
diff --git a/test/suite/indirect/test_double_integrator_time.jl b/test/suite/indirect/test_double_integrator_time.jl
new file mode 100644
index 00000000..787921e9
--- /dev/null
+++ b/test/suite/indirect/test_double_integrator_time.jl
@@ -0,0 +1,78 @@
+# ============================================================================
+# Double Integrator Time Minimization - Indirect Method Tests
+# ============================================================================
+# This file tests the indirect shooting method for the double integrator
+# time minimization problem. It uses bang-bang control with one switching time.
+
+module TestDoubleIntegratorTime
+
+import Test
+import OptimalControl
+import NonlinearSolve: NonlinearProblem, solve
+import LinearAlgebra: norm
+import OrdinaryDiffEq: OrdinaryDiffEq
+
+# Include shared test problems via TestProblems module
+include(joinpath(@__DIR__, "..", "..", "problems", "TestProblems.jl"))
+using .TestProblems
+
+const VERBOSE = isdefined(Main, :TestOptions) ? Main.TestOptions.VERBOSE : true
+const SHOWTIMING = isdefined(Main, :TestOptions) ? Main.TestOptions.SHOWTIMING : true
+
+function test_double_integrator_time()
+ Test.@testset "Double Integrator Time Minimization" verbose=VERBOSE showtiming=SHOWTIMING begin
+
+ # ====================================================================
+ # INTEGRATION TEST - Bang-Bang Shooting Method
+ # ====================================================================
+
+ Test.@testset "Bang-bang shooting with switching time" begin
+ # Get problem from TestProblems
+ prob_data = TestProblems.DoubleIntegratorTime()
+ ocp = prob_data.ocp
+ x0 = prob_data.x0
+ xf = prob_data.xf
+ t0 = prob_data.t0
+ u_max = prob_data.u_max
+ u_min = prob_data.u_min
+ obj_ref = prob_data.obj
+
+ # Pseudo-Hamiltonian: H(x, p, u) = p₁v + p₂u - 1
+ H(x, p, u) = p[1] * x[2] + p[2] * u - 1
+
+ # Hamiltonian flows for bang-bang control
+ f_max = OptimalControl.Flow(ocp, (x, p, tf) -> u_max)
+ f_min = OptimalControl.Flow(ocp, (x, p, tf) -> u_min)
+
+ # Shooting function
+ function shoot!(s, p0, t1, tf)
+ x_t0, p_t0 = x0, p0
+ x_t1, p_t1 = f_max(t0, x_t0, p_t0, t1)
+ x_tf, p_tf = f_min(t1, x_t1, p_t1, tf)
+ s[1:2] = x_tf - xf # target conditions
+ s[3] = p_t1[2] # switching condition
+ s[4] = H(x_tf, p_tf, u_min) # free final time
+ end
+
+ # Known solution (from documentation)
+ p0_ref = [1.0, 1.0]
+ t1_ref = 1.0
+ tf_ref = 2.0
+
+ # Test shooting function with known solution
+ s = zeros(4)
+ shoot!(s, p0_ref, t1_ref, tf_ref)
+
+ # Verify solution (should be close to zero)
+ Test.@test norm(s) < 1e-6
+
+ # Verify objective value
+ Test.@test tf_ref ≈ obj_ref atol=1e-6
+ end
+ end
+end
+
+end # module
+
+# CRITICAL: Redefine in outer scope for TestRunner
+test_double_integrator_time() = TestDoubleIntegratorTime.test_double_integrator_time()
diff --git a/test/suite/indirect/test_goddard.jl b/test/suite/indirect/test_goddard.jl
new file mode 100644
index 00000000..36c79233
--- /dev/null
+++ b/test/suite/indirect/test_goddard.jl
@@ -0,0 +1,114 @@
+# ============================================================================
+# Goddard Indirect Method Tests
+# ============================================================================
+# This file tests the indirect shooting method for the Goddard rocket problem.
+# It uses CTFlows (Hamiltonian flows, Lie brackets) and NonlinearSolve to
+# solve the shooting equations for a complex bang-singular-constrained-bang
+# control structure.
+
+module TestGoddardIndirect
+
+import Test
+import OptimalControl
+import LinearAlgebra: norm
+import OrdinaryDiffEq: OrdinaryDiffEq
+import CTFlows: CTFlows ## TODO: remove when CTFlows is exported by OptimalControl
+
+# Include shared test problems via TestProblems module
+include(joinpath(@__DIR__, "..", "..", "problems", "TestProblems.jl"))
+using .TestProblems
+
+const VERBOSE = isdefined(Main, :TestOptions) ? Main.TestOptions.VERBOSE : true
+const SHOWTIMING = isdefined(Main, :TestOptions) ? Main.TestOptions.SHOWTIMING : true
+
+# ============================================================================
+# TOP-LEVEL: Problem parameters
+# ============================================================================
+
+const Cd = 310
+const Tmax = 3.5
+const β = 500
+const b = 2
+const t0 = 0
+const r0 = 1
+const v0 = 0
+const vmax = 0.1
+const m0 = 1
+const mf = 0.6
+const x0 = [r0, v0, m0]
+
+function test_goddard()
+ Test.@testset "Goddard Indirect Method" verbose=VERBOSE showtiming=SHOWTIMING begin
+
+ # ====================================================================
+ # INTEGRATION TEST - Indirect Shooting Method
+ # ====================================================================
+
+ Test.@testset "Shooting with B+ S C B0 structure" begin
+ # Get problem from TestProblems
+ prob_data = TestProblems.Goddard()
+ ocp = prob_data.ocp
+ F0 = prob_data.F0
+ F1 = prob_data.F1
+
+ # Constraint function
+ g(x) = vmax - x[2]
+ final_mass_cons(xf) = xf[3] - mf
+
+ # Bang controls
+ u0 = 0
+ u1 = 1
+
+ # Singular control
+ H0 = OptimalControl.Lift(F0)
+ H1 = OptimalControl.Lift(F1)
+ H01 = OptimalControl.@Lie {H0, H1}
+ H001 = OptimalControl.@Lie {H0, H01}
+ H101 = OptimalControl.@Lie {H1, H01}
+ us(x, p) = -H001(x, p) / H101(x, p)
+
+ # Boundary control
+ ub(x) = -OptimalControl.Lie(F0, g)(x) / OptimalControl.Lie(F1, g)(x)
+ μ(x, p) = H01(x, p) / OptimalControl.Lie(F1, g)(x)
+
+ # Flows
+ f0 = OptimalControl.Flow(ocp, (x, p, v) -> u0)
+ f1 = OptimalControl.Flow(ocp, (x, p, v) -> u1)
+ fs = OptimalControl.Flow(ocp, (x, p, v) -> us(x, p))
+ fb = OptimalControl.Flow(ocp, (x, p, v) -> ub(x), (x, u, v) -> g(x), (x, p, v) -> μ(x, p))
+
+ # Shooting function
+ function shoot!(s, p0, t1, t2, t3, tf)
+ x1, p1 = f1(t0, x0, p0, t1)
+ x2, p2 = fs(t1, x1, p1, t2)
+ x3, p3 = fb(t2, x2, p2, t3)
+ xf, pf = f0(t3, x3, p3, tf)
+ s[1] = final_mass_cons(xf)
+ s[2:3] = pf[1:2] - [1, 0]
+ s[4] = H1(x1, p1)
+ s[5] = H01(x1, p1)
+ s[6] = g(x2)
+                s[7] = H0(xf, pf)
+                return nothing
+ end
+
+ # Known solution
+ p0 = [3.9457646586891744, 0.15039559623165552, 0.05371271293970545]
+ t1 = 0.023509684041879215
+ t2 = 0.059737380899876
+ t3 = 0.10157134842432228
+ tf = 0.20204744057100849
+
+ # Test shooting function with known solution
+ s = zeros(eltype(p0), 7)
+ shoot!(s, p0, t1, t2, t3, tf)
+
+ # Verify solution
+ Test.@test norm(s) < 1e-6
+ end
+ end
+end
+
+end # module
+
+# CRITICAL: Redefine in outer scope for TestRunner
+test_goddard() = TestGoddardIndirect.test_goddard()