
[Bug] Flow example #384

Open
jbcaillau opened this issue Jan 20, 2025 · 2 comments
Labels: bug Something isn't working
@jbcaillau (Member) commented Jan 20, 2025
hi @oameye (cc @ocots @PierreMartinon); moving your separate example here, since it is a different issue from #369. The example below converges with the commented version of the cost, not with the uncommented one (which is not exactly what you posted), but it runs without any AD error:

  • can you please confirm which (new) cost you want to minimise?
  • in any case, I suggest replacing your mysqrt with the asqrt function below (smoothed and symmetrised)
using OptimalControl
using NLPModelsIpopt
using LinearAlgebra

T = 50  # Time horizon

# Define the vector field
f(u, v) = [u - u^3 - 10*u*v^2,  -(1 - u^2)*v]
f(x) = f(x...)

asqrt(x; ε=1e-9) = sqrt(sqrt(x^2 + ε^2))

@def action begin
    t ∈ [0, T], time
    x ∈ R², state
    u ∈ R², control
    x(0) == [-1, 0]    # Starting point (left well)
    x(T) == [1, 0]     # End point (right well)
    ẋ(t) == u(t)       # Path dynamics
    #∫( sum((u(t) - f(x(t))).^2) ) → min  # Minimize deviation from deterministic flow
    ∫( asqrt(sum((u(t) - f(x(t))).^2)) ) → min  # Minimize deviation from deterministic flow
end

# Linear interpolation for x₁
x1(t) = -(1 - t/T) + t/T
# Parabolic guess for x₂
x2(t) = 0.3(-x1(t)^2 + 1)
x(t) = [x1(t), x2(t)]
u(t) = f(x(t))
init = (state=x, control=u)

sol = solve(action; init=init, grid_size=50)
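
A note on asqrt (my own aside, not from the original report): asqrt(x) = (x² + ε²)^(1/4) is a smooth, even approximation of sqrt(|x|) whose derivative stays bounded at the origin, unlike sqrt, whose derivative blows up as x → 0⁺. That boundedness is what makes it safe under automatic differentiation. A minimal check, using ForwardDiff purely for illustration:

using ForwardDiff

asqrt(x; ε=1e-9) = sqrt(sqrt(x^2 + ε^2))  # (x² + ε²)^(1/4) ≈ √|x|

# The derivative of √x is unbounded near 0; asqrt's is not.
ForwardDiff.derivative(sqrt, 1e-12)  # ≈ 5.0e5, growing without bound as x → 0⁺
ForwardDiff.derivative(asqrt, 0.0)   # 0.0: finite and well defined at the origin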


@jbcaillau added the bug label Jan 20, 2025
@oameye (Contributor) commented Jan 20, 2025

Hi, I would like to perform a geometric minimal action optimisation, which is a variant of the tutorial that has now been added here. It has the advantage that one does not have to additionally minimise over the arrival time T.

The integral cost is modified to:
$\int_0^T \left( \lVert u(s)\rVert\,\lVert f(x(s))\rVert - u(s)\cdot f(x(s)) \right) \mathrm{d}s$
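
(Editorial note, not from the original post: by the Cauchy–Schwarz inequality this integrand is pointwise non-negative,

$\lVert u\rVert\,\lVert f(x)\rVert - u\cdot f(x) \;\ge\; 0$, with equality iff $u = \lambda f(x)$ for some $\lambda \ge 0$,

so the minimum is attained by paths that follow the drift f wherever possible. Moreover the functional is invariant under time reparametrisation, which is why no separate minimisation over the arrival time T is needed.)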

When I run it with the modified integral and asqrt, I get:

using OptimalControl, LinearAlgebra
using NLPModelsIpopt

T = 50  # Time horizon

# Define the vector field
f(u, v) = [u - u^3 - 10*u*v^2,  -(1 - u^2)*v]
f(x) = f(x...)


asqrt(x; ε=1e-9) = sqrt(sqrt(x^2 + ε^2))

@def action begin
    t ∈ [0, T], time
    x ∈ R², state
    u ∈ R², control
    x(0) == [-1, 0]    # Starting point (left well)
    x(T) == [1, 0]     # End point (right well)
    ẋ(t) == u(t)       # Path dynamics
    ∫( asqrt(dot(u(t), u(t)) * dot(f(x(t)), f(x(t)))) - dot(u(t), f(x(t))) ) → min
end
# Linear interpolation for x₁
x1(t) = -(1 - t/T) + t/T
# Parabolic guess for x₂
x2(t) = 0.3(-x1(t)^2 + 1)
x(t) = [x1(t), x2(t)]
u(t) = f(x(t))
init = (state=x, control=u)

sol = solve(action; init=init, grid_size=50)
ERROR: Cannot determine ordering of Dual tags ForwardDiff.Tag{ReverseDiff.var"#131#134"{typeof(+), ForwardDiff.Dual{ForwardDiff.Tag{ADNLPModels.var"#ψ#72"{CTDirect.var"#34#36"{CTDirect.DOCP}}, Float64}, Float64, 1}}, ForwardDiff.Dual{ForwardDiff.Tag{ADNLPModels.var"#ψ#72"{CTDirect.var"#34#36"{CTDirect.DOCP}}, Float64}, Float64, 1}} and ForwardDiff.Tag{ADNLPModels.var"#ψ#72"{CTDirect.var"#34#36"{CTDirect.DOCP}}, Float64}
Stacktrace:
  [1] value
    @ C:\Users\User\.julia\packages\ForwardDiff\UBbGT\src\dual.jl:98 [inlined]
  [2] (::ForwardDiff.var"#76#77"{ForwardDiff.Tag{}})(d::ForwardDiff.Dual{ForwardDiff.Tag{…}, ForwardDiff.Dual{…}, 1})
    @ ForwardDiff C:\Users\User\.julia\packages\ForwardDiff\UBbGT\src\apiutils.jl:6
  [3] value!(f::ForwardDiff.var"#76#77"{}, r::DiffResults.ImmutableDiffResult{…}, x::ForwardDiff.Dual{…})
    @ DiffResults C:\Users\User\.julia\packages\DiffResults\YLo25\src\DiffResults.jl:168
  [4] extract_value!
    @ C:\Users\User\.julia\packages\ForwardDiff\UBbGT\src\apiutils.jl:5 [inlined]
  [5] derivative!
    @ C:\Users\User\.julia\packages\ForwardDiff\UBbGT\src\derivative.jl:49 [inlined]
  [6] binary_scalar_forward_exec!(f::typeof(+), output::ReverseDiff.TrackedReal{…}, input::Tuple{…}, cache::Base.RefValue{…})
    @ ReverseDiff C:\Users\User\.julia\packages\ReverseDiff\p1MzG\src\derivatives\scalars.jl:116
  [7] scalar_forward_exec!(instruction::ReverseDiff.ScalarInstruction{…})
    @ ReverseDiff C:\Users\User\.julia\packages\ReverseDiff\p1MzG\src\derivatives\scalars.jl:88
  [8] forward_exec!(instruction::ReverseDiff.ScalarInstruction{…})
    @ ReverseDiff C:\Users\User\.julia\packages\ReverseDiff\p1MzG\src\tape.jl:82
  [9] ForwardExecutor
    @ C:\Users\User\.julia\packages\ReverseDiff\p1MzG\src\api\tape.jl:76 [inlined]
 [10] (::FunctionWrappers.CallWrapper{Nothing})(f::ReverseDiff.ForwardExecutor{ReverseDiff.ScalarInstruction{…}})
    @ FunctionWrappers C:\Users\User\.julia\packages\FunctionWrappers\Q5cBx\src\FunctionWrappers.jl:65
 [11] macro expansion
    @ C:\Users\User\.julia\packages\FunctionWrappers\Q5cBx\src\FunctionWrappers.jl:137 [inlined]
 [12] do_ccall
    @ C:\Users\User\.julia\packages\FunctionWrappers\Q5cBx\src\FunctionWrappers.jl:125 [inlined]
 [13] FunctionWrapper
    @ C:\Users\User\.julia\packages\FunctionWrappers\Q5cBx\src\FunctionWrappers.jl:144 [inlined]
 [14] forward_pass!(compiled_tape::ReverseDiff.CompiledTape{ReverseDiff.GradientTape{…}})
    @ ReverseDiff C:\Users\User\.julia\packages\ReverseDiff\p1MzG\src\api\tape.jl:124
 [15] seeded_forward_pass!
    @ C:\Users\User\.julia\packages\ReverseDiff\p1MzG\src\api\tape.jl:42 [inlined]
 [16] gradient!(result::Tuple{…}, tape::ReverseDiff.CompiledTape{…}, input::Tuple{…})
    @ ReverseDiff C:\Users\User\.julia\packages\ReverseDiff\p1MzG\src\api\gradients.jl:79
 [17] ∇l!
    @ C:\Users\User\.julia\packages\ADNLPModels\bOFzz\src\sparse_hessian.jl:194 [inlined]
 [18] ∇l!
    @ C:\Users\User\.julia\packages\ADNLPModels\bOFzz\src\sparse_hessian.jl:193 [inlined]
 [19] sparse_hess_coord!(b::ADNLPModels.SparseReverseADHessian{…}, x::Vector{…}, obj_weight::Float64, y::SubArray{…}, vals::Vector{…})
    @ ADNLPModels C:\Users\User\.julia\packages\ADNLPModels\bOFzz\src\sparse_hessian.jl:328
 [20] hess_coord!(b::ADNLPModels.SparseReverseADHessian{…}, nlp::ADNLPModels.ADNLPModel{…}, x::Vector{…}, y::SubArray{…}, obj_weight::Float64, vals::Vector{…})
    @ ADNLPModels C:\Users\User\.julia\packages\ADNLPModels\bOFzz\src\sparse_hessian.jl:352
 [21] hess_coord!(nlp::ADNLPModels.ADNLPModel{…}, x::Vector{…}, y::Vector{…}, vals::Vector{…}; obj_weight::Float64)
    @ ADNLPModels C:\Users\User\.julia\packages\ADNLPModels\bOFzz\src\nlp.jl:706
 [22] hess_coord!
    @ C:\Users\User\.julia\packages\ADNLPModels\bOFzz\src\nlp.jl:695 [inlined]
 [23] (::NLPModelsIpopt.var"#eval_h#5"{})(x::Vector{…}, rows::Vector{…}, cols::Vector{…}, σ::Float64, λ::Vector{…}, values::Vector{…})
    @ NLPModelsIpopt C:\Users\User\.julia\packages\NLPModelsIpopt\0YgvC\src\NLPModelsIpopt.jl:109
 [24] _Eval_H_CB(n::Int32, x_ptr::Ptr{…}, ::Int32, obj_factor::Float64, m::Int32, lambda_ptr::Ptr{…}, ::Int32, nele_hess::Int32, iRow::Ptr{…}, jCol::Ptr{…}, values_ptr::Ptr{…}, user_data::Ptr{…})
    @ Ipopt C:\Users\User\.julia\packages\Ipopt\gCQtM\src\C_wrapper.jl:129
 [25] IpoptSolve(prob::Ipopt.IpoptProblem)
    @ Ipopt C:\Users\User\.julia\packages\Ipopt\gCQtM\src\C_wrapper.jl:399
 [26] solve!(solver::IpoptSolver, nlp::ADNLPModels.ADNLPModel{…}, stats::SolverCore.GenericExecutionStats{…}; callback::Function, kwargs::@Kwargs{})
    @ NLPModelsIpopt C:\Users\User\.julia\packages\NLPModelsIpopt\0YgvC\src\NLPModelsIpopt.jl:240
 [27] solve!
    @ C:\Users\User\.julia\packages\NLPModelsIpopt\0YgvC\src\NLPModelsIpopt.jl:161 [inlined]
 [28] #solve!#16
    @ C:\Users\User\.julia\packages\SolverCore\vTpnV\src\solver.jl:40 [inlined]
 [29] solve!
    @ C:\Users\User\.julia\packages\SolverCore\vTpnV\src\solver.jl:38 [inlined]
 [30] solve_docp(tag::CTDirect.IpoptTag, docp::CTDirect.DOCP, nlp::ADNLPModels.ADNLPModel{…}; display::Bool, max_iter::Int64, tol::Float64, print_level::Int64, mu_strategy::String, linear_solver::String, kwargs::@Kwargs{})
    @ CTSolveExtIpopt C:\Users\User\.julia\packages\CTDirect\1ZEiU\ext\CTSolveExtIpopt.jl:54
 [31] solve_docp
    @ C:\Users\User\.julia\packages\CTDirect\1ZEiU\ext\CTSolveExtIpopt.jl:15 [inlined]
 [32] direct_solve(::OptimalControlModel{…}; init::@NamedTuple{}, grid_size::Int64, time_grid::Nothing, kwargs::@Kwargs{})
    @ CTDirect C:\Users\User\.julia\packages\CTDirect\1ZEiU\src\solve.jl:119
 [33] solve(::OptimalControlModel{Autonomous, Fixed}; kwargs::@Kwargs{init::@NamedTuple{}, grid_size::Int64})
    @ OptimalControl C:\Users\User\.julia\packages\OptimalControl\6SuFg\src\solve.jl:61
 [34] top-level scope
    @ Untitled-1:30
Some type information was truncated. Use `show(err)` to see complete types.

@ocots (Member) commented Jan 20, 2025

I do not understand why, but if you remove dot(f(x(t)), f(x(t))), this error does not show up.
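
(Editorial note: the "Cannot determine ordering of Dual tags" failure is characteristic of nested automatic differentiation — here, ForwardDiff duals from the sparse-Hessian machinery leaking into a ReverseDiff tape — so it is plausible that a different algebraic form of the same cost avoids the offending code path. A sketch of a workaround worth trying, with sum/broadcasting in place of dot; this is untested against this exact AD stack, not a confirmed fix:

    ∫( asqrt(sum(u(t).^2) * sum(f(x(t)).^2)) - sum(u(t) .* f(x(t))) ) → min

This computes the same quantity, since sum(v.^2) == dot(v, v) for real vectors, and it mirrors the sum((u(t) - f(x(t))).^2) form that ran without AD errors in the first example above.)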
