
Add overloads for Base functions needed for tracing through matrix exponential from ExponentialUtilities #55

Closed

Conversation

Vaibhavdixit02 (Author)

I am not 100% confident that this is the right thing to do.

Comment on lines +38 to +39
Base.isless(x::SparseConnectivityTracer.AbstractTracer, ::SparseConnectivityTracer.AbstractTracer) = true
Base.isless(::SparseConnectivityTracer.AbstractTracer, ::Float64) = true
adrhill (Owner) commented May 8, 2024

Comparisons like max, isless, and isequal need primal value information; otherwise we always hit the first branch of any comparison.
To give an example, the function f(x, y) = isless(x, y) ? x : y would always return the same Jacobian pattern [1 0].

Any global pattern we return should be a conservative estimate of the sparsity over the entire input domain.
In the case of the example above, this would be [1 1] (not sparse).
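To make this concrete, here is a minimal sketch (the jacobian_pattern entry point and the always-true isless overload are assumptions used purely for illustration):

using SparseConnectivityTracer

f(x) = isless(x[1], x[2]) ? x[1] : x[2]

# With an isless overload that always returns true, tracing only ever
# sees the first branch and reports the pattern [1 0].
# A conservative global pattern must cover both branches: [1 1].
jacobian_pattern(x -> [f(x)], rand(2))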

We will add functionality for local sparsity detection soon; I opened #56 to track this.

Comment on lines +35 to +36
Base.floatmin(x::Type{<:SparseConnectivityTracer.AbstractTracer}) = Base.floatmin(Float32)
Base.eps(x::Type{<:SparseConnectivityTracer.AbstractTracer}) = Base.eps(Float32)
adrhill (Owner)

These should already exist:

# Functions returning constant output
# that only depends on the input type.
# For the purpose of operator overloading,
# these are kept separate from ops_1_to_1_z.
ops_1_to_1_const = (
:zero, :one,
:eps,
:typemin, :typemax,
:floatmin, :floatmax, :maxintfloat,
)

Did you encounter any errors on these functions?

adrhill (Owner) commented May 8, 2024

Could you add a test case for the matrix exponential?

I suggest we merge it as a @test_broken under the "Real-world examples" testset (a sketch follows below)

@testset verbose = true "Real-world examples" begin
    @testset "Brusselator" begin
        include("brusselator.jl")
    end
    @testset "NNlib" begin
        include("nnlib.jl")
    end
end

and make it work once #56 is merged.
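A possible shape for such a test case, as a sketch only (file placement, names, and the traced call are assumptions):

using Test, SparseConnectivityTracer, ExponentialUtilities

@testset "ExponentialUtilities" begin
    x = rand(4, 4)
    # Expected to fail until local sparsity detection (#56) is available.
    @test_broken jacobian_pattern(A -> exponential!(copy(A)), x) isa AbstractMatrix
end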

gdalle mentioned this pull request May 16, 2024
adrhill (Owner) commented May 16, 2024

I tried running this in #65 and ran into issues with in-place updated scalars.

julia> x = rand(2, 2)
2×2 Matrix{Float64}:
 0.548476  0.996494
 0.311785  0.340692

julia> f!(y, x) = exponential!(x)
f! (generic function with 1 method)

julia> J = local_jacobian_pattern(f!, y, x)
ERROR: TypeError: in typeassert, expected Float64, got a value of type SparseConnectivityTracer.Dual{Float64, GradientTracer{BitSet}}
Stacktrace:
  [1] opnorm1(A::Matrix{SparseConnectivityTracer.Dual{Float64, GradientTracer{BitSet}}})
    @ LinearAlgebra /Applications/Julia-1.10.app/Contents/Resources/julia/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:665
  [2] opnorm(A::Matrix{SparseConnectivityTracer.Dual{Float64, GradientTracer{BitSet}}}, p::Int64)
    @ LinearAlgebra /Applications/Julia-1.10.app/Contents/Resources/julia/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:741
  [3] exponential!(A::Matrix{…}, method::ExpMethodHigham2005, _cache::Tuple{…})
    @ ExponentialUtilities ~/.julia/packages/ExponentialUtilities/xLH9y/src/exp_noalloc.jl:85
  [4] exponential!(A::Matrix{SparseConnectivityTracer.Dual{Float64, GradientTracer{BitSet}}}, method::ExpMethodHigham2005)
    @ ExponentialUtilities ~/.julia/packages/ExponentialUtilities/xLH9y/src/exp_noalloc.jl:83
  [5] exponential!(A::Matrix{SparseConnectivityTracer.Dual{Float64, GradientTracer{BitSet}}})
    @ ExponentialUtilities ~/.julia/packages/ExponentialUtilities/xLH9y/src/exp.jl:15
  [6] f!(y::Matrix{SparseConnectivityTracer.Dual{…}}, x::Matrix{SparseConnectivityTracer.Dual{…}})
    @ Main ./REPL[190]:1
  [7] trace_function(::Type{SparseConnectivityTracer.Dual{…}}, f!::typeof(f!), y::Matrix{Float64}, x::Matrix{Float64})
    @ SparseConnectivityTracer ~/Developer/SparseConnectivityTracer.jl/src/pattern.jl:33
  [8] local_jacobian_pattern(f!::Function, y::Matrix{Float64}, x::Matrix{Float64}, ::Type{BitSet})
    @ SparseConnectivityTracer ~/Developer/SparseConnectivityTracer.jl/src/pattern.jl:180
  [9] local_jacobian_pattern(f!::Function, y::Matrix{Float64}, x::Matrix{Float64})
    @ SparseConnectivityTracer ~/Developer/SparseConnectivityTracer.jl/src/pattern.jl:179
 [10] top-level scope
    @ REPL[191]:1
Some type information was truncated. Use `show(err)` to see complete types.

The problem is opnorm1 from LinearAlgebra not being generic enough:

function opnorm1(A::AbstractMatrix{T}) where T
    require_one_based_indexing(A)
    m, n = size(A)
    Tnorm = typeof(float(real(zero(T))))
    Tsum = promote_type(Float64, Tnorm)
    nrm::Tsum = 0
    @inbounds begin
        for j = 1:n
            nrmj::Tsum = 0           # <--- Float64
            for i = 1:m
                nrmj += norm(A[i,j]) # <--- Dual{Float64, GradientTracer{BitSet}}
            end
            nrm = max(nrm,nrmj)
        end
    end
    return convert(Tnorm, nrm)
end
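For reference, a sketch of the same accumulation written generically, without the concrete Tsum assertions (illustrative only, not a proposed LinearAlgebra change; it assumes norm and max have tracer-compatible methods):

using LinearAlgebra: norm

function opnorm1_generic(A::AbstractMatrix)
    m, n = size(A)
    nrm = norm(zero(eltype(A)))      # accumulator type follows the element type
    for j in 1:n
        nrmj = norm(zero(eltype(A)))
        for i in 1:m
            nrmj += norm(A[i, j])    # Dual/tracer entries propagate freely
        end
        nrm = max(nrm, nrmj)         # max on tracers still needs primal values
    end
    return nrm
end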

Vaibhavdixit02 (Author)
You need the overloads I defined for real and float to make this work correctly.
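For context, the overloads referred to here are part of this PR's diff; a plausible shape (stated as an assumption, not a verbatim quote) would be:

using SparseConnectivityTracer

Base.real(t::SparseConnectivityTracer.AbstractTracer) = t
Base.float(t::SparseConnectivityTracer.AbstractTracer) = t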

adrhill (Owner) commented May 17, 2024

Right, they are included in #65. Otherwise the error would already be thrown at Tnorm = typeof(float(real(zero(T)))) on line 4.

adrhill (Owner) commented May 17, 2024

Basically, the type annotation in LinearAlgebra is the issue:

julia> t = Dual(2.0, GradientTracer(BitSet([1, 2, 3])))
Dual{Float64, GradientTracer{BitSet}}(2.0, GradientTracer{BitSet}(1, 2, 3))

julia> res = 1.0
1.0

julia> res += t
Dual{Float64, GradientTracer{BitSet}}(3.0, GradientTracer{BitSet}(1, 2, 3))

julia> res2::Float64 = 1.0
1.0

julia> res2 += t
ERROR: cannot assign an incompatible value to the global Main.res2.
Stacktrace:
 [1] top-level scope
   @ REPL[218]:1

I'm closing this for now, since I don't see a way to work around it. Code needs to be written in a generic manner compatible with any Number type for it to work with SCT.

adrhill closed this May 17, 2024
gdalle (Collaborator) commented May 17, 2024

I think the solution is overloads at the vector scale, not the number scale
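To illustrate, a sketch of what an array-scale overload could look like, assuming that adding tracers takes the union of their index sets (the method below is hypothetical and not part of the package):

using LinearAlgebra
using SparseConnectivityTracer: AbstractTracer

# Hypothetical: the operator norm depends on every entry of A, so a
# conservative pattern is the union of all input index sets, obtained
# here by summing the tracers.
function LinearAlgebra.opnorm(A::AbstractMatrix{T}, p::Real = 2) where {T<:AbstractTracer}
    return sum(A)
end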

adrhill (Owner) commented May 17, 2024

I guess we could start touching both, even though it feels a bit "dirty".

Ideally, we would only fall back to vector overloads when we are out of options, since scalar overloads can be much less conservative (see e.g. #56 (comment)).

adrhill (Owner) commented May 17, 2024

Do ForwardDiff and Measurements add overloads at array scale?

gdalle (Collaborator) commented May 18, 2024

I searched for Array and Vector in https://github.com/JuliaDiff/ForwardDiff.jl/blob/master/src/dual.jl and turned up nothing, so I guess not.
