
Simple exponential growth ODE example #1

Open · wants to merge 3 commits into main
Conversation

@ali-ramadhan (Member) commented Apr 20, 2021

After talking with @glwagner and @sandreza, we thought it might be good to start with a super simple toy example.

This PR just adds a simple function simulate_exponential_growth that solves the differential equation dx/dt = kx using KernelAbstractions.jl and array mutation.

If we could take gradients of it, then we could solve simple parameter estimation problems, like estimating the parameter k or the initial condition x₀ from the final state. Being so simple, it has an analytic solution, x(t) = x₀ exp(kt), that we could easily compare against.

Do let us know if this is too simple.
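
For reference, a minimal sketch of what the function looks like, assuming forward Euler with one kernel launch per time step (the authoritative version is in the diff; depending on the KernelAbstractions version, CUDADevice may need to come from CUDAKernels.jl):

using KernelAbstractions
using CUDA

# One forward-Euler step of dx/dt = kx per launch.
@kernel function time_step!(x, k, Δt)
    i = @index(Global)
    @inbounds x[i] += k * x[i] * Δt
end

function simulate_exponential_growth(; x₀, k, Δt, N, device)
    # Allocate the one-element state array on the requested device.
    x = device isa CPU ? [x₀] : CuArray([x₀])

    time_step_kernel! = time_step!(device, 1)

    # Launch the kernel once per time step, waiting on each event.
    for _ in 1:N
        event = time_step_kernel!(x, k, Δt, ndrange=1)
        wait(event)
    end

    return CUDA.@allowscalar x[1]
end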

@wsmoses (Member) commented Apr 20, 2021

Seems reasonable to me, and reminiscent of this test from the original Enzyme paper: https://github.com/wsmoses/Enzyme/blob/savework2/enzyme/benchmarks/ode/ode.cpp. I think the one thing that needs to be done to get this to work is KA playing nicely.

@wsmoses (Member) commented Apr 20, 2021

Oh, actually I see the iteration loop is outside the kernel rather than inside it. This may require the integration that allows differentiating through a kernel launch, so it may be slightly trickier. The kernel itself should trivially work, though. It would also be useful if you had a sample kernel (perhaps more meaty, but up to you) where all the work is in the kernel rather than in the calling code.

@ali-ramadhan (Member, Author) commented
Ah, the iteration loop could be inside the kernel. I guess I made the kernel take just one time step since the Oceananigans.jl kernels take one time step as well.

I moved the iteration loop into the kernel now.
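
Roughly, the kernel now looks like this (sketch; the real change is in the diff):

# The time-stepping loop lives inside the kernel, so the whole
# integration is a single kernel launch.
@kernel function time_step!(x, k, Δt, N)
    i = @index(Global)
    for _ in 1:N
        @inbounds x[i] += k * x[i] * Δt
    end
end

and the calling code launches it once:

event = time_step_kernel!(x, k, Δt, N, ndrange=1)
wait(event)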

@glwagner (Collaborator) commented

> It would also be useful if you have a sample (perhaps more meaty, but up to you) kernel

@wsmoses we can also come up with a one-dimensional PDE example. Would that be useful, or should we work with this zero-dimensional case for now?
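
For instance, a one-dimensional diffusion kernel might look something like this (purely hypothetical sketch, nothing like it is in this PR):

# One forward-Euler step of the heat equation ∂c/∂t = κ ∂²c/∂z²
# at interior points. Separate c_new and c_old arrays avoid a race
# between neighboring updates.
@kernel function diffusion_step!(c_new, c_old, κ, Δz, Δt)
    i = @index(Global)
    if 1 < i < length(c_old)
        @inbounds c_new[i] = c_old[i] + Δt * κ * (c_old[i-1] - 2c_old[i] + c_old[i+1]) / Δz^2
    end
end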


Review comment on:

# iterative kernel launches
@show simulate_exponential_growth(x₀=1.0, k=2.5, Δt=0.01, N=100, device=CPU())
@show simulate_exponential_growth(x₀=1.0, k=2.5, Δt=0.01, N=100, device=CUDADevice())

Collaborator commented: These three lines are now also a single kernel launch, right?

Review comment on:

@show simulate_exponential_growth(x₀=1.0, k=2.5, Δt=0.01, N=100, device=CUDADevice())

# Optimization problem:
# If x₀ = 1, Δt = 0.01, N = 100, and x(t = 1) = π, then what is k?

@glwagner commented Apr 21, 2021:

Should we add something like

using Enzyme

simulate_exponential_growth(k) = simulate_exponential_growth(; x₀=1.0, k=k, Δt=0.01, N=100, device=CPU())

@show autodiff(simulate_exponential_growth, Active(1.0))

to see what happens?

@glwagner followed up:

Hmm, I realized I was a little confused. We need to define a loss function and take the gradient of that. Perhaps we can define a k_truth and compare the result of the simulation at a given k_test? I'll make a comment on the kernel.

@glwagner (Collaborator) commented Apr 21, 2021

We might want something like

function simulate_exponential_growth(; x₀, k_test, k_truth, Δt, N, device)

    # Allocate the one-element state array on the requested device.
    if device isa CPU
        x = [x₀]
    elseif device isa CUDADevice
        x = CuArray([x₀])
    end

    # Single launch: the time-stepping loop runs inside the kernel.
    time_step_kernel! = time_step!(device, 1)
    event = time_step_kernel!(x, k_test, Δt, N, ndrange=1)
    wait(event)

    # Loss: distance from the analytic solution x₀ exp(k_truth t) at t = N Δt.
    CUDA.@allowscalar discrepancy = abs(x[1] - x₀ * exp(k_truth * Δt * N))

    return discrepancy
end

The example in the Enzyme README takes the gradient of a single-argument function. If we need to use this pattern, we'd want to define a second function:

simulate_exponential_growth(k) = simulate_exponential_growth(; x₀=1.0, k_test=k, k_truth=1.0, Δt=0.01, N=100, device=CPU())

and perhaps try to execute

@show autodiff(simulate_exponential_growth, Active(1.1))

We could also determine whether the result of the autodiff is similar to the result of a finite difference (more of a sanity check since presumably there have already been countless tests of this nature).
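
Something like this, say (sketch; uses the single-argument wrapper defined above, and the exact shape of autodiff's return value depends on the Enzyme version):

k_test = 1.1

# Enzyme's derivative of the loss with respect to k.
∂L_enzyme = autodiff(simulate_exponential_growth, Active(k_test))

# Centered finite difference of the same loss.
ϵ = 1e-6
∂L_fd = (simulate_exponential_growth(k_test + ϵ) - simulate_exponential_growth(k_test - ϵ)) / 2ϵ

@show ∂L_enzyme ∂L_fd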
