Commit

Update docs
vavrines committed Feb 4, 2021
1 parent 01bdf16 commit b85090f
Showing 6 changed files with 74 additions and 21 deletions.
9 changes: 7 additions & 2 deletions docs/make.jl
@@ -50,8 +50,13 @@ parallel_page = [
"CUDA" => "para_cuda.md",
]

ml_page = [
"KitML" => "kitml1.md",
"UBE" => "kitml2.md",
]

fortran_page = [
"KitFort.jl" => "fortran1.md",
"KitFort" => "fortran1.md",
"Benchmark" => "fortran2.md",
]

@@ -71,7 +76,7 @@ makedocs(
"Tutorial" => tutorial_page,
"Parallelization" => parallel_page,
"Utility" => utility_page,
"SciML" => "kitml.md",
"SciML" => ml_page,
"Fortran" => fortran_page,
"Index" => "function_index.md",
"Python" => "python.md",
Binary file added docs/src/assets/icnn.png
2 changes: 1 addition & 1 deletion docs/src/fortran1.md
@@ -1,4 +1,4 @@
# KitFort.jl
# KitFort and high performance computing

Numerical simulations of nonlinear models and differential equations are closely tied to supercomputers and high-performance computing (HPC).
The performance of a supercomputer or software program is commonly measured in floating-point operations per second (FLOPS).
63 changes: 63 additions & 0 deletions docs/src/kitml1.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,63 @@
# KitML and scientific machine learning

Machine learning is gaining momentum in scientific computing.
Given the nonlinear structure of differential and integral equations, it is promising to incorporate universal function approximators from machine learning into the governing equations and strike a balance between efficiency and accuracy.
KitML is designed as a scientific machine learning toolbox devoted to fusing mechanical and neural models.
For example, the Boltzmann collision operator can be split into a combination of a relaxation model and a neural network, i.e., the so-called universal Boltzmann equation (UBE).
```math
\frac{df}{dt} = \int_{\mathcal{R}^{3}} \int_{\mathcal{S}^{2}} \mathcal{B}(\cos \beta, g)\left[f\left(\mathbf{u}^{\prime}\right) f\left(\mathbf{u}_{*}^{\prime}\right)-f(\mathbf{u}) f\left(\mathbf{u}_{*}\right)\right] d \mathbf{\Omega} d \mathbf{u}_{*} \simeq \nu(\mathcal{M}-f)+\mathrm{NN}_{\theta}(\mathcal{M}-f)
```
The UBE has the following benefits.
First, it automatically preserves the asymptotic limits.
Consider the Chapman-Enskog method for solving the Boltzmann equation, where the distribution function is approximated by an expansion series.
```math
f \simeq f^{(0)}+f^{(1)}+f^{(2)}+\cdots, \quad f^{(0)}=\mathcal{M}
```
Take the zeroth-order truncation and consider an illustrative multi-layer perceptron.
```math
\mathrm{NN}_{\theta}(x)=\operatorname{layer}_{n}\left(\ldots \operatorname{layer}_{2}\left(\sigma\left(\operatorname{layer}_{1}(x)\right)\right)\right), \quad \operatorname{layer}(x)=w x
```
Given the zero input from ``\mathcal{M} - f``, the contribution of the collision term is absent, and the moment equations naturally lead to the Euler equations.
```math
\frac{\partial}{\partial t}\left(\begin{array}{c}
\rho \\
\rho \mathbf{U} \\
\rho E
\end{array}\right)+\nabla_{\mathbf{x}} \cdot\left(\begin{array}{c}
\rho \mathbf{U} \\
\rho \mathbf{U} \otimes \mathbf{U} \\
\mathbf{U}(\rho E+p)
\end{array}\right)=\int\left(\begin{array}{c}
1 \\
\mathbf{u} \\
\frac{1}{2} \mathbf{u}^{2}
\end{array}\right)\left(\mathcal{M}_{t}+\mathbf{u} \cdot \nabla_{\mathbf{x}} \mathcal{M}\right) d \mathbf{u}=0
```
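
This asymptotic property can be sanity-checked directly: a bias-free network maps a zero input to a zero output, so the neural correction vanishes whenever ``f = \mathcal{M}``. A minimal illustration in plain Julia (the sizes and weights here are arbitrary):
```julia
# bias-free two-layer perceptron: each layer is x -> σ.(W * x), no additive bias
W1, W2 = randn(16, 8), randn(8, 16)
nn(x) = W2 * tanh.(W1 * x)

nn(zeros(8))  # returns a zero vector: zero input gives zero neural collision term
```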

KitML provides two functions to construct the universal Boltzmann equation, and they work seamlessly with the modern ODE solvers in [DifferentialEquations.jl](https://github.com/SciML/DifferentialEquations.jl); a usage sketch follows the docstrings below.
```@docs
ube_dfdt
ube_dfdt!
```
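
A minimal sketch of this workflow, where the parameter layout mirrors the UBE tutorial (`f0` is the initial distribution function, `M0` the Maxwellian, `τ0` the relaxation time, and `(nn, p)` a trained network with its parameters, all assumed to be prepared beforehand):
```julia
using OrdinaryDiffEq, KitML

# assemble the universal Boltzmann equation as a standard ODE problem
prob = ODEProblem(KitML.ube_dfdt, f0, (0.0, 3.0), [M0, τ0, (nn, p)])
sol = solve(prob, Midpoint(); saveat = 0.1)
```
Any other solver from the SciML ecosystem can be substituted for `Midpoint()`.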

Besides, we provide the input convex neural network (ICNN) developed by Amos et al.

Its parameters are constrained such that the output of the network is a convex function of the inputs.
The structure of the ICNN is shown below: it allows for efficient inference via optimization over some inputs to the network given the others, and can be applied to settings such as structured prediction, data imputation, and reinforcement learning.
This is important for entropy-based modeling, since the minimization principle works exclusively with convex functions; a from-scratch sketch of the constraint follows the docstrings below.

![](./assets/icnn.png)

```@docs
ICNNLayer
ICNNChain
```
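
The convexity constraint itself is easy to state in code. Below is a from-scratch sketch of a single input convex layer, for illustration only; KitML's actual implementation may differ in details such as how the non-negative weights are parameterized.
```julia
softplus(x) = log1p(exp(x))

# z_new = σ.(W⁺ * z .+ U * x .+ b): W⁺ is kept elementwise non-negative and σ
# (here softplus) is convex and non-decreasing, so the output stays convex in x
struct ConvexLayer
    Wz::Matrix{Float64}  # weights on the previous state, reparameterized to be ≥ 0
    Wx::Matrix{Float64}  # unconstrained passthrough weights on the raw input
    b::Vector{Float64}
end

(l::ConvexLayer)(z, x) = softplus.(softplus.(l.Wz) * z .+ l.Wx * x .+ l.b)
```
Stacking such layers, each receiving the previous state together with the raw input ``x``, yields a network whose scalar output can be minimized reliably by convex optimization.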
In addition, we provide training interfaces and I/O methods for scientific machine learning.
They are compatible with both the [Flux.jl](https://github.com/FluxML/Flux.jl) and [DiffEqFlux.jl](https://github.com/SciML/DiffEqFlux.jl) ecosystems; a rough usage sketch follows the docstrings below.

```@docs
sci_train
sci_train!
load_data
save_model
```
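
The exact call signatures are rendered by the docstrings above. As a rough usage sketch, where the `(model, data, optimizer)` argument order and the keyword are assumptions rather than the authoritative API:
```julia
using Flux, KitML

nn = Chain(Dense(4, 16, tanh), Dense(16, 4))
X, Y = randn(Float32, 4, 1000), randn(Float32, 4, 1000)  # toy dataset

# assumed Flux-style argument order; see the docstrings above for the real one
res = sci_train(nn, (X, Y), ADAM(); maxiters = 200)
save_model(nn)  # persist the trained weights; the path argument is assumed optional
```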
18 changes: 1 addition & 17 deletions docs/src/kitml.md → docs/src/kitml2.md
@@ -1,24 +1,16 @@
# Scientific Machine Learning and KitML
# Universal Boltzmann equation

Machine learning is gaining momentum in scientific computing.
Given the nonlinear structure of differential and integral equations, it is promising to incorporate universal function approximators from machine learning into the governing equations and strike a balance between efficiency and accuracy.
In the following, we present a universal differential equation strategy to construct the neural-network-enhanced Boltzmann equation.
The complicated fivefold integral operator is replaced by a combination of relaxation and neural models.
This preserves a fully differentiable structure, so that neural-ODE-style training and computing become possible.
The approach reduces the computational cost by up to three orders of magnitude while preserving accuracy.
The detailed theory and implementation can be found in [Tianbai Xiao and Martin Frank, Using neural networks to accelerate the solution of the Boltzmann equation](https://arxiv.org/pdf/2010.13649.pdf).

```@docs
ube_dfdt
ube_dfdt!
```

First, we load all the needed packages and set up the configuration.
```julia
using OrdinaryDiffEq, Flux, DiffEqFlux, Plots
using KitBase, KitML

# config
begin
case = "homogeneous"
maxTime = 3
@@ -47,7 +39,6 @@ end

The dataset is produced by the fast spectral method, which solves the nonlinear Boltzmann integral with the fast Fourier transform.
```julia
# dataset
begin
tspan = (0.0, maxTime)
tran = linspace(tspan[1], tspan[2], tlen)
@@ -124,7 +115,6 @@ end
Then we define the neural network and construct the unified model with mechanical and neural parts.
The training is conducted by DiffEqFlux.jl with the ADAM optimizer.
```julia
# neural model
begin
model_univ = DiffEqFlux.FastChain(
DiffEqFlux.FastDense(nu, nu * nh, tanh),
@@ -152,18 +142,13 @@ begin
end
end

# train
res = DiffEqFlux.sciml_train(loss, p_model, ADAM(), cb = cb, maxiters = 200)
res = DiffEqFlux.sciml_train(loss, res.minimizer, ADAM(), cb = cb, maxiters = 200)

# residual history
plot(log.(his))
```

Once the hybrid Boltzmann collision term is trained, we can solve it as a normal differential equation with any desired solver.
Taking the midpoint rule as an example, the solution algorithm and visualization are organized as follows.
```julia
# solution
ube = ODEProblem(KitML.ube_dfdt, f0_1D, tspan, [M0_1D, τ0, (model_univ, res.minimizer)]);
sol = solve(
ube,
@@ -173,7 +158,6 @@ sol = solve(
saveat = tran,
);

# result
plot(
vSpace.u[:, vSpace.nv÷2, vSpace.nw÷2],
data_boltz_1D[:, 1],
3 changes: 2 additions & 1 deletion docs/src/reference.md
@@ -9,4 +9,5 @@
- Xiao, T., Cai, Q., & Xu, K. (2017). A well-balanced unified gas-kinetic scheme for multiscale flow transport under gravitational field. Journal of Computational Physics, 332, 475-491.
- Xiao, T., Xu, K., & Cai, Q. (2019). A unified gas-kinetic scheme for multiscale and multicomponent flow transport. Applied Mathematics and Mechanics, 40(3), 355-372.
- Xiao, T., Liu, C., Xu, K., & Cai, Q. (2020). A velocity-space adaptive unified gas kinetic scheme for continuum and rarefied flows. Journal of Computational Physics, 415, 109535.
- Xiao, T., & Frank, M. (2020). Using neural networks to accelerate the solution of the Boltzmann equation. arXiv:2010.13649.
- Xiao, T., & Frank, M. (2020). Using neural networks to accelerate the solution of the Boltzmann equation. arXiv:2010.13649.
- Amos, B., Xu, L., & Kolter, J. Z. (2017, July). Input convex neural networks. In International Conference on Machine Learning (pp. 146-155). PMLR.

2 comments on commit b85090f

@vavrines (Owner, Author)

@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/29324

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the GitHub interface, or via:

git tag -a v0.7.1 -m "<description of version>" b85090f04837d26f7de8f4cdbb3ef52159c1bf5a
git push origin v0.7.1
