Documentation check #283

Merged · 25 commits · Apr 12, 2024
2 changes: 1 addition & 1 deletion README.md
Alternatively, we could use a broadcasting syntax.

```julia
# a hedged reconstruction of the truncated snippet; the argument list is assumed
@model function coin_model(y)
    # prior over the coin bias θ
    θ ~ Beta(2.0, 7.0)
    # `.~` broadcasts the Bernoulli likelihood over the observations `y`
    y .~ Bernoulli(θ)
end
```

As you can see, `RxInfer` offers a model specification syntax that closely resembles the mathematical equations defined above. The $\theta \sim \mathrm{Beta}(2.0, 7.0)$ expression creates a random variable $\theta$ and assigns it as the output of a $\mathrm{Beta}$ node in the corresponding FFG.

> [!NOTE]
> `RxInfer.jl` uses `GraphPPL.jl` for model and constraints specification. `GraphPPL.jl` API has been changed in version `4.0.0`. See [Migration Guide](https://reactivebayes.github.io/GraphPPL.jl/stable/) for more details.
21 changes: 7 additions & 14 deletions docs/src/index.md
Given a probabilistic model, RxInfer allows for efficient message-passing based Bayesian inference.

## Why RxInfer

Many important AI applications, including audio processing, self-driving vehicles, weather forecasting, and extended-reality video processing require continually solving an inference task in sophisticated probabilistic models with a large number of latent variables. Often, the inference task in these applications must be performed continually and in real-time in response to new observations.

Popular MC-based inference methods, such as the No-U-Turn Sampler (NUTS) or Hamiltonian Monte Carlo (HMC) sampling, rely on computationally heavy sampling procedures that do not scale well to probabilistic models with thousands of latent states. Therefore, while MC-based inference is a very versatile tool, it is practically unsuitable for real-time applications. While the alternative, variational inference (VI), promises to scale better to large models than sampling-based inference, VI requires the derivation of gradients of a "Variational Free Energy" cost function. For large models, manual derivation of these gradients might not be feasible, while automated "black-box" gradient methods do not scale either, because they cannot take advantage of sparsity or conjugate pairs in the model. Therefore, while Bayesian inference is known as the optimal data processing framework, in practice real-time AI applications rely on much simpler, often ad hoc, data processing algorithms.

RxInfer aims to remedy these issues by running efficient Bayesian inference in sophisticated probabilistic models, taking advantage of local conjugate relationships in probabilistic models, and focusing on real-time Bayesian inference in large state-space models with thousands of latent variables. In addition, RxInfer provides a straightforward way to extend its functionality with custom factor nodes and message passing update rules. The engine is capable of running various Bayesian inference algorithms in different parts of the factor graph of a single probabilistic model. This makes it easier to explore different "what-if" scenarios and enables very efficient inference in specific cases.

## Package Features

- User-friendly syntax for the specification of probabilistic models, achieved with [`GraphPPL`](https://github.com/ReactiveBayes/GraphPPL.jl).
- Support for hybrid models combining discrete and continuous latent variables.
- Factorization and functional form constraints specification (see the sketch after this list).
- Graph visualisation and extensions with different custom plugins.
- Saving a graph to disk and re-loading it later.
- Automatic generation of message passing algorithms, achieved with [`ReactiveMP`](https://github.com/ReactiveBayes/ReactiveMP.jl).
- Support for hybrid inference, combining distinct message passing algorithms under a unified paradigm.
- Evaluation of Bethe Free Energy as a model performance measure.
- Schedule-free reactive message passing API.
- Scalability for large models with millions of parameters and observations.
- High performance.
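
As a taste of the constraints bullet above, here is a hedged sketch of a factorisation constraint combined with a functional form constraint (the variable names and the `PointMassFormConstraint` usage are illustrative):

```julia
using RxInfer

constraints = @constraints begin
    # mean-field factorisation of the joint posterior over x and y
    q(x, y) = q(x)q(y)
    # constrain q(x) to a point-mass (MAP-like) functional form
    q(x) :: PointMassFormConstraint()
end
```

The resulting `constraints` object can be passed to the inference engine via its `constraints` keyword argument.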
2 changes: 1 addition & 1 deletion docs/src/library/functional-forms.md

## [CompositeFormConstraint](@id lib-forms-composite-constraint)

It is possible to create a composite functional form constraint by stacking operators, e.g.:

```@example constraints-functional-forms
@constraints begin
    # a hedged completion of the truncated example; the stacked constraint
    # names are assumptions, not verified exports
    q(x) :: SampleListFormConstraint(1000) :: PointMassFormConstraint()
end
```
2 changes: 1 addition & 1 deletion docs/src/library/model-construction.md
Also read the [_Model Specification_](@ref user-guide-model-specification) guide.

## [`@model` macro](@id lib-model-construction-model-macro)

`RxInfer` operates with so-called [graphical probabilistic models](https://en.wikipedia.org/wiki/Graphical_model), more specifically [factor graphs](https://en.wikipedia.org/wiki/Factor_graph). Working with graphs directly is, however, tedious and error-prone, especially for large models. To simplify the process, `RxInfer` exports the `@model` macro, which translates a textual description of a probabilistic model into a corresponding factor graph representation.

```@docs
RxInfer.@model
```
4 changes: 2 additions & 2 deletions docs/src/manuals/comparison.md
Nowadays there are plenty of probabilistic programming languages and packages available.

| Toolbox | Universality | Efficiency | Expressiveness | Debugging & Visualization | Modularity | Inference Engine | Language | Community & Ecosystem |
| -------------------------------------------------------------------- | ------------ | ---------- | -------------- | ------------------------- | ---------- | ---------------- | -------- | --------------------- |
| [**RxInfer.jl**](https://rxinfer.ml/) | ~ | ✓ | ✓ | ~ | | Message-passing | Julia | ✗ |
| [**ForneyLab.jl**](https://github.com/biaslab/ForneyLab.jl) | ✗ | ~ | ✗ | ~ | ✗ | Message-passing | Julia | ✗ |
| [**Infer.net**](https://dotnet.github.io/infer/) | ~ | ✓ | ✗ | ✓ | ✗ | Message-passing | C# | ✗ |
| [**PGMax**](https://github.com/google-deepmind/PGMax) | ✗ | ✓ | ✗ | ✓ | ✗ | Message-passing | Python | ✗ |

**Notes**:
- **Universality**: Denotes the capability to depict a vast array of probabilistic models.
- **Efficiency**: Highlights computational competence. A "~" in this context suggests perceived slowness.
- **Expressiveness**: Assesses the ability to concisely formulate intricate probabilistic models.
- **Debugging & Visualization**: Evaluates the suite of tools for model debugging and visualization.
- **Modularity**: Reflects the potential to create models by integrating smaller models.
5 changes: 1 addition & 4 deletions docs/src/manuals/customization/postprocess.md
# [Inference results postprocessing](@id user-guide-inference-postprocess)

[`infer`](@ref) allows users to postprocess the inference result with the `postprocess = ...` keyword argument. The inference engine operates on __wrapper__ types to distinguish between marginals and messages. By default, these wrapper types are removed from the inference results if no addons option is present. With addons enabled, however, the wrapper types are preserved in the inference result output value. Use the options below to change this behaviour:
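
For instance, a hedged sketch of keeping the wrapper types intact (the model and dataset names are hypothetical; `NoopPostprocess` is assumed to be the no-op strategy):

```julia
# skip the default unwrapping step and keep the wrapper types (a hedged sketch)
result = infer(
    model       = my_model(),        # hypothetical model
    data        = (y = dataset,),    # hypothetical dataset
    postprocess = NoopPostprocess()
)
```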

```@docs
inference_postprocess
```
10 changes: 5 additions & 5 deletions docs/src/manuals/inference/streamlined.md
# [Streaming (online) inference](@id manual-online-inference)

This guide explains how to use the [`infer`](@ref) function for dynamic datasets. We'll show how `RxInfer` can continuously update beliefs asynchronously whenever a new observation arrives. We'll use a simple Beta-Bernoulli model as an example, which has been covered in the [Getting Started](@ref user-guide-getting-started) section,
but keep in mind that these techniques can apply to any model.
## [Callbacks](@id manual-online-inference-callbacks)

The [`RxInferenceEngine`](@ref) has its own lifecycle. The callbacks differ slightly from [Using callbacks with Static Inference](@ref manual-static-inference-callbacks).
Here are the available callbacks that can be used with streaming inference:
```@eval
using RxInfer, Test, Markdown
# Update the documentation below if this test does not pass
nothing #hide
```

## [Event loop](@id manual-online-inference-event-loop)

In contrast to [`Static Inference`](@ref manual-static-inference), the streaming version of the [`infer`](@ref) function does not provide callbacks such as `on_marginal_update`, since it is possible to subscribe directly to those updates with the `engine.posteriors` field. However, the reactive inference engine provides the ability to listen to its internal event loop, which also includes "pre" and "post" events for posterior updates.
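
For instance, a hedged sketch of such a direct subscription (the variable name `θ` is an assumption carried over from the Beta-Bernoulli example):

```julia
# subscribe to the stream of posterior updates for θ (a hedged sketch)
subscription = subscribe!(engine.posteriors[:θ], posterior -> println("New posterior: ", posterior))

# release the subscription once the updates are no longer needed
unsubscribe!(subscription)
```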

The `:before_stop` and `:after_stop` events are not emitted when the datastream completes. Use the `:on_complete` event instead.


## [Using `data` keyword argument with streaming inference](@id manual-online-inference-data)

The streaming version supports static datasets as well. Internally, it converts the dataset to a datastream that emits all observations in sequential order without any delay.

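As an example, here is a hedged sketch; the model and autoupdates names are assumptions carried over from earlier in this manual:

```julia
results = infer(
    model       = beta_bernoulli_online(),     # hypothetical streaming model
    data        = (y = [1.0, 0.0, 1.0],),      # a static dataset instead of a live datastream
    autoupdates = beta_bernoulli_autoupdates,  # hypothetical autoupdates specification
    keephistory = 3,                           # keep the full history of posteriors
    autostart   = true
)
```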
14 changes: 13 additions & 1 deletion docs/src/manuals/meta-specification.md
!!! warning
The above example is not mathematically correct. It is only used to show how we can work with `@meta` as well as how to create a meta structure for a node in `RxInfer.jl`.

Read more about the `@meta` macro in the [official documentation](https://reactivebayes.github.io/GraphPPL.jl/stable/) of GraphPPL.

## Adding metadata to nodes in submodels

Similar to the `@constraints` macro, the `@meta` macro exposes syntax to push metadata to nodes in submodels. With the `for meta in submodel` syntax, we can apply metadata to nodes within a given submodel. For example, if we use the `gaussian_model_with_meta` model in a larger model, we can write:

```@example custom-meta
custom_meta = @meta begin
for meta in gaussian_model_with_meta
NormalMeanVariance(y) -> MetaConstrainedMeanNormal(-2, 2)
end
end
```
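
The resulting `custom_meta` object can then be passed to the [`infer`](@ref) function via its `meta` keyword argument.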
2 changes: 2 additions & 0 deletions docs/src/manuals/migration-guide-v2-v3.md
Initialization of messages and marginals to kickstart the inference procedure was previously done with the `initmessages` and `initmarginals` keywords. With the introduction of nested model specification in the `@model` macro, we now need a more specific way to initialize messages and marginals. This is done with the new `@initialization` macro. The syntax for the `@initialization` macro is similar to that of the `@constraints` and `@meta` macros. An example is shown below:

```@example migration-guide
@model function submodel() end #hide

@initialization begin
# Initialize the marginal for the variable x
q(x) = vague(NormalMeanVariance)
    # a hedged completion of the truncated example — initialize the message
    # for a (hypothetical) variable z
    μ(z) = vague(NormalMeanVariance)
end
```
6 changes: 2 additions & 4 deletions docs/src/manuals/model-specification.md
where `model_arguments...` may include both hyperparameters and data.
`model_arguments` are converted to keyword arguments. Positional arguments in the model specification are not supported.
Thus it is not possible to use Julia's multiple dispatch for the model arguments.

The `@model` macro returns a regular Julia function (in this example `model_name()`) which can be executed as usual. The only difference here is that all arguments of the model function are treated as keyword arguments. Upon calling, the model function returns a so-called model generator object, e.g.:

```@example model-specification-model-macro
using RxInfer #hide
nothing #hide
```

The model generator is not a real model (yet). For example, in the code above, we haven't specified anything for the `observation`.
The generator object allows us to iteratively add extra properties to the model, condition on data, and/or assign extra metadata information without actually materializing the entire graph structure. Read more about the model generator [here](@ref lib-model-construction).
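
For instance, a hedged sketch of conditioning a generator on data (the argument names mirror the snippet above and are illustrative; the `|` operator and `RxInfer.create_model` are used as in the library documentation):

```julia
# condition the generator on data; the graph is still not materialized
conditioned = model_name(hyperparameter = 1.0) | (observation = 1.0,)

# materialize the factor graph once everything is in place
model = RxInfer.create_model(conditioned)
```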

## A state space model example

6 changes: 6 additions & 0 deletions src/model/plugins/initialization_plugin.jl

"""
@initialization

Macro for specifying the initialization state of a model. Accepts either a function or a block of code.
Allows the specification of initial messages and marginals that can be applied to a model in the `infer` function.
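
A hedged usage sketch (the variable names and the `infer` call are illustrative):

    init = @initialization begin
        q(x) = vague(NormalMeanVariance)
    end

    result = infer(model = my_model(), data = (y = data,), initialization = init)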
"""
macro initialization(init_body)
return esc(RxInfer.init_macro_interior(init_body))
end