# Merge `rxinference` and `inference` functions into `infer` #190
Conversation
I guess we can make it mandatory and allow
**Codecov Report**

```
@@            Coverage Diff            @@
##              main    #190     +/-  ##
==========================================
- Coverage    80.31%  80.31%   -0.01%
==========================================
  Files           11      11
  Lines         1285    1290       +5
==========================================
+ Hits          1032    1036       +4
- Misses         253     254       +1
```
I recommend not halting this PR for long @bvdmitri
Some documentation has been lost in the process. Also the function documentation is kinda big. Do we want to maybe make a documentation page instead (I'm not sure, just asking your opinion)?
- `events = nothing`: inference cycle events, optional, see below for more info (exclusive for streamline inference)
- `uselock = false`: specifies whether to use a lock structure for the inference; if set to `true`, uses `Base.Threads.SpinLock`. Accepts a custom `AbstractLock`. (exclusive for streamline inference)
- `autostart = true`: specifies whether to call `RxInfer.start` on the created engine automatically (exclusive for streamline inference)
- `warn = true`: enables/disables warnings
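For illustration, here is a rough sketch of how these streaming-exclusive options fit together. This is a hedged example, not code from this PR: the toy model, the Rocket.jl stream, and the initial marginals are assumed placeholders written in the RxInfer 2.x style.

```julia
using RxInfer, Rocket

# Assumed toy model for online Beta-Bernoulli estimation (illustrative only)
@model function beta_bernoulli()
    a = datavar(Float64)   # prior parameters, refreshed by `@autoupdates`
    b = datavar(Float64)
    θ ~ Beta(a, b)
    y = datavar(Float64)   # streamed observation
    y ~ Bernoulli(θ)
end

# Feed the posterior parameters of θ back in as the next prior
autoupdates = @autoupdates begin
    a, b = params(q(θ))
end

# Assumed observation stream built with Rocket.jl
observations = from(float.(rand(Bernoulli(0.7), 100))) |>
    map(NamedTuple{(:y,), Tuple{Float64}}, (v) -> (y = v,))

engine = infer(
    model         = beta_bernoulli(),
    datastream    = observations,
    autoupdates   = autoupdates,
    initmarginals = (θ = Beta(1.0, 1.0),),
    autostart     = false,  # do not consume the stream right away ...
    uselock       = true,   # ... and guard the engine with a SpinLock
    warn          = true
)

RxInfer.start(engine)       # start the inference loop manually
```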
`catch_exception` argument explanation is missing (did we have it?)
I believe the short explanation was given in line 1493, but the extended description was indeed missing. Added now.
- `pipeline`: changes the default pipeline for each factor node in the graph
- `global_reactive_scheduler`: changes the scheduler of reactive streams, see Rocket.jl for more info, defaults to no scheduler

### `returnvars`
Something has been lost here; the `returnvars` argument behaves differently for the batched and streamlined versions. I think a part of the documentation has been lost in the process.
Fixed.
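To make the difference concrete, a hedged sketch of both modes; `my_model`, `dataset`, `observations`, and `autoupdates` are assumed placeholders. In the batch setting `returnvars` takes `KeepLast()`/`KeepEach()` specifications, while in the streaming setting it is (to my understanding) just a collection of variable names.

```julia
# Batch: choose how posteriors are kept across VMP iterations
result = infer(
    model      = my_model(),                       # assumed model
    data       = (y = dataset,),                   # assumed dataset
    iterations = 10,
    returnvars = (x = KeepLast(), θ = KeepEach())  # last vs. every iteration
)

# Streaming: name the variables whose posterior streams you want to observe
engine = infer(
    model       = my_model(),
    datastream  = observations,                    # assumed Rocket.jl stream
    autoupdates = autoupdates,                     # assumed `@autoupdates` block
    returnvars  = (:x, :θ)
)
```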
**Note**: The `predictvars` argument is exclusive to the batch setting.
### `historyvars`
`historyvars` is exclusive to the streamline setting
I think this is said in both the short and extended descriptions.
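For reference, a hedged sketch of how `historyvars` pairs with `keephistory` in the streaming setting (all names are assumed placeholders):

```julia
engine = infer(
    model       = my_model(),                        # assumed model
    datastream  = observations,                      # assumed Rocket.jl stream
    autoupdates = autoupdates,                       # assumed `@autoupdates` block
    keephistory = 100,                               # keep the last 100 updates
    historyvars = (x = KeepLast(), θ = KeepEach()),  # what to record per update
    autostart   = true
)

# After (part of) the stream has been consumed:
engine.history[:x]   # recorded history of posteriors for x
```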
src/inference.jl (outdated)
Specifies the number of variational (or loopy belief propagation) iterations. By default set to `nothing`, which is equivalent to doing one iteration.
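As a quick illustration (model and data are assumed placeholders), the two calls below should therefore be equivalent in the number of iterations performed:

```julia
# `iterations = nothing` (the default) performs a single iteration
result_default = infer(model = my_model(), data = (y = dataset,))
result_one     = infer(model = my_model(), data = (y = dataset,), iterations = 1)

# An integer runs that many variational iterations
result_vmp     = infer(model = my_model(), data = (y = dataset,), iterations = 15)
```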
### `free_energy`
The `free_energy` option behaves differently for the streamlined and batched versions. In the streamlined version it is indeed an observable, but in the batched version it is just a vector.
(again, probably some documentation has been lost in the process)
Fixed.
src/inference.jl (outdated)
This setting specifies whether the `infer` function should create an observable of Bethe Free Energy (BFE) values. The BFE observable emits a new computed value for each VMP iteration. Note, however, that it may not be possible to compute BFE values for every model. If `free_energy = true` and `keephistory > 0`, the engine exposes extra fields to access the history of the Bethe Free Energy updates:

- `engine.free_energy_history`: Returns a free energy history averaged over the VMP iterations
This is exclusive to the streamlined version; the batched version does not have those fields.
Addressed
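To summarise the batched/streamlined difference in code, a hedged sketch (model, data, and stream names are assumed placeholders):

```julia
# Batched: `free_energy = true` yields a plain vector, one BFE value per iteration
result = infer(
    model       = my_model(),
    data        = (y = dataset,),
    iterations  = 10,
    free_energy = true
)
result.free_energy            # Vector of 10 BFE values

# Streamlined: `free_energy = true` creates an observable; combined with
# `keephistory > 0` the engine also records the BFE history
engine = infer(
    model       = my_model(),
    datastream  = observations,
    autoupdates = autoupdates,
    free_energy = true,
    keephistory = 100
)
engine.free_energy_history    # averaged over VMP iterations (see docstring above)
```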
Co-authored-by: Bagaev Dmitry <[email protected]>
Well done @albertpod! You can merge after the CI passes.
This PR addresses issue #181 and marks a significant update to RxInfer, wherein we merge the functionalities of `inference` and `rxinference` into a single, "umbrella" function: `infer`. This change simplifies the user interface by providing a unified method for conducting both static and streaming probabilistic inference.

The decision to use `infer` for static or streaming datasets is now determined by the presence of the `autoupdates` keyword. This keyword is crucial in controlling how updated posteriors are transformed into priors for the following observations, making it specifically applicable and essential for streaming (real-world) data scenarios.

Key Changes:

- **Unified Inference Functionality**: The `infer` function consolidates all features previously divided between `inference` and `rxinference`. The original functions have been internally renamed to `__inference` and `__rxinference` and are no longer exported. This unified approach enables `infer` to support both batch/static and streaming/online applications.
- **Updated Documentation**: Comprehensive updates have been made to the documentation to reflect the merging of `inference` and `rxinference`, including a detailed and clear docstring for the `infer` function to ensure ease of use and understanding.
- **Updated Examples**: All examples across the package have been updated to use the new `infer` function, demonstrating its application in various contexts.
- **Deprecation Notices**: Appropriate deprecation notices have been added for the `inference` and `rxinference` functions, guiding users towards the new unified approach. (I haven't managed to make `@deprecate inference(; kwargs...) infer(kwargs...)` work; it seems the `@deprecate` macro doesn't handle keyword arguments.)
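To illustrate the unified entry point, a minimal sketch of both code paths; the model, dataset, and stream below are assumed placeholders rather than code from this PR:

```julia
using RxInfer

# Static/batch inference (previously `inference`): no `autoupdates`
result = infer(
    model = my_model(),        # assumed model constructor
    data  = (y = dataset,)     # full dataset available upfront
)

# Streaming/online inference (previously `rxinference`): the presence of
# `autoupdates` selects the reactive path
engine = infer(
    model       = my_model(),
    datastream  = observations,            # assumed Rocket.jl observable
    autoupdates = @autoupdates begin
        a, b = params(q(θ))                # model-specific prior refresh
    end
)
```

On the `@deprecate` limitation mentioned above: a common workaround is a hand-written method that calls `Base.depwarn` and then forwards its keyword arguments to `infer(; kwargs...)`.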