Streamline `run_inference_algorithm` and the streaming average #713
`run_inference_algorithm` currently has a mode that incrementally computes a value (e.g. a streaming expectation), which saves memory. This PR presents a cleaner (and, in my opinion, more functional) version of that code. The core idea is that to incrementally compute an expectation, the right thing to do is to modify the kernel and then pass that kernel to `run_inference_algorithm`, rather than modify `run_inference_algorithm` itself. As part of this change, we also let `transform` be a general function of `(state, info)`, which allows information from the `info` to be used (something we have needed).
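To make the idea concrete, here is a minimal sketch (not blackjax's actual API; the names `with_streaming_average`, `toy_kernel`, and `run` are hypothetical) of wrapping a kernel so that each step also updates a streaming average, then passing the wrapped kernel to an ordinary `scan`-based driver whose `transform` is a function of `(state, info)`:

```python
import jax
import jax.numpy as jnp


def with_streaming_average(kernel, f):
    """Wrap `kernel` so its state carries (inner_state, step_count, running_mean).

    The running mean of f(inner_state) is updated incrementally, so no
    trajectory needs to be stored to compute the expectation.
    """
    def wrapped(rng_key, state):
        inner_state, n, mean = state
        new_inner, info = kernel(rng_key, inner_state)
        n = n + 1
        # Incremental mean update: m_n = m_{n-1} + (x_n - m_{n-1}) / n
        mean = mean + (f(new_inner) - mean) / n
        return (new_inner, n, mean), info
    return wrapped


def toy_kernel(rng_key, x):
    """A stand-in Markov kernel: a Gaussian random walk (no accept/reject)."""
    return x + 0.1 * jax.random.normal(rng_key), None


def run(rng_key, kernel, init_state, num_steps,
        transform=lambda state, info: state):
    """Generic scan-based driver; `transform` sees both state and info."""
    keys = jax.random.split(rng_key, num_steps)

    def step(state, key):
        state, info = kernel(key, state)
        return state, transform(state, info)

    final_state, history = jax.lax.scan(step, init_state, keys)
    return final_state, history


# Run the wrapped kernel; the expectation lives inside the kernel's state,
# and `transform` merely reads the running mean out of (state, info).
kernel = with_streaming_average(toy_kernel, lambda x: x)
state0 = (jnp.array(0.0), jnp.array(0), jnp.array(0.0))
final_state, means = run(
    jax.random.PRNGKey(0), kernel, state0, num_steps=1000,
    transform=lambda state, info: state[2],
)
```

The point of the refactor is visible here: `run` knows nothing about streaming averages; all the incremental-expectation logic lives in the wrapped kernel, and `transform` taking `(state, info)` lets callers surface whatever combination of state and diagnostics they need.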