- upgrade CI to python 3.9
- remove numba functions from the core package
- import tree_map from tree_util
- fix VMap module
- fix deployment of 0.6.1
- patch vmap_lift_with_state (now relies on hk.experimental.lift_with_state)
- add stateful.py module with vmap_lift_with_state and unroll_lift_with… (#65)
- remove tensorflow and pytorch from requirements
- remove dependency on eagerpy (#62)
- correct imports of tree utils to avoid FutureWarnings (#63)
- replace jax.tree_multi_map (removed in JAX release v0.3.16) with jax.tree_map
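  For reference, the migration is mechanical, since tree_map accepts multiple trees just as tree_multi_map did; importing it from jax.tree_util also avoids the FutureWarning mentioned above:

  ```python
  import jax.numpy as jnp
  from jax.tree_util import tree_map  # preferred import path, avoids FutureWarning

  t1 = {"a": jnp.array(1.0), "b": jnp.array(2.0)}
  t2 = {"a": jnp.array(10.0), "b": jnp.array(20.0)}

  # Before (removed in JAX v0.3.16): jax.tree_multi_map(lambda x, y: x + y, t1, t2)
  out = tree_map(lambda x, y: x + y, t1, t2)  # {"a": 11.0, "b": 22.0}
  ```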
- faster unroll: do not propagate params in scan state (#61)
- update to JAX v0.3.10 (#59)
- add optimizers (#56), (#58)
- EWMA alignment with pandas and speedup (#53). This adds the options (usage sketch below):
  * com
  * min_periods
  * ignore_na
  * return_info
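  A minimal usage sketch of these options, with the module driven through the Haiku-style unroll API introduced below; the import paths and exact defaults are assumptions here:

  ```python
  import jax
  import jax.numpy as jnp
  from wax.modules import EWMA
  from wax.unroll import unroll_transform_with_state  # Haiku-style unroll, see below

  def fun(x):
      # com / min_periods / ignore_na mirror the pandas ewm semantics;
      # return_info=True additionally returns internal state information.
      return EWMA(com=10.0, min_periods=3, ignore_na=True, return_info=True)(x)

  x = jnp.arange(20, dtype=jnp.float32)
  sim = unroll_transform_with_state(fun)
  rng = jax.random.PRNGKey(42)
  params, state = sim.init(rng, x)
  (mean, info), state = sim.apply(params, state, rng, x)
  ```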
- [wax_numba] add a numba implementation of the EWMA, extending the pandas one with the additional modes we have in WAX (illustrative sketch below):
  * adjust='linear' mode
  * initial_value parameter
  * state management for online usage and warm start of the EWMA
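  The state-management idea can be sketched as follows; this is an illustrative toy (plain adjust=False recursion only), not the actual wax_numba code:

  ```python
  import numpy as np
  from numba import njit

  @njit
  def ewma_online(x, com, mean=np.nan):
      # `mean` is the carried state: feed the returned value back in
      # to warm-start the EWMA on the next chunk of data.
      alpha = 1.0 / (1.0 + com)
      out = np.empty_like(x)
      for i in range(x.shape[0]):
          if np.isnan(mean):
              mean = x[i]
          else:
              mean = (1.0 - alpha) * mean + alpha * x[i]
          out[i] = mean
      return out, mean

  x = np.random.randn(100)
  out1, state = ewma_online(x[:50], 10.0)
  out2, _ = ewma_online(x[50:], 10.0, state)  # same result as one full pass
  ```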
- add numba to requirements
- [EWMA] use log1com as a haiku parameter to ease training with gradient descent (sketch below).
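  A hypothetical minimal module showing why this parameterization helps: with log1com = log(1 + com) as the raw parameter, gradient descent operates on a smooth, well-scaled scalar across orders of magnitude of com, with alpha = 1 / (1 + com) = exp(-log1com). This is a sketch, not the actual wax implementation:

  ```python
  import jax.numpy as jnp
  import haiku as hk

  class TrainableEWMA(hk.Module):
      """Hypothetical sketch: EWMA with a learnable smoothing parameter."""

      def __call__(self, x):
          # Raw parameter log1com = log(1 + com); the smoothing factor is
          # alpha = 1 / (1 + com) = exp(-log1com).
          log1com = hk.get_parameter("log1com", shape=(), init=jnp.ones)
          alpha = jnp.exp(-log1com)
          # Carried state: initialize the running mean with the first input.
          mean = hk.get_state("mean", shape=x.shape, init=lambda *_: x)
          mean = (1.0 - alpha) * mean + alpha * x
          hk.set_state("mean", mean)
          return mean
  ```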
- Align EWMCov and EWMVar with EWMA (#55)
- [PctChange] correct PctChange module to align with pandas behavior. Introduce fillna_zero option.
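  For reference, the pandas behavior being aligned with: pct_change leaves the first step as NaN, which the new fillna_zero option (keyword name taken from this entry; exact behavior is an assumption) would replace with 0:

  ```python
  import pandas as pd

  s = pd.Series([1.0, 2.0, 4.0])
  print(s.pct_change().tolist())  # [nan, 1.0, 1.0] -- first step has no previous value

  # With the wax PctChange module, fillna_zero=True would yield [0.0, 1.0, 1.0].
  ```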
- [modules] faster EWMA in adjust=True mode.
- [unroll] split rng in two rng keys.
- [VMap] VMap module works in contexts without PRNG key
- [online optimizer] refactor:
  - refactor OnlineOptimizer outputs: only return loss, model_info, opt_loss by default. New option return_params to return params in outputs.
  - OnlineOptimizer returns updated params if return_params is set to True.
- [newton optimizer] use NamedTuple instead of base.OptState
- [unroll] propagate pbar argument to static_scan
- [unroll] Renew the PRNG key in the unroll operations
- refactor usage of OnlineOptimizer in notebooks
- format with latest version of black
- require jax<=0.2.21
- add graphviz to optional dependencies
- upgrade jupytext to 1.13.3
- use python 3.8 in CI and documentation
- Documentation:
  - New notebook: 07_Online_Time_Series_Prediction
  - New notebook: 08_Online_learning_in_non_stationary_environments
- API modifications:
  - refactor accessors and stream
  - GymFeedback now assumes that agent and env return an info object
  - OnlineSupervisedLearner action is y_pred; loss and params are returned as info
- Improvements:
  - introduce general unroll transformation
  - dynamic_unroll can handle Callable objects
  - UpdateOnEvent can handle any signature for functions
  - EWMCov can handle the x and y arguments explicitly
  - add initial action option to GymFeedback
- New Features:
  - New module UpdateParams
  - New modules SNARIMAX, ARMA
  - New module OnlineOptimizer
  - New module VMap
  - add grads_fill_nan_inf option to OnlineSupervisedLearner
  - Introduce unroll_transform_with_state following the Haiku API (see the sketch after this list)
  - New functions auto_format_with_shape and tree_auto_format_with_shape
  - New module Ffill
  - New module Counter
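  A usage sketch of unroll_transform_with_state, mirroring hk.transform_with_state: the transformed function is initialized and applied on data whose leading axis is time (the import path is assumed here):

  ```python
  import jax
  import jax.numpy as jnp
  import haiku as hk
  from wax.unroll import unroll_transform_with_state  # assumed import path

  def fun(x):
      # any Haiku-style function of a single time step
      return hk.Linear(1)(x)

  xs = jnp.ones((20, 3))  # leading axis = time
  sim = unroll_transform_with_state(fun)
  rng = jax.random.PRNGKey(0)
  params, state = sim.init(rng, xs)
  outputs, final_state = sim.apply(params, state, rng, xs)
  ```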
- Deprecate:
  - deprecate dynamic_unroll and static_unroll; refactor their usages
- Fixes:
  - Simplify Buffer to work only on ndarrays (the implementation on pytrees was too complex)
  - EWMA behaves correctly with gradient
  - MaskStd behaves correctly with gradient
  - correct encode_int64 when working on int32
  - update notebook 06_Online_Linear_Regression and add it to the run-notebooks rule
  - correct pct_change to behave correctly when input data has NaN values
  - correct eagerpy test for updates of tensorflow, pytorch and jax
  - remove duplicate license comments
  - use numpy.allclose instead of jax.numpy.allclose for comparison of non-JAX objects
  - update comment in notebooks: jaxlib==0.1.67+cuda111 to jaxlib==0.1.70+cuda111
  - fix jupytext dependency
  - add seaborn as optional dependency
- First release.