# Release 0.5.0
### New features
- Optimization callback functionality has been improved. A dedicated `Callback`
  class has been added that can access the optimizer, the cost function, the
  parameters, and the gradients during the optimization. In addition, multiple
  callbacks can be specified. This opens up endless possibilities for customizing
  the optimization with schedulers, trackers, heuristics, tricks, etc.
  (#219)
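  As a minimal sketch (assuming `Callback` can be imported from
  `mrmustard.training` like `TensorboardCallback`, and that subclasses override a
  `call` hook; the hook name and its keyword arguments are assumptions, not the
  documented API):

  ```python
  from mrmustard.training import Callback

  class CostLogger(Callback):
      """Hypothetical callback that prints the cost at each optimization step."""

      def call(self, optimizer, cost, **kwargs):  # assumed hook signature
          print(f"current cost: {cost}")
  ```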
- Tensorboard-based optimization tracking is added as a builtin `Callback` class:
  `TensorboardCallback`. It can automatically track costs as well as all trainable
  parameters during optimization in real time. Tensorboard can be most conveniently
  viewed from VS Code.
  (#219)

  ```python
  import numpy as np
  from mrmustard.training import Optimizer, TensorboardCallback

  def cost_fn():
      ...

  def as_dB(cost):
      delta = np.sqrt(np.log(1 / (abs(cost) ** 2)) / (2 * np.pi))
      cost_dB = -10 * np.log10(delta**2)
      return cost_dB

  tb_cb = TensorboardCallback(cost_converter=as_dB, track_grads=True)

  opt = Optimizer(euclidean_lr=0.001)
  opt.minimize(cost_fn, max_steps=200, by_optimizing=[...], callbacks=tb_cb)

  # Logs will be stored in `tb_cb.logdir`, which defaults to `./tb_logdir/...`
  # but can be customized. VS Code can be used to open the Tensorboard frontend
  # for live monitoring. Or, in the command line, run
  # `tensorboard --logdir={tb_cb.logdir}` and open the link in a browser.
  ```
- Gaussian states support a `bargmann` method for returning the Bargmann
  representation.
  (#235)
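  A usage sketch, assuming the method returns the triple `(A, b, c)` defining the
  Bargmann amplitude (the return convention here is an assumption):

  ```python
  from mrmustard.lab import Gaussian

  # Hypothetical: Bargmann triple of a random 1-mode Gaussian state.
  A, b, c = Gaussian(1).bargmann()
  ```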
- The `ket` method of `State` now supports new keyword arguments `max_prob` and
  `max_photons`. Use them to speed up the filling of a ket array up to a certain
  probability or total photon number.
  (#235)

  ```python
  from mrmustard.lab import Gaussian

  # Fills the ket array up to 99% probability or up to the |0,3>, |1,2>, |2,1>,
  # |3,0> subspace, whichever is reached first. The array has the autocutoff
  # shape, unless the cutoffs are specified explicitly.
  ket = Gaussian(2).ket(max_prob=0.99, max_photons=3)
  ```
- Gaussian transformations support a `bargmann` method for returning the Bargmann
  representation.
  (#239)
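  A sketch mirroring the state case above; the `(A, b, c)` return convention is
  again an assumption:

  ```python
  from mrmustard.lab import Sgate

  # Hypothetical: Bargmann triple of a squeezing gate.
  A, b, c = Sgate(r=0.2).bargmann()
  ```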
- `BSgate.U` now supports `method='vanilla'` (default) and `'schwinger'` (slower,
  but stable to any cutoff).
  (#248)
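  A usage sketch; only the `method` keyword comes from the note above, while the
  `cutoffs` argument is an assumption about the signature of `U`:

  ```python
  from mrmustard.lab import BSgate

  # 'schwinger' trades speed for numerical stability at high cutoffs (assumed call).
  U = BSgate(theta=0.5, phi=0.1).U(cutoffs=[20, 20], method="schwinger")
  ```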
### Breaking Changes
- The previous `callback` argument to `Optimizer.minimize` is now `callbacks`,
  since multiple callbacks can now be passed to it.
  (#219)
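  For example, continuing the `TensorboardCallback` snippet above and using the
  hypothetical `CostLogger` from the earlier sketch (passing a list is an
  assumption based on the plural argument name):

  ```python
  # Multiple callbacks can now be passed; a single callback still works.
  opt.minimize(cost_fn, max_steps=200, by_optimizing=[...], callbacks=[tb_cb, CostLogger()])
  ```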
- The `opt_history` attribute of `Optimizer` no longer has the placeholder at the
  beginning.
  (#235)
### Improvements
- The math module now has a submodule `lattice` for constructing recurrence
  relation strategies in the Fock lattice. There are a few predefined strategies
  in `mrmustard.math.lattice.strategies`.
  (#235)
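  The import path below comes from the note above; since the individual strategy
  names are not listed here, a safe way to discover them is:

  ```python
  # Hypothetical exploration of the predefined strategies.
  from mrmustard.math.lattice import strategies

  print([name for name in dir(strategies) if not name.startswith("_")])
  ```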
- Gradients in the Fock lattice are now computed using the vector-Jacobian
  product. This saves a lot of memory and speeds up the optimization process by
  roughly 4x.
  (#235)
- Tests of the `compact_fock` module now use Hypothesis.
  (#235)
- Faster implementation of the Fock representation of `BSgate`, `Sgate` and
  `SqueezedVacuum`, ranging from 5x to 50x.
  (#239)
- More robust implementation of cutoffs for States.
  (#239)
- Dependencies and versioning are now managed using Poetry.
  (#257)
### Bug fixes
- Fixed a bug that would make two progress bars appear during an optimization.
  (#235)
- The displacement of the dual of an operation had the wrong sign.
  (#239)
- When projecting a Gaussian state onto a Fock state, the upper limit of the
  autocutoff now respects the Fock projection.
  (#246)
- Fixed a bug in the algorithms that allow faster PNR sampling from Gaussian
  circuits using density matrices. When the cutoff of the first detector is equal
  to 1, the resulting density matrix is now correct.