149 custom optimization loop #151
Conversation
@jgallowa07 I think we would still want to look at trajectories for our objective function over iterates, not necessarily the error metric (which is a norm on the step taken in parameter space).
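To make the distinction concrete, here's a minimal sketch (plain NumPy, toy quadratic objective; every name here is illustrative, not multidms code) recording both diagnostics per iterate: the objective value and the step-norm error.

```python
import numpy as np

def objective(theta):
    # toy smooth objective for illustration
    return 0.5 * np.sum(theta ** 2)

def grad(theta):
    return theta

theta = np.ones(10)
losses, errors = [], []
for _ in range(100):
    theta_next = theta - 0.1 * grad(theta)              # gradient step
    losses.append(objective(theta_next))                # objective trajectory
    errors.append(np.linalg.norm(theta_next - theta))   # norm of the step taken
    theta = theta_next
```

The two traces can disagree: the step norm can stall near zero while the objective is still decreasing, which is why looking at both is informative.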
@wsdewitt I think I agree, but are you suggesting that we don't need to look at error at all? For context, here's the … This is with a tolerance of … So a few questions come to mind:
These figures are from the simulated data run for 100K iterations with FISTA acceleration, with a tolerance of …: loss (objective w/o penalty) and error (…).
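For reference, a minimal FISTA sketch (NumPy, on a hypothetical lasso problem; not the multidms implementation) showing the accelerated proximal update, the loss-without-penalty trace, and a stopping rule on the step norm:

```python
import numpy as np

def soft_threshold(v, lam):
    # proximal operator of lam * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def fista(A, b, lam, step, tol=1e-6, maxiter=100_000):
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    losses, errors = [], []
    for _ in range(maxiter):
        g = A.T @ (A @ y - b)                               # gradient of smooth part
        x_next = soft_threshold(y - step * g, step * lam)   # prox step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)    # momentum extrapolation
        errors.append(np.linalg.norm(x_next - x))           # error: norm of step taken
        losses.append(0.5 * np.sum((A @ x_next - b) ** 2))  # loss w/o penalty
        x, t = x_next, t_next
        if errors[-1] < tol:
            break
    return x, losses, errors

# usage on a tiny synthetic problem
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
x, losses, errors = fista(A, b, lam=0.1, step=1e-2)
```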
Thanks Jared! Indeed, the model isn't converging. Given the oscillatory loss and error traces, I'm guessing it's a line search (step size) issue. Can you try increasing line search iterations to …?
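If the fit is driven by jaxopt's ProximalGradient (an assumption, though it fits the JAX stack here), the relevant knob is `maxls`, the maximum number of backtracking line-search iterations (default 15). A sketch with toy data:

```python
import jax.numpy as jnp
from jaxopt import ProximalGradient
from jaxopt.prox import prox_lasso

def smooth_loss(params, X, y):
    # smooth part of the objective; the lasso penalty is handled by the prox
    return jnp.mean((X @ params - y) ** 2)

X = jnp.ones((5, 3))
y = jnp.zeros(5)
solver = ProximalGradient(
    fun=smooth_loss,
    prox=prox_lasso,
    acceleration=True,  # FISTA
    tol=1e-8,
    maxiter=1000,
    maxls=100,          # raised from the default of 15
)
params, state = solver.run(jnp.zeros(3), hyperparams_prox=0.1, X=X, y=y)
```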
HOWEVER, it seems double precision (re #150) is exactly what we needed. Here's a direct comparison of single vs. double error trajectories on the simulation data. Let's test this out on the spike data first, but if the results look good I think we should be good to clean up and merge this PR.
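For anyone reproducing the comparison: double precision in JAX is opt-in and must be enabled before any arrays are created, e.g.

```python
import jax
jax.config.update("jax_enable_x64", True)  # re #150: enable float64 globally

import jax.numpy as jnp
assert jnp.ones(1).dtype == jnp.float64    # default dtype is now double
```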
Commits 10a6569 to 379805e
I've moved the prototype notebook to [notebooks/param_transform_ref_equivariance_prototype.ipynb](notebooks/param_transform_ref_equivariance_prototype.ipynb) and squashed the commits. With that, I think we can merge this PR so that we may branch off …
This PR addresses #149
Currently, the custom update loop has been added, along with a `Model.iter_error` property that holds the error per iteration for the last call to `Model.fit()`. It also updates `multidms.model_collection.fit_models` to conform to the new fitting heuristic (a minimal sketch of the resulting interface follows the TODO list).

TODO:
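For reviewers, a minimal sketch of the interface described above; only `Model.fit()` and `Model.iter_error` are named in this PR, and the toy gradient update and stopping rule below are stand-ins for the real solver step:

```python
import numpy as np

class Model:
    def __init__(self, dim=10):
        self._params = np.ones(dim)
        self._iter_error = []

    @property
    def iter_error(self):
        # error per iteration for the last call to fit()
        return self._iter_error

    def fit(self, maxiter=100_000, tol=1e-8, stepsize=0.1):
        self._iter_error = []
        for _ in range(maxiter):
            new = self._params - stepsize * self._params  # stand-in update step
            error = np.linalg.norm(new - self._params)    # norm of step taken
            self._params = new
            self._iter_error.append(error)
            if error < tol:  # stop once the step norm falls below tolerance
                break
        return self
```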