Migrate nems.tf to nems.analysis #178
Comments
Perhaps lower priority, but something to keep in mind with the overhaul: the fitter was implemented in TF 1.0. Some backward-compatible routines are in place to take care of the fit, but if we're porting it into the nems.analysis / nems.fitters world, it may be worth using a more standard implementation. A related question is whether (and how) to use batching for situations where we don't have trials of uniform length (e.g., behavior data). Right now, the code is designed around a set of batches of equal length, which correspond to distinct stimuli / trials.
Is it just the TF portion that doesn't support batches of different lengths, or all of NEMS?
Only TF. The rest of NEMS doesn't have the concept of batches, at least in this context. "Batches" in NEMS are groups of cells that share the same stimulus, not subsets of data for one cell. So there are two meanings of "batch," which may also be confusing.
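On the non-uniform trial lengths: one common TF-side workaround (a sketch under assumptions, not how NEMS currently handles this) is to pad trials to a common length and carry a mask so the loss ignores the padded bins. All names and shapes below are hypothetical, purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical data: three trials of unequal length (e.g., behavior data),
# each shaped (time, channels).
trials = [np.random.rand(120, 18), np.random.rand(95, 18), np.random.rand(140, 18)]
responses = [np.random.rand(t.shape[0], 1) for t in trials]

max_len = max(t.shape[0] for t in trials)

def pad(arr, length):
    """Zero-pad a (time, ...) array along the time axis."""
    out = np.zeros((length,) + arr.shape[1:], dtype=arr.dtype)
    out[:arr.shape[0]] = arr
    return out

stim = np.stack([pad(t, max_len) for t in trials])      # (batch, time, chans)
resp = np.stack([pad(r, max_len) for r in responses])   # (batch, time, 1)
mask = np.stack([pad(np.ones((t.shape[0], 1)), max_len) for t in trials])

def masked_mse(y_true, y_pred, sample_mask):
    """MSE computed only over valid (unpadded) time bins."""
    err = tf.square(y_true - y_pred) * sample_mask
    return tf.reduce_sum(err) / tf.reduce_sum(sample_mask)
```

The mask would then need to be threaded through the fitter so gradient updates only ever see real samples.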
* #178 - migrate modelspec2tf to modelspec object
* #178 - add map layer function: converts modules to layers
* #178 - add tf to package list
* #178 - change modelspec2tf(modelspec) to modelspec.modelspec2tf()
* #178 - fix circular imports; update tf api to 2.0
* don't print sql, log instead
* #178 - tf.compat.v1 changes
* #190 - add tf compatible nmse and shrinkage; add loss keyword
* fix error in map_layer; change default cost_function in fit_tf_init
* divide by response, not prediction
* add loss types; fit train data; add non improving/tolerance early stopping
* add options for early stopping steps and tolerance; shape fix
* add inf iter for tf train
* early stopping bug fix
* shorten fitter keyword
* fix bug in early stopping
* fix bug in batch size updates
* track largest iter in tf to extra_results
* add learning rate keyword to tf
* add he and glorot uniform distribution initializers; add distr keyword to tf
* #193 add exacloud setting; make tf save use scratch space
* move initializers to tf/initializers
* move loss functions to tf/loss_functions; remove unused code
* change default fitter to gradient descent
* fix bug in _fit_net() where optimizer was not being set
* various tf code cleanup
* move tf import into function
* rename letters; also create parents in mkdir
* rename letters
* use env variable to detect if on exacloud
* insert pdb
* update save names for baphy figure/modelpath
* add logging
* don't overwrite meta
* add exacloud batch maker
* move job hist to user dir
* make writeable
* typo
* typo
* add logging
* change to print
* fix path
* cast paths to str
* add new line; testing don't run
* fix duplicate srun
* remove defaults
* final version
* fix run
* don't request cpu
* try without flags
* add some back in
* was it last newline?
* test
* add logging
* add newline
* direct both to same
* add error logging
* log working
* change job name and comment
* fix comment
* debugging strange variable fit behavior. Mostly testing fine with pop models

Co-authored-by: Alexander Tomlinson <[email protected]>
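Several commits above add tolerance-based early stopping ("non improving/tolerance early stopping", "options for early stopping steps and tolerance") to the custom TF training loop. For reference, the same pattern with the stock Keras callback looks roughly like this; the monitor, min_delta, and patience values are illustrative, not NEMS's actual keyword options:

```python
import tensorflow as tf

# Minimal sketch of tolerance/patience early stopping via Keras;
# NEMS's custom fit loop implements the equivalent logic by hand.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='loss',       # watch the training loss
    min_delta=1e-4,       # "tolerance": smallest change that counts as improvement
    patience=30,          # "non-improving steps" allowed before stopping
    restore_best_weights=True,
)
# model.fit(stim, resp, epochs=10000, callbacks=[early_stop])
```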
nems.tf.cnnlink.modelspec2tf
nems.tf.cnnlink.map_layer
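For context on the two entry points above: per the commit list, modelspec2tf builds a TF graph from a modelspec (later moved onto the modelspec object as modelspec.modelspec2tf()), and map_layer converts individual NEMS modules into TF layers. A hedged sketch of the module-to-layer dispatch idea; the module names, parameter keys, and layer choices here are hypothetical, not the real nems.tf.cnnlink implementation:

```python
import tensorflow as tf

def map_layer(module_name, phi):
    """Return a TF layer for a NEMS module, given its parameter dict phi.

    Illustrative only: real map_layer handles many more modules and
    initializes layer weights from phi rather than leaving them fresh.
    """
    if module_name.endswith('weight_channels.basic'):
        # Linear recombination of input channels, no bias.
        n_out = phi['coefficients'].shape[0]
        return tf.keras.layers.Dense(n_out, use_bias=False)
    if module_name.endswith('levelshift.levelshift'):
        # Additive DC offset on the prediction.
        return tf.keras.layers.Lambda(lambda x: x + phi['level'])
    raise NotImplementedError(f'no TF layer mapping for {module_name}')
```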