Reduce the state machine in parameter derivative routines #5292
Thanks for thinking this through. I agree the current implementation is a mess. How much effort do you think this would be? Would this still allow parameter derivatives to be called at the WavefunctionComponent level?
My main concern is that we have been trying to keep the data dependence of estimators intelligible. That's less of a concern if these are just a context available only to the self-healing estimator that can be supplied once per section or once per block. I don't have a ton of time today, but aren't the variables basically owned by the BaseCostFunction? Are we talking about making a reference to these available to the SelfHealingEstimator, or a different set of them? When you say evaluate, I assume you mean once per measurement, i.e. in the fully concurrent accumulate.
I don't think this would change the data dependence. This estimator currently depends (and still would depend) primarily on the trial wavefunction. The only difference is requiring parameter derivatives to be correct at call time in an estimator, just as they are with electron coordinate derivatives.
If you need to access them inside the accumulate, then the data dependence has changed. At what scope are the derivatives the accumulation depends on mutable? Or are we talking about a list of active variables that the estimator uses to call evaluateDerivatives every accumulation? Apologies for not keeping up with the self-healing methodology.
My intention was to resolve the implicit dependency. Intelligent data sharing between estimators can be viewed as an optimization that eliminates redundant calculation. We can do that once we have a basic implementation working that allows evaluating derivatives in each estimator. @PDoakORNL
A quick and dirty fix would be moving the checkOut into each evaluateDerivatives call.
That is one of the goals. So yes.
OK, I did not really see how tricky this was; I get the difficulty now.
Can we get around the linear search by using hash tables instead? Or is some type of index mapping array better? |
Is there evidence that the linear search is unacceptably slow? (Looking for the fastest route to get something working here. Optimize later when proven to be needed). |
Yes, agreed. We should prioritize getting to a first functioning version over speed.
I will note that two parameter derivatives are needed: dPsi/dp and dE_local/dp. |
The state machine was invented for WFOpt. `checkInVariables` queries all the optimizable pieces in the TWF, and then we create a `globalVars`. `checkOutVariables` updates the per-piece `myVars` by storing its index in the `globalVars`. During `evaluateDerivatives(globalVars)`, `myVars` and its indices are used. Clearly the global indices stored in all the per-piece `myVars` are the state machine. A solution is to keep the action of `checkOutVariables` within `evaluateDerivatives`, so that it can be paired with an arbitrary number of `globalVars`. This extends the machinery to estimators, and even to any number of estimators enabled in a single run. Each feature that needs `evaluateDerivatives` calls simply registers its own `FeatureVars` by calling `checkInVariables`.