[WIP]: normalize_shocks and normalize_levels allow user to impose true propositions on sims #1094
base: master
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master    #1094      +/-   ##
==========================================
- Coverage   73.65%   73.64%    -0.01%
==========================================
  Files          69       69
  Lines       10579    10595       +16
==========================================
+ Hits         7792     7803       +11
- Misses       2787     2792        +5

Continue to review full report at Codecov.
@Mv77, could you review this? And, if you have the right permissions, merge it? It passes all tests and should not affect any existing results, since if the normalize booleans are not set, all it does is divide some things by 1.0. PS. I'm interested to see how much improvement there is from this versus from Harmenberg aggregation. And I can't see any reason they couldn't be combined, to achieve even more improvement. What I really want to do, though, is to get the "reshuffling" technology implemented. @wdu9, you might be interested too. Thanks for your earlier input -- you put me on the right track.
@llorracc I am starting to take a look now. Will get to the code soon, but I have a couple of conceptual questions.
This is exactly right. It goes back to whether we are thinking of our discretizations as defining an exact model that is being solved on its own terms, or whether we are thinking of them as approximations of a "true" model in which the shock is, say, really continuously lognormally distributed. My preference is strongly for the latter, because it is the more general formulation. If two solutions to a model differ because one used a 5-point and the other a 7-point approximation, then whichever of them is closer to what you would get with an infinite-point approximation is the one defined as "closer" to the truth. I'd rather do shuffling than dividing, because shuffling has the virtue that the simulated outcomes are numerically identical to the computations that went into the expectations, and it is deeply attractive to have identical calculations going into the solution and the simulation phases. But implementing shuffling would require considerably more work, and it is possible that dividing by the mean gets 95 percent of the benefits -- that is something I want to figure out.
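To make the "shuffling" idea concrete, here is a rough sketch (the function name `shuffled_shocks` is made up, and this is not code that exists in HARK): the discrete approximation's shock values are handed out to agents in their exact population proportions and then randomly permuted, so the simulated cross-section reproduces exactly the points and weights used when the expectations were computed.

```python
import numpy as np

def shuffled_shocks(atoms, pmv, n_agents, rng=None):
    """Hand out the discretized shock values in their exact population
    proportions, then randomly permute who gets which value.

    `atoms` are the discrete approximation's shock values and `pmv` their
    probabilities -- the same objects the solution step uses to compute
    expectations. Sketch of the "reshuffling" idea only; it assumes
    pmv * n_agents is (close to) a vector of integers.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.round(np.asarray(pmv) * n_agents).astype(int)
    counts[-1] = n_agents - counts[:-1].sum()   # force the counts to sum to n_agents
    draws = np.repeat(atoms, counts)            # each value appears in its population share
    return rng.permutation(draws)               # shuffle the assignment across agents
```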
No, PermGroFac is a separate object from PermShk. Throughout, we have always insisted that E[PermShk]=1, and have handled either life cycle patterns or aggregate growth using PermGroFac. So, it's not a problem to impose E[PermShk]=1.
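As a standalone illustration (using HARK's variable names, but not HARK code): the permanent-income transition multiplies last period's level by PermGroFac and a mean-one PermShk, so all trend growth is carried by PermGroFac and imposing E[PermShk]=1 leaves it untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents = 100_000
PermGroFac = 1.02                # life-cycle / aggregate growth factor
sigma = 0.1                      # std dev of the log permanent shock

# Mean-one lognormal shock: E[PermShk] = 1 because the log-mean is -sigma^2/2
PermShk = rng.lognormal(-sigma**2 / 2, sigma, n_agents)
pLvlPrev = np.ones(n_agents)

# Trend growth comes from PermGroFac; PermShk only spreads it across agents
pLvlNow = PermGroFac * PermShk * pLvlPrev
print(pLvlNow.mean())            # approximately 1.02, i.e. PermGroFac
```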
This may be a very good catch: I have not examined the simulation code carefully enough to determine whether, as I had assumed, the draw of the permanent shock occurs before the calculation of b or m. If it does, we're fine. That was the case in my original Mathematica code, so I assumed it was also the case in our HARK simulations; if not, this step may need to be moved to some earlier point. (Though, if that is the case, maybe some renaming is in order, since I think of "transitions" as defining how you get from t-1 to t, and if some of that has already happened by the time you reach "transitions", our nomenclature may not be ideal.)
Fully agree, but you are imposing it here: HARK/HARK/ConsumptionSaving/ConsIndShockModel.py, line 1809 (commit 8f0057e).
Here is the relevant code: HARK/HARK/ConsumptionSaving/ConsIndShockModel.py, lines 1795 to 1816 (commit 8f0057e).
Notice
Oh, yes, you're right about that. When I did that, my thought was: "the right way to handle this is to have an aggregate PLvl variable that tracks the aggregate movements and an idiosyncratic pLvl whose mean should always be 1, but I have a sneaking suspicion we have not done it that way ... even though it is being done that way in the particular case I'm working with right now (Harmenberg aggregation)."
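A sketch of that decomposition (hypothetical names, not the current HARK implementation): an aggregate index absorbs any drift in the cross-sectional mean of permanent income, so the idiosyncratic pLvl always averages exactly 1.

```python
import numpy as np

def normalize_levels(pLvl, PLvlAgg):
    """Split permanent income into an aggregate index and idiosyncratic levels
    whose cross-sectional mean is exactly 1.0.

    Sketch only: `PLvlAgg` is a hypothetical aggregate index, and this is not
    the HARK implementation of the PR's `normalize_levels` option.
    """
    mean_pLvl = pLvl.mean()
    PLvlAgg = PLvlAgg * mean_pLvl   # fold any drift in the mean into the aggregate index
    pLvl = pLvl / mean_pLvl         # idiosyncratic levels now average exactly 1
    return pLvl, PLvlAgg
```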
This implements a simple version of an old idea from CDC's Mathematica code: simulation efficiency can be substantially improved by imposing on the stochastic draws facts that we know are true in the population, such as the fact that the average values of the permanent and transitory shocks are 1 in each period.
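As a rough sketch of that mechanism (illustrative code only, not the PR's actual implementation; plain lognormal draws stand in for HARK's shock-drawing machinery): after drawing the period's shocks, each vector is divided by its own cross-sectional mean, so the finite sample satisfies the population fact that the mean permanent and transitory shocks are exactly 1.

```python
import numpy as np

def draw_normalized_shocks(n_agents, sigma_perm, sigma_tran, rng=None,
                           normalize_shocks=True):
    """Draw mean-one lognormal permanent and transitory shocks for one period
    and, if requested, rescale each vector so its sample mean is exactly 1.0.
    """
    rng = np.random.default_rng() if rng is None else rng
    perm_shk = rng.lognormal(-sigma_perm**2 / 2, sigma_perm, n_agents)
    tran_shk = rng.lognormal(-sigma_tran**2 / 2, sigma_tran, n_agents)
    if normalize_shocks:
        # Impose on the finite sample the population fact E[shock] = 1
        perm_shk /= perm_shk.mean()
        tran_shk /= tran_shk.mean()
    return perm_shk, tran_shk
```

With `normalize_shocks=False` the function just returns the raw draws, which is the sense in which leaving the option off "divides some things by 1.0" and cannot change existing results.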