
HARK 2.0 pre-alpha/code for BST notebook and dashboard #1055

Open · wants to merge 179 commits into master

Conversation

@MridulS (Member) commented Aug 5, 2021

I went in and cleaned up a bit more from the Update-0p11-for-QE-revised branch and removed the boilerplate code files.
This one only has the changes made to the models.

continued from #1050

I didn't want to make changes to the active branches, so I created a new one; the branch for this code is Update-0p11-for-QE-revised-remove-sphinx.

@sbenthall (Contributor)

Thanks @MridulS -- this will be a lot easier to review!

@llorracc (Collaborator) commented Aug 5, 2021

@MridulS thanks. Is there a tool to do this specifically? If I had just deleted the Sphinx files (and the pytest-generated ones), it still would have wanted to replace the upstream files with my deletions. And I don't want to .gitignore those (especially the built Sphinx docs), because when they are updated remotely I want to get them locally.

In your revised PR, it is much easier to see that almost everything falls into one of three categories:

  • ConsIndShockModel.py and core.py
  • Files like ConsIndShockModel_AgentDicts.py, which are read in by the new (stub) ConsIndShockModel.py
  • Minor/cosmetic edits to Jupyter notebooks

One small thing I was a bit confused about -- I merged in the changes Seb merged a couple of days ago, and it looked like Seb had removed my changes to DiscreteDistribution, where it inherited from DiscreteDistributionOld and then added new content. Since BST relies on the new version, I have reinstated that bit, but perhaps I misunderstood this part of Seb's merge.

@sbenthall (Contributor)

> One small thing I was a bit confused about -- I merged in the changes Seb merged a couple of days ago, and it looked like Seb had removed my changes to DiscreteDistribution, where it inherited from DiscreteDistributionOld and then added new content. Since BST relies on the new version, I have reinstated that bit, but perhaps I misunderstood this part of Seb's merge.

That sounds like an accurate description of changes I made to the 'innocuous changes' PR before merging.

I've opened #1051 for the pmf/pmv issue, which I think should be properly implemented in HARK, and #1053 to discuss the interface to calc_expectations.

@llorracc (Collaborator) commented Aug 5, 2021 via email

@sbenthall (Contributor)

These were the changes I made before I merged #1048
bc2051d

This was in accord with my review.
#1048 (comment)

I gathered merging this PR was urgent.

@llorracc (Collaborator) commented Aug 5, 2021

Just tested it and this works with the public version of the BST notebook. Yay!

# vector of point masses; here it is called the pmv (probability mass vector)


class DiscreteDistribution(DiscreteDistributionOld):
Review comment (Contributor):

To be addressed #1051
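
For reference, a toy illustration of the inherit-and-rename pattern under discussion (the real classes live in HARK.distribution; the attributes and constructor below are invented for the sketch and do not claim to match them):

```python
import numpy as np


class DiscreteDistributionOldSketch:
    """Stand-in for the legacy class, which stores the point masses under 'pmf'."""

    def __init__(self, pmf, X):
        self.pmf = np.asarray(pmf)  # legacy name
        self.X = np.asarray(X)


class DiscreteDistributionSketch(DiscreteDistributionOldSketch):
    """Inherits the old behavior and exposes the new 'pmv' name as an alias."""

    @property
    def pmv(self):
        # probability mass vector: new name, same underlying data
        return self.pmf
```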

pseudo = (agent.pseudo_terminal is True)
if not pseudo:  # Then it's a real solution; should be part of the list
    solution.insert(0, deepcopy(agent.solution_terminal))
completed_cycles = 0  # NOQA
Review comment (Contributor):

This feature of being able to break off and restart a solution maybe can be split out and brought into HARK 1.0

if not go:  # Finished solving
    # All models should incorporate 'stge_kind', but some have not
    # Handle cases where that has not yet been implemented:
    if not hasattr(solution_now.Bilt, 'stge_kind'):
Review comment (Contributor):

what is "Bilt"?

Reply (Collaborator):

It is a deliberate misspelling of "Built" - results that are constructed or might be modified in the course of the solution (as distinct from "Parm" which is parameters fed into the code).

PS. I know your preference is for words that are deliberately spelled correctly. A virtue of my choice of variable names is that if we can agree on a better alternative, a global search and replace will break neither normal text nor code.
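
To make the Bilt/Parm split concrete, here is a minimal sketch using SimpleNamespace containers; the attribute names and values are illustrative only, not the actual HARK 2.0 API:

```python
from types import SimpleNamespace

# Illustrative only: keep inputs and constructed results in separate containers.
stage = SimpleNamespace()
stage.Parm = SimpleNamespace(CRRA=2.0, DiscFac=0.96)  # parameters fed into the code
stage.Bilt = SimpleNamespace()                        # results built during the solution

# Anything computed while solving goes in Bilt, so a later rename of "Bilt" is a
# safe global search-and-replace that cannot collide with ordinary text or code.
stage.Bilt.cFunc = lambda m: 0.5 * m                  # placeholder consumption rule
stage.Bilt.stge_kind = {'iter_status': 'iterator'}    # hypothetical contents
```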

to be tested. Must have dict 'conditions' [name]

quiet : boolean
    If True, construct the conditions but print nothing
Review comment (Contributor):

I don't understand why there needs to be both a verbosity and a quietness setting.

# 1. use it pervasively
# * in which case there should be a companion set_parameter
# or
# 2. Eliminate it
Review comment (Contributor):

This is a good point.
I believe the issue is that the parameters dictionary was implemented for a few good reasons, but for backwards compatibility, it was made consistent with the 'whiteboard' way of accessing parameters through object attributes.
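
A hedged sketch of that backwards-compatibility pattern (the class, its methods, and the parameter names are invented for illustration and do not claim to match HARK's implementation):

```python
class AgentSketch:
    """Toy agent: parameters live in one dict but stay reachable as attributes."""

    def __init__(self, **parameters):
        self.parameters = dict(parameters)  # canonical store

    def set_parameter(self, name, value):
        # the 'companion set_parameter' suggested above keeps the dict in sync
        self.parameters[name] = value

    def __getattr__(self, name):
        # fall back to the dict for 'whiteboard'-style attribute access
        try:
            return self.parameters[name]
        except KeyError:
            raise AttributeError(name) from None


agent = AgentSketch(CRRA=2.0, DiscFac=0.96)
agent.set_parameter('CRRA', 3.0)
assert agent.CRRA == 3.0  # attribute access reads through the dict
```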

# 'approximation' pars
self.aprox_par = {'tolerance': self.tolerance, 'seed': self.seed}
# if lim exists, sol approaches truth as all aprox_par -> lim
self.aprox_lim = {'tolerance': 0.0, 'seed': None}
Review comment (Contributor):

From logging.DEBUG to logging.CRITICAL

quietly : boolean
    If True, suppress output
Review comment (Contributor):

again.. if you want it to be quiet, couldn't you set the messaging level to something minimal?

Reply (Collaborator):

There are some circumstances in which you may want things to run quietly without disturbing the messaging level set by the user. It would be a pain (and error-prone) to save the level in something like self.messaging_level_originally_requested_by_user, set the logging level to suppress all output, and then restore it afterwards. There are a couple of other use cases as well; for example, you might want to suppress output while debugging because you know what it will be and it just clutters things up.

Reply (Contributor):

A pain for whom? Error-prone for whom?
It seems natural for the method-level logging level to be scoped to the particular method, while leaving a global logging level in place as the default.
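
As an illustration of the scoped alternative (a sketch only, not anything in HARK's code), a context manager can override the logger level for a single call and restore the caller's setting automatically, which also addresses the save-and-restore concern above:

```python
import logging
from contextlib import contextmanager

_log = logging.getLogger('HARK')  # logger name assumed for the example


@contextmanager
def log_level(level):
    """Temporarily override the logger level; the caller's setting is restored on exit."""
    previous = _log.level
    _log.setLevel(level)
    try:
        yield
    finally:
        _log.setLevel(previous)


# e.g. run a check quietly without a separate `quietly` flag:
with log_level(logging.CRITICAL):
    _log.info("suppressed: below the temporary threshold")
```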

@sbenthall (Contributor)

@llorracc you mention some handling of EOP shocks in this comment: #1039 (comment)

I've been looking for the corresponding changes in this PR but haven't been able to find them amid all the other changes.

I wonder if you could point to them directly.

'chosen_to_next_BOP': {}, # or
'chosen_to_next_choice': {},
'EOP_to_next_BOP': {}, # or
'EOP_to_next_choice': {}, # EOP: End of Problem/Period
Review comment (Contributor):

Support for BOP and EOP here:
#1039 (comment)

class Prospectations(SimpleNamespace):
    """Expectations prior to the realization of current period shocks."""

    pass
Review comment (Contributor):

given the ambiguity about the timing of the shocks... maybe it would be better to have these namespaces be numbered, or indexed by the name of some variable, rather than named

'BOP_to_choice': BOP_to_choice
}

return possible_transitions
Review comment (Contributor):

a lot of structure is implied by the code here.
I wonder if it would be possible to require that the equations get laid out, and then have the structure (such as the timing of the shocks) be inferred from the model specification.
That's more along the lines of what I'm working on in #865

_log.info('\t' + str(Tran[key]['raw_text'][eqn_name]))

def check_conditions(self, messaging_level=logging.DEBUG, quietly=False):
"""
Review comment (Contributor):

I wonder if the check_conditions functionality is properly part of the solver functionality or part of the type definition


Bilt.__dict__.update({k: v for k, v in Modl.Rewards.vals.items()})

return soln
Review comment (Contributor):

I am confused why a utility function (which knows its derivatives, which is an awesome addition!) returns a solution object.

Seems like a mathematical function is more basic than a solution.

Reply (Collaborator):

This was part of a pattern of building up the contents of the solution object with standalone commands like

soln = self.def_utility_CRRA()
soln = self.def_value_funcs()

Probably would be an improvement to have self.def_utility_CRRA() return a 'utility' object, def_value_funcs a vFunc, etc. Then the calling code could become:

soln.u = self.def_utility_CRRA()
soln.vFunc = self.def_value_funcs()

The way these kinds of things are done right now is rather haphazard and inconsistent. An improvement in the code would be to systematize and regularize these kinds of things.
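
A rough sketch of how that might look (a standalone function; the `der` attribute and the names below are made up for illustration, not HARK's API):

```python
def def_utility_CRRA(CRRA):
    """Build a CRRA utility function that carries its own derivative (assumes CRRA != 1)."""

    def u(c):
        return c ** (1.0 - CRRA) / (1.0 - CRRA)

    def u_prime(c):
        return c ** (-CRRA)

    u.der = u_prime  # attach the derivative to the function object itself
    return u


# The calling code would then assemble the solution explicitly, e.g.:
#   soln.u = def_utility_CRRA(CRRA=2.0)
#   soln.vFunc = def_value_funcs(...)
```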

init_perfect_foresight_plus.update( # In principle, kinks exist all the way to infinity
{'aprox_par': {'MaxKinks': '500'}})
init_perfect_foresight_plus.update( # In principle, kinks exist all the way to infinity
{'aprox_lim': {'MaxKinks': float('inf')}})
Review comment (Contributor):

see #914

CRRA_fcts.update({'prmtv_par': 'True'})
init_perfect_foresight_plus['prmtv_par'].append('CRRA')
# init_perfect_foresight_plus['_fcts'].update({'CRRA': CRRA_fcts})
init_perfect_foresight_plus.update({'CRRA_fcts': CRRA_fcts})
Review comment (Contributor):

Could you remind me what the different fcts are for?
Some seem to be about presentation.

Maybe a more explicit presentation layer is in order.

Reply (Collaborator):

Perhaps a better alternative would be to use a Python feature introduced in 3.9: variables can now have documentation/explanations directly attached to them. Basically, it seems to allow what I'm doing by adding _fcts, but as a built-in part of Python.

I've long wondered why programming languages don't all allow this, rather than expecting users to hunt down potential explanations in the code. No good reason, I think.

init_perfect_foresight_plus.update({'T_age_fcts': T_age_fcts})

T_cycle_fcts = {
    'about': 'Number of periods in a "cycle" (like, a lifetime) for this agent type'}
Review comment (Contributor):

maybe this can be inferred from the documentation

As all aprox parameters approach their limits simultaneously,
the numerical solution should converge to the 'true' solution
that would be obtained with infinite computational power


'permShk': shks_permuted[permPos] + zeros, # + zeros fixes size
'tranShk': shks_permuted[tranPos] + zeros, # all pmts for each st
'aNrm': starting_states
}
Review comment (Contributor):

could the names of variables be extracted out for more generalizability?

@sbenthall (Contributor)

This PR will never be merged into HARK, though it has been used as a source of inspiration for more recent work.
Can it be closed (to take it off the active PRs list)?
