change to pmap step in computeNextIteration #47

Open · wants to merge 18 commits into base: `master`
23 changes: 13 additions & 10 deletions README.md
@@ -1,3 +1,15 @@
# Forked version of SMM.jl

This is a modified version of SMM.jl with two changes for the STBNews project:

1. A change to parallel dispatch to improve memory usage (in `algoBGP.jl`). Note that the STB code uses `algoSTB.jl`, in the `stb-model` project.
2. Allow passing options to the objective function (in `mprob.jl::evaluateObjective`).

To install, type the following in your Julia REPL:

```julia
] add https://github.com/gregobad/SMM.jl#better_parallel
```
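
Change 2 can then be used roughly as follows. This is a hedged sketch: `addEvalFuncOpts!` is the new export added in this PR, but its exact signature and the shape of the options container are assumptions, as is the keyword-forwarding pattern in the objective function.

```julia
using SMM

m = MProb()   # set up sampled parameters and moments as usual

# assumed pattern: register the objective together with extra options,
# which evaluateObjective forwards on every call
function myobjective(ev::Eval; data_path = "", kwargs...)
    # ... compute simulated moments using the forwarded options ...
    return ev
end

addEvalFuncOpts!(m, myobjective, Dict(:data_path => "/tmp/moments.csv"))
```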


# SMM.jl: Simulated Method of Moments for Julia
@@ -11,15 +23,6 @@

This package provides a `Julia` infrastructure for *[Simulated Method of Moments](http://en.wikipedia.org/wiki/Method_of_simulated_moments)* estimation, or other problems where we want to optimize a non-differentiable objective function. The setup is suitable for all kinds of **likelihood-free estimators** - in general, those require evaluating the objective at many regions. The user can supply their own algorithms for generating successive new parameter guesses. We provide a set of MCMC template algorithms. The code can be run in serial or on a cluster.
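
The workflow described above can be sketched as follows. The names come from this package's exports (`MProb`, `addSampledParam!`, `addMoment!`, `addEvalFunc!`, `MAlgoBGP`), but the argument orders and option keys shown here are assumptions, not a definitive usage guide.

```julia
using SMM

m = MProb()
addSampledParam!(m, :p, 0.5, 0.0, 1.0)   # name, start value, lower, upper (assumed order)
addMoment!(m, :m1, 0.9, 1.0)             # name, data value, weight (assumed order)
addEvalFunc!(m, myobjective)             # myobjective(ev::Eval) fills in a value

# an MCMC algorithm over several chains; option keys are assumptions
algo = MAlgoBGP(m, Dict("N" => 3, "maxiter" => 200))
run!(algo)
```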


## Installation

In your julia REPL, type

```julia
] add SMM
```

## Documentation

[![][docs-stable-img]][docs-stable-url]
@@ -105,7 +108,7 @@ New algorithms:

## History

This package grew out of the [R package `mopt`](https://github.com/tlamadon/mopt).
This package grew out of the [R package `mopt`](https://github.com/tlamadon/mopt).

## Thanks to all Contributors!

21 changes: 9 additions & 12 deletions src/SMM.jl
@@ -5,7 +5,7 @@ module SMM
# Dependencies
# ############

using Distributions
using Distributions
using DataFrames, DataFramesMeta
using OrderedCollections
using PDMats
@@ -28,16 +28,17 @@ import Base.get, Base.write, Base.==
import Base.show, Statistics.mean, Statistics.median

# exports: Types
export MProb, Eval,MAlgo, MAlgoBGP
export MProb, Eval, MAlgo, MAlgoBGP

# exports: methods
export addParam!,
addSampledParam!,
addMoment!,
addEvalFunc!,
setMoments!,
export addParam!,
addSampledParam!,
addMoment!,
addEvalFunc!,
addEvalFuncOpts!,
setMoments!,
setValue!,
readEvalArray ,
readEvalArray ,
dataMoment,
dataMomentd,
dataMomentW,
@@ -75,7 +76,3 @@ include("mopt/econometrics.jl")


end # module




57 changes: 42 additions & 15 deletions src/mopt/AlgoBGP.jl
@@ -56,7 +56,7 @@ mutable struct BGPChain <: AbstractChain
sigma_update_steps :: Int64 # update sampling vars every sigma_update_steps iterations
sigma_adjust_by :: Float64 # adjust sampling vars by sigma_adjust_by percent up or down
smpl_iters :: Int64 # max number of trials to get a new parameter from MvNormal that lies within support
min_improve :: Float64
min_improve :: Float64
batches :: Vector{UnitRange{Int}} # vector of indices to update together.

"""
@@ -93,7 +93,7 @@ mutable struct BGPChain <: AbstractChain
this.m = m
# how many bundles + rest
nb, rest = divrem(np,batch_size)
this.sigma = sig
this.sigma = sig
this.batches = UnitRange{Int}[]
i = 1
for ib in 1:nb
@@ -262,7 +262,7 @@ end

Computes the next `Eval` for chain `c`:

1. Get last accepted param
1. Get last accepted param
2. get a new param via [`proposal`](@ref)
3. [`evaluateObjective`](@ref)
4. Accept or Reject the new value via [`doAcceptReject!`](@ref)
@@ -293,6 +293,28 @@ function next_eval(c::BGPChain)

end

"""
next_acceptreject(c::BGPChain, ev::Eval)

Stores `ev` to chain `c`, given already computed obj fn value:

1. Accept or Reject the new value via [`doAcceptReject!`](@ref)
2. Store `Eval` on chain `c`.

"""
function next_acceptreject(c::BGPChain, ev::Eval)
@debug "iteration = $(c.iter)"

# accept reject
doAcceptReject!(c,ev)

# save eval on chain
set_eval!(c,ev)

return c

end


"""
doAcceptReject!(c::BGPChain,eval_new::Eval)
@@ -392,7 +414,7 @@ end
"""
proposal(c::BGPChain)

Gaussian Transition Kernel centered on current parameter value.
Gaussian Transition Kernel centered on current parameter value.

1. Map all ``k`` parameters into ``\\mu \\in [0,1]^k``.
2. update all parameters by sampling from `MvNormal`, ``N(\\mu,\\sigma)``, where ``sigma`` is `c.sigma` until all params are in ``[0,1]^k``
@@ -411,8 +433,8 @@ function proposal(c::BGPChain)

# map into [0,1]
# (x-a)/(b-a) = z \in [0,1]
mu01 = mapto_01(mu,lb,ub)
mu01 = mapto_01(mu,lb,ub)

# Transition Kernel is q(.|theta(t-1)) ~ TruncatedN(theta(t-1), Sigma,lb,ub)

# if there is only one batch of params
@@ -571,21 +593,26 @@ function computeNextIteration!( algo::MAlgoBGP )
# incrementBGPChainIter!(algo.chains)


# TODO
# this is probably inefficient
# ideally, would only pmap evaluateObjective, to avoid large data transfers to each worker (now we're transferring each chain back and forth to each worker.)
if get(algo.opts, "parallel", false)
for i in 1:length(algo.chains)
algo.chains[i].iter +=1 #increment iteration on master
end

# if get(algo.opts, "parallel", false)
cs = pmap( x->next_eval(x), algo.chains ) # this does proposal, evaluateObjective, doAcceptRecject
# else
pps = map(proposal, algo.chains) # proposals on master

evs = pmap(x -> evaluateObjective(algo.m, x), get(algo.opts, "worker_pool", default_worker_pool()), pps) # pmap only the objective function evaluation step

cs = map(next_acceptreject, algo.chains, evs) # doAcceptRecject, set_eval

else
# for i in algo.chains
# @debug(logger," ")
# @debug(logger," ")
# @debug(logger,"debugging chain id $(i.id)")
# next_eval!(i)
# end
# cs = map( x->next_eval(x), algo.chains ) # this does proposal, evaluateObjective, doAcceptRecject
# end
cs = map( x->next_eval(x), algo.chains ) # this does proposal, evaluateObjective, doAcceptRecject
end
# reorder and insert into algo
for i in 1:algo.opts["N"]
algo.chains[i] = cs[map(x->getfield(x,:id) == i,cs)][1]
@@ -624,7 +651,7 @@ function exchangeMoves!(algo::MAlgoBGP)

# algo["N"] exchange moves are proposed
props = [(i,j) for i in 1:algo["N"], j in 1:algo["N"] if (i<j)]
# N pairs of chains are chosen uniformly in all possibel pairs with replacement
# N pairs of chains are chosen uniformly in all possible pairs with replacement
samples = algo["N"] < 3 ? algo["N"]-1 : algo["N"]
pairs = sample(props,samples,replace=false)

21 changes: 13 additions & 8 deletions src/mopt/Eval.jl
@@ -1,9 +1,9 @@


"""
# `Eval` type for managing function evaluations
# `Eval` type for managing function evaluations

## fields
## fields

* `value`: current function value
* `time`: timing of evaluation
@@ -16,7 +16,7 @@
* `accepted`: whether draw was accepted
* `options`: Dict of options and other info

"""
"""
mutable struct Eval

value :: Float64
@@ -92,7 +92,7 @@ mutable struct Eval
this.prob = 0.0
this.accepted = false

for kk in keys(mprob.moments)
for kk in keys(mprob.moments)
this.dataMoments[kk] = mprob.moments[kk][:value]
this.dataMomentsW[kk] = mprob.moments[kk][:weight]
end
@@ -117,7 +117,7 @@ mutable struct Eval
this.prob = 0.0
this.accepted = false

for kk in keys(mprob.moments)
for kk in keys(mprob.moments)
this.dataMoments[kk] = mprob.moments[kk][:value]
this.dataMomentsW[kk] = mprob.moments[kk][:weight]
end
@@ -232,7 +232,7 @@ function setMoments!(ev::Eval,d::DataFrame)
end


function getBest(evs::Array{Eval,1})
function getBest(evs::Array{Eval,1})
best_val = Inf
best = nothing
for ev in evs
@@ -258,11 +258,11 @@ function check_moments(ev::Eval)
r = join(d,dsim, on=:moment)
r[:distance] = r[:simulation] .- r[:data]
r[:abs_percent] = 100 .* abs.(r[:distance] ./ r[:data])
r[:abs_percent_2] = (r[:distance] ./ r[:data] ).^2
r[:abs_percent_2] = (r[:distance] ./ r[:data] ).^2
r[:abs_percent_1000] = r[:abs_percent] ./ 1000
r[:abs_percent_2_1000] = r[:abs_percent_2] ./ 1000
if length(ev.dataMomentsW) > 0
r[:MSE_SD] = (r[:distance] ./ r[:data_sd] ).^2
r[:MSE_SD] = (r[:distance] ./ r[:data_sd] ).^2
r[:abs_percent_SD_weighted] = abs.(r[:distance]./ r[:data_sd] )
r[:MSE_SD_1000] = r[:MSE_SD] ./ 1000
r[:abs_percent_SD_weighted_1000] = r[:abs_percent_SD_weighted] ./ 1000
@@ -282,3 +282,8 @@ function show(io::IO,e::Eval)
print(io,collect(keys(e.dataMoments)))
print(io,"\n===========================\n")
end


function reInit!(m::MProb, ev::Eval)
reInit!(m, ev.params)
end