Variational inference #494

Merged · 253 commits · Jun 17, 2016
c8cf275
Fix broken syntax.
null-a Mar 21, 2016
55dfa66
add daipp package
ngoodman Mar 21, 2016
8a320fb
clean up daipp library a little
ngoodman Mar 21, 2016
9c42102
Merge branch 'daipp' of https://github.com/probmods/webppl into daipp
ngoodman Mar 21, 2016
bfb6e31
Re-work relative addressing.
null-a Mar 22, 2016
6f697af
Complain if example trace includes AD nodes.
null-a Mar 22, 2016
92baef1
First pass at bringing DAIPP closer to executable state.
dritchie Mar 23, 2016
705d5e3
Correctly untapify tensor params at sample stmts.
null-a Mar 24, 2016
4e6e286
small edits in daip.js
ngoodman Mar 25, 2016
d873f38
merge
ngoodman Mar 25, 2016
edd7351
clean up daip.js, add cache fn
ngoodman Mar 25, 2016
cbca136
handle empty array in val2vec
ngoodman Mar 25, 2016
33c7c94
add (probably wrong) squishnet for get right range in erp params
ngoodman Mar 25, 2016
0601c34
maybe fix tensor stuff in squishnet
ngoodman Mar 25, 2016
0b213be
make latentSize live in daipp js, provided as daipp.latentSize
ngoodman Mar 25, 2016
424ed1a
change vec2importanceERP to vec2dist, because it sounds nice
ngoodman Mar 25, 2016
c9e1ee0
clean things up a bit
ngoodman Mar 25, 2016
b74a705
More work on getting DAIPP ready to go (also some comments/questions)
dritchie Mar 28, 2016
a924a87
Merge branch 'dev' into daipp
null-a Mar 29, 2016
543b7de
Remove unused var.
null-a Mar 29, 2016
360e78a
Remove unused code.
null-a Mar 29, 2016
368066e
Add TODO note.
null-a Mar 29, 2016
a725688
Re-work parameter registration.
null-a Mar 29, 2016
0af45e9
added comment re address vectors
ngoodman Mar 29, 2016
ee822ba
add sketch of minimal daipp example, started fixing bugs
ngoodman Mar 29, 2016
7c80c1a
make daipp package name lowercase
ngoodman Mar 29, 2016
27a9e9c
Bind callback context.
null-a Mar 30, 2016
58538ef
tensorAdaptor takes length as first arg not array.
null-a Mar 30, 2016
b88166c
Fix typo in variable name.
null-a Mar 30, 2016
88141cf
Fix indentation.
null-a Mar 30, 2016
5104bdb
Need to test for undefined in strict eq test.
null-a Mar 30, 2016
d62acc4
Simplify Gaussian guide to get things running.
null-a Mar 30, 2016
3646090
Extract scalars from singleton tensors.
null-a Mar 30, 2016
0fe2f7d
Remove debug logging.
null-a Mar 30, 2016
5eef5ff
Fix order of args to score.
null-a Mar 30, 2016
7695325
Reduce stepSize for stability.
null-a Mar 30, 2016
1a9b427
Tweak output to get a sense of what's happening.
null-a Mar 30, 2016
9aac452
Throw error on unhandled ERP type.
null-a Mar 30, 2016
18047ea
Make it clearer the cases are mutually exclusive.
null-a Mar 30, 2016
d7de7c1
Prefer strict equality tests.
null-a Mar 30, 2016
fd92b83
Make adagrad work with most recent param storage.
null-a Mar 30, 2016
8b4f1fc
Add note.
null-a Mar 30, 2016
9391f40
Update adnn dependency.
null-a Mar 31, 2016
3f5d8fd
Merge branch 'dev' into daipp
null-a Apr 1, 2016
9cfac4a
Add SampleGuide coroutine.
null-a Apr 1, 2016
9c0002a
Test output of daipp 'hello world'.
null-a Apr 1, 2016
39a9966
Add file missed from last commit.
null-a Apr 1, 2016
e73f246
Warn if gradients are zero.
null-a Apr 4, 2016
c3b194a
Enable training of embedding nets.
null-a Apr 4, 2016
3e454ad
Add Note.
null-a Apr 4, 2016
6915e1e
Handle lifted numbers/tensors in val2vec.
null-a Apr 4, 2016
8553138
Fix bug in parameter registration.
null-a Apr 4, 2016
4035e3e
Add basic daipp testing infrastructure.
null-a Apr 4, 2016
7212c0e
Rename daipp test runner.
null-a Apr 4, 2016
59555dd
Bump adnn dependency version.
dritchie Apr 4, 2016
2e1941e
Made reshape net work with most recent adnn update.
dritchie Apr 4, 2016
89ce131
Add daipp test case.
null-a Apr 5, 2016
9373f23
Make SMC+rejuv work for guided programs.
null-a Apr 5, 2016
d0f4746
Support mini-batches in EUBO.
null-a Apr 5, 2016
46400d4
Add daipp test case.
null-a Apr 5, 2016
feaeb23
Clean-up daipp test cases.
null-a Apr 5, 2016
06cf81e
Re-work registerParams.
null-a Apr 5, 2016
f1022ff
Add note.
null-a Apr 5, 2016
31a8fc8
add sampleDaipp helper
ngoodman Apr 5, 2016
4bdd0c5
add simple qmr like example, fix typo in sampleDaipp helper.
ngoodman Apr 5, 2016
453747e
soften factor in simpleqmr example. add note.
ngoodman Apr 6, 2016
0d95e45
Log warning if gradients are not finite.
null-a Apr 6, 2016
75a725f
Only issue gradient warnings once per param.
null-a Apr 6, 2016
c3034a9
Add debug option to Optimize.
null-a Apr 6, 2016
f960cce
Fix daipp qmr example.
null-a Apr 6, 2016
77cf62e
Re-jig optimization method interface.
null-a Apr 6, 2016
8f392f1
Rename vars for consistency.
null-a Apr 6, 2016
d1ae98f
Fix rmsprop & adam.
null-a Apr 6, 2016
3953707
Remove unused method.
null-a Apr 6, 2016
ef19392
Tweak param cloning in SMC.
null-a Apr 6, 2016
5dd9f6b
Bump adnn version.
dritchie Apr 6, 2016
0a70a7f
Add notes.
null-a Apr 6, 2016
54f4d00
Use logGamma AD primitive.
null-a Apr 6, 2016
b477b4d
Remove tensor reshape, now implemented in adnn.
null-a Apr 7, 2016
58ec196
Update comment.
null-a Apr 7, 2016
cb996f7
Support diagCovGaussianERP in DAIPPguide.
null-a Apr 7, 2016
c14e3bf
Add linear regression daipp example.
null-a Apr 7, 2016
dc4122e
add commented out line for testing
ngoodman Apr 8, 2016
4c17034
Merge branch 'daipp' of https://github.com/probmods/webppl into daipp
ngoodman Apr 8, 2016
2ebd017
Use soft-plus instead of exp for one-sided bounds.
dritchie Apr 11, 2016
3cfbbea
merge
ngoodman Apr 11, 2016
762f305
Sketch of model structure.
null-a Apr 12, 2016
c0543bc
Merge branch 'daipp' of https://github.com/probmods/webppl into daipp
ngoodman Apr 12, 2016
89091fc
revise wakey sleepey sketch
ngoodman Apr 12, 2016
f7124cc
Show progress in Optimize.
null-a Apr 12, 2016
b93bb18
Initial implementation of mapData.
null-a Apr 12, 2016
cd69545
Re-work mapData.
null-a Apr 13, 2016
8479339
Option to persist mini-batches across optim steps.
null-a Apr 13, 2016
b72fa9e
Quick hack to save ELBO progress to disk.
null-a Apr 13, 2016
23c711d
Initial attempt at evaluating guides.
null-a Apr 13, 2016
9fa6b86
Make logistic faster by using ad.tensor.softmax
dritchie Apr 13, 2016
c2b248e
DAIPP guide for dirichlet distributions.
dritchie Apr 13, 2016
63e59ae
Basic DAIPP LDA example. Runs, but doesn't optimize very well (yet)
dritchie Apr 13, 2016
cb514e4
Tweak forEach in lda example.
null-a Apr 14, 2016
9ab9024
Update context as described in structure.wppl.
null-a Apr 14, 2016
525695d
Increase stepSize in lda example.
null-a Apr 14, 2016
2a2f75c
Reinstate previous coroutine before exiting.
null-a Apr 19, 2016
0715c7f
Re-write EvaluateGuide.
null-a Apr 19, 2016
39d0ecf
Simple model for EvaluateGuide sanity check.
null-a Apr 19, 2016
b845f2a
trying to find where the nans come from...
ngoodman Apr 19, 2016
b92be72
Merge branch 'daipp' of https://github.com/probmods/webppl into daipp
ngoodman Apr 19, 2016
d99a302
Fix logic in ELBO mini-batch handling.
null-a Apr 19, 2016
07aaa5b
comment out some asserts that were problematic
ngoodman Apr 19, 2016
fc93889
Fix assertions.
null-a Apr 20, 2016
e22ae38
Fix bug in mapData stack addresses.
null-a Apr 20, 2016
ddc9f7d
switch daipp back to tanh
ngoodman Apr 20, 2016
5cdd0c9
Add mean-field guide to lda example.
null-a Apr 20, 2016
60a1531
make evaluateGuide allow multiple data indices
ngoodman Apr 20, 2016
7783c38
make evaluateGuide allow multiple data indices
ngoodman Apr 20, 2016
f33a34f
Bump adnn version.
dritchie Apr 20, 2016
97e4412
Bump adnn version.
dritchie Apr 20, 2016
85a31cd
Bump adnn version.
dritchie Apr 22, 2016
1fc4df9
Remove tensor ops that are now in adnn.
null-a Apr 22, 2016
d2fd145
Tweak output of daipp 'tests'.
null-a Apr 22, 2016
f534dbe
Use adnn debug checks.
null-a Apr 22, 2016
b6b6849
Remove overridden inspect.
null-a Apr 22, 2016
ca58f7f
Check for NaN in ELBO objective.
null-a Apr 22, 2016
5108f28
Bump adnn version.
dritchie Apr 22, 2016
54d2061
Merge branch 'daipp' of https://github.com/probmods/webppl into daipp
ngoodman Apr 25, 2016
478ebe8
adding simpler example and some documentation of the bayesnets expts
ngoodman Apr 25, 2016
3434c92
Bump adnn version.
dritchie Apr 25, 2016
c9dd982
Merge branch 'daipp' of github.com:probmods/webppl into daipp
dritchie Apr 25, 2016
9c9f7a2
Lint.
null-a Apr 25, 2016
7b43519
Set defaults for debug/verbose opts in estimators.
null-a Apr 25, 2016
2bf09a0
Better estimator when not doing any reparam.
null-a Apr 25, 2016
c75ac1c
Remove out-of-date usage examples.
null-a Apr 25, 2016
76f9a22
Remove out-of-date comment.
null-a Apr 25, 2016
241425b
Init. tensor params with draws from Gaussian.
null-a Apr 25, 2016
9fd9f3e
Add basic VAE example.
null-a Apr 25, 2016
2839fed
fix mapData to return something
ngoodman Apr 26, 2016
0d8674b
Lint.
null-a Apr 28, 2016
07cff0b
Remove unused helper.
null-a Apr 28, 2016
ba74a3c
Merge branch 'dev' into daipp
null-a Apr 28, 2016
c288a91
Minor tweak to LDA example.
null-a Apr 29, 2016
0528887
Init. with normalized vector rather than array.
null-a May 4, 2016
941001f
Add momentum option to gradient descent.
null-a May 4, 2016
59ac3bd
Add AIR sketch.
null-a May 16, 2016
277d14a
Update AIR commentary.
null-a May 17, 2016
74debb8
Use 'Xavier initialization' for guide nets.
null-a May 6, 2016
b94abf0
Merge branch 'dev' into daipp
null-a May 17, 2016
3171f47
Use 'distribution' rather than 'ERP' in daipp.
null-a May 17, 2016
9ad58c5
Merge branch 'dev' into daipp
null-a May 17, 2016
2abd605
Update distribution docs.
null-a May 17, 2016
e50d501
Prefer strict equality checks.
null-a May 17, 2016
b049000
Add TODO note.
null-a May 17, 2016
5bc11c0
Rename MultivariateBernoulli param p => ps.
null-a May 17, 2016
16d4b8f
Add compound nets for recurrent units.
null-a May 18, 2016
d969459
Add gradient clipping.
null-a May 18, 2016
e2393ac
Add more daipp examples.
null-a May 18, 2016
45ab64b
Use RU constructor for update and predict nets.
null-a May 19, 2016
4df8cca
Use RU constructor for array RNN.
null-a May 19, 2016
3379c79
Remove figure.
null-a May 19, 2016
4b881a5
Merge branch 'dev' into daipp
stuhlmueller May 26, 2016
3c022c5
mapData no longer breaks reparam detection.
null-a May 27, 2016
659e679
Represent all guide params as tensors internally.
null-a May 27, 2016
1ae5651
Use addr. relative to mapData at update/predict.
null-a May 27, 2016
b06e145
Have getRelativeAddress not be a method on env.
null-a May 31, 2016
278dfb3
Simplify choice look-up in EUBO.
null-a May 31, 2016
51588e9
Make daipp.js a wppl header.
null-a May 31, 2016
18505ea
Avoid exposing a global JS function.
null-a May 31, 2016
9054e59
Improve tensor handling in ad.valueRec.
null-a May 31, 2016
32d7577
Remove stale comment.
null-a May 31, 2016
a584ec4
Clean-up helper.
null-a May 31, 2016
112ee3a
Improve fn name, update commentary.
null-a May 31, 2016
920bd2a
Make EUBO work with SMC + HMC rejuv.
null-a May 31, 2016
1a8a188
Remove experimental context arg from cpsLoop.
null-a May 31, 2016
258bea8
Clean-up ops on the parameter grad data structure.
null-a May 31, 2016
5445620
Fix bug in gradient clipping.
null-a Jun 1, 2016
5b0f98d
Switch to adnn optimization methods.
null-a Jun 1, 2016
7ec8c9b
Return guide parameters from SMC.
null-a Jun 1, 2016
987d011
Extract fns for copying parameter data structure.
null-a Jun 1, 2016
2bbf8d8
Update tensors in-place.
null-a Jun 1, 2016
09c2d46
Rename paramgrad to paramStruct.
null-a Jun 1, 2016
6eb1a89
Fix edge cases in mv Bernoulli scorer.
null-a Jun 1, 2016
65f3bec
Move comment to docs.
null-a Jun 1, 2016
4f973cf
Add arg check helpers.
null-a Jun 2, 2016
d2ee870
Generalize and clean-up MatrixGaussian dist.
null-a Jun 2, 2016
37f70f3
Use more efficient TensorGaussian as reparam base.
null-a Jun 2, 2016
0b6c88c
Clean-up DiagCovGaussian distribution.
null-a Jun 2, 2016
a1d5a95
Remove stale code.
null-a Jun 2, 2016
c6d6c87
Clean-up MultivariateGaussian distribution.
null-a Jun 2, 2016
4da8b85
Remove DiscreteOneHot distribution.
null-a Jun 2, 2016
1f44d37
Merge branch 'dev' into daipp
null-a Jun 2, 2016
d3f14b3
Use TensorGaussian for VAE prior.
null-a Jun 2, 2016
77553e2
Version of VAE with priors on generative params.
null-a Jun 2, 2016
2ff1d7c
Have Discrete accept vector or array of probs.
null-a Jun 3, 2016
02f2b32
Update adnn dependency.
null-a Jun 3, 2016
14fb714
Combine util.isObject and dists.isParams.
null-a Jun 3, 2016
ada5dd7
Update the lda example to work with vectors.
null-a Jun 3, 2016
685883a
Clean-up Dirichlet.
null-a Jun 3, 2016
0d052f4
Update DirichletDrift in line with Dirichlet.
null-a Jun 3, 2016
ea9f466
Update mv regression example to work with vectors.
null-a Jun 3, 2016
7808c3f
Remove unused dependency.
null-a Jun 3, 2016
e9ad4f7
Fix util.isObject on Node <= 0.12
null-a Jun 6, 2016
c393f5a
Merge branch 'dev' into daipp
null-a Jun 7, 2016
91647fb
Clean-up LogisticNormal.
null-a Jun 7, 2016
9362e25
Tweak initialization of net bias vectors.
null-a Jun 7, 2016
3befd95
Removed old importance distribution helper.
null-a Jun 7, 2016
8c3f0e6
Move helper function to util.
null-a Jun 7, 2016
c8b18ba
Generalize sampleGuide to also sample from target.
null-a Jun 7, 2016
ad2e936
Add basic tests for VI.
null-a Jun 7, 2016
c1d10b8
Handle guides programs with zero params in elbo.
null-a Jun 7, 2016
49723e7
Fix Node <= 0.12 compatibility.
null-a Jun 8, 2016
a1fc4d9
Switch default optimization method to adagrad.
null-a Jun 8, 2016
2e346b3
Test all inference methods against guided program.
null-a Jun 8, 2016
b85c0e7
Typo.
null-a Jun 9, 2016
c468776
Default to mean-field guide in elbo/forwardSample.
null-a Jun 9, 2016
470f730
Add domain info for more distribution parameters.
null-a Jun 9, 2016
2f79ca5
Include parameter domains in distribution docs.
null-a Jun 9, 2016
a3f1837
Merge branch 'dev' into daipp
null-a Jun 10, 2016
f2ac87e
Soften condition in withCaching test.
null-a Jun 10, 2016
5e015b3
Add more optimize+elbo tests.
null-a Jun 10, 2016
cd0eff2
Add smoke tests for multivariate distributions.
null-a Jun 10, 2016
c32a3a5
Tweak Infer interface to optimization.
null-a Jun 10, 2016
26cfd42
Only show mean-field params msg in verbose mode.
null-a Jun 10, 2016
5c59f3a
Lint.
null-a Jun 10, 2016
c9a9d4c
Remove daipp.
null-a Jun 10, 2016
7a68b7c
Hack to get daipp working as a package.
null-a Jun 10, 2016
6171518
Merge branch 'dev' into daipp
null-a Jun 12, 2016
c789778
Extract domain helpers to separate file.
null-a Jun 12, 2016
c6410a9
Guide Dirichlet with LogisticNormal.
null-a Jun 12, 2016
4ca9ae3
Add missing 'use strict' directives.
null-a Jun 12, 2016
7c24e15
Basic tests for DiagCovGaussian & TensorGaussian.
null-a Jun 12, 2016
354ff47
Make daipp work without introducing new global.
null-a Jun 12, 2016
20ca180
Add basic docs for optimize/guides/params.
null-a Jun 12, 2016
1c09fe4
Clean-up TODO notes in preparation to merge.
null-a Jun 12, 2016
d23ea88
Add dedicated option for gradient checking.
null-a Jun 12, 2016
6f56447
Refactor mean-field guides.
null-a Jun 12, 2016
e881e23
Remove unused imports.
null-a Jun 13, 2016
5831d3e
Suppress noise in inference test output.
null-a Jun 13, 2016
05d0c64
Remove stale comments.
null-a Jun 13, 2016
a63d48d
Improve commentary.
null-a Jun 13, 2016
96a4d51
Move guide evaluation in to the daipp pkg.
null-a Jun 13, 2016
31ce98a
Avoid confusing the sample and forceSample params.
null-a Jun 13, 2016
3538227
Bump up the number samples following CI failures.
null-a Jun 13, 2016
316c775
Revert "Bump up the number samples following CI failures."
null-a Jun 13, 2016
7575762
Undo unintentional change made to inference test.
null-a Jun 13, 2016
224b9ce
Remove stale comment.
null-a Jun 13, 2016
5f27be8
Move helpers to util module.
null-a Jun 15, 2016
5780a3d
Remove unused imports.
null-a Jun 15, 2016
6b16f41
Merge branch 'dev' into daipp
null-a Jun 16, 2016
af40024
Don't rely on macros to transform Math.logGamma
null-a Jun 16, 2016
88071db
Document the optMethod parameter.
null-a Jun 16, 2016
5c73a71
Remove mapData.
null-a Jun 16, 2016
1ed8082
Move special.js and statistics.js to math folder.
null-a Jun 17, 2016
2 changes: 2 additions & 0 deletions .gitignore
@@ -9,4 +9,6 @@ docs/_static
docs/_templates
src/dists.js
src/inference/enumerate.js
+ src/inference/elbo.js
+ src/inference/eubo.js
src/aggregation/ScoreAggregator.js
2 changes: 1 addition & 1 deletion Gruntfile.js
@@ -16,7 +16,7 @@ var jslintSettings = {
src: [
'Gruntfile.js',
'src/header.wppl',
- 'src/**/!(dists|enumerate|ScoreAggregator).js'
+ 'src/**/!(dists|enumerate|elbo|eubo|ScoreAggregator).js'
]
},
test: {
90 changes: 59 additions & 31 deletions docs/distributions.rst
@@ -3,127 +3,155 @@ Distributions

.. js:function:: Bernoulli({p: ...})

- * p: *success probability (probability in [0, 1])*
+ * p: success probability *(in [0,1])*

Distribution on {true,false}

.. js:function:: Beta({a: ..., b: ...})

- * a: *shape (real > 0)*
- * b: *shape (real > 0)*
+ * a: shape (real) *(>0)*
+ * b: shape (real) *(>0)*

Distribution on [0, 1]

.. js:function:: Binomial({p: ..., n: ...})

- * p: *success probability (probability in [0,1])*
- * n: *number of trials (integer > 0)*
+ * p: success probability *(in [0,1])*
+ * n: number of trials (integer > 0)

Distribution over the number of successes for n independent ``Bernoulli({p: p})`` trials

.. js:function:: Categorical({ps: ..., vs: ...})

- * ps: *probabilities (array of probabilities in [0,1])*
- * vs: *support (array of values)*
+ * ps: array of probabilities *(in [0,1])*
+ * vs: support (array of values)

Distribution over elements of vs with ``P(vs[i]) = ps[i]``

.. js:function:: Cauchy({location: ..., scale: ...})

- * location: *(real in [-Infinity, Infinity])*
- * scale: *(real > 0)*
+ * location: (real in [-Infinity, Infinity])
+ * scale: (real) *(>0)*

Distribution over ``[-Infinity, Infinity]``

.. js:function:: Delta({v: ...})

- * v: *support element*
+ * v: support element

Discrete distribution that assigns probability one to the single element in its support. This is only useful in special circumstances as sampling from ``Delta({v: val})`` can be replaced with ``val`` itself. Furthermore, a ``Delta`` distribution parameterized by a random choice should not be used with MCMC based inference, as doing so produces incorrect results.

.. js:function:: DiagCovGaussian({mu: ..., sigma: ...})

* mu: vector of means
* sigma: vector of standard deviations *(>0)*

Multivariate Gaussian distribution with diagonal covariance matrix.

.. js:function:: Dirichlet({alpha: ...})

- * alpha: *concentration parameters (array of reals > 0)*
+ * alpha: vector of concentration parameters *(>0)*

Distribution over arrays of probabilities.

.. js:function:: DirichletDrift({alpha: ...})

- * alpha: *concentration parameters (array of reals > 0)*
+ * alpha: vector of concentration parameters *(>0)*

Drift version of Dirichlet. Drift kernels are used to narrow search during inference. Currently, the parameters guiding this narrowing are hard-coded.

.. js:function:: Discrete({ps: ...})

- * ps: *array of probabilities in [0,1]*
+ * ps: array or vector of probabilities *(in [0,1])*

Distribution on ``{0,1,...,ps.length-1}`` with P(i) proportional to ``ps[i]``

.. js:function:: Exponential({a: ...})

- * a: *rate (real > 0)*
+ * a: rate (real) *(>0)*

Distribution on ``[0, Infinity]``

.. js:function:: Gamma({shape: ..., scale: ...})

- * shape: *shape parameter (real > 0)*
- * scale: *scale parameter (real > 0)*
+ * shape: shape parameter (real) *(>0)*
+ * scale: scale parameter (real) *(>0)*

Distribution over positive reals.

.. js:function:: Gaussian({mu: ..., sigma: ...})

- * mu: *mean (real)*
- * sigma: *standard deviation (real > 0)*
+ * mu: mean (real)
+ * sigma: standard deviation (real) *(>0)*

Distribution over reals.

.. js:function:: GaussianDrift({mu: ..., sigma: ...})

- * mu: *mean (real)*
- * sigma: *standard deviation (real > 0)*
+ * mu: mean (real)
+ * sigma: standard deviation (real) *(>0)*

Drift version of Gaussian. Drift kernels are used to narrow search during inference. Currently, the parameters guiding this narrowing are hard-coded.

.. js:function:: LogisticNormal({mu: ..., sigma: ...})

* mu: vector of means
* sigma: vector of standard deviations *(>0)*

A distribution over probability vectors obtained by transforming a random variable drawn from ``DiagCovGaussian({mu: mu, sigma: sigma})``. If ``mu`` has length d then the distribution is over probability vectors of length d+1, i.e. the d-dimensional simplex.
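
The transform described above can be sketched in NumPy (illustrative only — the function name and the numerically stable softmax are choices made here, not webppl's implementation): draw from the diagonal-covariance Gaussian, append a zero so the map is identifiable, and normalize with softmax.

```python
import numpy as np

def sample_logistic_normal(mu, sigma, rng=None):
    """Draw a probability vector: sample from a diagonal-covariance
    Gaussian, append a zero, then apply softmax. For mu of length d
    the result has length d + 1 and lies on the d-dimensional simplex."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(mu, sigma)          # DiagCovGaussian draw
    y = np.append(x, 0.0)              # pin the last coordinate at 0
    e = np.exp(y - y.max())            # numerically stable softmax
    return e / e.sum()
```

For example, `sample_logistic_normal(np.zeros(3), np.ones(3))` returns a length-4 vector of positive entries summing to 1.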

.. js:function:: Multinomial({ps: ..., n: ...})

- * ps: *probabilities (array of reals that sum to 1)*
- * n: *number of trials (integer > 0)*
+ * ps: probabilities (array of reals that sum to 1) *(in [0,1])*
+ * n: number of trials (integer > 0)

Distribution over counts for n independent ``Discrete({ps: ps})`` trials

.. js:function:: MultivariateBernoulli({ps: ...})

* ps: probabilities *(in [0,1])*

Distribution over a vector of independent Bernoulli variables. Each element of the vector takes on a value in ``{0, 1}``. Note that this differs from ``Bernoulli`` which has support ``{true, false}``.
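
Since the elements are independent, the log probability of an outcome vector factorizes into per-element Bernoulli terms. A hedged Python sketch (`mv_bernoulli_logprob` is an illustrative name, not webppl's scorer; the commit log's "edge cases" presumably concern probabilities of exactly 0 or 1, handled explicitly here):

```python
import math

def mv_bernoulli_logprob(xs, ps):
    """Log probability of a vector of independent Bernoulli variables:
    each xs[i] in {0, 1} with success probability ps[i]. An outcome
    with per-element probability 0 makes the whole vector impossible
    (-inf); probability-1 elements contribute log(1) = 0."""
    total = 0.0
    for x, p in zip(xs, ps):
        q = p if x == 1 else 1.0 - p
        if q == 0.0:
            return -math.inf
        total += math.log(q)
    return total
```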

.. js:function:: MultivariateGaussian({mu: ..., cov: ...})

- * mu: *mean vector (array of reals)*
- * cov: *covariance matrix (array of array of reals that must be symmetric positive semidefinite)*
+ * mu: mean vector (array of reals)
+ * cov: covariance matrix (array of array of reals that must be symmetric positive semidefinite)

n-dimensional Gaussian.

.. js:function:: Poisson({mu: ...})

- * mu: *mean (real >0)*
+ * mu: mean (real) *(>0)*

Distribution over integers.

.. js:function:: RandomInteger({n: ...})

- * n: *number of possible values (integer >= 1)*
+ * n: number of possible values (integer >= 1)

Uniform distribution on {0,1,...,n-1}

.. js:function:: TensorGaussian({mu: ..., sigma: ..., dims: ...})

* mu: mean
* sigma: standard deviation *(>0)*
* dims: dimension of tensor

Distribution over a tensor of independent Gaussian variables.

.. js:function:: Uniform({a: ..., b: ...})

- * a: *lower bound (real)*
- * b: *upper bound (real > a)*
+ * a: lower bound (real)
+ * b: upper bound (real > a)

Continuous uniform distribution on [a, b]

.. js:function:: UniformDrift({a: ..., b: ..., r: ...})

- * a: *lower bound (real)*
- * b: *upper bound (real > a)*
- * r: *drift kernel radius*
+ * a: lower bound (real)
+ * b: upper bound (real > a)
+ * r: drift kernel radius

Drift version of Uniform. Drift kernels are used to narrow search during inference. UniformDrift proposes from a symmetric window around the current value x, [x-r, x+r]
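
The proposal mechanism described above is a one-liner; a hedged sketch (the function name is hypothetical, and how webppl treats proposals that land outside [a, b] — rejection versus clipping — is not specified by this doc):

```python
import random

def uniform_drift_propose(x, r, rng=random):
    """Symmetric drift proposal: draw the next value uniformly from
    the window [x - r, x + r] around the current value x. Because the
    window is symmetric, forward and reverse proposal densities cancel
    in a Metropolis-Hastings acceptance ratio."""
    return rng.uniform(x - r, x + r)
```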

40 changes: 40 additions & 0 deletions docs/guides.rst
@@ -0,0 +1,40 @@
.. _guides:

Guides
======

Creating parameters
-------------------

.. js:function:: scalarParam(mean, sd)

:param number mean: mean (optional)
:param number sd: standard deviation (optional)

Creates a new scalar valued parameter initialized with a draw from
a Gaussian distribution.

If ``sd`` is omitted the initial value is ``mean``. If ``mean`` is
omitted it defaults to zero.

Example::

scalarParam(0, 1)

.. js:function:: tensorParam(dims, mean, sd)

:param array dims: dimension of tensor
:param number mean: mean (optional)
:param number sd: standard deviation (optional)

Creates a new tensor valued parameter. Each element is initialized
with an independent draw from a Gaussian distribution.

If ``sd`` is omitted the initial value of each element is ``mean``.
If ``mean`` is omitted it defaults to zero.

Example::

tensorParam([10, 10], 0, 0.01)
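
The documented initialization semantics can be mimicked in NumPy (a sketch of the stated behavior only; ``tensor_param_init`` is an illustrative name, not part of webppl):

```python
import numpy as np

def tensor_param_init(dims, mean=0.0, sd=None, rng=None):
    """Mirror of the documented tensorParam semantics: each element is
    an independent Gaussian draw; if sd is omitted every element is
    exactly mean; mean itself defaults to zero."""
    rng = np.random.default_rng() if rng is None else rng
    if sd is None:
        return np.full(dims, float(mean))
    return rng.normal(mean, sd, size=dims)
```

Here `tensor_param_init([10, 10], 0, 0.01)` mirrors the `tensorParam([10, 10], 0, 0.01)` example above.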

.. js:function:: param(arg1, arg2, arg3)
2 changes: 2 additions & 0 deletions docs/index.rst
@@ -10,6 +10,8 @@ Contents:
usage
inference
distributions
guides
tensors
debugging
packages
development/index
29 changes: 29 additions & 0 deletions docs/inference.rst
@@ -303,6 +303,35 @@ SMC

Infer({method: 'SMC', particles: 100, rejuvSteps: 5}, thunk);

Optimization
------------

.. js:function:: Infer({method: 'optimize'[, ...]}, thunk)

This method performs inference by optimizing the parameters of the
:ref:`guide program<guides>`. The marginal distribution is a
histogram constructed from samples drawn from the guide program
using the optimized parameters.

The following options are supported:

.. describe:: steps

The number of optimization steps to take.

Default: ``1``

.. describe:: samples

The number of samples used to construct the marginal
distribution.

Default: ``1``

Member: Describe optMethod option (if it's applicable here)?

Member Author: Done!

Example usage::

Infer({method: 'optimize', samples: 100, steps: 100}, thunk);
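
To make concrete the objective this method optimizes, here is a Monte Carlo ELBO estimate for a toy conjugate model with a Gaussian guide (a Python sketch under stated assumptions; webppl's actual estimators live in ``elbo.js``/``eubo.js`` and are not reproduced here, and all names below are illustrative):

```python
import math
import random

def log_gauss(x, mu, sigma):
    # log density of N(mu, sigma) evaluated at x
    return (-0.5 * math.log(2 * math.pi) - math.log(sigma)
            - 0.5 * ((x - mu) / sigma) ** 2)

def elbo_estimate(obs, mu, sigma, n=1000, rng=random):
    """Monte Carlo ELBO for the toy model p(z) = N(0, 1),
    p(obs | z) = N(z, 1), with guide q(z) = N(mu, sigma):
    ELBO = E_q[log p(z) + log p(obs | z) - log q(z)]."""
    total = 0.0
    for _ in range(n):
        z = rng.gauss(mu, sigma)
        total += (log_gauss(z, 0.0, 1.0) + log_gauss(obs, z, 1.0)
                  - log_gauss(z, mu, sigma))
    return total / n
```

For this conjugate model the exact posterior is N(obs/2, sqrt(1/2)); with those guide parameters every sample's summand equals the log evidence (zero-variance estimate), and any other guide parameters yield a strictly smaller expected ELBO — the gap is KL(q || posterior).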

.. rubric:: Bibliography

.. [wingate11] David Wingate, Andreas Stuhlmüller, and Noah D.