doc: update sphinx theme (#78)
* update index.rst

* update rst

* Update index.rst

* Update parameters.rst

* Update index.rst
xuyxu authored Jun 6, 2021
1 parent 9574c0c commit 30e2f4e
Showing 7 changed files with 48 additions and 76 deletions.
Binary file removed docs/_images/soft_gradient_boosting.png
15 changes: 2 additions & 13 deletions docs/conf.py
@@ -12,7 +12,6 @@
 #
 import os
 import sys
-import guzzle_sphinx_theme

 sys.path.insert(0, os.path.abspath("../"))

@@ -79,24 +78,14 @@
 exclude_patterns = []

 # The name of the Pygments (syntax highlighting) style to use.
-pygments_style = "sphinx"
+pygments_style = "default"

 # -- Options for HTML output -------------------------------------------------

 # The theme to use for HTML and HTML Help pages. See the documentation for
 # a list of builtin themes.
 #
-html_theme_path = guzzle_sphinx_theme.html_theme_path()
-html_theme = 'guzzle_sphinx_theme'
-
-# Register the theme as an extension to generate a sitemap.xml
-extensions.append("guzzle_sphinx_theme")
-
-# Guzzle theme options (see theme.conf for more information)
-html_theme_options = {
-    # Set the name of the project to appear in the sidebar
-    "project_nav_name": "Ensemble-PyTorch",
-}
+html_theme = 'sphinx_rtd_theme'

 html_sidebars = {
     '**': ['logo-text.html', 'globaltoc.html', 'searchbox.html']
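The net effect of this `conf.py` diff is a theme swap: the guzzle import, theme path, extension registration, and options block all disappear, replaced by a single `html_theme` assignment. A minimal sketch of the resulting settings (only the three assignments come from the diff; everything else in the real `conf.py` is omitted here):

```python
# Theme-related settings in docs/conf.py after this commit (sketch, not
# the full file).
pygments_style = "default"        # Pygments style for syntax highlighting
html_theme = "sphinx_rtd_theme"   # Read the Docs theme, replacing guzzle

# sphinx_rtd_theme ships with its own sidebar and navigation, so the
# guzzle-specific html_theme_options block is no longer required.
html_sidebars = {
    "**": ["logo-text.html", "globaltoc.html", "searchbox.html"],
}
```

Note that `sphinx_rtd_theme` must be installed (and, on recent versions, listed in `extensions`) for the build to pick it up; that is why `guzzle_sphinx_theme` is dropped from `docs/requirements.txt` later in this commit.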
4 changes: 2 additions & 2 deletions docs/experiment.rst
@@ -4,7 +4,7 @@ Experiments
Setup
~~~~~

-Experiments here are designed to evaluate the performance of each ensemble implemented in Ensemble-PyTorch. We have collected four different configurations on dataset and base estimator, as shown in the table below. In addition, scripts on producing all figures below are available on `GitHub <https://github.com/xuyxu/Ensemble-Pytorch/tree/master/docs/plotting>`__.
+Experiments here are designed to evaluate the performance of each ensemble implemented in Ensemble-PyTorch. We have collected four different configurations on dataset and base estimator, as shown in the table below. In addition, scripts on producing all figures below are available on `GitHub <https://github.com/TorchEnsemble-Community/Ensemble-Pytorch/tree/master/docs/plotting>`__.

.. table::
:align: center
@@ -28,7 +28,7 @@ Experiments here are designed to evaluate the performance of each ensemble imple

.. tip::

-For each experiment shown below, we have added some comments that may be worthy of your attention. Feel free to open an `issue <https://github.com/xuyxu/Ensemble-Pytorch/issues>`__ if you have any question on the results.
+For each experiment shown below, we have added some comments that may be worthy of your attention. Feel free to open an `issue <https://github.com/TorchEnsemble-Community/Ensemble-Pytorch/issues/new/choose>`__ if you have any question on the results.

LeNet\@MNIST
~~~~~~~~~~~~
71 changes: 43 additions & 28 deletions docs/index.rst
@@ -7,9 +7,9 @@ Ensemble PyTorch Documentation

Ensemble PyTorch is a unified ensemble framework for PyTorch to easily improve the performance and robustness of your deep learning model. It provides:

-* |:arrow_up_small:| Easy ways to improve the performance and robustness of your deep learning model.
-* |:eyes:| Easy-to-use APIs on training and evaluating the ensemble.
-* |:zap:| High training efficiency with parallelization.
+* Easy ways to improve the performance and robustness of your deep learning model.
+* Easy-to-use APIs on training and evaluating the ensemble.
+* High training efficiency with parallelization.

Guidepost
---------
@@ -23,43 +23,58 @@ Example

.. code:: python

-   from torchensemble import VotingClassifier  # Voting is a classic ensemble strategy
+   from torchensemble import VotingClassifier  # voting is a classic ensemble strategy

    # Load data
    train_loader = DataLoader(...)
    test_loader = DataLoader(...)

    # Define the ensemble
-   model = VotingClassifier(estimator=base_estimator,  # your deep learning model
-                            n_estimators=10)           # the number of base estimators
+   ensemble = VotingClassifier(
+       estimator=base_estimator,   # here is your deep learning model
+       n_estimators=10,            # number of base estimators
+   )

    # Set the optimizer
-   model.set_optimizer("Adam",                     # parameter optimizer
-                       lr=learning_rate,           # learning rate of the optimizer
-                       weight_decay=weight_decay)  # weight decay of the optimizer
+   ensemble.set_optimizer(
+       "Adam",                     # type of parameter optimizer
+       lr=learning_rate,           # learning rate of parameter optimizer
+       weight_decay=weight_decay,  # weight decay of parameter optimizer
+   )

-   # Set the scheduler
-   model.set_scheduler("CosineAnnealingLR", T_max=epochs)  # (optional) learning rate scheduler
+   # Set the learning rate scheduler
+   ensemble.set_scheduler(
+       "CosineAnnealingLR",        # type of learning rate scheduler
+       T_max=epochs,               # additional arguments on the scheduler
+   )

-   # Train
-   model.fit(train_loader,
-             epochs=epochs)  # the number of training epochs
+   # Train the ensemble
+   ensemble.fit(
+       train_loader,
+       epochs=epochs,              # number of training epochs
+   )

-   # Evaluate
-   acc = model.predict(test_loader)  # testing accuracy
+   # Evaluate the ensemble
+   acc = ensemble.predict(test_loader)  # testing accuracy
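The example in the diff leaves `base_estimator` and the loaders as placeholders, but the voting strategy itself is easy to sketch without any framework: a soft-voting classifier averages the class probabilities of its base estimators and predicts the argmax. A toy illustration (the function name and numbers are hypothetical, not torchensemble code):

```python
def soft_vote(estimator_probs):
    """Average per-class probabilities across estimators, then take argmax.

    estimator_probs: list of per-estimator probability vectors for one sample.
    """
    n_estimators = len(estimator_probs)
    n_classes = len(estimator_probs[0])
    averaged = [
        sum(probs[c] for probs in estimator_probs) / n_estimators
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=averaged.__getitem__)

# Three hypothetical base estimators scoring one sample over three classes;
# two of them favour class 1, so the averaged vote also picks class 1.
probs = [[0.2, 0.5, 0.3], [0.1, 0.6, 0.3], [0.4, 0.3, 0.3]]
print(soft_vote(probs))  # -> 1
```

Averaging probabilities rather than hard labels is what lets a voting ensemble smooth out the individual networks' mistakes, which is the rationale behind the `VotingClassifier` shown above.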
Content
-------

.. toctree::
   :maxdepth: 1
+  :caption: For Users

   Quick Start <quick_start>
   Introduction <introduction>
   Guidance <guide>
   Experiment <experiment>
   API Reference <parameters>
-  Changelog <changelog>
-  Contributors <contributors>
-  Code of Conduct <code_of_conduct>
-  Roadmap <roadmap>
+
+.. toctree::
+   :maxdepth: 1
+   :caption: For Developers
+
+   Changelog <changelog>
+   Roadmap <roadmap>
+   Contributors <contributors>
+   Code of Conduct <code_of_conduct>
12 changes: 0 additions & 12 deletions docs/introduction.rst
@@ -75,21 +75,9 @@ Fast Geometric Ensemble [4]_

Motivated by geometric insights on the loss surface of deep neural networks, Fast Geometric Ensembling (FGE) is an efficient ensemble that uses a customized learning rate scheduler to generate base estimators, similar to snapshot ensemble.
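The customized scheduler that FGE-style ensembling relies on is a short cyclical schedule: within each cycle the learning rate decays from a high value to a low one and climbs back, and a snapshot of the model is collected once per cycle. A hedged sketch of such a triangular schedule (function name and default rates are illustrative, not torchensemble's implementation):

```python
def cyclical_lr(step, cycle_len, lr_max=0.05, lr_min=0.0001):
    """Triangular learning rate: lr_max -> lr_min over the first half of a
    cycle, then lr_min -> lr_max over the second half."""
    t = (step % cycle_len) / cycle_len  # position within the cycle, in [0, 1)
    if t < 0.5:
        return lr_max + (lr_min - lr_max) * (2 * t)   # decay phase
    return lr_min + (lr_max - lr_min) * (2 * t - 1)   # recovery phase

# The rate bottoms out mid-cycle, which is when an FGE-style ensemble
# would typically snapshot the current model as a new base estimator.
print(cyclical_lr(0, 10))  # -> 0.05
print(cyclical_lr(5, 10))  # -> 0.0001
```

Because each cycle is short, many snapshots can be harvested from a single training run, which is what makes this family of ensembles cheap compared with training each base estimator from scratch.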

-Soft Gradient Boosting [5]_
----------------------------
-
-The sequential training stage of gradient boosting makes it prohibitively expensive to use when large neural networks are chosen as the base estimator. The recently proposed soft gradient boosting machine mitigates this problem by concatenating all base estimators in the ensemble, and by using local and global training objectives inspired from gradient boosting. As a result, it is able to simultaneously train all base estimators, while achieving similar boosting performance as gradient boosting.
-
-The figure below is the model architecture of soft gradient boosting.
-
-.. image:: ./_images/soft_gradient_boosting.png
-   :align: center
-   :width: 400
-
**References**

.. [1] Jerome H. Friedman., "Greedy Function Approximation: A Gradient Boosting Machine." The Annals of Statistics, 2001.
.. [2] Gao Huang, Sharon Yixuan Li, Geoff Pleiss, et al., "Snapshot Ensembles: Train 1, Get M for Free." ICLR, 2017.
.. [3] Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell., "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles." NIPS, 2017.
.. [4] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin et al., "Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs." NeurIPS, 2018.
-.. [5] Ji Feng, Yi-Xuan Xu, Yuan Jiang, Zhi-Hua Zhou., "Soft Gradient Boosting Machine.", arXiv, 2020.
19 changes: 0 additions & 19 deletions docs/parameters.rst
@@ -92,25 +92,6 @@ GradientBoostingRegressor
.. autoclass:: torchensemble.gradient_boosting.GradientBoostingRegressor
:members:

-Soft Gradient Boosting
-----------------------
-
-In soft gradient boosting, all base estimators could be simultaneously
-fitted, while achieving the similar boosting improvements as in gradient
-boosting.
-
-SoftGradientBoostingClassifier
-******************************
-
-.. autoclass:: torchensemble.soft_gradient_boosting.SoftGradientBoostingClassifier
-   :members:
-
-SoftGradientBoostingRegressor
-*****************************
-
-.. autoclass:: torchensemble.soft_gradient_boosting.SoftGradientBoostingRegressor
-   :members:

Snapshot Ensemble
-----------------

3 changes: 1 addition & 2 deletions docs/requirements.txt
@@ -2,5 +2,4 @@ sphinx==3.2.*
sphinx-panels==0.5.*
sphinxemoji==0.1.8
sphinx-copybutton
-m2r2==0.2.7
-guzzle_sphinx_theme
+m2r2==0.2.7
