diff --git a/docs/_images/soft_gradient_boosting.png b/docs/_images/soft_gradient_boosting.png
deleted file mode 100644
index 3854d73..0000000
Binary files a/docs/_images/soft_gradient_boosting.png and /dev/null differ
diff --git a/docs/conf.py b/docs/conf.py
index f83737c..baff813 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -12,7 +12,6 @@
 #
 import os
 import sys
-import guzzle_sphinx_theme

 sys.path.insert(0, os.path.abspath("../"))

@@ -79,24 +78,14 @@ exclude_patterns = []

 # The name of the Pygments (syntax highlighting) style to use.
-pygments_style = "sphinx"
+pygments_style = "default"

 # -- Options for HTML output -------------------------------------------------

 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
 #
-html_theme_path = guzzle_sphinx_theme.html_theme_path()
-html_theme = 'guzzle_sphinx_theme'
-
-# Register the theme as an extension to generate a sitemap.xml
-extensions.append("guzzle_sphinx_theme")
-
-# Guzzle theme options (see theme.conf for more information)
-html_theme_options = {
-    # Set the name of the project to appear in the sidebar
-    "project_nav_name": "Ensemble-PyTorch",
-}
+html_theme = 'sphinx_rtd_theme'

 html_sidebars = {
     '**': ['logo-text.html', 'globaltoc.html', 'searchbox.html']
diff --git a/docs/experiment.rst b/docs/experiment.rst
index 03b1b3a..76a8e79 100644
--- a/docs/experiment.rst
+++ b/docs/experiment.rst
@@ -4,7 +4,7 @@ Experiments
 Setup
 ~~~~~

-Experiments here are designed to evaluate the performance of each ensemble implemented in Ensemble-PyTorch. We have collected four different configurations on dataset and base estimator, as shown in the table below. In addition, scripts on producing all figures below are available on `GitHub `__.
+Experiments here are designed to evaluate the performance of each ensemble implemented in Ensemble-PyTorch. We have collected four different combinations of dataset and base estimator, as shown in the table below. In addition, the scripts for producing all figures below are available on `GitHub `__.

 .. table::
    :align: center
@@ -28,7 +28,7 @@ Experiments here are designed to evaluate the performance of each ensemble imple

 .. tip::

-    For each experiment shown below, we have added some comments that may be worthy of your attention. Feel free to open an `issue `__ if you have any question on the results.
+    For each experiment shown below, we have added some comments that may be worth your attention. Feel free to open an `issue `__ if you have any questions about the results.

 LeNet\@MNIST
 ~~~~~~~~~~~~
diff --git a/docs/index.rst b/docs/index.rst
index 541a001..932b315 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -7,9 +7,9 @@ Ensemble PyTorch Documentation
 Ensemble PyTorch is a unified ensemble framework for PyTorch to easily improve the performance and robustness of your deep learning model. It provides:

-* |:arrow_up_small:| Easy ways to improve the performance and robustness of your deep learning model.
-* |:eyes:| Easy-to-use APIs on training and evaluating the ensemble.
-* |:zap:| High training efficiency with parallelization.
+* Easy ways to improve the performance and robustness of your deep learning model.
+* Easy-to-use APIs for training and evaluating the ensemble.
+* High training efficiency with parallelization.

 Guidepost
 ---------
@@ -23,43 +23,58 @@ Example

 .. code:: python

-    from torchensemble import VotingClassifier            # Voting is a classic ensemble strategy
+    from torchensemble import VotingClassifier  # Voting is a classic ensemble strategy

     # Load data
     train_loader = DataLoader(...)
     test_loader = DataLoader(...)

     # Define the ensemble
-    model = VotingClassifier(estimator=base_estimator,     # your deep learning model
-                             n_estimators=10)              # the number of base estimators
+    ensemble = VotingClassifier(
+        estimator=base_estimator,   # your deep learning model
+        n_estimators=10,            # number of base estimators
+    )

     # Set the optimizer
-    model.set_optimizer("Adam",                            # parameter optimizer
-                        lr=learning_rate,                  # learning rate of the optimizer
-                        weight_decay=weight_decay)         # weight decay of the optimizer
-
-    # Set the scheduler
-    model.set_scheduler("CosineAnnealingLR", T_max=epochs) # (optional) learning rate scheduler
-
-    # Train
-    model.fit(train_loader,
-              epochs=epochs)                               # the number of training epochs
-
-    # Evaluate
-    acc = model.predict(test_loader)                       # testing accuracy
+    ensemble.set_optimizer(
+        "Adam",                     # type of parameter optimizer
+        lr=learning_rate,           # learning rate of the optimizer
+        weight_decay=weight_decay,  # weight decay of the optimizer
+    )
+
+    # Set the learning rate scheduler
+    ensemble.set_scheduler(
+        "CosineAnnealingLR",        # type of learning rate scheduler
+        T_max=epochs,               # additional arguments for the scheduler
+    )
+
+    # Train the ensemble
+    ensemble.fit(
+        train_loader,
+        epochs=epochs,              # number of training epochs
+    )
+
+    # Evaluate the ensemble
+    acc = ensemble.predict(test_loader)  # testing accuracy

 Content
 -------

 .. toctree::
    :maxdepth: 1
+   :caption: For Users
+
+   Quick Start
+   Introduction
+   Guidance
+   Experiment
+   API Reference
+
+.. toctree::
+   :maxdepth: 1
+   :caption: For Developers

-   Quick Start
-   Introduction
-   Guidance
-   Experiment
-   API Reference
-   Changelog
-   Contributors
-   Code of Conduct
-   Roadmap
+   Changelog
+   Roadmap
+   Contributors
+   Code of Conduct
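The example above leaves ``base_estimator``, the data loaders, and the hyper-parameters undefined. For readers who want to run it end-to-end, here is a minimal self-contained sketch; the ``MLP`` network, the MNIST data pipeline, and every hyper-parameter value are illustrative assumptions rather than part of this changeset, and passing the class (not an instance) as ``estimator`` assumes the constructor convention used across torchensemble.

.. code:: python

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    from torchensemble import VotingClassifier

    # Hypothetical base estimator: any nn.Module mapping a batch of
    # images to class logits could stand in here.
    class MLP(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear1 = nn.Linear(784, 128)
            self.linear2 = nn.Linear(128, 10)

        def forward(self, x):
            x = x.view(x.size(0), -1)  # flatten 28x28 MNIST images
            return self.linear2(torch.relu(self.linear1(x)))

    transform = transforms.ToTensor()
    train_loader = DataLoader(
        datasets.MNIST("./data", train=True, download=True, transform=transform),
        batch_size=128,
        shuffle=True,
    )
    test_loader = DataLoader(
        datasets.MNIST("./data", train=False, download=True, transform=transform),
        batch_size=128,
    )

    ensemble = VotingClassifier(estimator=MLP, n_estimators=10)  # class, not instance
    ensemble.set_optimizer("Adam", lr=1e-3, weight_decay=5e-4)
    ensemble.set_scheduler("CosineAnnealingLR", T_max=50)
    ensemble.fit(train_loader, epochs=50)
    acc = ensemble.predict(test_loader)  # testing accuracy, as in the example above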
diff --git a/docs/introduction.rst b/docs/introduction.rst
index f5c57f5..a1d436d 100644
--- a/docs/introduction.rst
+++ b/docs/introduction.rst
@@ -75,21 +75,9 @@
 Motivated by geometric insights on the loss surface of deep neural networks, Fast Geometric Ensembling (FGE) is an efficient ensemble that uses a customized learning rate scheduler to generate base estimators, similar to snapshot ensemble.

-Soft Gradient Boosting [5]_
----------------------------
-
-The sequential training stage of gradient boosting makes it prohibitively expensive to use when large neural networks are chosen as the base estimator. The recently proposed soft gradient boosting machine mitigates this problem by concatenating all base estimators in the ensemble, and by using local and global training objectives inspired from gradient boosting. As a result, it is able to simultaneously train all base estimators, while achieving similar boosting performance as gradient boosting.
-
-The figure below is the model architecture of soft gradient boosting.
-
-.. image:: ./_images/soft_gradient_boosting.png
-   :align: center
-   :width: 400
-
 **References**

 .. [1] Jerome H. Friedman, "Greedy Function Approximation: A Gradient Boosting Machine." The Annals of Statistics, 2001.

 .. [2] Gao Huang, Yixuan Li, Geoff Pleiss, et al., "Snapshot Ensembles: Train 1, Get M for Free." ICLR, 2017.

 .. [3] Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell, "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles." NIPS, 2017.

 .. [4] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, et al., "Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs." NeurIPS, 2018.
-
-.. [5] Ji Feng, Yi-Xuan Xu, Yuan Jiang, Zhi-Hua Zhou., "Soft Gradient Boosting Machine.", arXiv, 2020.
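The FGE paragraph retained above hinges on a cyclic learning rate schedule. As a rough illustration of that idea, here is a sketch of the piecewise-linear rule described in [4]; the constants are illustrative, and this is not Ensemble-PyTorch's internal implementation.

.. code:: python

    # Triangular cyclic learning rate in the spirit of FGE [4]: the rate
    # ramps down from alpha_1 to alpha_2 over half a cycle, then back up.
    # One base estimator is collected per cycle, when the rate bottoms out.
    def fge_lr(step, cycle=400, alpha_1=5e-2, alpha_2=5e-4):
        t = (step % cycle) / cycle  # position within the current cycle
        if t <= 0.5:
            return (1 - 2 * t) * alpha_1 + 2 * t * alpha_2    # ramp down
        return (2 - 2 * t) * alpha_2 + (2 * t - 1) * alpha_1  # ramp back up

Because every base estimator comes out of a single training run, diversity is obtained almost for free, which is what makes FGE (and snapshot ensembles) cheap relative to training each estimator from scratch.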
diff --git a/docs/parameters.rst b/docs/parameters.rst
index b256840..2e33174 100644
--- a/docs/parameters.rst
+++ b/docs/parameters.rst
@@ -92,25 +92,6 @@ GradientBoostingRegressor
 .. autoclass:: torchensemble.gradient_boosting.GradientBoostingRegressor
    :members:

-Soft Gradient Boosting
-----------------------
-
-In soft gradient boosting, all base estimators could be simultaneously
-fitted, while achieving the similar boosting improvements as in gradient
-boosting.
-
-SoftGradientBoostingClassifier
-******************************
-
-.. autoclass:: torchensemble.soft_gradient_boosting.SoftGradientBoostingClassifier
-   :members:
-
-SoftGradientBoostingRegressor
-*****************************
-
-.. autoclass:: torchensemble.soft_gradient_boosting.SoftGradientBoostingRegressor
-   :members:
-
 Snapshot Ensemble
 -----------------
diff --git a/docs/requirements.txt b/docs/requirements.txt
index a8418a0..18e546c 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -2,5 +2,4 @@ sphinx==3.2.*
 sphinx-panels==0.5.*
 sphinxemoji==0.1.8
 sphinx-copybutton
-m2r2==0.2.7
-guzzle_sphinx_theme
\ No newline at end of file
+m2r2==0.2.7
\ No newline at end of file
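With the soft gradient boosting entries removed, parameters.rst continues directly with the snapshot ensemble, which follows the same workflow as the index.rst example. A minimal sketch, reusing the hypothetical ``MLP`` and data loaders from the earlier snippet and assuming ``SnapshotEnsembleClassifier`` keeps the same class-based constructor and ``fit``/``predict`` interface:

.. code:: python

    from torchensemble import SnapshotEnsembleClassifier

    # Snapshot ensembles derive their base estimators from one training
    # run with a cyclic learning rate handled internally, so no
    # set_scheduler call is made here.
    snapshot = SnapshotEnsembleClassifier(estimator=MLP, n_estimators=5)
    snapshot.set_optimizer("Adam", lr=1e-3, weight_decay=5e-4)
    snapshot.fit(train_loader, epochs=50)  # budget shared across the 5 snapshots
    acc = snapshot.predict(test_loader)    # testing accuracy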