Commit: update readme & fix version number

lrjconan committed Mar 28, 2019
1 parent fdfc8fe commit 825e055
Showing 17 changed files with 60 additions and 80 deletions.
32 changes: 26 additions & 6 deletions README.md
@@ -1,5 +1,5 @@
# Lanczos Network
This is the PyTorch implementation of [Lanczos Network](https://openreview.net/pdf?id=BkedznAqKQ) as described in the following paper:
This is the PyTorch implementation of [Lanczos Network](https://arxiv.org/abs/1901.01484) as described in the following ICLR 2019 paper:

```
@inproceedings{liao2018lanczos,
@@ -12,7 +12,7 @@ This is the PyTorch implementation of [Lanczos Network](https://openreview.net/p

We also provide our own implementation of 9 recent graph neural networks on the [QM8](https://arxiv.org/pdf/1504.01966.pdf) benchmark:

* [graph convolution networks for fingerprint](https://papers.nips.cc/paper/5954-convolutional-networks-on-graphs-for-learning-molecular-fingerprints.pdf) (GCNFP)
* [graph convolution networks for fingerprint](https://papers.nips.cc/paper/5954-convolutional-networks-on-graphs-for-learning-molecular-fingerprints.pdf) (GCN-FP)
* [gated graph neural networks](https://arxiv.org/pdf/1511.05493.pdf) (GGNN)
* [diffusion convolutional neural networks](https://arxiv.org/pdf/1511.02136.pdf) (DCNN)
* [Chebyshev networks](https://papers.nips.cc/paper/6081-convolutional-neural-networks-on-graphs-with-fast-localized-spectral-filtering.pdf) (ChebyNet)
@@ -22,6 +22,25 @@ We also provide our own implementation of 9 recent graph neural networks on the
* [graph partition neural networks](https://arxiv.org/pdf/1803.06272.pdf) (GPNN)
* [graph attention networks](https://arxiv.org/pdf/1710.10903.pdf) (GAT)

You should be able to reproduce the following results, reported as weighted mean absolute error (MAE x 1.0e-3):

| Methods | Validation MAE | Test MAE |
| ------------- |:----------------:|:----------------:|
| GCN-FP | 15.06 +- 0.04 | 14.80 +- 0.09 |
| GGNN | 12.94 +- 0.05 | 12.67 +- 0.22 |
| DCNN | 10.14 +- 0.05 | 9.97 +- 0.09 |
| ChebyNet | 10.24 +- 0.06 | 10.07 +- 0.09 |
| GCN | 11.68 +- 0.09 | 11.41 +- 0.10 |
| MPNN | 11.16 +- 0.13 | 11.08 +- 0.11 |
| GraphSAGE | 13.19 +- 0.04 | 12.95 +- 0.11 |
| GPNN | 12.81 +- 0.80 | 12.39 +- 0.77 |
| GAT | 11.39 +- 0.09 | 11.02 +- 0.06 |
| LanczosNet | **9.65** +- 0.19 | **9.58** +- 0.14 |
| AdaLanczosNet | 10.10 +- 0.22 | 9.97 +- 0.20 |

**Note**:

* The above results are averaged over 3 runs with random seeds {1234, 5678, 9012}; a scripted sweep over these seeds is sketched below.
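
The seed sweep behind these numbers can be scripted. The snippet below is a minimal sketch (not part of the repository): it assumes that `run_exp.py` honours the `seed` field of the YAML config passed with `-c`, as the configs in this commit suggest, and that PyYAML is installed; averaging the reported MAEs is left to the reader.

```python
# Illustrative seed sweep; assumes PyYAML and that run_exp.py reads `seed:`.
import subprocess
import tempfile

import yaml

SEEDS = [1234, 5678, 9012]
BASE_CONFIG = "config/qm8_lanczos_net.yaml"  # any config from this commit works

with open(BASE_CONFIG) as f:
    base_cfg = yaml.safe_load(f)

for seed in SEEDS:
    cfg = dict(base_cfg)
    cfg["seed"] = seed
    # Write a temporary config with the overridden seed and launch one run.
    with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as tmp:
        yaml.safe_dump(cfg, tmp)
    subprocess.run(["python", "run_exp.py", "-c", tmp.name], check=True)
```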

## Setup
To set up experiments, we need to download the [preprocessed QM8 data](http://www.cs.toronto.edu/~rjliao/data/qm8.zip) and build our customized operators by running the following scripts:
@@ -31,19 +50,20 @@ To set up experiments, we need to download the [preprocessed QM8 data](http://ww
```

**Note**:
We also provide the script ```dataset/get_qm8_data.py``` to preprocess the [raw QM8](http://quantum-machine.org/datasets/) data which requires the installation of [DeepChem](https://github.com/deepchem/deepchem).

* We also provide the script ```dataset/get_qm8_data.py``` to preprocess the [raw QM8](http://quantum-machine.org/datasets/) data which requires the installation of [DeepChem](https://github.com/deepchem/deepchem).
It will produce a different train/dev/test split than what we used in the paper due to the randomness of DeepChem.
Therefore, we suggest using our preprocessed data for a fair comparison.


## Dependencies
Python 3, PyTorch(1.0), scipy, sklearn
Python 3, PyTorch(1.0), numpy, scipy, sklearn


## Run Demos

### Train
* To run the training of experiment ```X``` where ```X``` is one of {```qm8_lanczos_net```, ```qm8_ada_lanczos_net```, ```qm8_cheby_net```, ...}:
* To run the training of experiment ```X``` where ```X``` is one of {```qm8_lanczos_net```, ```qm8_ada_lanczos_net```, ...}:

```python run_exp.py -c config/X.yaml```

@@ -57,7 +77,7 @@ Python 3, PyTorch(1.0), scipy, sklearn

* After training, you can specify the ```test_model``` field of the configuration yaml file with the path of your best model snapshot, e.g.,

```test_model: exp/qm8_lanczos_net/LanczosNetFixBasisChem_chemistry_2018-Oct-02-11-55-54_25460/model_snapshot_best.pth```
```test_model: exp/qm8_lanczos_net/LanczosNet_chemistry_2018-Oct-02-11-55-54_25460/model_snapshot_best.pth```

* To run the test of experiment ```X```:

6 changes: 1 addition & 5 deletions config/qm8_ada_lanczos_net.yaml
@@ -5,8 +5,6 @@ runner: QM8Runner
use_gpu: true
gpus: [0]
seed: 1234
# seed: 5678
# seed: 9012
dataset:
loader_name: QM8Data
name: chemistry
@@ -47,6 +45,4 @@ train:
test:
batch_size: 64
num_workers: 0
test_model: exp/qm8_ada_lanczos_net/AdaLanczosNet_chemistry_2018-Dec-28-21-46-34_11666/model_snapshot_best.pth
# test_model: exp/qm8_ada_lanczos_net/AdaLanczosNet_chemistry_2018-Dec-30-11-36-25_7857/model_snapshot_best.pth
# test_model: exp/qm8_ada_lanczos_net/AdaLanczosNet_chemistry_2018-Dec-30-11-36-48_8366/model_snapshot_best.pth
test_model:
8 changes: 2 additions & 6 deletions config/qm8_cheby_net.yaml
@@ -4,9 +4,7 @@ exp_dir: exp/qm8_cheby_net
runner: QM8Runner
use_gpu: true
gpus: [0]
# seed: 1234
# seed: 5678
seed: 9012
seed: 1234
dataset:
loader_name: QM8Data
name: chemistry
Expand Down Expand Up @@ -42,6 +40,4 @@ train:
test:
batch_size: 64
num_workers: 0
# test_model: exp/qm8_cheby_net/ChebyNetChem_chemistry_2018-Sep-26-15-39-38_2519/model_snapshot_best.pth
# test_model: exp/qm8_cheby_net/ChebyNetChem_chemistry_2018-Sep-28-11-52-19_13447/model_snapshot_best.pth
test_model: exp/qm8_cheby_net/ChebyNetChem_chemistry_2018-Sep-28-12-35-37_5959/model_snapshot_best.pth
test_model:
7 changes: 3 additions & 4 deletions config/qm8_dcnn.yaml
@@ -2,7 +2,7 @@
exp_name: qm8_dcnn
exp_dir: exp/qm8_dcnn
runner: QM8Runner
use_gpu: false
use_gpu: true
gpus: [0]
seed: 1234
dataset:
@@ -37,8 +37,7 @@ train:
shuffle: true
is_resume: false
resume_model: None
test:
test:
batch_size: 64
num_workers: 0
test_model: exp/qm8_dcnn/DCNNChem_chemistry_2018-Sep-20-22-33-08_21116/model_snapshot_best.pth

test_model:
8 changes: 2 additions & 6 deletions config/qm8_gat.yaml
@@ -4,9 +4,7 @@ exp_dir: exp/qm8_gat
runner: QM8Runner
use_gpu: true
gpus: [0]
# seed: 1234
# seed: 5678
seed: 9012
seed: 1234
dataset:
loader_name: QM8Data
name: chemistry
Expand Down Expand Up @@ -42,6 +40,4 @@ train:
test:
batch_size: 64
num_workers: 0
# test_model: exp/qm8_gat/GATChem_chemistry_2018-Sep-26-16-28-53_5748/model_snapshot_best.pth
# test_model: exp/qm8_gat/GATChem_chemistry_2018-Sep-30-17-47-57_21569/model_snapshot_best.pth
test_model: exp/qm8_gat/GATChem_chemistry_2018-Oct-01-15-03-58_5262/model_snapshot_best.pth
test_model:
8 changes: 2 additions & 6 deletions config/qm8_gcn.yaml
@@ -4,9 +4,7 @@ exp_dir: exp/qm8_gcn
runner: QM8Runner
use_gpu: true
gpus: [0]
# seed: 1234
# seed: 5678
seed: 9012
seed: 1234
dataset:
loader_name: QM8Data
name: chemistry
Expand Down Expand Up @@ -41,6 +39,4 @@ train:
test:
batch_size: 64
num_workers: 0
# test_model: exp/qm8_gcn/GCNChem_chemistry_2018-Sep-26-15-38-02_455/model_snapshot_best.pth
# test_model: exp/qm8_gcn/GCNChem_chemistry_2018-Sep-28-15-35-42_1367/model_snapshot_best.pth
test_model: exp/qm8_gcn/GCNChem_chemistry_2018-Sep-28-16-26-21_3359/model_snapshot_best.pth
test_model:
4 changes: 2 additions & 2 deletions config/qm8_gcnfp.yaml
@@ -2,7 +2,7 @@
exp_name: qm8_gcnfp
exp_dir: exp/qm8_gcnfp
runner: QM8Runner
use_gpu: false
use_gpu: true
gpus: [0]
seed: 1234
dataset:
@@ -38,4 +38,4 @@ train:
test:
batch_size: 64
num_workers: 0
test_model: exp/qm8_gcnfp/GCNFPChem_chemistry_2018-Sep-22-10-57-35_7511/model_snapshot_best.pth
test_model:
8 changes: 2 additions & 6 deletions config/qm8_ggnn.yaml
@@ -4,9 +4,7 @@ exp_dir: exp/qm8_ggnn
runner: QM8Runner
use_gpu: true
gpus: [0]
# seed: 1234
# seed: 5678
seed: 9012
seed: 1234
dataset:
loader_name: QM8Data
name: chemistry
@@ -44,6 +42,4 @@ train:
test:
batch_size: 64
num_workers: 0
# test_model: exp/qm8_ggnn/GGNNChem_chemistry_2018-Sep-21-20-14-02_16607/model_snapshot_best.pth
# test_model: exp/qm8_ggnn/GGNNChem_chemistry_2018-Sep-27-17-58-19_26297/model_snapshot_best.pth
test_model: exp/qm8_ggnn/GGNNChem_chemistry_2018-Sep-27-21-28-43_26955/model_snapshot_best.pth
test_model:
8 changes: 2 additions & 6 deletions config/qm8_gpnn.yaml
@@ -4,9 +4,7 @@ exp_dir: exp/qm8_gpnn
runner: QM8Runner
use_gpu: true
gpus: [0]
# seed: 1234
# seed: 5678
seed: 9012
seed: 1234
dataset:
loader_name: QM8Data
name: chemistry
@@ -47,6 +45,4 @@ train:
test:
batch_size: 64
num_workers: 0
# test_model: exp/qm8_gpnn/GPNN_chemistry_2019-Jan-02-20-37-00_18321/model_snapshot_best.pth
# test_model: exp/qm8_gpnn/GPNN_chemistry_2019-Jan-02-23-18-03_18654/model_snapshot_best.pth
test_model: exp/qm8_gpnn/GPNN_chemistry_2019-Jan-02-23-18-20_19110/model_snapshot_best.pth
test_model:
8 changes: 2 additions & 6 deletions config/qm8_graphsage.yaml
@@ -4,9 +4,7 @@ exp_dir: exp/qm8_graphsage
runner: QM8Runner
use_gpu: true
gpus: [0]
# seed: 1234
# seed: 5678
seed: 9012
seed: 1234
dataset:
loader_name: QM8Data
name: chemistry
@@ -42,6 +40,4 @@ train:
test:
batch_size: 64
num_workers: 0
# test_model: exp/qm8_graphsage/GraphSAGEChem_chemistry_2018-Sep-26-16-20-06_26936/model_snapshot_best.pth
# test_model: exp/qm8_graphsage/GraphSAGEChem_chemistry_2018-Sep-29-23-26-32_6431/model_snapshot_best.pth
test_model: exp/qm8_graphsage/GraphSAGEChem_chemistry_2018-Sep-30-12-11-12_26494/model_snapshot_best.pth
test_model:
12 changes: 3 additions & 9 deletions config/qm8_lanczos_net.yaml
@@ -5,8 +5,6 @@ runner: QM8Runner
use_gpu: true
gpus: [0]
seed: 1234
# seed: 5678
# seed: 9012
dataset:
loader_name: QM8Data
name: chemistry
@@ -16,10 +14,8 @@ dataset:
num_bond_type: 6
model:
name: LanczosNet
# short_diffusion_dist: []
# long_diffusion_dist: [1, 2, 3, 5, 7, 10, 20, 30]
short_diffusion_dist: [3, 5, 7]
long_diffusion_dist: [10, 20, 30]
short_diffusion_dist: []
long_diffusion_dist: [1, 2, 3, 5, 7, 10, 20, 30]
num_eig_vec: 20
spectral_filter_kind: MLP
input_dim: 64
@@ -47,6 +43,4 @@ train:
test:
batch_size: 64
num_workers: 0
test_model: exp/qm8_lanczos_net/LanczosNetFixBasisChem_chemistry_2018-Oct-02-11-55-54_25460/model_snapshot_best.pth
# test_model: exp/qm8_lanczos_net/LanczosNetFixBasisChem_chemistry_2018-Oct-02-14-04-53_18123/model_snapshot_best.pth
# test_model: exp/qm8_lanczos_net/LanczosNetFixBasisChem_chemistry_2018-Oct-02-15-53-14_20776/model_snapshot_best.pth
test_model:
8 changes: 2 additions & 6 deletions config/qm8_mpnn.yaml
@@ -4,9 +4,7 @@ exp_dir: exp/qm8_mpnn
runner: QM8Runner
use_gpu: true
gpus: [0]
# seed: 1234
# seed: 5678
seed: 9012
seed: 1234
dataset:
loader_name: QM8Data
name: chemistry
@@ -46,6 +44,4 @@ train:
test:
batch_size: 64
num_workers: 0
# test_model: exp/qm8_mpnn/MPNNChem_chemistry_2018-Sep-26-23-35-54_6509/model_snapshot_best.pth
# test_model: exp/qm8_mpnn/MPNNChem_chemistry_2018-Sep-28-11-52-40_14073/model_snapshot_best.pth
test_model: exp/qm8_mpnn/MPNNChem_chemistry_2018-Sep-28-22-01-22_25409/model_snapshot_best.pth
test_model:
2 changes: 1 addition & 1 deletion operators/build_segment_reduction.py
@@ -2,7 +2,7 @@
import torch
from subprocess import call

if torch.__version__ == '1.0.0':
if torch.__version__[0] == '1':
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

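The updated check accepts any PyTorch 1.x release by looking at the first character of `torch.__version__`. A purely illustrative, slightly stricter variant (a sketch, not what the repository ships) parses the major version number instead:

```python
import torch

# Sketch: gate the build path on the parsed major version rather than on the
# first character of the version string (e.g. '1.0.1.post2' -> major 1).
TORCH_MAJOR = int(torch.__version__.split(".")[0])

if TORCH_MAJOR >= 1:
    # Same imports as the 1.x branch in build_segment_reduction.py.
    from setuptools import setup
    from torch.utils.cpp_extension import BuildExtension, CUDAExtension
```
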
3 changes: 0 additions & 3 deletions runner/qm8_runner.py
@@ -9,9 +9,7 @@
import torch.nn as nn
import torch.utils.data
import torch.optim as optim
import torch.backends.cudnn as cudnn
from torch.utils.data import Dataset, DataLoader
# from torchvision import datasets, transforms
from tensorboardX import SummaryWriter

from model import *
@@ -20,7 +18,6 @@
from utils.train_helper import data_to_gpu, snapshot, load_model, EarlyStopper

logger = get_logger('exp_logger')

__all__ = ['QM8Runner']


4 changes: 2 additions & 2 deletions setup.sh
@@ -1,7 +1,7 @@
#!/bin/bash

curl -O data/qm8.zip http://www.cs.toronto.edu/~rjliao/data/qm8.zip
unzip data/qm8.zip -d data/QM8
wget http://www.cs.toronto.edu/~rjliao/data/qm8.zip -P data
unzip data/qm8.zip -d data/

cd operators
python build_segment_reduction.py build_ext --inplace
3 changes: 2 additions & 1 deletion utils/logger.py
@@ -10,7 +10,8 @@ def setup_logging(log_level, log_file, logger_name="exp_logger"):
logging.basicConfig(
filename=log_file,
filemode="w",
format="%(levelname)-5s | %(asctime)s | File %(filename)-20s | Line %(lineno)-5d | %(message)s",
format=
"%(levelname)-5s | %(asctime)s | File %(filename)-20s | Line %(lineno)-5d | %(message)s",
datefmt="%m/%d/%Y %I:%M:%S %p",
level=numeric_level)

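`setup_logging` configures a named logger that the rest of the code fetches by name (the `qm8_runner.py` diff above calls `get_logger('exp_logger')`). A minimal usage sketch follows; the `'INFO'` level string and the log path are assumptions, and the string-to-number level conversion is inferred from the `numeric_level` variable visible above.

```python
import logging

from utils.logger import setup_logging  # utils/ is an importable package here

# Assumed arguments for illustration; the runner derives these from its config.
setup_logging("INFO", "exp/qm8_lanczos_net/log.txt", logger_name="exp_logger")

logger = logging.getLogger("exp_logger")
logger.info("training started")  # formatted with the pattern configured above
```
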
11 changes: 6 additions & 5 deletions utils/train_helper.py
@@ -7,7 +7,7 @@ def data_to_gpu(*input_data):
for dd in input_data:
if type(dd).__name__ == 'Tensor':
return_data += [dd.cuda()]

return tuple(return_data)


@@ -18,10 +18,11 @@ def snapshot(model, optimizer, config, step, gpus=[0], tag=None):
"step": step
}

torch.save(model_snapshot,
os.path.join(config.save_dir, "model_snapshot_{}.pth".format(tag)
if tag is not None else
"model_snapshot_{:07d}.pth".format(step)))
torch.save(
model_snapshot,
os.path.join(
config.save_dir, "model_snapshot_{}.pth".format(tag)
if tag is not None else "model_snapshot_{:07d}.pth".format(step)))


def load_model(model, file_name, optimizer=None):
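
`snapshot()` above saves a dict that contains the step counter plus, presumably, the model and optimizer states; `load_model` (whose body is collapsed here) is its counterpart. The sketch below shows how such a file might be restored; it is not the repository's `load_model`, and the "model"/"optimizer" key names are assumptions, since only "step" is visible in the diff.

```python
import torch


def restore_snapshot(model, path, optimizer=None):
    """Illustrative counterpart to snapshot(); key names are assumed."""
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model"])  # assumed key
    if optimizer is not None and "optimizer" in ckpt:
        optimizer.load_state_dict(ckpt["optimizer"])  # assumed key
    return ckpt.get("step", 0)
```

A path like the `test_model` example in the README hunk above could then be handed to such a helper.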
