This repository has been archived by the owner on Jul 30, 2024. It is now read-only.

Commit: Fix headings
Signed-off-by: Krishna Murthy <[email protected]>
krrish94 committed Apr 17, 2020
1 parent a3d63f9 commit 6189b96
Showing 1 changed file with 6 additions and 2 deletions.
README.md: 8 changes (6 additions & 2 deletions)
@@ -32,7 +32,7 @@ The current implementation is **_blazing fast!_** (**~5-9x faster** than the [or
## Sample results from the repo


- #### On synthetic data
+ ### On synthetic data

<p align="center">
<img src="assets/blender-lowres.gif">
@@ -80,19 +80,23 @@ conda env create
conda activate nerf
```

+ ### Run training!

Once everything is set up, to run experiments, first edit `config/lego.yml` to specify your own parameters.
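As a rough illustration of the kind of parameters such a config exposes, a sketch might look like the following; the key names below are illustrative placeholders rather than the actual schema of `config/lego.yml`, so consult the file itself for the real options.

```yaml
# Hypothetical config sketch -- key names are illustrative only, not the repo's actual schema.
experiment:
  id: lego-lowres           # experiment name, also used for the log directory
  logdir: logs              # where checkpoints and logs are written
  train_iters: 200000       # number of training iterations
dataset:
  type: blender             # synthetic (Blender) scene
  basedir: path/to/nerf_synthetic/lego
  half_res: True            # train at half the image resolution to save memory
optimizer:
  type: Adam
  lr: 5.0e-3
```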

The training script can be invoked by running
```bash
python train_nerf.py --config config/lego.yml
```

+ ### Optional: Resume training from a checkpoint

To resume training from a previous checkpoint, run
```bash
python train_nerf.py --config config/lego.yml --load-checkpoint path/to/checkpoint.ckpt
```

- ### Cache rays from the dataset (Optional)
+ ### Optional: Cache rays from the dataset

An optional preprocessing step, caching rays from the dataset, results in substantial compute-time savings (reduced carbon footprint, yay!), especially when running multiple experiments. It's super simple: run
```bash
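# The caching command itself falls below the visible portion of this hunk. As a rough
# sketch only (script name and flags assumed here, not confirmed by this diff), caching
# a dataset looks something like:
python cache_dataset.py --datapath path/to/nerf_synthetic/lego --savedir cache/legocache --num-random-rays 8192 --num-variations 50
```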
