Add visualization for COCO eval results and annotations (#82)
* add visualization for COCO eval
* update README
* add visualize_data
* refine format
* refine requirements
* add docs for practical tools
* update cond-detr r50 pretrained results
* add docs for visualization
* refine changelog

Co-authored-by: ntianhe ren <[email protected]>
Showing 12 changed files with 364 additions and 4 deletions.
@@ -0,0 +1,104 @@
# Practical Tools and Scripts
Apart from the training and evaluation scripts, detrex also provides many practical tools and useful scripts under the [tools/](https://github.com/IDEA-Research/detrex/tree/main/tools) directory.

## Tensorboard Log Analysis
`detrex` automatically saves tensorboard logs in `cfg.train.output_dir`; users can analyze the training logs directly with:

```bash
tensorboard --logdir /path/to/cfg.train.output_dir
```
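
The same event files can also be read programmatically, e.g. to export scalar curves for custom plots. Below is a minimal sketch using the standard `tensorboard` reading API; the log path and the `total_loss` tag name are placeholders rather than guaranteed names:

```python
# Minimal sketch: read scalar curves from the event files written to
# cfg.train.output_dir. The tag name "total_loss" is an assumption;
# print the available tags first to see what is actually logged.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("/path/to/cfg.train.output_dir")  # placeholder path
ea.Reload()                                             # parse the event files
print(ea.Tags()["scalars"])                             # list available scalar tags
for event in ea.Scalars("total_loss"):                  # hypothetical tag name
    print(event.step, event.value)
```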

## Model Analysis
Analysis tools for the FLOPs, parameters, and activations of detrex models.

- Analyze FLOPs

```bash
cd detrex
python tools/analyze_model.py --num-inputs 100 \
    --tasks flop \
    --config-file /path/to/config.py \
    train.init_checkpoint=/path/to/model.pkl
```

- Analyze parameters

```bash
cd detrex
python tools/analyze_model.py --tasks parameter \
    --config-file /path/to/config.py
```

- Analyze activations

```bash
cd detrex
python tools/analyze_model.py --num-inputs 100 \
    --tasks activation \
    --config-file /path/to/config.py \
    train.init_checkpoint=/path/to/model.pkl
```

- Analyze model structure

```bash
cd detrex
python tools/analyze_model.py --tasks structure \
    --config-file /path/to/config.py
```
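
For reference, similar numbers can be obtained directly with the `fvcore`/`detectron2` analysis utilities that detectron2-style analysis tools are typically built on. The sketch below is an illustration of that underlying API, not the tool itself; the config and checkpoint paths are placeholders, and it assumes the device configured for the model is available:

```python
# Minimal sketch: count parameters and trace FLOPs on one test batch.
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import LazyConfig, instantiate
from detectron2.utils.analysis import FlopCountAnalysis
from fvcore.nn import parameter_count_table

cfg = LazyConfig.load("/path/to/config.py")              # placeholder config path
model = instantiate(cfg.model).eval()
DetectionCheckpointer(model).load("/path/to/model.pkl")  # optional: load real weights

# Parameter counts do not need any inputs.
print(parameter_count_table(model, max_depth=3))

# FLOP counting traces the model on real samples from the test loader
# (the actual tool averages over --num-inputs samples).
data = next(iter(instantiate(cfg.dataloader.test)))
flops = FlopCountAnalysis(model, data)
print("total GFLOPs on this batch: {:.1f}".format(flops.total() / 1e9))
```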

## Visualization
Here are some useful tools for visualizing the model predictions or the dataset.

### Visualize Predictions
To visualize the JSON instance detection/segmentation results dumped by `COCOEvaluator`, first specify the `output_dir` argument for the evaluator in your config file (it defaults to `None` in detrex):

```python
# your config.py
dataloader = get_config("common/data/coco_detr.py").dataloader

# dump the testing results into output_dir for visualization
# (e.g. save them in the same directory as the training logs)
dataloader.evaluator.output_dir = "/path/to/dir"
```
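
If you prefer not to edit the config file, the same field can usually be overridden from the command line when launching evaluation, assuming the standard detrex `tools/train_net.py`, which follows detectron2's lazy-config override syntax (paths below are placeholders):

```bash
cd detrex
python tools/train_net.py --config-file /path/to/config.py \
    --eval-only \
    train.init_checkpoint=/path/to/model.pkl \
    dataloader.evaluator.output_dir=/path/to/dir
```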

Then run the following script:

```bash
cd detrex
# --input: path to the saved testing results
python tools/visualize_json_results.py --input /path/to/x.json \
    --output dir/ \
    --dataset coco_2017_val
```

**Note**: the visualization results will be saved in `dir/`. Here is an example of the visualized prediction results:

 

### Visualize Datasets
Visualize the ground-truth raw annotations or the training data (after preprocessing/augmentations).

- Visualize raw annotations

```bash
cd detrex
python tools/visualize_data.py --config-file /path/to/config.py \
    --source annotation \
    --output-dir dir/ [--show]
```

- Visualize training data

```bash
cd detrex
python tools/visualize_data.py --config-file /path/to/config.py \
    --source dataloader \
    --output-dir dir/ [--show]
```

**Note**: the visualization results will be saved in `dir/`. Here is an example of the visualized annotations:

 
@@ -10,6 +10,7 @@ Tutorials
Config_System.md
Converters.md
Using_Pretrained_Backbone.md
Tools.md
Model_Zoo.md
FAQs.md
@@ -9,4 +9,5 @@ autoflake
timm
pytest
scipy==1.7.3
psutil
opencv-python
@@ -0,0 +1,92 @@
#!/usr/bin/env python
# Copyright (c) Facebook, Inc. and its affiliates.
import argparse
import os
from itertools import chain
import cv2
import tqdm

from detectron2.config import LazyConfig, instantiate
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data import detection_utils as utils
from detectron2.utils.logger import setup_logger
from detectron2.utils.visualizer import Visualizer


def setup(args):
    cfg = LazyConfig.load(args.config_file)
    cfg = LazyConfig.apply_overrides(cfg, args.opts)
    cfg.dataloader.train.num_workers = 0
    return cfg


def parse_args(in_args=None):
    parser = argparse.ArgumentParser(description="Visualize ground-truth data")
    parser.add_argument(
        "--source",
        choices=["annotation", "dataloader"],
        required=True,
        help="visualize the annotations or the data loader (with pre-processing)",
    )
    parser.add_argument("--config-file", metavar="FILE", help="path to config file")
    parser.add_argument("--output-dir", default="./", help="path to output directory")
    parser.add_argument("--show", action="store_true", help="show output in a window")
    parser.add_argument(
        "opts",
        help="Modify config options using the command-line",
        default=None,
        nargs=argparse.REMAINDER,
    )
    return parser.parse_args(in_args)


if __name__ == "__main__":
    args = parse_args()
    logger = setup_logger()
    logger.info("Arguments: " + str(args))
    cfg = setup(args)

    dirname = args.output_dir
    os.makedirs(dirname, exist_ok=True)
    if isinstance(cfg.dataloader.train.dataset.names, str):
        names = [cfg.dataloader.train.dataset.names]
    else:
        names = cfg.dataloader.train.dataset.names
    metadata = MetadataCatalog.get(names[0])

    def output(vis, fname):
        if args.show:
            print(fname)
            cv2.imshow("window", vis.get_image()[:, :, ::-1])
            cv2.waitKey()
        else:
            filepath = os.path.join(dirname, fname)
            print("Saving to {} ...".format(filepath))
            vis.save(filepath)

    scale = 1.0
    if args.source == "dataloader":
        train_data_loader = instantiate(cfg.dataloader.train)
        for batch in train_data_loader:
            for per_image in batch:
                # Pytorch tensor is in (C, H, W) format
                img = per_image["image"].permute(1, 2, 0).cpu().detach().numpy()
                img = utils.convert_image_to_rgb(img, cfg.dataloader.train.mapper.img_format)

                visualizer = Visualizer(img, metadata=metadata, scale=scale)
                target_fields = per_image["instances"].get_fields()
                labels = [metadata.thing_classes[i] for i in target_fields["gt_classes"]]
                vis = visualizer.overlay_instances(
                    labels=labels,
                    boxes=target_fields.get("gt_boxes", None),
                    masks=target_fields.get("gt_masks", None),
                    keypoints=target_fields.get("gt_keypoints", None),
                )
                output(vis, str(per_image["image_id"]) + ".jpg")
    else:
        dicts = list(chain.from_iterable([DatasetCatalog.get(k) for k in names]))
        for dic in tqdm.tqdm(dicts):
            img = utils.read_image(dic["file_name"], "RGB")
            visualizer = Visualizer(img, metadata=metadata, scale=scale)
            vis = visualizer.draw_dataset_dict(dic)
            output(vis, os.path.basename(dic["file_name"]))