Update README.md with Adaptor docs
mranzinger authored Sep 4, 2024
1 parent 8e5b42b commit 41ec552
Showing 1 changed file with 8 additions and 1 deletion.

README.md
@@ -305,7 +305,7 @@ RADIO allows non-square inputs. In fact, both RADIOv1 and RADIOv2 achieve higher
### Adaptors
_(Currently only supported with TorchHub)_

-You may additionally specify model adaptors to achieve extra behaviors. Currently, 'clip' is the only supported adaptor. In this mode, radio will return a dict of tuples:
+You may additionally specify model adaptors to achieve extra behaviors. In this mode, radio will return a dict of tuples:

```Python
model = torch.hub.load(..., adaptor_names='clip', ...)
@@ -318,6 +318,13 @@ clip_summary, clip_features = output['clip']

Refer to `examples/zero_shot_imagenet.py` for example usage.

+#### Supported Adaptors:
+
+- RADIOv2.5: `clip`, `siglip`, `dino_v2`, `sam`
+- RADIOv2\[.1\]: `clip`, `dino_v2`, `sam`
+
+The `clip` and `siglip` adaptors have the additional functionality of supporting tokenization and language encoding. Refer to `examples/zero_shot_imagenet.py` for this use, as well as the [API](https://github.com/NVlabs/RADIO/blob/main/radio/open_clip_adaptor.py#L33-L36).
+
### Preprocessing

By default, RADIO expects the input images to have normalized values in the `[0, 1]` range. If you already have an existing data pipeline, and you'd like conditioning to occur there instead of within the RADIO model, you can call this function:
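
For reference, a minimal end-to-end sketch of the adaptor usage this commit documents. It is a sketch, not code from the commit: the `'NVlabs/RADIO'` repo path, the `radio_model` entrypoint, the `version` string, and the input shape are assumptions drawn from the surrounding README and may need adjusting.

```Python
import torch

# Load RADIO from TorchHub with the 'clip' adaptor enabled.
# Repo path, entrypoint, and version string are assumed for illustration.
model = torch.hub.load('NVlabs/RADIO', 'radio_model',
                       version='radio_v2.5-b',  # hypothetical version tag
                       progress=True, adaptor_names='clip')
model.eval()

# RADIO expects input values normalized to the [0, 1] range
# (see the Preprocessing section below).
images = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    output = model(images)

# With adaptors enabled, the output is a dict of (summary, features) tuples:
# one entry for the backbone, plus one per requested adaptor.
bb_summary, bb_features = output['backbone']
clip_summary, clip_features = output['clip']
```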
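The tokenization and language-encoding side of the `clip`/`siglip` adaptors can then drive zero-shot classification, as in `examples/zero_shot_imagenet.py`. A rough sketch under stated assumptions: the `model.adaptors` attribute and the `tokenizer`/`encode_text` members are inferred from the linked `open_clip_adaptor.py` API, not verified here.

```Python
import torch.nn.functional as F

# Assumed: the loaded model exposes its adaptors as a dict-like attribute,
# and the CLIP adaptor carries a tokenizer plus a text encoder.
clip_adaptor = model.adaptors['clip']  # assumed attribute

prompts = ['a photo of a cat', 'a photo of a dog']
with torch.no_grad():
    tokens = clip_adaptor.tokenizer(prompts)          # assumed tokenizer call
    text_features = clip_adaptor.encode_text(tokens)  # assumed text encoder

# Standard CLIP-style scoring: cosine similarity between the normalized
# image summary and the normalized text embeddings.
image_emb = F.normalize(clip_summary, dim=-1)
text_emb = F.normalize(text_features, dim=-1)
logits = image_emb @ text_emb.T
pred = logits.argmax(dim=-1)  # index of the best-matching prompt per image
```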
