Update documentation with latest features (#890)
Update documentation
anwai98 authored Feb 25, 2025
1 parent 05fbe45 commit 2c2878c
Showing 6 changed files with 10 additions and 4 deletions.
4 changes: 4 additions & 0 deletions doc/annotation_tools.md
@@ -146,3 +146,7 @@ You can select the image data via `Path to images`. You can either load images f
You can select the label data via `Path to labels` and `Label data key`, following the same logic as for the image data. The label masks are expected to have the same size as the image data. You can, for example, use annotations created with one of the `micro_sam` annotation tools for this; they are stored in the correct format. See [the FAQ](#fine-tuning-questions) for more details on the expected label data.

The `Configuration` option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Details on the configurations can be found [here](#training-your-own-model).

NOTE: We recommend fine-tuning Segment Anything models on your data by
- running `$ micro_sam.train` in the command line, or
- calling `micro_sam.training.train_sam` in a Python script. Check out [examples/finetuning/finetune_hela.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/finetuning/finetune_hela.py) or [notebooks/sam_finetuning.ipynb](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb) for details, and see the sketch below.
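As a quick orientation, a minimal finetuning script could look like the following sketch. The data paths and hyperparameters are placeholders, and the exact keyword arguments of `default_sam_loader` and `train_sam` can differ between `micro_sam` versions, so treat the linked example script and notebook as the authoritative reference.

```python
# Minimal finetuning sketch (paths and hyperparameters are placeholders;
# see the linked example script / notebook for the authoritative version).
import micro_sam.training as sam_training

# Build train and val loaders from tif images and instance label masks.
train_loader = sam_training.default_sam_loader(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=(1, 512, 512), batch_size=1,
    with_segmentation_decoder=True,  # also train the decoder for AIS
)
val_loader = sam_training.default_sam_loader(
    raw_paths="data/val/images", raw_key="*.tif",
    label_paths="data/val/labels", label_key="*.tif",
    patch_shape=(1, 512, 512), batch_size=1,
    with_segmentation_decoder=True,
)

# Finetune a ViT-B Segment Anything model; the checkpoint will be saved
# under the given name in the local checkpoint folder.
sam_training.train_sam(
    name="sam_finetuned", model_type="vit_b",
    train_loader=train_loader, val_loader=val_loader,
    n_epochs=10,
)
```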
2 changes: 2 additions & 0 deletions doc/cli_tools.md
@@ -8,6 +8,7 @@ The supported CLIs can be used by
- Running `$ micro_sam.annotator_3d` for starting the 3d annotator.
- Running `$ micro_sam.annotator_tracking` for starting the tracking annotator.
- Running `$ micro_sam.image_series_annotator` for starting the image series annotator.
- Running `$ micro_sam.train` for finetuning Segment Anything models on your data.
- Running `$ micro_sam.automatic_segmentation` for automatic instance segmentation.
- We support all post-processing parameters for automatic instance segmentation (for both AMG and AIS).
- The automatic segmentation mode can be controlled by `--mode <MODE_NAME>`, where the available choices for `MODE_NAME` are `amg` and `ais`.
@@ -20,5 +21,6 @@ The supported CLIs can be used by
```
- Remember to specify the automatic segmentation mode via `--mode <MODE_NAME>` when passing additional post-processing parameters.
- You can check details for the supported parameters and their respective default values at `micro_sam/instance_segmentation.py`, under the `generate` method of the `AutomaticMaskGenerator` and `InstanceSegmentationWithDecoder` classes.
- A good practice is to set `--ndim <NDIM>`, where `<NDIM>` corresponds to the number of dimensions of the input images (see the example below).
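To illustrate how these options combine, a hypothetical invocation is shown below. The input, output, and model flags are assumptions for this sketch; run `$ micro_sam.automatic_segmentation -h` to confirm the exact argument names of your installed version.

```bash
# Hypothetical example: run AIS on a single 2d image.
# (Flag names are assumptions; verify via `micro_sam.automatic_segmentation -h`.)
micro_sam.automatic_segmentation \
    -i my_image.tif \
    -o segmentation.tif \
    -m vit_b_lm \
    --mode ais \
    --ndim 2
```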

NOTE: For all CLIs above, you can find more details by adding the argument `-h` to the CLI script (e.g. `$ micro_sam.annotator_2d -h`).
2 changes: 1 addition & 1 deletion doc/faq.md
@@ -17,7 +17,7 @@ The installer should work out-of-the-box on Windows and Linux platforms. Please

### 3. What is the minimum system requirement for `micro_sam`?
From our experience, the `micro_sam` annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM.
You might encounter some slowness for $\leq$ 8GB RAM. The resources `micro_sam`'s annotation tools have been tested on are:
You might encounter some slowness for ≤ 8GB RAM. The resources `micro_sam`'s annotation tools have been tested on are:
- Windows:
- Windows 10 Pro, Intel i5 7th Gen, 8GB RAM
- Windows 10 Enterprise LTSC, Intel i7 13th Gen, 32GB RAM
2 changes: 1 addition & 1 deletion doc/installation.md
@@ -33,7 +33,7 @@ conda activate micro-sam
```

This will also install `pytorch` from the `conda-forge` channel. If you have a recent enough operating system, it will automatically install the most suitable `pytorch` version for your system.
This means it will install the CPU version if you don't have a nVidia GPU, and will install a GPU version if you have.
This means it will install the CPU version if you don't have an NVIDIA GPU, and the GPU version if you do.
However, if you have an older operating system, or a CUDA version older than 12, then it may not install the correct version. In this case you will have to specify your CUDA version, for example for CUDA 11, like this:
```bash
conda install -c conda-forge micro_sam "libtorch=*=cuda11*"
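# As a hypothetical sanity check (not part of the original docs), you can
# afterwards verify that a CUDA-enabled pytorch was installed:
python -c "import torch; print(torch.cuda.is_available())"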
2 changes: 1 addition & 1 deletion doc/python_library.md
@@ -30,7 +30,7 @@ We reimplement the training logic described in the [Segment Anything publication
We use this functionality to provide the [finetuned microscopy models](#finetuned-models) and it can also be used to train models on your own data.
In fact, the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.
So a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune on the annotated data.
We recommend checking out our latest [preprint](https://doi.org/10.1101/2023.08.21.554208) for details on the results on how much data is required for finetuning Segment Anything.
We recommend checking out our [paper](https://www.nature.com/articles/s41592-024-02580-4) for details on the results on how much data is required for finetuning Segment Anything.

The training logic is implemented in `micro_sam.training` and is based on [torch-em](https://github.com/constantinpape/torch-em). Check out [the finetuning notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb) to see how to use it.
We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of Segment Anything and is significantly faster (see the example below).
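To sketch how a model with this extra decoder can then be used for automatic instance segmentation from python, a hedged example follows. The function names come from `micro_sam.automatic_segmentation`, but the exact arguments may differ between versions, so check the API documentation of your installation.

```python
# Hedged sketch of automatic instance segmentation (AIS) from python;
# exact arguments may differ between micro_sam versions.
from micro_sam.automatic_segmentation import (
    automatic_instance_segmentation, get_predictor_and_segmenter,
)

# Load a model that includes an instance segmentation decoder,
# e.g. one of the finetuned light microscopy models.
predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_lm")

# Run AIS on a 2d image ("my_image.tif" is a placeholder path).
instances = automatic_instance_segmentation(
    predictor=predictor, segmenter=segmenter,
    input_path="my_image.tif", output_path="segmentation.tif", ndim=2,
)
```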
2 changes: 1 addition & 1 deletion doc/start_page.md
@@ -48,6 +48,6 @@ You can also train models on your own data, see [here for details](#training-your
## Citation

If you are using `micro_sam` in your research, please cite
- our [preprint](https://doi.org/10.1101/2023.08.21.554208)
- our [paper](https://www.nature.com/articles/s41592-024-02580-4) (now published in Nature Methods!)
- and the original [Segment Anything publication](https://arxiv.org/abs/2304.02643).
- If you use a `vit-tiny` model, please also cite [Mobile SAM](https://arxiv.org/abs/2306.14289).
