
Commit

tweak mkdocs (#6)
ceesem authored Apr 15, 2024
1 parent 595d60f commit bd33d25
Showing 78 changed files with 227 additions and 14,520 deletions.
1 change: 0 additions & 1 deletion docs/.gitignore

This file was deleted.

60 changes: 0 additions & 60 deletions docs/_quarto.yml

This file was deleted.

19 changes: 0 additions & 19 deletions docs/_sidebar.yml

This file was deleted.

4 changes: 0 additions & 4 deletions docs/_variables.yml

This file was deleted.

472 changes: 0 additions & 472 deletions docs/getting_started.html

This file was deleted.

52 changes: 52 additions & 0 deletions docs/getting_started.md
@@ -0,0 +1,52 @@
---
title: Getting Started
---

!!! important
    If using imageryclient on a CAVE-hosted dataset, we recommend installing [CAVEclient](https://caveclient.readthedocs.io/) for easier access to data. If you do, please see the [CAVEclient documentation](https://caveconnectome.github.io/CAVEclient/) for more information.

We make use of [Numpy arrays](https://numpy.org/doc/stable/) and [Pillow Images](https://pillow.readthedocs.io/) to represent data.
Both are extremely rich tools; to learn more about them, including how to save data to image files, please see their documentation.
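
For example, a cutout returned as a Numpy array can be turned into a Pillow Image and written to disk. This is a minimal sketch; the random array below stands in for a real cutout.

```python
import numpy as np
from PIL import Image

# Stand-in for a 2d uint8 cutout array such as those returned by ImageryClient
image = (np.random.rand(256, 256) * 255).astype('uint8')

# Convert to a Pillow Image (transposed, following the convention used below) and save it
Image.fromarray(image.T).save('cutout.png')
```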

## Installation

ImageryClient can be installed with pip:

`pip install imageryclient`

While not required, if you are using a CAVE-hosted dataset, installing [CAVEclient](https://caveclient.readthedocs.io/) will make your life much easier.
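
For example, a CAVEclient can typically be passed to `ImageryClient` so that the image and segmentation sources come from the datastack configuration. A hedged sketch; the datastack name below is illustrative and requires a saved CAVE token:

```python
from caveclient import CAVEclient
import imageryclient as ic

# Illustrative datastack name; requires a CAVE token saved locally
client = CAVEclient('minnie65_public')

# Image and segmentation sources are then taken from the datastack info
img_client = ic.ImageryClient(client=client)
```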

## Troubleshooting

If you have installation issues due to CloudVolume, which can have a complex set of requirements, we recommend looking at its github [issues page](https://github.com/seung-lab/cloud-volume/issues) for help.


## Basic example

A small example that uses all of the major components of ImageryClient: downloading aligned imagery and segmentation, specifying which segmentations to visualize, and generating an image overlay.
This uses the publicly available [Kasthuri et al. 2014 dataset](https://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22y%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22z%22:%5B3.0000000000000004e-8%2C%22m%22%5D%7D%2C%22position%22:%5B5523.99072265625%2C8538.9384765625%2C1198.0423583984375%5D%2C%22projectionOrientation%22:%5B-0.0040475670248270035%2C-0.9566215872764587%2C-0.22688281536102295%2C-0.18271005153656006%5D%2C%22layers%22:%5B%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image%22%2C%22tab%22:%22source%22%2C%22name%22:%22original-image%22%2C%22visible%22:false%7D%2C%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image_color_corrected%22%2C%22tab%22:%22source%22%2C%22name%22:%22corrected-image%22%7D%2C%7B%22type%22:%22segmentation%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/ground_truth%22%2C%22tab%22:%22source%22%2C%22selectedAlpha%22:0.63%2C%22notSelectedAlpha%22:0.14%2C%22segments%22:%5B%223208%22%2C%224901%22%2C%2213%22%2C%224965%22%2C%224651%22%2C%222282%22%2C%223189%22%2C%223758%22%2C%2215%22%2C%224027%22%2C%223228%22%2C%22444%22%2C%223207%22%2C%223224%22%2C%223710%22%5D%2C%22name%22:%22ground_truth%22%7D%5D%2C%22layout%22:%224panel%22%7D).

```python
import imageryclient as ic

img_src = 'precomputed://gs://neuroglancer-public-data/kasthuri2011/image_color_corrected'
seg_src = 'precomputed://gs://neuroglancer-public-data/kasthuri2011/ground_truth'

img_client = ic.ImageryClient(image_source=img_src, segmentation_source=seg_src)

bounds = [
    [5119, 8477, 1201],
    [5519, 8877, 1202]
]
root_ids = [2282, 4845]

image, segs = img_client.image_and_segmentation_cutout(
    bounds,
    split_segmentations=True,
    root_ids=root_ids,
)


ic.composite_overlay(segs, imagery=image)
```

![Expected imagery overlay from the code above](images/seg_overlay_0.png)

50 changes: 0 additions & 50 deletions docs/getting_started.qmd

This file was deleted.

14 changes: 7 additions & 7 deletions docs/tutorials/images.qmd → docs/images.md
@@ -35,7 +35,7 @@ from PIL import Image
Image.fromarray(image.T)
```

![A 2d image cutout at full resolution specified by bounds](../example_images/img_base.png)
![A 2d image cutout at full resolution specified by bounds](images/img_base.png)

Since we often want to center an image on a particular analysis point, we can alternatively define a center point and the width/height/depth of the bounding box (in voxels).
The same image can be achieved with this specification.
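
A minimal sketch of this center-point specification (the center point and sizes below are illustrative):

```python
ctr = [5319, 8677, 1201]     # center point in voxels
img_size = (400, 400, 1)     # width, height, depth in voxels

image = img_client.image_cutout(ctr, image_size=img_size)
Image.fromarray(image.T)
```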
@@ -83,7 +83,7 @@ image = img_client.image_cutout(bounds, mip=3)
Image.fromarray(image.T)
```

![Imagery at MIP 3 resolution within specific bounds](../example_images/scaled_mip_3.png)
![Imagery at MIP 3 resolution within specific bounds](images/scaled_mip_3.png)

And using specified pixel dimensions:
```python
@@ -92,7 +92,7 @@ image = img_client.image_cutout(ctr, mip=3, image_size=img_size)
Image.fromarray(image.T)
```

![Imagery at MIP 3 with the specified image size](../example_images/exact_mip_3.png)
![Imagery at MIP 3 with the specified image size](images/exact_mip_3.png)

You can also use the `scale_to_bounds=True` argument to upscale an image to the size specified in the bounding box, equivalent to having one pixel for each voxel as measured by the resolution parameter.
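
A minimal sketch, reusing the `bounds` and MIP level from above:

```python
# Upscale the MIP-3 cutout so it has one pixel per voxel of the bounding box
image = img_client.image_cutout(bounds, mip=3, scale_to_bounds=True)
Image.fromarray(image.T)
```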

@@ -110,7 +110,7 @@ import numpy as np
Image.fromarray( (seg.T / np.max(seg) * 255).astype('uint8') )
```

![Segmentation of all objects within bounds](../example_images/seg_base.png)
![Segmentation of all objects within bounds](images/seg_base.png)

Specific root ids can also be specified. All pixels outside those root ids have a value of 0.

@@ -120,7 +120,7 @@ seg = img_client.segmentation_cutout(bounds, root_ids=root_ids)
Image.fromarray( (seg.T / np.max(seg) * 255).astype('uint8') )
```

![segmentation of specific objects within bounds](../example_images/seg_specific.png)
![segmentation of specific objects within bounds](images/seg_specific.png)


### Split segmentations
@@ -133,7 +133,7 @@ split_seg = img_client.split_segmentation_cutout(bounds, root_ids=root_ids)
Image.fromarray((split_seg[ root_ids[0] ].T * 255).astype('uint8'))
```

![A single object's segmentation mask from the dictionary of segmentation masks.](../example_images/seg_single.png)
![A single object's segmentation mask from the dictionary of segmentation masks.](images/seg_single.png)

### Aligned cutouts

@@ -176,6 +176,6 @@ image, segs = img_client.image_and_segmentation_cutout(ctr,

ic.composite_overlay(segs, imagery=image, palette='husl')
```
![Example segmentation cutout from the MICrONs dataset](../example_images/microns_example.png)
![Example segmentation cutout from the MICrONs dataset](images/microns_example.png)

Note that the following code requires setting up a CAVE token to access the server. [See here for details](https://github.com/AllenInstitute/MicronsBinder/blob/master/notebooks/mm3_intro/CAVEsetup.ipynb).
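
A hedged sketch of the token setup using the CAVEclient auth interface (the token string is a placeholder; see the link above for the authoritative steps):

```python
from caveclient import CAVEclient

client = CAVEclient()        # no datastack needed just to manage tokens
client.auth.get_new_token()  # prints instructions and a URL for generating a token
client.auth.save_token(token="PASTE_YOUR_TOKEN_HERE")
```
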
File renamed without changes
File renamed without changes
File renamed without changes
Binary file added docs/images/logo-cleanest.png
Binary file added docs/images/logo-inverted.png
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
