Add Read the Docs documentation (#31)
* Update badges

* Update python versions

* Initial commit of docs

* Initial commit of requirements-docs.txt

* Add README to RTD landing page

* Update theme

* Add API to index TOC

* Update name and copyright year

* Rearrange README

* Add link to Kiosk docs

* Add example notebooks

* Update notebook titles

* Add descriptions to __init__.py files

* Update imports in __init__.py

* Add skimage as mock import

* Add deepcell-toolbox as mock import

* Add sklearn as mock import

* Add cv2 as mock import

* Add keras_preprocessing as mock import

* Add pyro as mock import

* Remove pyro version assertion

* Update README and add example image

* Add example image to README

* Update branch of example image
elaubsch authored Jun 11, 2022
1 parent 252c683 commit aba8f0c
Showing 36 changed files with 1,254 additions and 638 deletions.
20 changes: 20 additions & 0 deletions .readthedocs.yml
@@ -0,0 +1,20 @@
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/source/conf.py

# Optionally build your docs in additional formats such as PDF and ePub
formats:
  - htmlzip

# Optionally set the version of Python and requirements required to build your docs
python:
  version: 3.7
  install:
    - requirements: docs/requirements-docs.txt
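
The commit messages above add several heavy dependencies (skimage, deepcell-toolbox, sklearn, cv2, keras_preprocessing, and pyro) as mock imports so that Read the Docs can build the API reference without installing them. The commit does not show `docs/source/conf.py` itself, but a minimal sketch of that pattern, assuming a standard `sphinx.ext.autodoc` setup, might look like this (all names and values below are illustrative assumptions, not the file from this commit):

```python
# Hypothetical sketch of docs/source/conf.py -- not the file added by this commit.

project = 'deepcell-spots'
copyright = '2019-2022, The Van Valen Lab'  # matches the README copyright line

extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.napoleon',  # assumption: Google-style docstrings, as used in the diff below
]

# Assumption: the "Update theme" commit message refers to the Read the Docs theme.
html_theme = 'sphinx_rtd_theme'

# autodoc_mock_imports tells sphinx.ext.autodoc to stub out packages that are
# too heavy or unavailable in the Read the Docs build environment.
# These are the packages named in the commit messages above.
autodoc_mock_imports = [
    'skimage',
    'deepcell_toolbox',  # import name for the deepcell-toolbox package
    'sklearn',
    'cv2',
    'keras_preprocessing',
    'pyro',
]
```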
34 changes: 18 additions & 16 deletions README.md
@@ -7,7 +7,24 @@
[![PyPi Monthly Downloads](https://img.shields.io/pypi/dm/deepcell-spots)](https://pypistats.org/packages/deepcell-spots)
[![Python Versions](https://img.shields.io/pypi/pyversions/deepcell-spots.svg)](https://pypi.org/project/deepcell-spots/)

`deepcell-spots` is a deep learning library for fluorescent spot detection image analysis. It allows you to apply pre-existing models and train new deep learning models for spot detection. It is written in Python and built using [TensorFlow](https://github.com/tensorflow/tensorflow), [Keras](https://www.tensorflow.org/guide/keras) and [DeepCell](https://github.com/vanvalenlab/deepcell-tf).
`deepcell-spots` is a deep learning library for fluorescent spot detection image analysis. It allows you to apply pre-existing models and train new deep learning models for spot detection. It is written in Python and built using [TensorFlow](https://github.com/tensorflow/tensorflow), [Keras](https://www.tensorflow.org/guide/keras) and [DeepCell](https://github.com/vanvalenlab/deepcell-tf). More detailed documentation is available [here](https://deepcell-spots.readthedocs.io/).

# ![Spot Detection Example](https://raw.githubusercontent.com/vanvalenlab/deepcell-spots/master/docs/images/spot_montage.png)

## DeepCell Spots Application

`deepcell-spots` contains applications that greatly simplify the implementation of deep learning models for spot detection. `deepcell-spots.applications.SpotDetection` contains a pre-trained model for fluorescent spot detection on images derived from assays such as RNA FISH and in situ sequencing. This model returns a list of coordinate locations for fluorescent spots detected in the input image. `deepcell-spots.applications.Polaris` pairs this spot detection model with [DeepCell](https://github.com/vanvalenlab/deepcell-tf) models for nuclear and cytoplasmic segmentation.

### How to Use

```python
from deepcell_spots.applications import SpotDetection

app = SpotDetection()
# image is an np array with dimensions (batch, x, y, channel)
# threshold is the probability threshold that a pixel must exceed to be considered a spot
coords = app.predict(image, threshold=0.9)
```
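
The `Polaris` application described above is not shown in this section of the README; a hedged usage sketch, following the docstring added to `deepcell_spots/applications/polaris.py` later in this commit (argument names and result keys are taken from that docstring), would be:

```python
import numpy as np
from skimage.io import imread
from deepcell_spots.applications import Polaris

# spots_im and cyto_im are np arrays with dimensions (batch, x, y, channel)
spots_im = np.expand_dims(imread('spots_image.png'), axis=[0, -1])
cyto_im = np.expand_dims(imread('cyto_image.png'), axis=[0, -1])

app = Polaris()
result = app.predict(spots_image=spots_im, segmentation_image=cyto_im)
spots_dict = result[0]['spots_assignment']   # spots assigned to cells
labeled_im = result[0]['cell_segmentation']  # labeled segmentation mask
coords = result[0]['spot_locations']         # detected spot coordinates
```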

## DeepCell-Spots for Developers

@@ -42,21 +59,6 @@ docker run --gpus '"device=0"' -it \
$USER/deepcell-spots
```

## DeepCell Spots Application

`deepcell-spots` contains an application that greatly simplifies the implementation of deep learning models for spot detection. `deepcell-spots.applications` contains a pre-trained model for fluorescent spot detection on images derived from assays such as RNA FISH and in-situ sequencing. This model returns a list of coordinate locations for fluorescent spots detected in the input image.

### How to Use

```python
from deepcell_spots.applications import Polaris

app = Polaris()
# image is an np array with dimensions (batch,x,y,channel)
# threshold is the probability threshold that a pixel must exceed to be considered a spot
coords = app.predict(image,threshold=0.9)
```

## Copyright

Copyright © 2019-2022 [The Van Valen Lab](http://www.vanvalen.caltech.edu/) at the California Institute of Technology (Caltech), with support from the Shurl and Kay Curci Foundation, Google Research Cloud, the Paul Allen Family Foundation, & National Institutes of Health (NIH) under Grant U24CA224309-01.
32 changes: 17 additions & 15 deletions deepcell_spots/__init__.py
@@ -24,21 +24,23 @@
# limitations under the License.
# ==============================================================================

"""Package for fluorescent spot detection with convolutional neural networks"""

from deepcell_spots import applications
from deepcell_spots._version import __version__

# from deepcell_spots import cluster_vis
# from deepcell_spots import data_utils
# from deepcell_spots import dotnet_losses
# from deepcell_spots import dotnet
# from deepcell_spots import image_alignment
# from deepcell_spots import image_generators
# from deepcell_spots import multiplex
# from deepcell_spots import point_metrics
# from deepcell_spots import postprocessing_utils
# from deepcell_spots import preprocessing_utils
# from deepcell_spots import simulate_data
# from deepcell_spots import singleplex
# from deepcell_spots import spot_em
# from deepcell_spots import training
# from deepcell_spots import utils
from deepcell_spots import cluster_vis
from deepcell_spots import data_utils
from deepcell_spots import dotnet_losses
from deepcell_spots import dotnet
from deepcell_spots import image_alignment
from deepcell_spots import image_generators
from deepcell_spots import multiplex
from deepcell_spots import point_metrics
from deepcell_spots import postprocessing_utils
from deepcell_spots import preprocessing_utils
from deepcell_spots import simulate_data
from deepcell_spots import singleplex
from deepcell_spots import spot_em
from deepcell_spots import training
from deepcell_spots import utils
2 changes: 2 additions & 0 deletions deepcell_spots/applications/__init__.py
@@ -24,5 +24,7 @@
# limitations under the License.
# ==============================================================================

"""Applications for pre-trained spot detection models"""

from deepcell_spots.applications.spot_detection import SpotDetection
from deepcell_spots.applications.polaris import Polaris
9 changes: 9 additions & 0 deletions deepcell_spots/applications/polaris.py
@@ -40,26 +40,35 @@
class Polaris(object):
"""Loads spot detection and cell segmentation applications
from deepcell_spots and deepcell_tf, respectively.
The ``predict`` method calls the predict method of each
application.
Example:
.. code-block:: python
import numpy as np
from skimage.io import imread
from deepcell_spots.applications import Polaris
# Load the images
spots_im = imread('spots_image.png')
cyto_im = imread('cyto_image.png')
# Expand image dimensions to rank 4
spots_im = np.expand_dims(spots_im, axis=[0,-1])
cyto_im = np.expand_dims(cyto_im, axis=[0,-1])
# Create the application
app = Polaris()
# Find the spot locations
result = app.predict(spots_image=spots_im,
segmentation_image=cyto_im)
spots_dict = result[0]['spots_assignment']
labeled_im = result[0]['cell_segmentation']
coords = result[0]['spot_locations']
Args:
segmentation_model (tf.keras.Model): The model to load.
If ``None``, a pre-trained model will be downloaded.
34 changes: 32 additions & 2 deletions deepcell_spots/applications/spot_detection.py
@@ -23,6 +23,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Spot detection application"""

from __future__ import absolute_import, division, print_function
@@ -43,28 +44,47 @@


def output_to_dictionary(output_images, output_names):
"""Formats model output from list to dictionary.
Args:
output_images (list): Model output list of length 2 containing
classification prediction and regression prediction
output_names (list): Model output names
Returns:
Dictionary with output names as keys and output images as values
"""
return {name: pred for name, pred in zip(output_names,
output_images)}
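
For illustration, a small hedged example of calling `output_to_dictionary`; the output names below are placeholders for this sketch, not necessarily the names the model actually produces:

```python
import numpy as np

from deepcell_spots.applications.spot_detection import output_to_dictionary

# Placeholder arrays standing in for the model's classification and regression outputs.
classification_pred = np.zeros((1, 128, 128, 2))
regression_pred = np.zeros((1, 128, 128, 2))

outputs = output_to_dictionary([classification_pred, regression_pred],
                               ['classification', 'offset_regression'])
# outputs['classification'] is classification_pred, and so on.
```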


class SpotDetection(Application):
"""Loads a :mod:`deepcell.model_zoo.featurenet.FeatureNet` model
for fluorescent spot detection with pretrained weights.
The ``predict`` method handles pre- and post-processing steps
to return a list of spot locations.
Example:
.. code-block:: python
import numpy as np
from skimage.io import imread
from deepcell_spots.applications import SpotDetection
# Load the image
im = imread('spots_image.png')
# Expand image dimensions to rank 4
im = np.expand_dims(im, axis=-1)
im = np.expand_dims(im, axis=0)
# Create the application
app = SpotDetection()
# Find spot locations
coords = app.predict(im)
Args:
model (tf.keras.Model): The model to load. If ``None``,
a pre-trained model will be downloaded.
@@ -169,9 +189,12 @@ def _predict(self,
"""Generates a list of coordinate spot locations of the input running
prediction with appropriate pre and post processing functions.
This differs from parent Application class which returns a labeled image.
Input images are required to have 4 dimensions
``[batch, x, y, channel]``. Additional empty dimensions can be added
using ``np.expand_dims``.
``[batch, x, y, channel]``.
Additional empty dimensions can be added using ``np.expand_dims``.
Args:
image (numpy.array): Input image with shape
``[batch, x, y, channel]``.
@@ -181,10 +204,12 @@
pre-processing function.
postprocess_kwargs (dict): Keyword arguments to pass to the
post-processing function.
Raises:
ValueError: Input data must match required rank, calculated as one
dimension more (batch dimension) than expected by the model.
ValueError: Input data must match required number of channels.
Returns:
numpy.array: Coordinate spot locations
"""
@@ -221,9 +246,12 @@ def predict(self,
"""Generates a list of coordinate spot locations of the input
running prediction with appropriate pre and post processing
functions.
Input images are required to have 4 dimensions
``[batch, x, y, channel]``.
Additional empty dimensions can be added using ``np.expand_dims``.
Args:
image (numpy.array): Input image with shape
``[batch, x, y, channel]``.
@@ -236,12 +264,14 @@
threshold (float): Probability threshold for a pixel to be
considered as a spot.
clip (bool): Determines if pixel values will be clipped by percentile.
Raises:
ValueError: Input data must match required rank of the application,
calculated as one dimension more (batch dimension) than
expected by the model.
ValueError: Input data must match required number of channels.
ValueError: Threshold value must be between 0 and 1.
Returns:
numpy.array: Coordinate locations of detected spots.
"""
27 changes: 1 addition & 26 deletions deepcell_spots/cluster_vis.py
@@ -49,31 +49,6 @@ def jitter(coords, size):
return np.array(result)


def ca_to_adjacency_matrix(ca_matrix):
num_clusters = np.shape(ca_matrix)[0]
num_annnotators = np.shape(ca_matrix)[1]
tot_det_list = [sum(ca_matrix[:, i]) for i in range(num_annnotators)]
tot_num_detections = int(sum(tot_det_list))

A = np.zeros((tot_num_detections, tot_num_detections))
for i in range(num_clusters):
det_list = np.ndarray.flatten(np.argwhere(ca_matrix[i] == 1))
combos = list(combinations(det_list, 2))

for ii in range(len(combos)):
ann_index0 = combos[ii][0]
ann_index1 = combos[ii][1]
det_index0 = int(
sum(tot_det_list[:ann_index0]) + sum(ca_matrix[:i, ann_index0]))
det_index1 = int(
sum(tot_det_list[:ann_index1]) + sum(ca_matrix[:i, ann_index1]))

A[det_index0, det_index1] += 1
A[det_index1, det_index0] += 1

return A


def label_graph_ann(G, coords_df, exclude_last=False):
"""Labels the annotator associated with each node in the graph
@@ -117,7 +92,7 @@ def label_graph_gt(G, detection_data, gt):
Intended for simulated data.
Args:
G (networkx graph): Graph with edges indicating clusters of points
G (networkx.Graph): Graph with edges indicating clusters of points
assumed to be derived from the same ground truth detection
detection_data (numpy.array): Matrix with dimensions (number of clusters) x
(number of algorithms) with value of 1 if an algorithm detected
10 changes: 1 addition & 9 deletions deepcell_spots/cluster_vis_test.py
@@ -32,7 +32,7 @@
from scipy.spatial import distance
from tensorflow.python.platform import test

from deepcell_spots.cluster_vis import ca_to_adjacency_matrix, jitter
from deepcell_spots.cluster_vis import jitter


class TestClusterVis(test.TestCase):
@@ -43,14 +43,6 @@ def test_jitter(self):
self.assertEqual(np.shape(coords), np.shape(noisy_coords))
self.assertNotEqual(coords.all(), noisy_coords.all())

def test_ca_to_adjacency_matrix(self):
num_clusters = 10
num_annotators = 3
ca_matrix = np.ones((num_clusters, num_annotators))
A = ca_to_adjacency_matrix(ca_matrix)

self.assertEqual(np.shape(A)[0], np.shape(A)[1], ca_matrix[0])


if __name__ == '__main__':
test.main()