Streamlines (#2)
* Add inverse transform

* Initial commit of streamlines

* prebaked v1dd streamline

* Integrate datasets

* simplify streamline api

* fix streamline distance function

* pass function correctly

* working and efficient streamline depth functions

* text change

* Fix arbitrary streamlines

* improve streamline tests

* documentation improvements
ceesem authored Apr 10, 2023
1 parent 3c1cc13 commit df995e9
Showing 9 changed files with 539 additions and 15 deletions.
3 changes: 2 additions & 1 deletion MANIFEST.in
@@ -1 +1,2 @@
include requirements.txt
include requirements.txt
include standard_transform/data/*.json
76 changes: 71 additions & 5 deletions README.md
@@ -1,12 +1,14 @@
# standard_transforms

Orient and scale points in EM datasets the same way!
Orient and scale points in EM datasets consistently and easily.

When working with EM data, often the orientation of the dataset does not match the desired orientation in space. For example, in cortical data you might want "down" to correspond to the direction orthogonal to the pial surface. This package includes prebaked affine transforms for two datasets, Minnie65 and v1dd, to convert from voxel or nanometer coordinates to a consistent oriented frame in microns.
When working with EM data, often the orientation of the dataset does not match the desired orientation in space. For example, in cortical data you might want "down" to correspond to the direction orthogonal to the pial surface. This package includes prebaked affine transforms for two datasets, Minnie65 and V1dd, to convert from voxel or nanometer coordinates to a consistent oriented frame in microns.

Install via `pip install standard-transform`.

## Usage
# Usage

## Transforms

At its simplest, we import the transform we want, initialize an object, and then are ready to rotate, scale, and translate away!
Let's start with going from nanometer space in Minnie to the oriented space.
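
For example, a minimal sketch using the Minnie65 nanometer transform (the point values below are arbitrary placeholders, not real annotations):

```python
import numpy as np
from standard_transform import minnie_transform_nm

tform_nm = minnie_transform_nm()

# Hypothetical points in nanometers, one xyz location per row.
pts_nm = np.array([
    [350000, 250000, 800000],
    [400000, 300000, 820000],
])

# Points in the oriented frame, in microns, with the y axis along depth.
pts_um = tform_nm.apply(pts_nm)
```
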
@@ -49,9 +51,73 @@ To get the projection functionality with the `apply_dataframe` method, pass it a
pts_out_x = tform_nm.apply_dataframe(column_name, df, projection='x')
```
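
As a fuller sketch, with a hypothetical dataframe (the column name and point values are placeholders):

```python
import numpy as np
import pandas as pd
from standard_transform import minnie_transform_nm

tform_nm = minnie_transform_nm()

# Hypothetical dataframe with one xyz point (in nanometers) per row.
df = pd.DataFrame(
    {
        "pt_position": [
            np.array([350000, 250000, 800000]),
            np.array([400000, 300000, 820000]),
        ]
    }
)

# Transformed points, one list entry per row of the dataframe.
pts_out = tform_nm.apply_dataframe("pt_position", df)

# Only the x component of the transformed points.
pts_out_x = tform_nm.apply_dataframe("pt_position", df, projection="x")
```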

## Available transforms
Finally, if you have points in the transformed space and want to back them out to the original coordinates, you can invert the transform of any TransformSequence.

```python
pts_orig = tform.invert(pts_transformed)
```
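
As a quick check, applying a transform and then inverting it should recover the original coordinates up to floating-point error (a sketch with placeholder values):

```python
import numpy as np
from standard_transform import v1dd_transform_nm

tform = v1dd_transform_nm()

pts = np.array([[900000.0, 300000.0, 400000.0]])  # hypothetical points in nanometers
pts_transformed = tform.apply(pts)

assert np.allclose(tform.invert(pts_transformed), pts)
```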

## Streamlines

Because mammalian cortex is strongly organized in a laminar manner, it's often useful to distinguish radial distance from distance along the depth axis.
However, comparing the radial distance of points at different depths is complicated by the fact that the depth axis, which we call a "streamline," is curvilinear and changes orientation with depth.
The Streamline class helps with computing radial distances between arbitrary locations, as well as with storing particular streamlines in a consistent way.

Currently we only offer a streamline for the v1dd dataset, but a minnie65 one is forthcoming.
Like transforms, you must specify whether the original data is in nanometers or voxels, and each streamline is associated with a particular transform in order to use its y-axis orientation.
```python
from standard_transform import v1dd_streamline_nm, v1dd_streamline_vx
```
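
These are factory functions; calling one returns a ready-to-use streamline object (a minimal sketch):

```python
from standard_transform import v1dd_streamline_nm

# Streamline that expects input points in nanometer coordinates.
streamline = v1dd_streamline_nm()
```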

A Streamline object has a few key functions.
To get the x,z locations of a streamline passing through a core point `xyz` at a depth or depths `y` in the post-transform space:
```python
streamline.streamline_at(xyz, y)
```
The argument `return_as_point` can be set to `True` to return an Nx3 array instead of the x and z components separately.
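
For example, a minimal sketch (the core point and depths below are arbitrary placeholders in the post-transform micron space):

```python
import numpy as np
from standard_transform import v1dd_streamline_nm

streamline = v1dd_streamline_nm()

xyz = np.array([400.0, 200.0, 300.0])  # hypothetical core point, post-transform microns
depths = np.linspace(0, 800, 9)        # hypothetical depths, post-transform microns

# x and z locations of the streamline through xyz at each depth...
x_locs, z_locs = streamline.streamline_at(xyz, depths)

# ...or the same locations as a single N x 3 array of points.
pts = streamline.streamline_at(xyz, depths, return_as_point=True)
```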

To translate the points defining the streamline so that it intersects a point `xyz` in the pre-transform space:
```python
streamline.streamline_points_tform(xyz)
```

To compute the radial distance in the post-transform space between two points given in the pre-transform space:
```python
streamline.radial_distance(xyz0, xyz1)
```

One can also get the curvilinear distance, or depth along the streamline, between a point in the pre- or post-transform space and y=0 in the post-transform space:
```python
streamline.depth_along(xyz, depth_from=0)
```

Similarly, you can put all of these together to get a point in the curvilinear space of the streamline, akin to cylindrical coordinates, where the x axis is radial distance and the y axis is depth along the streamline. For each point, an equivalent location in x,z space is found with the same radial distance and angle as the original x and z.
```python
xyz_rad = streamline.radial_points(xyz0, pts)
```

## Datasets

Datasets are a convenient way to store a transform and streamline together, and are the easiest way to get started.
There are two datasets currently available, `minnie65` and `v1dd`.
To get a dataset, simply import it; each dataset bundles a transform and a streamline, and can be used to transform points or get streamline information.
For example, to transform a particular location in v1dd from nanometers:
```python
from standard_transform import minnie_ds, v1dd_ds

v1dd_ds.transform_nm.apply(xyz)
```
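
The voxel-resolution transform can be used the same way (a sketch, assuming a placeholder point at the dataset's default voxel resolution):

```python
import numpy as np
from standard_transform import v1dd_ds

# Hypothetical point in v1dd's default voxel resolution of (9, 9, 45) nm voxels.
xyz_vx = np.array([100000, 30000, 9000])
v1dd_ds.transform_vx.apply(xyz_vx)
```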

Or to use the dataset's streamline, for example to compute the radial distance between two points:
```python
v1dd_ds.streamline_nm.radial_distance(xyz0, xyz1)
```

A particular resolution can be used by passing it as an argument to `dataset.transform_res(voxel_resolution)` or `dataset.streamline_res(voxel_resolution)`.
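
For example, a minimal sketch with a hypothetical alternate resolution:

```python
from standard_transform import v1dd_ds

# Hypothetical data stored at a downsampled (18, 18, 45) nm voxel resolution.
tform = v1dd_ds.transform_res([18, 18, 45])
sl = v1dd_ds.streamline_res([18, 18, 45])
```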

There are four transforms currently available, two for each dataset.
Note: Currently a data-driven streamline is only available for v1dd; Minnie65 uses a placeholder "identity streamline" that is just a straight line in the y direction.

### Minnie65

1 change: 1 addition & 0 deletions setup.py
@@ -43,6 +43,7 @@ def find_version(*file_paths):
long_description_content_type="text/markdown",
install_requires=required,
include_package_data=True,
package_data={'standard_transform': ['data/*.json']},
dependency_links=dependency_links,
url="https://github.com/ceesem/standard_transform",
packages=["standard_transform"],
15 changes: 13 additions & 2 deletions standard_transform/__init__.py
@@ -1,3 +1,14 @@
from .datasets import minnie_transform_nm, minnie_transform_vx, v1dd_transform_nm, v1dd_transform_vx, identity_transform
from .datasets import (
minnie_transform_nm,
minnie_transform_vx,
v1dd_transform_nm,
v1dd_transform_vx,
identity_transform,
identity_streamline,
v1dd_streamline_nm,
v1dd_streamline_vx,
v1dd_ds,
minnie_ds,
)

__version__ = "1.1.0"
__version__ = "1.1.0"
52 changes: 52 additions & 0 deletions standard_transform/base.py
Expand Up @@ -16,6 +16,9 @@ def __init__(self, scaling):

def apply(self, pts):
return np.atleast_2d(pts) * self._scaling

def invert(self, pts):
return np.atleast_2d(pts) / self._scaling

def __repr__(self):
return f"Scale by {self._scaling}"
@@ -32,6 +35,9 @@ def __init__(self, translate):
def apply(self, pts):
return np.atleast_2d(pts) + self._translate

def invert(self, pts):
return np.atleast_2d(pts) - self._translate

def __repr__(self):
return f"Translate by {self._translate}"

@@ -45,6 +51,9 @@ def __init__(self, *params, **param_kwargs):
def apply(self, pts):
return self._transform.apply(np.atleast_2d(pts))

def invert(self, pts):
return self._transform.inv().apply(np.atleast_2d(pts))

def __repr__(self):
return f"Rotate with params {self._params} and {self._param_kwargs}"

@@ -76,6 +85,26 @@ def apply(self, pts, as_int=False):
else:
return self.list_apply(pts, as_int=as_int)

def invert(self, pts_tf, as_int=False):
"""Invert points post-transform back into the original coordinate system
Parameters
----------
pts_tf : array-like
Points in the post-transform coordinate system
as_int : bool, optional
Return locations as integers, by default False
Returns
-------
array-like
Points in the original coordinate system
"""
if isinstance(pts_tf, pd.Series):
return self.column_invert(pts_tf, as_int=as_int)
else:
return self.list_invert(pts_tf, as_int=as_int)

def list_apply(self, pts, as_int=False):
pts = np.array(pts)
orig_shape = pts.shape
@@ -87,6 +116,17 @@ def list_apply(self, pts, as_int=False):
else:
return pts

def list_invert(self, pts, as_int=False):
pts = np.array(pts)
orig_shape = pts.shape
for t in self._transforms[::-1]:
pts = t.invert(pts)
pts = np.reshape(pts, orig_shape)
if as_int:
return pts.astype(int)
else:
return pts

def column_apply(self, col, return_array=False, as_int=False):
pts = np.vstack(col)
out = self.apply(pts)
@@ -95,6 +135,14 @@ def column_apply(self, col, return_array=False, as_int=F
else:
return self.apply(pts, as_int=as_int).tolist()

def column_invert(self, col, return_array=False, as_int=False):
pts = np.vstack(col)
if return_array:
return self.invert(pts, as_int=as_int)
else:
return self.invert(pts, as_int=as_int).tolist()

def apply_project(self, projection, pts, as_int=False):
"""Apply transform and extract one dimension (e.g. depth)
@@ -161,3 +209,7 @@ def apply_dataframe(self, col, df, projection=None, return_array=False, as_int=F
else:
return out.tolist()


def identity_transform():
"Returns the same points provided"
return TransformSequence()
1 change: 1 addition & 0 deletions standard_transform/data/v1dd_um_streamline.json
@@ -0,0 +1 @@
[[838.8385131370085,-70.4369110927448,139.57027435166526],[838.8017573771021,-61.54823965900002,137.94152324819842],[838.7650615777368,-52.64752242093272,136.35612990070393],[838.7193658898686,-43.746760384015275,134.77073655320945],[838.682670090503,-34.846043145947974,133.18534320571496],[838.6459742911377,-25.945325907880672,131.59994985822047],[838.6002786032694,-17.044563870963117,130.01455651072598],[838.5635228433631,-8.155892437218455,128.38580540725917],[838.5178271554948,0.7448695996990428,126.80041205976467],[838.4811313561295,9.645586837766343,125.2150187122702],[838.4444355567639,18.546304075833586,123.62962536477572],[838.4077397573985,27.447021313901,122.0442320172812],[838.3709839974922,36.335692747645666,120.41548091381439],[838.3432880866297,45.23636518686294,118.8300875663199],[838.3245321037291,54.124947022907556,117.20133646285309],[838.3416557537579,62.9892580549065,115.48586984744166],[838.403778846301,71.8533450926551,113.77040323203023],[838.5286613162003,80.66893532116245,111.88150559272957],[838.7164230845377,89.46012034907413,109.90589244148424],[838.9670641513131,98.2269001763899,107.84356377829431],[839.2356450345534,106.98154460168287,105.73787735913209],[839.4861062197064,115.71218701603078,103.54547542802521],[839.7096276998917,124.45500963125151,101.39643125289068],[839.8971496260659,133.1980114418727,99.24738707775612],[840.1297909158358,141.9648808668885,97.18505841456617],[840.4074316481197,150.73152629765409,95.12272975137624],[840.7571914095075,159.5219049462643,93.14711660013093],[841.1610704229943,168.33610641041918,91.25821896083026],[841.6100088395356,177.1621296846462,89.41267907750192],[842.1130065476347,185.99992997009534,87.61049695014586],[842.6791833568758,194.87355407656176,85.93838809070674],[843.317599116303,203.7950030095181,84.43971025515687],[844.0281938653749,212.75223096464174,83.07110568752391],[844.8200274531355,221.75723894740486,81.87593214378025],[845.7111596171312,230.82198316443018,80.89754737989813],[846.6925904688591,239.94650841456797,80.13595139587757],[847.7911997527459,249.10658869262244,79.50442867977395],[848.9709879147807,258.302403193994,79.00297923158725],[850.2048353683731,267.5099947065877,78.54488753937288],[851.4838021855611,276.74145383357626,78.17351135910316],[852.7808287402956,285.9848691671872,77.8454929348057],[854.1048549605385,295.228150104248,77.51747451050828],[855.4560007673715,304.4953882534038,77.27617159815547],[856.8792056427679,313.7743138160816,77.07822644177497],[858.4104691407389,323.0647475968807,76.92363904136677],[860.09491062488,332.39055721019656,76.89912490887552],[861.9504699116562,341.73960725400593,76.9613262883289],[863.9500873750183,351.09998632053663,77.06688542375456],[866.0398236444901,360.4840090072116,77.25916007112492],[868.2015590219845,369.867673303086,77.45143471849522],[870.4264135400799,379.2751156156551,77.73042487781014],[872.6963274612305,388.69437973829645,78.0527727930974],[874.9842411593866,398.11355426323746,78.37512070838469],[877.2631549690399,407.53277358702854,78.6974686236719],[879.5600685556988,416.9519033131197,79.01981653895918],[881.8929217358282,426.3588080394879,79.29880669827406],[884.3427734664941,435.76513038080486,79.57779685758902],[886.9906227442218,445.17046714741997,79.85678701690394],[889.926648335662,454.6105077638003,80.26585044413582],[893.231789276799,464.07280323597223,80.76162938331233],[896.9512848523103,473.6053127869763,81.51755485832271],[901.0401356196818,483.20826041106255,82.53362686916698],[905.4535819785629,492.93005331977207,83.98
327643973431],[910.11956486039,502.75900409958285,85.82314581405251],[914.9572051897196,512.7196075487907,88.13995050406612],[919.8762642393602,522.7641284386063,90.76025948588592],[924.7415038396065,532.8450555344904,93.51064173562267],[929.4535054931417,542.8785609935352,96.08759296147018],[933.9489102172012,552.7926835817563,98.23096662759451],[938.3087170083102,562.587020109502,99.94076273399578],[942.6231046331194,572.2972600012396,101.3470545485908],[946.955252192771,581.9592270779868,102.57991533929662],[951.350518893024,591.6449721714287,103.89949164194708],[955.8090846155005,601.3906326945329,105.43585672445909],[960.3761286843375,611.2321220660173,107.31908385474958],[965.1055305094702,621.1450798841358,109.46245752087391],[969.9699906226857,631.0694115238255,111.64918894297061],[974.9784489519462,640.9930263819143,113.83592036506728],[979.8964886723929,650.8327685982453,115.71914749535777],[984.5802314890624,660.613446563065,117.38558584578672],[988.7236212323181,670.3245376329545,118.79187766038177],[992.4527163017406,679.977460428335,119.98138069511529],[995.8396357264336,689.5959481670506,121.04081046193186],[999.0284976435241,699.2033756761451,122.05688247277604],[1002.1183008265427,708.7992501682679,123.02959672764804],[1005.1540447180037,718.383347649169,123.95895322654762],[1008.1718487929999,727.9795805320924,124.93166748141957],[1011.1805930189524,737.5638124095436,125.8610239803192],[1014.1893972054459,747.1600900913177,126.83373823519113],[1017.1891415428955,756.7443667676185,127.76309473409071],[1020.197885768848,766.3285986450699,128.6924512329904],[1023.1976900668385,775.9249211256937,129.6651654878623],[1026.2064342927913,785.5091530031448,130.59452198676195],[1029.206178630241,795.0934296794458,131.52387848566156],[1032.2149828167342,804.6897073612195,132.49659274053343]]
117 changes: 110 additions & 7 deletions standard_transform/datasets.py
@@ -1,12 +1,19 @@
from .base import TransformSequence, R
from .base import TransformSequence, R, identity_transform
from .streamlines import Streamline, identity_streamline
import numpy as np
import os

V1DD_STREAMLINE_POINT_FILE = os.path.join(
os.path.dirname(__file__),
'data',
'v1dd_um_streamline.json',
)
MINNIE_VOXEL_RESOLUTION = np.array([4, 4, 40])
V1DD_VOXEL_RESOLUTION = np.array([9, 9, 45])

MINNIE_PIA_POINT_NM = np.array([183013, 83535, 21480]) * [4,4,45]
V1DD_PIA_POINT_NM = np.array([101249, 32249, 9145]) * [9,9,45]

def identity_transform():
"Returns the same points provided"
return TransformSequence()

def _minnie_transforms( tform, pia_point ):
angle_offset = 5
@@ -19,7 +26,6 @@ def _minnie_transforms( tform, pia_point ):
return tform

def _v1dd_transforms( tform, pia_point):
# ctr = np.array([[910302.55274889, 273823.89004458, 411543.78900446]])
up = np.array([-0.00497765, 0.96349375, 0.26768454])
rot, _ = R.align_vectors(np.array([[0, 1, 0]]), [up])

@@ -32,7 +38,7 @@ def _v1dd_transforms( tform, pia_point):
tform.add_scaling(1 / 1000)
return tform

def minnie_transform_vx(voxel_resolution=[4,4,40]):
def minnie_transform_vx(voxel_resolution=MINNIE_VOXEL_RESOLUTION):
"Transform for minnie65 dataset from voxels to oriented microns"
column_transform = TransformSequence()
column_transform.add_scaling(voxel_resolution)
@@ -44,7 +50,7 @@ def minnie_transform_nm():
column_transform = TransformSequence()
return _minnie_transforms(column_transform, MINNIE_PIA_POINT_NM)

def v1dd_transform_vx(voxel_resolution=[9,9,45]):
def v1dd_transform_vx(voxel_resolution=V1DD_VOXEL_RESOLUTION):
"Transform for v1dd dataset from voxelsto oriented microns"
v1dd_transform = TransformSequence()
v1dd_transform.add_scaling(voxel_resolution)
@@ -55,3 +61,100 @@ def v1dd_transform_nm():
"Transform for v1dd dataset from nanometers to oriented microns"
v1dd_transform = TransformSequence()
return _v1dd_transforms(v1dd_transform, V1DD_PIA_POINT_NM)

#### STREAMLINES


def v1dd_streamline_nm():
"Streamline for v1dd dataset for nm coordinates"
import json
with open(V1DD_STREAMLINE_POINT_FILE, 'r') as f:
points = np.array(json.load(f))
return Streamline(points, tform=v1dd_transform_nm(), transform_points=False)

def v1dd_streamline_vx(voxel_resolution=V1DD_VOXEL_RESOLUTION):
"Streamline for v1dd dataset for voxel coordinates"
import json
with open(V1DD_STREAMLINE_POINT_FILE, 'r') as f:
points = np.array(json.load(f))
return Streamline(points, tform=v1dd_transform_vx(voxel_resolution), transform_points=False)

def minnie_streamline_nm():
"Streamline for minnie65 dataset for nm coordinates"
return Streamline(
np.array([[0, 0, 0], [0, 1, 0]]),
tform=minnie_transform_nm(),
transform_points=False,
)

def minnie_streamline_vx(voxel_resolution=MINNIE_VOXEL_RESOLUTION):
"Streamline for minnie65 dataset for voxel coordinates"
return Streamline(
np.array([[0, 0, 0], [0, 1000, 0]]),
tform=minnie_transform_vx(voxel_resolution),
transform_points=False,
)

## Dataset object
class Dataset(object):
def __init__(
self,
name,
transform_nm,
transform_vx,
streamline_nm,
streamline_vx,
):
self.name = name
self._transform_arbitrary = transform_vx
self.transform_nm = transform_nm()
self.transform_vx = transform_vx()

self._streamline_arbitrary = streamline_vx
self.streamline_nm = streamline_nm()
self.streamline_vx = streamline_vx()

def transform_res(self, resolution):
"""Transform from arbitrary resolution to oriented microns
Parameters
----------
resolution : list-like
Resolution of original data
Returns
-------
Transform object
"""
return self._transform_arbitrary(resolution)

def streamline_res(self, resolution):
"""Streamline for dataset at arbitrary resolution
Parameters
----------
resolution : list-like
Resolution of original data
Returns
-------
Streamline object
"""
return self._streamline_arbitrary(resolution)


v1dd_ds = Dataset(
'v1dd',
v1dd_transform_nm,
v1dd_transform_vx,
v1dd_streamline_nm,
v1dd_streamline_vx,
)

minnie_ds = Dataset(
'minnie65',
minnie_transform_nm,
minnie_transform_vx,
minnie_streamline_nm,
minnie_streamline_vx,
)