Commit

initial commit
ceesem committed Oct 13, 2022
1 parent 3a4bdfb commit 10bff85
Showing 9 changed files with 406 additions and 0 deletions.
120 changes: 120 additions & 0 deletions .gitignore
@@ -0,0 +1,120 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so
.DS_Store
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json
.vscode/settings.json

movies/
*_meshes/
notebooks/
*.pkl
*.mp4
*.png
*.tiff
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -0,0 +1 @@
include requirements.txt
50 changes: 50 additions & 0 deletions README.md
@@ -0,0 +1,50 @@
# standard_transforms

Orient and scale points in EM datasets the same way!

When working with EM data, the orientation of the dataset often does not match the desired orientation in space. For example, in cortical data you might want "down" to correspond to the direction orthogonal to the pial surface. This package includes prebaked affine transforms for two datasets, Minnie65 and V1dd, to convert from voxel or nanometer coordinates to a consistent oriented frame in microns.

## Usage

At its simplest, we import the transform we want, initialize an object, and then are ready to rotate, scale, and translate away!
Let's start by going from nanometer space in Minnie65 to the oriented space.
We can make the pre-baked transform by importing one of the generating functions, in this case `minnie_transform_nm`.

```python
from standard_transform import minnie_transform_nm

tform_nm = minnie_transform_nm()
```

There are two main useful functions, `apply` and `apply_project`.
Both functions transform an `n x 3` array or pandas Series with 3-element vectors to points in the new space, with the y-axis oriented along the axis from pia to white matter and the units in microns.
Using `apply` alone returns another `n x 3` array, while `apply_project` takes both an axis and points and returns just the values along that axis.
For example, if you have skeleton vertices in nm, you can produce transformed ones with:

```python
new_vertices = tform_nm.apply(sk.vertices)
```

while if you just want the depth:

```python
sk_depth = tform_nm.apply_project('y', sk.vertices)
```

## Available transforms

There are four transforms currently available, two for each dataset.

### Minnie65

* `minnie_transform_nm` : Transform from nanometer units in the original Minnie65 space to microns in a space where the pial surface is flat in x and z along y=0.

* `minnie_transform_vx` : Transform from voxel units in the original Minnie65 space to microns in a space where the pial surface is flat in x and z along y=0. By default, `minnie_transform_vx()` assumes a voxel size of `[4,4,40]` nm/voxel, but specifying a voxel resolution (for example, with `minnie_transform_vx(voxel_resolution=[8,8,40])`) will use a different scale.

Both functions will also take dataframe columns, for example `tform.apply(df['pt_position'])`.
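
As a minimal sketch of this usage (the dataframe `df` and its `pt_position` column of 3-element voxel coordinates are illustrative, not part of the package):

```python
from standard_transform import minnie_transform_vx

# Voxel-space transform; pass voxel_resolution if your table is not at [4, 4, 40] nm/voxel.
tform_vx = minnie_transform_vx(voxel_resolution=[8, 8, 40])

# Transform each point in the column to the oriented space in microns...
pts_oriented = tform_vx.apply(df["pt_position"])

# ...or extract only the depth (distance along the pia-to-white-matter axis).
depths = tform_vx.apply_project("y", df["pt_position"])
```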

### V1dd

* `v1dd_transform_nm` : Transform from nanometer units in the original V1dd space to microns in a space where the pial surface is flat in x and z along y=0.

* `v1dd_transform_vx` : Transform from voxel units in the original V1dd space to microns in a space where the pial surface is flat in x and z along y=0. By default, `v1dd_transform_vx()` assumes a voxel size of `[9,9,45]` nm/voxel, but specifying a voxel resolution (for example, with `v1dd_transform_vx(voxel_resolution=[18,18,45])`) will use a different scale.
3 changes: 3 additions & 0 deletions requirements.txt
@@ -0,0 +1,3 @@
scipy
numpy
pandas
49 changes: 49 additions & 0 deletions setup.py
@@ -0,0 +1,49 @@
import setuptools
import re
import os
import codecs

here = os.path.abspath(os.path.dirname(__file__))

with codecs.open(os.path.join(here, "README.md"), encoding="utf-8") as f:
    long_description = f.read()


def read(*parts):
    with codecs.open(os.path.join(here, *parts), "r") as fp:
        return fp.read()


def find_version(*file_paths):
    version_file = read(*file_paths)
    version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
    if version_match:
        return version_match.group(1)
    raise RuntimeError("Unable to find version string.")


with open("requirements.txt", "r") as f:
    required = f.read().splitlines()

dependency_links = []
del_ls = []
for i_l in range(len(required)):
    l = required[i_l]
    if l.startswith("-e"):
        dependency_links.append(l.split("-e ")[-1])
        del_ls.append(i_l)
        required.append(l.split("=")[-1])

for i_l in del_ls[::-1]:
    del required[i_l]

setuptools.setup(
    name="standard_transform",
    version=find_version("src", "__init__.py"),
    author="Casey Schneider-Mizell",
    author_email="[email protected]",
    description="Define and repeat basic affine transformation tasks for datasets",
    long_description=long_description,
    long_description_content_type="text/markdown",
    install_requires=required,
    include_package_data=True,
    dependency_links=dependency_links,
    url="https://github.com/ceesem/standard_transform",
    packages=["src"],
)
6 changes: 6 additions & 0 deletions src/.bumpversion.cfg
@@ -0,0 +1,6 @@
[bumpversion]
current_version = 0.0.1
commit = True
tag = True

[bumpversion:file:src/__init__.py]
3 changes: 3 additions & 0 deletions src/__init__.py
@@ -0,0 +1,3 @@
from .datasets import minnie_transform_nm, minnie_transform_vx, v1dd_transform_nm, v1dd_transform_vx

__version__ = "0.0.1"
127 changes: 127 additions & 0 deletions src/base.py
@@ -0,0 +1,127 @@
from scipy.spatial.transform import Rotation as R
import pandas as pd
import numpy as np
from collections.abc import Iterable


class ScaleTransform(object):
    """Scale points by a single factor or a per-axis 3-element factor."""

    def __init__(self, scaling):
        if not isinstance(scaling, Iterable):
            scaling = np.array(3 * [scaling]).reshape(1, 3)
        else:
            if len(scaling) != 3:
                raise ValueError("Scaling must be a single number or have three elements")
            scaling = np.array(scaling).reshape(1, 3)
        self._scaling = scaling

    def apply(self, pts):
        return np.atleast_2d(pts) * self._scaling

    def __repr__(self):
        return f"Scale by {self._scaling}"


class TranslateTransform(object):
    """Translate points by a 3-element offset vector."""

    def __init__(self, translate):
        if not isinstance(translate, Iterable):
            raise ValueError("Translate must be a three element vector")
        if len(translate) != 3:
            raise ValueError("Translate must be a three element vector")
        self._translate = np.array(translate)

    def apply(self, pts):
        return np.atleast_2d(pts) + self._translate

    def __repr__(self):
        return f"Translate by {self._translate}"


class RotationTransform(object):
    """Rotate points, passing its arguments through to scipy's Rotation.from_euler."""

    def __init__(self, *params, **param_kwargs):
        self._params = params
        self._param_kwargs = param_kwargs
        self._transform = R.from_euler(*self._params, **self._param_kwargs)

    def apply(self, pts):
        return self._transform.apply(np.atleast_2d(pts))

    def __repr__(self):
        return f"Rotate with params {self._params} and {self._param_kwargs}"


class TransformSequence(object):
    """Compose scale, translation, and rotation transforms and apply them in order."""

    def __init__(self):
        self._transforms = []

    def __repr__(self):
        return "Transformation Sequence:\n\t" + "\n\t".join(
            [t.__repr__() for t in self._transforms]
        )

    def add_transform(self, transform):
        self._transforms.append(transform)

    def add_scaling(self, scaling):
        self.add_transform(ScaleTransform(scaling))

    def add_translation(self, translate):
        self.add_transform(TranslateTransform(translate))

    def add_rotation(self, *rotation_params, **rotation_kwargs):
        self.add_transform(RotationTransform(*rotation_params, **rotation_kwargs))

    def apply(self, pts, as_int=False):
        if isinstance(pts, pd.Series):
            return self.column_apply(pts, as_int=as_int)
        else:
            return self.list_apply(pts, as_int=as_int)

    def list_apply(self, pts, as_int=False):
        pts = np.array(pts).copy()
        for t in self._transforms:
            pts = t.apply(pts)
        if as_int:
            return pts.astype(int)
        else:
            return pts

    def column_apply(self, col, return_array=False, as_int=False):
        pts = np.vstack(col)
        if return_array:
            return self.apply(pts, as_int=as_int)
        else:
            return self.apply(pts, as_int=as_int).tolist()

    def apply_project(self, projection, pts, as_int=False):
        """Apply the transform and extract one dimension (e.g. depth).

        Parameters
        ----------
        projection : str
            Which dimension to project out of the transformed data. One of "x", "y", or "z".
        pts : np.ndarray or pd.Series
            Either an n x 3 array or pandas Series object with 3-element arrays as elements.
        as_int : bool, optional
            Return locations as integers, by default False

        Returns
        -------
        np.ndarray
            n-element array of transformed values along the chosen axis.

        Raises
        ------
        ValueError
            If projection is not one of "x", "y", or "z".
        """
        proj_map = {
            "x": 0,
            "y": 1,
            "z": 2,
        }

        if projection not in proj_map:
            raise ValueError('Projection must be one of "x", "y", or "z"')
        return np.array(self.apply(pts, as_int=as_int))[:, proj_map[projection]]
