Merge pull request #2 from maestroque/migrate-physio
Migrate Physio class from peakdet
maestroque authored Jul 11, 2024
2 parents f426a21 + e7c66b4 commit 2ef56b8
Showing 29 changed files with 46,382 additions and 748 deletions.
92 changes: 92 additions & 0 deletions .autorc
@@ -0,0 +1,92 @@
{
"plugins": [
"git-tag",
"conventional-commits",
"first-time-contributor",
"released"
],
"owner": "physiopy",
"repo": "physutils",
"name": "Stefano Moia",
"email": "s.moia@bcbl.eu",
"labels": [
{
"name": "Majormod",
"changelogTitle": "💥 Breaking Change",
"description": "This PR breaks compatibility, and increments the major version (+1.0.0)",
"releaseType": "major",
"overwrite": true,
"color": "#05246d"
},
{
"name": "Minormod",
"changelogTitle": "🚀 Enhancement",
"description": "This PR generally closes an `Enhancement` issue. It increments the minor version (0.+1.0)",
"releaseType": "minor",
"overwrite": true,
"color": "#05246d"
},
{
"name": "Minormod-breaking",
"changelogTitle": "💥 Breaking Change during development",
"description": "For development only, this PR increments the minor version (0.+1.0) but breaks compatibility",
"releaseType": "minor",
"overwrite": true,
"color": "#05246d"
},
{
"name": "BugFIX",
"changelogTitle": "🐛 Bug Fix",
"description": "This PR generally closes a `Bug` issue, and increments the patch version (0.0.+1)",
"releaseType": "patch",
"overwrite": true,
"color": "#d73a4a"
},
{
"name": "Documentation",
"changelogTitle": "📝 Documentation",
"description": "This issue or PR is about the documentation ",
"releaseType": "none",
"overwrite": true,
"color": "#1D70CF"
},
{
"name": "Testing",
"changelogTitle": "⚠️ Tests",
"description": "This is for testing features, writing tests or producing testing code",
"releaseType": "none",
"overwrite": true,
"color": "#ffb5b4"
},
{
"name": "Internal",
"changelogTitle": "🏠 Internal",
"description": "Changes affect the internal API. It doesn't increase the version, but produces a changelog",
"releaseType": "none",
"overwrite": true,
"color": "#ffffff"
},
{
"name": "Outreach",
"changelogTitle": "🖋️ Outreach",
"description": "Issue about outreaching of any form",
"releaseType": "none",
"overwrite": true,
"color": "#0e8a16"
},
{
"name": "Skip release",
"description": "This PR preserves the current version when merged, and doesn't appear in the changelog",
"releaseType": "skip",
"overwrite": true,
"color": "#ffffff"
},
{
"name": "Release",
"description": "For PR only, trigger a release at the merge",
"releaseType": "release",
"overwrite": true,
"color": "#FFFFFF"
}
]
}
34 changes: 34 additions & 0 deletions .github/ISSUE_TEMPLATE/ISSUE_TEMPLATE_BUGS.md
@@ -0,0 +1,34 @@
---
name: Bug issue
about: Use this template to report bugs.
title: ''
labels: Bug
assignees:
---

<!--- Provide a general summary of the issue in the Title above -->

## Expected Behavior
<!--- NECESSARY -->
<!--- Describe what one would expect from the buggy code -->

## Actual Behavior
<!--- NECESSARY -->
<!--- Describe what the buggy code is actually doing/returning -->
<!--- Do not hesitate to share screenshots and code snippets that could help us understand the issue -->

## Steps to Reproduce the Problem
<!--- Briefly point out the steps we should take to reproduce the problem -->

1.
2.
3.

## Specifications
<!--- Point out the version of physutils you are running and your OS version -->
- Python version:
- physutils version:
- Platform:

## Possible solution
<!--- Describe a possible approach to solve the issue -->
27 changes: 27 additions & 0 deletions .github/ISSUE_TEMPLATE/ISSUE_TEMPLATE_DISCUSSION.md
@@ -0,0 +1,27 @@
---
name: Discussion
about: Use this template to start a discussion issue, i.e. an issue meant to open a community debate over a topic
title: ''
labels: Discussion
assignees: ''
---

<!--- Provide the detailed description of the idea to discuss
This section should present:
- What is the topic
- Why you want to spark such a discussion (what is the problem you're trying to solve)
- If you have thought about them, what possible positions on the topic did you come up with? Invite others to add more!
It could contain questions. Don't be afraid to ping users that could be more interested in this topic! -->

I'm opening this discussion because/I think that/I noticed that...




## Outstanding questions
<!--- Summarize the concept in a few short questions (max 3) -->
<!--- Remember that the aim is not to give a "tldr", but to help address salient points. -->

-
-
-
20 changes: 20 additions & 0 deletions .github/ISSUE_TEMPLATE/ISSUE_TEMPLATE_FEATURE_REQUEST.md
@@ -0,0 +1,20 @@
---
name: Feature request
about: Use this template to request new features.
title: ''
labels: Enhancement
assignees: ''
---

<!--- Provide a general summary of the issue in the Title above -->

## Detailed Description
<!--- Provide a detailed description of the change or addition you are proposing -->

## Context / Motivation
<!--- Why is this change important to you? How would you use it? -->
<!--- How can it benefit other users? -->

## Possible Implementation
<!--- Not obligatory, but suggest an idea for implementing addition or change -->
<!--- If you already have worked on the idea, please share a link to the branch in your forked project -->
23 changes: 23 additions & 0 deletions .github/ISSUE_TEMPLATE/ISSUE_TEMPLATE_GENERAL.md
@@ -0,0 +1,23 @@
---
name: General issue
about: Use this template for any issues not related to bugs or feature requests
title: ''
labels:
assignees: ''
---

<!--- Provide a general summary of the issue in the Title above -->
<!--- Please label the issue with one of the following: Documentation, Outreach or Question -->

## Summary
<!--- Describe the motivation behind the issue -->

## Additional detail
<!--- Provide any additional details that could help developers understand the issue -->

## Next Steps
<!--- Provide possible steps to take in order to address the issue -->

*
*
*
4 changes: 4 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,4 @@
contact_links:
- name: Usage question
url: https://neurostars.org/tag/physiopy
about: Please ask questions about using physiopy libraries here.
40 changes: 40 additions & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,40 @@
<!-- Write all of the issues that are linked to this pull request. -->
<!-- If this PR is enough to close them you can write something like "Closes #314 and closes #42" -->
<!-- If you just want to reference them without closing them, you can add something like "References #112" -->
Closes #

<!-- Add a short description of the PR content here-->


## Proposed Changes
<!-- List major points of changes here, so that the reviewers can have a bit more context while looking at your work! -->
-
-
-

## Change Type
<!-- Indicate the type of change you think your pull request is -->
- [ ] `bugfix` (+0.0.1)
- [ ] `minor` (+0.1.0)
- [ ] `major` (+1.0.0)
- [ ] `refactoring` (no version update)
- [ ] `test` (no version update)
- [ ] `infrastructure` (no version update)
- [ ] `documentation` (no version update)
- [ ] `other`

## Checklist before review
<!-- If this section is not clear, please read this part of the docs: https://phys2bids.readthedocs.io/en/latest/contributorfile.html#pr -->
<!-- You're invited to open a draft PR ASAP, but before marking it "ready for review", check that you have done the following: -->
- [ ] I added everything I wanted to add to this PR.
- [ ] \[Code or tests only\] I wrote/updated the necessary docstrings.
- [ ] \[Code or tests only\] I ran and passed tests locally.
- [ ] \[Documentation only\] I built the docs locally.
- [ ] My contribution is harmonious with the rest of the code: I'm not introducing repetitions.
- [ ] My code respects the adopted style, especially linting conventions.
- [ ] The title of this PR is explanatory on its own, enough to be understood as part of a changelog.
- [ ] I added or indicated the right labels.
<!-- If relevant, you can add a milestone label or indicate an ideal timeline for this PR, including whether the progress of this PR is linked to other PRs being completed before or after it. -->
- [ ] I added information regarding the timeline of completion for this PR.
<!-- If you want, you can ask for reviews on a draft PR -->
- [ ] Please, comment on my PR while it's a draft and give me feedback on the development!
11 changes: 11 additions & 0 deletions .github/labeler.yml
@@ -0,0 +1,11 @@
Documentation:
- changed-files:
- any-glob-to-any-file: ['docs/*', '.readthedocs.yml', 'README.md', 'LICENSE', 'MANIFEST.in']

Internal:
- changed-files:
- any-glob-to-any-file: ['.*', 'codecov.yml', 'setup.cfg', 'setup.py', 'versioneer.py', '.github/*', '.circleci/*', 'physutils/_version.py', 'requirements.txt', 'pyproject.toml']

Testing:
- changed-files:
- any-glob-to-any-file: ['physutils/tests/*', '.circleci/*']
17 changes: 17 additions & 0 deletions .github/workflows/auto-author-assign.yml
@@ -0,0 +1,17 @@
name: Auto Author Assign

on:
issues:
types: [ opened, reopened ]
pull_request_target:
types: [ opened, reopened ]

permissions:
pull-requests: write
issues: write

jobs:
assign-author:
runs-on: ubuntu-latest
steps:
- uses: toshimaru/auto-author-assign@v2.1.0
15 changes: 15 additions & 0 deletions .github/workflows/auto-label.yml
@@ -0,0 +1,15 @@
name: auto-label
concurrency:
group: ${{ github.workflow }}-${{ github.event.number }}-${{ github.event.ref }}
cancel-in-progress: true
on: # yamllint disable-line rule:truthy
pull_request_target

jobs:
pr:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v5
43 changes: 43 additions & 0 deletions .github/workflows/auto-release.yml
@@ -0,0 +1,43 @@
# This workflow will create a release using auto when a PR is merged into master.

name: Auto-release on PR merge

on:
# ATM, this is the closest trigger to a PR merging
push:
branches:
- master

jobs:
auto-release:
runs-on: ubuntu-22.04
# Set skip ci to avoid loops
if: "!contains(github.event.head_commit.message, 'ci skip') && !contains(github.event.head_commit.message, 'skip ci')"
# Set bash as default shell for jobs
defaults:
run:
shell: bash
steps:
- name: Checkout source
uses: actions/checkout@v2
with:
# Fetch all history for all branches and tags
fetch-depth: 0
# Use token with write access to the repo
token: ${{ secrets.GH_TOKEN }}
- name: Download and install latest auto
env:
# OS can be linux, macos, or win
OS: linux
# Retrieve URL of latest auto, download it, unzip it, and give exec permissions.
run: |
curl -vkL -o - $( curl -s https://api.github.com/repos/intuit/auto/releases/latest \
| grep browser_download_url | grep ${OS} | awk -F'"' '{print $4}') \
| gunzip > ~/auto
chmod a+x ~/auto
- name: Create release without version prefix
env:
GITHUB_TOKEN: ${{ secrets.GH_TOKEN }}
# Run auto release, don't use 'v' prefix, and be verbose
run: |
~/auto shipit --no-version-prefix -v
32 changes: 32 additions & 0 deletions .github/workflows/python-publish.yml
@@ -0,0 +1,32 @@
# This workflow will upload a Python package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries

name: Upload Python Package

on:
release:
types: [created]

jobs:
deploy:

runs-on: ubuntu-22.04

steps:
- name: Checkout source
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.6'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build and publish
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
python setup.py sdist bdist_wheel
twine upload dist/*
39 changes: 39 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,39 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-added-large-files
- id: check-case-conflict
- id: check-merge-conflict
- repo: https://github.com/psf/black
rev: 24.4.2
hooks:
- id: black
- repo: https://github.com/pycqa/isort
rev: 5.13.2
hooks:
- id: isort
- repo: https://github.com/pycqa/flake8
rev: 6.1.0
hooks:
- id: flake8
- repo: https://github.com/pycqa/pydocstyle
rev: 6.3.0
hooks:
- id: pydocstyle
- repo: https://github.com/pre-commit/pygrep-hooks
rev: v1.10.0
hooks:
- id: rst-backticks
- id: rst-directive-colons
- id: rst-inline-touching-normal
- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
hooks:
- id: codespell
args: ["-L", "trough,troughs"]
14 changes: 13 additions & 1 deletion physutils/__init__.py
@@ -1,4 +1,16 @@
__all__ = [
"load_physio",
"save_physio",
"load_history",
"save_history",
"Physio",
"__version__",
]

from physutils.io import load_history, load_physio, save_history, save_physio
from physutils.physio import Physio

from ._version import get_versions
__version__ = get_versions()['version']

__version__ = get_versions()["version"]
del get_versions
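
Since this file now re-exports the I/O helpers and the migrated Physio class, here is a minimal, hypothetical usage sketch of that public surface (the toy waveform, sampling rate, and output name are assumptions for illustration, not part of this commit):

import numpy as np

from physutils import Physio, __version__, load_physio, save_physio

print(__version__)  # resolved by versioneer at import time

waveform = np.sin(np.linspace(0, 20, 4000))  # assumed toy signal
phys = load_physio(waveform, fs=100.0)  # logs a warning: no history provided
assert isinstance(phys, Physio)
save_physio("example", phys)  # writes "example.phys" (suffix appended if missing)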
156 changes: 96 additions & 60 deletions physutils/_version.py
@@ -1,4 +1,3 @@

# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
@@ -58,28 +57,32 @@ class NotThisMethod(Exception):

def register_vcs_handler(vcs, method): # decorator
"""Decorator to mark a method as the handler for a particular VCS."""

def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f

return decorate


def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
p = subprocess.Popen(
[c] + args,
cwd=cwd,
env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr else None),
)
break
except EnvironmentError:
e = sys.exc_info()[1]
@@ -116,16 +119,22 @@ def versions_from_parentdir(parentdir_prefix, root, verbose):
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
return {
"version": dirname[len(parentdir_prefix) :],
"full-revisionid": None,
"dirty": False,
"error": None,
"date": None,
}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level

if verbose:
print("Tried directories %s but none started with prefix %s" %
(str(rootdirs), parentdir_prefix))
print(
"Tried directories %s but none started with prefix %s"
% (str(rootdirs), parentdir_prefix)
)
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")


@@ -181,7 +190,7 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
tags = set([r[len(TAG) :] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
@@ -190,27 +199,34 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
tags = set([r for r in refs if re.search(r"\d", r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
r = ref[len(tag_prefix) :]
if verbose:
print("picking %s" % r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
return {
"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False,
"error": None,
"date": date,
}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
return {
"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False,
"error": "no suitable tags",
"date": None,
}


@register_vcs_handler("git", "pieces_from_vcs")
@@ -225,19 +241,27 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]

out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %s not under git control" % root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")

# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long",
"--match", "%s*" % tag_prefix],
cwd=root)
describe_out, rc = run_command(
GITS,
[
"describe",
"--tags",
"--dirty",
"--always",
"--long",
"--match",
"%s*" % tag_prefix,
],
cwd=root,
)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
@@ -260,17 +284,16 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
git_describe = git_describe[: git_describe.rindex("-dirty")]

# now we have TAG-NUM-gHEX or HEX

if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%s'"
% describe_out)
# unparsable. Maybe git-describe is misbehaving?
pieces["error"] = "unable to parse git-describe output: '%s'" % describe_out
return pieces

# tag
@@ -279,10 +302,12 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
pieces["error"] = ("tag '%s' doesn't start with prefix '%s'"
% (full_tag, tag_prefix))
pieces["error"] = "tag '%s' doesn't start with prefix '%s'" % (
full_tag,
tag_prefix,
)
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
pieces["closest-tag"] = full_tag[len(tag_prefix) :]

# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
@@ -293,13 +318,13 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root)
pieces["distance"] = int(count_out) # total number of commits

# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"],
cwd=root)[0].strip()
date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[
0
].strip()
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)

return pieces
@@ -330,8 +355,7 @@ def render_pep440(pieces):
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%d.g%s" % (pieces["distance"],
pieces["short"])
rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
@@ -445,11 +469,13 @@ def render_git_describe_long(pieces):
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
return {
"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None,
}

if not style or style == "default":
style = "pep440" # the default
@@ -469,9 +495,13 @@ def render(pieces, style):
else:
raise ValueError("unknown style '%s'" % style)

return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
return {
"version": rendered,
"full-revisionid": pieces["long"],
"dirty": pieces["dirty"],
"error": None,
"date": pieces.get("date"),
}


def get_versions():
@@ -485,8 +515,7 @@ def get_versions():
verbose = cfg.verbose

try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix,
verbose)
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, verbose)
except NotThisMethod:
pass

@@ -495,13 +524,16 @@ def get_versions():
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for i in cfg.versionfile_source.split('/'):
for i in cfg.versionfile_source.split("/"):
root = os.path.dirname(root)
except NameError:
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
"date": None}
return {
"version": "0+unknown",
"full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
"date": None,
}

try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
@@ -515,6 +547,10 @@ def get_versions():
except NotThisMethod:
pass

return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to compute version", "date": None}
return {
"version": "0+unknown",
"full-revisionid": None,
"dirty": None,
"error": "unable to compute version",
"date": None,
}
258 changes: 258 additions & 0 deletions physutils/io.py
@@ -0,0 +1,258 @@
# -*- coding: utf-8 -*-
"""
Functions for loading and saving data and analyses
"""

import importlib
import json
import os.path as op

import numpy as np
from loguru import logger

from physutils import physio

EXPECTED = ["data", "fs", "history", "metadata"]


def load_physio(data, *, fs=None, dtype=None, history=None, allow_pickle=False):
"""
Returns `Physio` object with provided data
Parameters
----------
data : str or array_like or Physio_like
Input physiological data. If array_like, should be one-dimensional
fs : float, optional
Sampling rate of `data`. Default: None
dtype : data_type, optional
Data type to convert `data` to, if conversion needed. Default: None
history : list of tuples, optional
Functions that have been performed on `data`. Default: None
allow_pickle : bool, optional
Whether to allow loading if `data` contains pickled objects. Default:
False
Returns
-------
data : :class:`physutils.Physio`
Loaded physiological data
Raises
------
TypeError
If provided `data` is unable to be loaded
"""

# first check if the file was made with `save_physio`; otherwise, try to
# load it as a plain text file and instantiate a history
if isinstance(data, str):
try:
inp = dict(np.load(data, allow_pickle=allow_pickle))
for attr in EXPECTED:
try:
inp[attr] = inp[attr].dtype.type(inp[attr])
except KeyError:
raise ValueError(
"Provided npz file {} must have all of "
"the following attributes: {}".format(data, EXPECTED)
)
# fix history, which needs to be list-of-tuple
if inp["history"] is not None:
inp["history"] = list(map(tuple, inp["history"]))
except (IOError, OSError, ValueError):
inp = dict(data=np.loadtxt(data), history=[physio._get_call(exclude=[])])
logger.debug("Instantiating Physio object from a file")
phys = physio.Physio(**inp)
# if we got a numpy array, load that into a Physio object
elif isinstance(data, np.ndarray):
logger.debug("Instantiating Physio object from numpy array")
if history is None:
logger.warning(
"Loading data from a numpy array without providing a"
"history will render reproducibility functions "
"useless! Continuing anyways."
)
phys = physio.Physio(np.asarray(data, dtype=dtype), fs=fs, history=history)
# create a new Physio object out of a provided Physio object
elif isinstance(data, physio.Physio):
logger.debug(
"Instantiating a new Physio object from the provided Physio object"
)
phys = physio.new_physio_like(data, data.data, fs=fs, dtype=dtype)
phys._history += [physio._get_call()]
else:
raise TypeError("Cannot load data of type {}".format(type(data)))

# reset sampling rate, as requested
if fs is not None and fs != phys.fs:
if not np.isnan(phys.fs):
logger.warning(
"Provided sampling rate does not match loaded rate. "
"Resetting loaded sampling rate {} to provided {}".format(phys.fs, fs)
)
phys._fs = fs
# coerce datatype, if needed
if dtype is not None:
phys._data = np.asarray(phys[:], dtype=dtype)

return phys


def save_physio(fname, data):
"""
Saves `data` to `fname`
Parameters
----------
fname : str
Path to output file; .phys will be appended if necessary
data : Physio_like
Data to be saved to file
Returns
-------
fname : str
Full filepath to saved output
"""

from physutils.physio import check_physio

data = check_physio(data)
fname += ".phys" if not fname.endswith(".phys") else ""
with open(fname, "wb") as dest:
hist = data.history if data.history != [] else None
np.savez_compressed(
dest, data=data.data, fs=data.fs, history=hist, metadata=data._metadata
)
logger.info(f"Saved {data} in {fname}")

return fname


def load_history(file, verbose=False):
"""
Loads history from `file` and replays it, creating new Physio instance
Parameters
----------
file : str
Path to input JSON file
verbose : bool, optional
Whether to print messages as history is being replayed. Default: False
Returns
-------
file : str
Full filepath to saved output
"""

# import inside function for safety!
# we'll likely be replaying some functions from within this module...

# TODO: These will need to be imported in order to replay history from this module. Unless another way is found
# import peakdet
# import phys2denoise
pkg_str = ""
peakdet_imported = True
phys2denoise_imported = True

try:
import peakdet # noqa
except ImportError:
peakdet_imported = False
pkg_str += "peakdet"

try:
import phys2denoise # noqa
except ImportError:
phys2denoise_imported = False
if not peakdet_imported:
pkg_str += ", "
pkg_str += "phys2denoise"

if not peakdet_imported or not phys2denoise_imported:
logger.warning(
f"The following packages are not installed: ({pkg_str}). "
"Note that loading history that uses those modules will not be possible"
)

# grab history from provided JSON file
with open(file, "r") as src:
history = json.load(src)

# replay history from beginning and return resultant Physio object
logger.info(f"Replaying history from {file}")
data = None
for func, kwargs in history:
if verbose:
logger.info("Rerunning {}".format(func))
# loading functions don't have `data` input because it should be the
# first thing in `history` (when the data was originally loaded!).
# for safety, check if `data` is None; someone could have potentially
# called load_physio on a Physio object (which is a valid, albeit
# confusing, thing to do)
if "load" in func and data is None:
if not op.exists(kwargs["data"]):
if kwargs["data"].startswith("/"):
msg = (
"Perhaps you are trying to load a history file "
"that was generated with an absolute path?"
)
else:
msg = (
"Perhaps you are trying to load a history file "
"that was generated from a different directory?"
)
raise FileNotFoundError(
"{} does not exist. {}".format(kwargs["data"], msg)
)
name_parts = func.split(".")
func = name_parts[-1]
module_name = ".".join(name_parts[:-1])
module_object = importlib.import_module(module_name)
data = getattr(module_object, func)(**kwargs)
else:
name_parts = func.split(".")
func = name_parts[-1]
module_name = ".".join(name_parts[:-1])
module_object = importlib.import_module(module_name)
data = getattr(module_object, func)(data, **kwargs)

return data


def save_history(file, data):
"""
Saves history of physiological `data` to `file`
Saved file can be replayed with `physutils.io.load_history`
Parameters
----------
file : str
Path to output file; .json will be appended if necessary
data : Physio_like
Data with history to be saved to file
Returns
-------
file : str
Full filepath to saved output
"""

from physutils.physio import check_physio

data = check_physio(data)
if len(data.history) == 0:
logger.warning(
"History of provided Physio object is empty. Saving "
"anyway, but reloading this file will result in an "
"error."
)
file += ".json" if not file.endswith(".json") else ""
with open(file, "w") as dest:
json.dump(data.history, dest, indent=4)
logger.info(f"Saved {data} history in {file}")

return file
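
A hedged round-trip sketch of the functions documented above; the file names are illustrative, and replaying the history assumes the original data path still resolves from the current directory:

import numpy as np

from physutils import io, physio

phys = io.load_physio("ECG.csv", fs=1000.0)  # plain-text load; history starts here
assert isinstance(phys, physio.Physio)

out = io.save_physio("ECG_copy", phys)  # returns "ECG_copy.phys"
reloaded = io.load_physio(out, allow_pickle=True)  # saved metadata dict needs pickle
assert np.allclose(phys, reloaded)

hist = io.save_history("ECG_history", phys)  # returns "ECG_history.json"
replayed = io.load_history(hist)  # re-runs the recorded load_physio call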
544 changes: 544 additions & 0 deletions physutils/physio.py

Large diffs are not rendered by default.
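
The 544-line physutils/physio.py diff is not rendered on this page, so the following is only a hypothetical sketch of the migrated class, inferred from the tests added below (constructor signature, properties, repr, and NumPy array behaviour), not an excerpt of the file:

import numpy as np

from physutils.physio import Physio

data = np.loadtxt("physutils/tests/data/ECG.csv")  # bundled ECG trace
phys = Physio(data, fs=1000.0)  # one-dimensional data plus sampling rate

print(phys)  # e.g. "Physio(size=44611, fs=1000.0)"
print(phys.fs, phys.data.shape)  # exposed as properties
print(phys.history, phys.peaks, phys.troughs)  # provenance and peak metadata
print(np.exp(phys).shape)  # Physio objects behave like NumPy arrays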

453 changes: 0 additions & 453 deletions physutils/physio_obj.py

This file was deleted.

16 changes: 16 additions & 0 deletions physutils/tests/conftest.py
@@ -0,0 +1,16 @@
import pytest
from _pytest.logging import LogCaptureFixture
from loguru import logger


@pytest.fixture
def caplog(caplog: LogCaptureFixture):
handler_id = logger.add(
caplog.handler,
format="{message}",
level=20,
filter=lambda record: record["level"].no >= caplog.handler.level,
enqueue=False,
)
yield caplog
logger.remove(handler_id)
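
This fixture forwards loguru records into pytest's caplog handler so the test suite can assert on log output. A minimal, hypothetical test showing the pattern (test_io.py below relies on the same mechanism when counting warnings):

from loguru import logger


def test_warning_is_captured(caplog):
    logger.warning("sampling rate mismatch")  # emitted through loguru
    assert "WARNING" in caplog.text  # forwarded into pytest's caplog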
44,611 changes: 44,611 additions & 0 deletions physutils/tests/data/ECG.csv

Large diffs are not rendered by default.

Binary file added physutils/tests/data/ECG.phys
Binary file not shown.
12 changes: 12 additions & 0 deletions physutils/tests/data/history.json
@@ -0,0 +1,12 @@
[
[
"physutils.tests.utils.filter_physio",
{
"cutoffs": [
5.0,
15.0
],
"method": "bandpass"
}
]
]
93 changes: 93 additions & 0 deletions physutils/tests/test_io.py
@@ -0,0 +1,93 @@
# -*- coding: utf-8 -*-

import json
import os

import numpy as np
import pytest

from physutils import io, physio
from physutils.tests.utils import filter_physio, get_test_data_path


def test_load_physio(caplog):
# try loading pickle file (from io.save_physio)
pckl = io.load_physio(get_test_data_path("ECG.phys"), allow_pickle=True)
assert isinstance(pckl, physio.Physio)
assert pckl.data.size == 44611
assert pckl.fs == 1000.0
pckl = io.load_physio(get_test_data_path("ECG.phys"), fs=500.0, allow_pickle=True)
assert caplog.text.count("WARNING") == 1
assert pckl.fs == 500.0

# try loading CSV file
csv = io.load_physio(get_test_data_path("ECG.csv"))
assert isinstance(csv, physio.Physio)
assert np.allclose(csv, pckl)
assert np.isnan(csv.fs)
assert csv.history[0][0] == "physutils.io.load_physio"

# try loading array
arr = io.load_physio(np.loadtxt(get_test_data_path("ECG.csv")))
assert caplog.text.count("WARNING") == 2
assert isinstance(arr, physio.Physio)
arr = io.load_physio(
np.loadtxt(get_test_data_path("ECG.csv")),
history=[("np.loadtxt", {"fname": "ECG.csv"})],
)
assert isinstance(arr, physio.Physio)

# try loading physio object (and resetting dtype)
out = io.load_physio(arr, dtype="float32")
assert out.data.dtype == np.dtype("float32")
assert out.history[0][0] == "np.loadtxt"
assert out.history[-1][0] == "physutils.io.load_physio"
with pytest.raises(TypeError):
io.load_physio([1, 2, 3])


def test_save_physio(tmpdir):
pckl = io.load_physio(get_test_data_path("ECG.phys"), allow_pickle=True)
out = io.save_physio(tmpdir.join("tmp").purebasename, pckl)
assert os.path.exists(out)
assert isinstance(io.load_physio(out, allow_pickle=True), physio.Physio)


def test_load_history(tmpdir):
# get paths of data, new history
fname = get_test_data_path("ECG.csv")
temp_history = tmpdir.join("tmp").purebasename

# make physio object and perform some operations
phys = io.load_physio(fname, fs=1000.0)
filt = filter_physio(phys, [5.0, 15.0], "bandpass")

# save history to file and recreate new object from history
path = io.save_history(temp_history, filt)
replayed = io.load_history(path, verbose=True)

# ensure objects are the same
assert np.allclose(filt, replayed)
assert filt.history == replayed.history
assert filt.fs == replayed.fs


def test_save_history(tmpdir, caplog):
# get paths of data, original history, new history
fname = get_test_data_path("ECG.csv")
orig_history = get_test_data_path("history.json")
temp_history = tmpdir.join("tmp").purebasename

# make physio object and perform some operations
phys = physio.Physio(np.loadtxt(fname), fs=1000.0)
io.save_history(temp_history, phys)
assert caplog.text.count("WARNING") == 1 # no history = warning
filt = filter_physio(phys, [5.0, 15.0], "bandpass")
path = io.save_history(temp_history, filt)  # dump history

# load both original and new json and ensure equality
with open(path, "r") as src:
hist = json.load(src)
with open(orig_history, "r") as src:
orig = json.load(src)
assert hist == orig
49 changes: 49 additions & 0 deletions physutils/tests/test_physio.py
@@ -0,0 +1,49 @@
# -*- coding: utf-8 -*-

import numpy as np
import pytest

from physutils.physio import Physio
from physutils.tests import utils as testutils

DATA = np.loadtxt(testutils.get_test_data_path("ECG.csv"))
PROPERTIES = ["data", "fs", "history", "peaks", "troughs", "_masked"]
PHYSIO_TESTS = [
# accepts "correct" inputs for history
dict(kwargs=dict(data=DATA, history=[("good", "history")])),
# fails on bad inputs for history
dict(kwargs=dict(data=DATA, history=["malformed", "history"]), raises=TypeError),
dict(kwargs=dict(data=DATA, history="not real history"), raises=TypeError),
# accepts "correct" for metadata
dict(kwargs=dict(data=DATA, metadata=dict())),
dict(kwargs=dict(data=DATA, metadata=dict(peaks=[], reject=[], troughs=[]))),
# fails on bad inputs for metadata
dict(kwargs=dict(data=DATA, metadata=[]), raises=TypeError),
dict(kwargs=dict(data=DATA, metadata=dict(peaks={})), raises=TypeError),
# fails on bad inputs for data
dict(kwargs=dict(data=np.column_stack([DATA, DATA])), raises=ValueError),
dict(kwargs=dict(data="hello"), raises=ValueError),
]


def test_physio():
phys = Physio(DATA, fs=1000)
assert len(np.hstack((phys[:10], phys[10:-10], phys[-10:])))
assert str(phys) == "Physio(size=44611, fs=1000.0)"
assert len(np.exp(phys)) == 44611


class TestPhysio:
tests = PHYSIO_TESTS

def test_physio_creation(self):
for test in PHYSIO_TESTS:
if test.get("raises") is not None:
with pytest.raises(test["raises"]):
phys = Physio(**test["kwargs"])
else:
phys = Physio(**test["kwargs"])
for prop in PROPERTIES:
assert hasattr(phys, prop)
for prop in ["peaks", "reject", "troughs"]:
assert isinstance(phys._metadata.get(prop), np.ndarray)
123 changes: 0 additions & 123 deletions physutils/tests/test_physio_obj.py

This file was deleted.

79 changes: 79 additions & 0 deletions physutils/tests/utils.py
@@ -0,0 +1,79 @@
"""
Utilities for testing
"""

from os.path import join as pjoin

import numpy as np
from pkg_resources import resource_filename
from scipy import signal

from physutils import physio


def get_test_data_path(fname=None):
"""Function for getting `peakdet` test data path"""
path = resource_filename("physutils", "tests/data")
return pjoin(path, fname) if fname is not None else path


def get_sample_data():
"""Function for generating tiny sine wave form for testing"""
data = np.sin(np.linspace(0, 20, 40))
peaks, troughs = np.array([3, 15, 28]), np.array([9, 21, 34])

return data, peaks, troughs


@physio.make_operation()
def filter_physio(data, cutoffs, method, *, order=3):
"""
Applies an `order`-order digital `method` Butterworth filter to `data`
Parameters
----------
data : Physio_like
Input physiological data to be filtered
cutoffs : int or list
If `method` is 'lowpass' or 'highpass', an integer specifying the lower
or upper bound of the filter (in Hz). If method is 'bandpass' or
'bandstop', a list specifying the lower and upper bound of the filter
(in Hz).
method : {'lowpass', 'highpass', 'bandpass', 'bandstop'}
The type of filter to apply to `data`
order : int, optional
Order of filter to be applied. Default: 3
Returns
-------
filtered : :class:`physutils.Physio`
Filtered input `data`
"""

_valid_methods = ["lowpass", "highpass", "bandpass", "bandstop"]

data = physio.check_physio(data, ensure_fs=True)
if method not in _valid_methods:
raise ValueError(
"Provided method {} is not permitted; must be in {}.".format(
method, _valid_methods
)
)

cutoffs = np.array(cutoffs)
if method in ["lowpass", "highpass"] and cutoffs.size != 1:
raise ValueError("Cutoffs must be length 1 when using {} filter".format(method))
elif method in ["bandpass", "bandstop"] and cutoffs.size != 2:
raise ValueError("Cutoffs must be length 2 when using {} filter".format(method))

nyq_cutoff = cutoffs / (data.fs * 0.5)
if np.any(nyq_cutoff > 1):
raise ValueError(
"Provided cutoffs {} are outside of the Nyquist "
"frequency for input data with sampling rate {}.".format(cutoffs, data.fs)
)

b, a = signal.butter(int(order), nyq_cutoff, btype=method)
filtered = physio.new_physio_like(data, signal.filtfilt(b, a, data))

return filtered
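
A hedged usage sketch of the decorated test helper above; the cutoffs and sampling rate mirror the values used in test_io.py and the bundled history.json:

from physutils import io
from physutils.tests.utils import filter_physio

phys = io.load_physio("physutils/tests/data/ECG.csv", fs=1000.0)
filt = filter_physio(phys, [5.0, 15.0], "bandpass")  # 3rd-order Butterworth by default

# The @physio.make_operation() decorator records this call in the object's history,
# which is what save_history/load_history later serialize and replay.
print(filt.history[-1])  # ("physutils.tests.utils.filter_physio", {...})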
44 changes: 41 additions & 3 deletions setup.cfg
@@ -22,7 +22,10 @@ provides =
[options]
python_requires = >=3.6.1
install_requires =
matplotlib >=3.9
numpy >=1.9.3
scipy
loguru
tests_require =
pytest >=3.6
test_suite = pytest
@@ -37,23 +40,58 @@ doc =
sphinx_rtd_theme
style =
flake8 >=3.7
flake8-docstrings >=1.5
black
isort <6.0.0
pydocstyle
codespell
test =
scipy
pytest >=5.3
pytest-cov
%(style)s
devtools =
pre-commit
all =
%(doc)s
%(style)s
%(test)s
%(devtools)s

[flake8]
doctest = True
exclude=
*build/
tests
ignore = E126, E402, W503
max-line-length = 99
versioneer.py
ignore = E126, E402, W503, W401, W811
extend-ignore = E203, E501
extend-select = B950
max-line-length = 88
per-file-ignores =
*/__init__.py:F401

[isort]
profile = black
skip_gitignore = true
extend_skip =
setup.py
versioneer.py
physutils/_version.py
skip_glob =
docs/*

[pydocstyle]
convention = numpy
match =
physutils/*.py
match_dir = physutils/[^tests]*

[codespell]
skip = venvs,.venv,versioneer.py,.git,build,./docs/_build
write-changes =
count =
quiet-level = 3

[tool:pytest]
doctest_optionflags = NORMALIZE_WHITESPACE
xfail_strict = true
14 changes: 8 additions & 6 deletions setup.py
@@ -4,11 +4,13 @@
from setuptools import setup
import versioneer

SETUP_REQUIRES = ['setuptools >= 30.3.0']
SETUP_REQUIRES += ['wheel'] if 'bdist_wheel' in sys.argv else []
SETUP_REQUIRES = ["setuptools >= 30.3.0"]
SETUP_REQUIRES += ["wheel"] if "bdist_wheel" in sys.argv else []

if __name__ == "__main__":
setup(name='physutils',
setup_requires=SETUP_REQUIRES,
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass())
setup(
name="physutils",
setup_requires=SETUP_REQUIRES,
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
)
267 changes: 165 additions & 102 deletions versioneer.py

Large diffs are not rendered by default.
