
Commit b88c7fa: fix conflicts

CodyCBakerPhD committed Dec 3, 2023
1 parent 84c3087
Showing 5 changed files with 8 additions and 16 deletions.
CHANGELOG.md: 1 addition & 4 deletions

@@ -6,6 +6,7 @@

 ### Features
 * Changed the `Suite2pSegmentationInterface` to support multiple plane segmentation outputs. The interface now has `plane_name` and `channel_name` arguments that determine which plane output and channel trace are added to the NWBFile. [PR #601](https://github.com/catalystneuro/neuroconv/pull/601)
+* Added tool function `configure_datasets` for configuring all datasets of an in-memory `NWBFile` to be backend-specific. [PR #571](https://github.com/catalystneuro/neuroconv/pull/571)

 ### Improvements
 * `nwbinspector` has been removed as a minimal dependency. It becomes an extra (optional) dependency with `neuroconv[dandi]`. [PR #672](https://github.com/catalystneuro/neuroconv/pull/672)
@@ -21,9 +22,6 @@
 * Modify the filtering of traces to also filter out traces with empty values. [PR #649](https://github.com/catalystneuro/neuroconv/pull/649)
 * Added tool function `get_default_dataset_configurations` for identifying and collecting all fields of an in-memory `NWBFile` that could become datasets on disk, and returning instances of the Pydantic dataset models filled with default values for chunking/buffering/compression. [PR #569](https://github.com/catalystneuro/neuroconv/pull/569)
 * Added tool function `get_default_backend_configuration` for conveniently packaging the results of `get_default_dataset_configurations` into an easy-to-modify mapping from locations of objects within the file to their corresponding dataset configuration options, as well as linking to a specific backend DataIO. [PR #570](https://github.com/catalystneuro/neuroconv/pull/570)
-<<<<<<< HEAD
-* Added tool function `configure_datasets` for configuring all datasets of an in-memory `NWBFile` to be backend-specific. [PR #571](https://github.com/catalystneuro/neuroconv/pull/571)
-=======
 * Added `set_probe()` method to `BaseRecordingExtractorInterface`. [PR #639](https://github.com/catalystneuro/neuroconv/pull/639)
 * Changed default chunking of `ImagingExtractorDataChunkIterator` to select a `chunk_shape` below the `chunk_mb` threshold while keeping the original image size. The default `chunk_mb` was changed to 10 MB. [PR #667](https://github.com/catalystneuro/neuroconv/pull/667)

@@ -85,7 +83,6 @@
 ### Fixes

 * Reorganize timeintervals schema to reside in `schemas/` dir to ensure its inclusion in package build. [PR #573](https://github.com/catalystneuro/neuroconv/pull/573)
->>>>>>> main
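The `Suite2pSegmentationInterface` entry above introduces per-plane, per-channel selection. A minimal usage sketch with illustrative values; the folder path and the plane/channel names used here ("plane0", "chan1") depend on the particular Suite2p output:

from neuroconv.datainterfaces import Suite2pSegmentationInterface

# Illustrative values; one interface instance per plane/channel combination.
interface = Suite2pSegmentationInterface(
    folder_path="path/to/suite2p",  # hypothetical Suite2p output directory
    plane_name="plane0",
    channel_name="chan1",
)
metadata = interface.get_metadata()
interface.run_conversion(nwbfile_path="segmentation.nwb", metadata=metadata)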
setup.py: 4 additions & 3 deletions

@@ -18,9 +18,13 @@
     testing_suite_dependencies = f.readlines()

 extras_require = defaultdict(list)
+
 extras_require["dandi"].append("dandi>=0.58.1")
 extras_require["full"].extend(extras_require["dandi"])
+
+extras_require.update(compressors=["hdf5plugin"])
+extras_require["full"].extend(["hdf5plugin"])

 extras_require.update(test=testing_suite_dependencies, docs=documentation_dependencies)
 for modality in ["ophys", "ecephys", "icephys", "behavior", "text"]:
     modality_path = root / "src" / "neuroconv" / "datainterfaces" / modality
@@ -44,9 +48,6 @@
         extras_require[modality].extend(format_requirements)
         extras_require[format_subpath.name].extend(format_requirements)

-extras_require.update(compressors=["hdf5plugin"])
-extras_require["full"].extend(["hdf5plugin"])
-
 # Create a local copy for the gin test configuration file based on the master file `base_gin_test_config.json`
 gin_config_file_base = Path("./base_gin_test_config.json")
 gin_config_file_local = Path("./tests/test_on_data/gin_test_config.json")
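The first hunk moves the `hdf5plugin` registration up next to the `dandi` extra. A standalone, runnable sketch of this `defaultdict` extras pattern, showing the mapping that results (expected output written as a comment):

from collections import defaultdict

# Mirror of the extras-assembly pattern in the hunk above.
extras_require = defaultdict(list)
extras_require["dandi"].append("dandi>=0.58.1")
extras_require["full"].extend(extras_require["dandi"])

extras_require.update(compressors=["hdf5plugin"])
extras_require["full"].extend(["hdf5plugin"])

print(dict(extras_require))
# {'dandi': ['dandi>=0.58.1'], 'full': ['dandi>=0.58.1', 'hdf5plugin'], 'compressors': ['hdf5plugin']}

Each key becomes an installable extra, e.g. `pip install "neuroconv[dandi]"` or `neuroconv[full]`, matching the `neuroconv[dandi]` usage mentioned in the CHANGELOG entry above.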
src/neuroconv/tools/nwb_helpers/_dataset_configuration.py: 1 addition & 1 deletion

@@ -173,7 +173,7 @@ def configure_backend(
     data_io_class = backend_configuration.data_io_class
     for dataset_configuration in backend_configuration.datset_configurations:
         object_id = dataset_configuration.dataset_info.object_id
-        dataset_name = dataset_configuration.dataset_info.location.strip("/")[-1]
+        dataset_name = dataset_configuration.dataset_info.dataset_name
         data_io_kwargs = dataset_configuration.get_data_io_kwargs()

         # TODO: update buffer shape in iterator, if present
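This one-line fix likely corrects a classic `strip` vs. `split` mix-up: `str.strip("/")` only trims slashes from the ends of the string, so indexing the result with `[-1]` returns the last character rather than the last path component. Reading the stored `dataset_name` attribute avoids parsing the location altogether. An illustrative demonstration (the location string is hypothetical):

# Hypothetical dataset location of the kind dataset_info.location would hold.
location = "acquisition/TestTimeSeries/data"

print(location.strip("/")[-1])  # 'a'    : strip() trims '/' from the ends only, then [-1] grabs one character
print(location.split("/")[-1])  # 'data' : split() is what actually separates path components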
1 addition & 4 deletions

@@ -8,10 +8,7 @@
 from pynwb.testing.mock.file import mock_NWBFile

 from neuroconv.tools.hdmf import SliceableDataChunkIterator
-from neuroconv.tools.nwb_helpers import (
-    configure_backend,
-    get_default_backend_configuration,
-)
+from neuroconv.tools.nwb_helpers import configure_backend, get_default_backend_configuration


 def test_unwrapped_time_series_hdf5(tmpdir):
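Both of these test modules collapse the same multi-line import. Given what the imports and the test name suggest, the pattern under test is roughly the following; a minimal sketch assuming the keyword signatures `get_default_backend_configuration(nwbfile=..., backend=...)` and `configure_backend(nwbfile=..., backend_configuration=...)`:

import numpy as np
from pynwb import TimeSeries
from pynwb.testing.mock.file import mock_NWBFile

from neuroconv.tools.nwb_helpers import configure_backend, get_default_backend_configuration

# Build an in-memory NWBFile holding one plain-array ("unwrapped") TimeSeries.
nwbfile = mock_NWBFile()
time_series = TimeSeries(name="TestTimeSeries", data=np.zeros(shape=(30, 10)), unit="a.u.", rate=1.0)
nwbfile.add_acquisition(time_series)

# Collect default chunking/buffering/compression options for every dataset, then apply them in place.
backend_configuration = get_default_backend_configuration(nwbfile=nwbfile, backend="hdf5")
configure_backend(nwbfile=nwbfile, backend_configuration=backend_configuration)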
1 addition & 4 deletions

@@ -8,10 +8,7 @@
 from pynwb.testing.mock.file import mock_NWBFile

 from neuroconv.tools.hdmf import SliceableDataChunkIterator
-from neuroconv.tools.nwb_helpers import (
-    configure_backend,
-    get_default_backend_configuration,
-)
+from neuroconv.tools.nwb_helpers import configure_backend, get_default_backend_configuration


 def test_unwrapped_time_series_hdf5(tmpdir):
