Commit d00f6d8

Add a docs page in the user guide about adding trials (#1002)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
bendichter and pre-commit-ci[bot] authored Aug 15, 2024
1 parent f266297 commit d00f6d8
Showing 4 changed files with 67 additions and 6 deletions.
30 changes: 30 additions & 0 deletions docs/user_guide/adding_trials.rst
@@ -0,0 +1,30 @@
.. _adding_trials:

Adding Trials to NWB Files
==========================

NWB allows you to store information about time intervals in a structured way. These structures are often used to store
information about trials, epochs, or other time intervals in the data.
You can add time intervals to an ``NWBFile`` object before writing it using PyNWB.
Here is how you would add trials to an NWB file:

.. code-block:: python

    # you can add custom columns to the trials table
    nwbfile.add_trial_column(name="trial_type", description="the type of trial")
    nwbfile.add_trial(start_time=0.0, stop_time=1.0, trial_type="go")
    nwbfile.add_trial(start_time=1.0, stop_time=2.0, trial_type="nogo")

You can also add epochs or other types of time intervals to an NWB file. See
`PyNWB Annotating Time Intervals <https://pynwb.readthedocs.io/en/stable/tutorials/general/plot_timeintervals.html>`_
for more information.

Once this information is added, you can write the NWB file to disk.

.. code-block:: python

    from neuroconv.tools.nwb_helpers import configure_and_write_nwbfile

    configure_and_write_nwbfile(nwbfile, save_path="path/to/destination.nwb", backend="hdf5")
25 changes: 23 additions & 2 deletions docs/user_guide/datainterfaces.rst
@@ -143,8 +143,8 @@ Here we can see that ``metadata["Ecephys"]["ElectrodeGroup"][0]["location"]`` is
Use ``.get_metadata_schema()`` to get the schema of the metadata dictionary. This schema is a JSON-schema-like dictionary that specifies required and optional fields in the metadata dictionary. See :ref:`metadata schema <metadata_schema>` for more information.

4a. Run conversion
~~~~~~~~~~~~~~~~~~
The ``.run_conversion`` method takes the (edited) metadata dictionary and
the path of an NWB file, and launches the actual data conversion into NWB.

@@ -159,3 +159,24 @@ This method reads and writes large datasets piece-by-piece, so you
can convert large datasets without overloading the computer's available RAM.
It also uses good defaults for data chunking and lossless compression, reducing
the file size of the output NWB file and optimizing the file for cloud compute.

4b. Create an in-memory NWB file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want to create an in-memory NWB file, you can use the ``.create_nwbfile`` method.

.. code-block:: python

    nwbfile = spikeglx_interface.create_nwbfile(metadata=metadata)

This is useful for adding data such as trials, epochs, or other time intervals to the NWB file. See
:ref:`Adding Time Intervals to NWB Files <adding_trials>` for more information.

This does not load large datasets into memory. Those remain in the source files and are read piece-by-piece during the
write process. Once you have made all the modifications you want to the ``NWBFile``, you can save it to disk. The
following code automatically optimizes datasets for cloud compute and writes the file to disk.

.. code-block:: python

    from neuroconv.tools.nwb_helpers import configure_and_write_nwbfile

    configure_and_write_nwbfile(nwbfile, save_path="path/to/destination.nwb", backend="hdf5")
1 change: 1 addition & 0 deletions docs/user_guide/index.rst
@@ -20,6 +20,7 @@ and synchronize data across multiple sources.

datainterfaces
nwbconverter
adding_trials
temporal_alignment
csvs
expand_path
17 changes: 13 additions & 4 deletions docs/user_guide/nwbconverter.rst
@@ -44,21 +44,30 @@ keys of ``data_interface_classes``.
This creates an :py:class:`.NWBConverter` object that can aggregate and distribute across
the data interfaces. To fetch metadata across all of the interfaces and merge
them together, call:

.. code-block:: python

    metadata = converter.get_metadata()

The metadata can then be manually modified with any additional user input, just like ``DataInterface`` objects:

.. code-block:: python

    metadata["NWBFile"]["session_description"] = "NeuroConv tutorial."
    metadata["NWBFile"]["experimenter"] = "My name"
    metadata["Subject"]["subject_id"] = "ID of experimental subject"

The final metadata dictionary should follow the form defined by :meth:`.NWBConverter.get_metadata_schema`.
Now run the entire conversion with:

.. code-block:: python

    converter.run_conversion(metadata=metadata, nwbfile_path="my_nwbfile.nwb")

Like ``DataInterface`` objects, :py:class:`.NWBConverter` objects can output an in-memory NWBFile object by
calling :meth:`.NWBConverter.create_nwbfile`. This can be useful for debugging or for further processing.

Though this example was only for two data streams (recording and spike-sorted
data), it can easily extend to any number of sources, including video of a
subject, extracted position estimates, stimuli, or any other data source.
