DataInterfaces

The BaseDataInterface class provides a unified API for converting data from any single input stream. See the Conversion Gallery for existing DataInterface classes and example usage. The standard workflow for using a DataInterface is as follows:

1. Installation

Each DataInterface may require format-specific dependencies for reading its particular file format. To install NeuroConv together with all the appropriate dependencies for a given format, use the corresponding pip extra. For instance, to install the dependencies for SpikeGLX, run:

pip install neuroconv[spikeglx]

Note

If you are using Z shell (zsh, the default shell on macOS), you will have to quote the extra:

pip install 'neuroconv[spikeglx]'
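Multiple formats can also be installed in one step by listing several extras, for example (assuming each format name corresponds to an available extra, as spikeglx does above):

pip install 'neuroconv[spikeglx,phy]'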

2. Construction

Initialize the class and point it to the appropriate source data. This opens the files and reads header information, setting everything up for conversion, but generally does not read the underlying data into memory.

from neuroconv.datainterfaces import SpikeGLXRecordingInterface

spikeglx_interface = SpikeGLXRecordingInterface(file_path="path/to/towersTask_g0_t0.imec0.ap.bin")

Note

To see the expected form of the source data arguments, run BaseDataInterface.get_source_schema(), which returns the source schema as a JSON-schema-like dictionary describing the required and optional input arguments of the downstream readers.
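For example, a minimal sketch of inspecting that schema (get_source_schema() is a classmethod, so no source files are needed; the standard JSON-schema keys properties and required list the available and mandatory arguments):

from neuroconv.datainterfaces import SpikeGLXRecordingInterface

# The source schema describes the constructor arguments of the interface.
source_schema = SpikeGLXRecordingInterface.get_source_schema()
print(list(source_schema["properties"]))  # available arguments, e.g. file_path
print(source_schema.get("required", []))  # arguments that must be provided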

3. Get metadata

Each DataInterface can extract relevant metadata from the source files and organize it into a hierarchical dictionary for writing to NWB. This dictionary can be edited to include information that is not available in the source files.

metadata = spikeglx_interface.get_metadata()
metadata["NWBFile"]["experimenter"] = ["Darwin, Charles"]
metadata["Subject"] = dict(
    subject_id="M001",
    sex="M",
    age="P30D",
)
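Because the metadata object behaves like a nested dictionary, you can also inspect what the interface extracted automatically before overriding anything. A minimal sketch (the exact sections present, such as Ecephys for recording interfaces, depend on the interface):

# Top-level sections extracted from the source files, e.g. NWBFile, Ecephys
print(list(metadata.keys()))

# Values read from the SpikeGLX headers, such as the session start time
print(metadata["NWBFile"].get("session_start_time"))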

4. Run conversion

The .run_conversion method takes the (edited) metadata dictionary and the path of the output NWB file, and launches the actual data conversion. This process generally reads and writes piece-by-piece, so even large datasets can be converted without overloading the computer's available RAM. It also applies sensible defaults for data chunking and lossless compression, reducing the size of the output NWB file.

spikeglx_interface.run_conversion(
    nwbfile_path="path/to/destination.nwb",
    metadata=metadata
)
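As an optional sanity check, you can read the converted file back with pynwb (the library NeuroConv writes through) and confirm the edited metadata landed; a minimal sketch:

from pynwb import NWBHDF5IO

# Open the converted file read-only and spot-check the metadata.
with NWBHDF5IO("path/to/destination.nwb", mode="r") as io:
    nwbfile = io.read()
    print(nwbfile.experimenter)         # ('Darwin, Charles',)
    print(nwbfile.subject.subject_id)   # 'M001'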