
Replace current NIfTI viewer with NiiVue? #274

Open
kabilar opened this issue Feb 24, 2025 · 11 comments

kabilar commented Feb 24, 2025

Hi @magland, NiiVue is a WebGL2 viewer for NIfTI files that is quite performant. Feel free to check out their demos. Perhaps NiiVue would be a good replacement for the current viewer, which only allows viewing NIfTI files along a single axis and doesn't provide much interactivity? Thanks for considering.

cc @satra


magland commented Feb 25, 2025

Thanks @kabilar

I've done a basic integration, although haven't tested it on more than a couple examples.

It's working for this anatomical example:

https://neurosift.app/openneuro-dataset/ds005920?tab=sub-001/anat/sub-001_T1w.nii.gz|36aea485f861d1bed3273476a6b34d9619d4535f


For the fMRI (BOLD) in that dataset, it takes a long time to load (around 1 GB of data), and then only shows the first volume/timepoint. Since the file is gzipped, there's no way to do a partial load, unfortunately. So maybe I'll disable viewing if the file is larger than some threshold.
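One way to sketch that threshold idea (a hypothetical helper; the names and the 100 MB cutoff are illustrative assumptions, not Neurosift's actual code): classify the remote file by its size, e.g. an HTTP Content-Length, before fetching any voxel data.

```javascript
// Hypothetical sketch: decide what the viewer should do from the remote file's
// size (e.g. an HTTP Content-Length) before downloading anything. The names
// and the 100 MB cutoff are illustrative assumptions, not Neurosift's code.
function loadPolicy(sizeBytes, limitBytes = 100 * 1024 * 1024) {
  if (sizeBytes == null) return 'confirm';          // size unknown: ask the user
  return sizeBytes > limitBytes ? 'warn' : 'load';  // warn before very large pulls
}
```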


satra commented Feb 25, 2025

nice! i would suggest making this work not by default but with a click of a button, as this view does pull data from the remote resource. also @neurolabusc could suggest a default view perhaps that makes most sense.

for bold, it could be significantly more data, and i would stay away from a default view.


magland commented Feb 25, 2025

Okay, it now requires clicking a load button... and if the file is >100 MB, there's a warning alert and a "load anyway" button.



neurolabusc commented Feb 25, 2025

@magland

1. For large 4D datasets, I would use the limitFrames4D option as shown in this live demo. Note that the timeline has an ellipsis (...) that a user can click to load the entire dataset.
2. This issue is timely, as NiiVue is adopting the Compression Streams API via NiiVue PR 1136, NiiVue PR 1189 and NIFTI-Reader-JS PR. This will make all loaders asynchronous (non-blocking), faster (native browser decompression rather than the fflate JavaScript library), and lighter (fewer dependencies, smaller package size). In particular, new functions like decompressHeaderAsync() will aid your case. The reviews and merging will take about a fortnight, but the upcoming release of NiiVue will be dramatically better. This applies not only to NIfTI images: all loading (voxels, meshes, streamlines, connectomes) adopts these features, combined with loading optimizations (e.g. see streamline loading).
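As a rough sketch of what the Compression Streams API enables (assuming Node 18+ or a modern browser, where `DecompressionStream` and `Blob` are globals): decompress just the leading bytes of a gzipped `.nii.gz`, enough for the 348-byte header, and cancel the stream without inflating the rest. This helper is an assumption for illustration, not NiiVue's implementation of decompressHeaderAsync().

```javascript
// Sketch only: stream-decompress the start of a gzipped file and stop early.
// This is the kind of non-blocking partial read that functions like
// decompressHeaderAsync() are meant to provide; this helper is an assumption,
// not NiiVue's implementation.
async function readGzipPrefix(gzBytes, minBytes = 352) {
  const stream = new Blob([gzBytes]).stream()
    .pipeThrough(new DecompressionStream('gzip'));
  const reader = stream.getReader();
  const chunks = [];
  let got = 0;
  while (got < minBytes) {
    const { value, done } = await reader.read();
    if (done) break;
    chunks.push(value);
    got += value.length;
  }
  await reader.cancel(); // abandon the rest of the (possibly huge) stream
  const out = new Uint8Array(got);
  let off = 0;
  for (const c of chunks) { out.set(c, off); off += c.length; }
  return out; // at least the NIfTI-1 header, if the file is that large
}
```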


satra commented Feb 25, 2025

@magland - for bold i would use the view @neurolabusc suggested.


magland commented Feb 25, 2025

Thanks @neurolabusc , I look forward to the new version.

What's the best way to determine whether to use the 3D or 4D views? Here's what I am using right now:

https://github.com/flatironinstitute/neurosift/blob/main-v2/src/pages/common/DatasetWorkspace/plugins/nifti/components/NiftiViewer.tsx

@neurolabusc

@magland I think the choice of views depends on your use case - I tend to like volume rendering to ensure defacing removes recognizable features (e.g. many clinical scans have craniotomy scars, some people have dermoid cysts, etc.). I think the layout live demo provides an interactive way to evaluate options. The core niivue live demos are all pure HTML, so you can view their code to have a nice recipe regardless of your framework (Vue, React, Angular).


magland commented Feb 25, 2025

But I think the user will want a different layout depending on the type of image (anat, func, etc). I'm wondering if there's a way to get the array dimensions of the volume prior to specifying the layout. Looking at the examples I don't see an obvious way to get that information.


kabilar commented Feb 25, 2025

Thanks @magland. This looks great.

> I'm wondering if there's a way to get the array dimensions of the volume prior to specifying the layout.

Perhaps using nv.volumes[0].hdr.dims?
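For what it's worth, the dims are also readable straight off the first bytes of the file, before any volume is loaded. A minimal sketch (field offsets taken from the NIfTI-1 spec; the function name is made up):

```javascript
// Sketch: read the dim[] array from a raw NIfTI-1 header (the first 348 bytes
// of an uncompressed .nii). sizeof_hdr at offset 0 must be 348, which also
// reveals the byte order; dim[8] is an int16 array at offset 40, with dim[0]
// holding the number of dimensions. Hypothetical helper, not NiiVue API.
function niftiDims(headerBytes) {
  const view = new DataView(headerBytes.buffer, headerBytes.byteOffset);
  let littleEndian = true;
  if (view.getInt32(0, true) !== 348) {
    if (view.getInt32(0, false) !== 348) throw new Error('not a NIfTI-1 header');
    littleEndian = false;
  }
  const ndim = view.getInt16(40, littleEndian); // dim[0]
  const dims = [];
  for (let i = 1; i <= ndim; i++) dims.push(view.getInt16(40 + 2 * i, littleEndian));
  return dims; // e.g. [64, 64, 36, 200] for a 4D BOLD run
}
```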


satra commented Feb 25, 2025

@magland - the BIDS mnemonics + nifti header would give you the info you need. certain entities are only single volumes (T1, T2, ...) and certain entities (bold, dwi, ...) are multi volumes. most datasets don't have derivatives, so a lot of the more useful downstream visualizations won't be applicable to the majority of openneuro datasets, which contain collected (not processed) data. there could also be an agentic mapping: the semantics of the path + information in the corresponding json + nifti header could be passed to an agent that tells you what visualization code to use :)
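That filename heuristic could look something like the sketch below (the suffix sets are illustrative examples, not a complete BIDS mapping, and the function name is made up):

```javascript
// Hypothetical helper: guess 3D vs 4D from the BIDS suffix (the token before
// .nii/.nii.gz). The suffix sets below are examples only, not exhaustive.
const MULTI_VOLUME = new Set(['bold', 'dwi', 'asl']);
const SINGLE_VOLUME = new Set(['T1w', 'T2w', 'FLAIR', 'PDw']);
function expectedShape(filename) {
  const m = /_([A-Za-z0-9]+)\.nii(\.gz)?$/.exec(filename);
  if (!m) return 'unknown';
  if (MULTI_VOLUME.has(m[1])) return '4D';
  if (SINGLE_VOLUME.has(m[1])) return '3D';
  return 'unknown'; // fall back to reading the NIfTI header's dim[0]
}
```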

@neurolabusc

@magland my personal preference is to leverage the fact that 4D images (fMRI, DWI) tend to be low resolution, and you want to show a timeline, while 3D volumes tend to be higher resolution and are ideal for volume rendering. To see this selection on the layout live demo, make sure the timeline checkbox is checked and then set the rendering pulldown to Always. You can now drag and drop your volumes to see that 4D images show a timeline and 3D images show a rendering, regardless of canvas size or aspect ratio. Viewing the HTML source code, you will see this is achieved with:

```js
nv1.opts.multiplanarShowRender = 1 // ENUM: SHOW_RENDER ALWAYS
nv1.graph.autoSizeMultiplanar = true
```

Alternatively, as @satra notes, you could do this via the filename. If you want a custom mechanism that defines the layout based on an image's properties, not only when a user loads an image from your interface but also when they drag and drop a new one, you could use an onImageLoaded callback. The hero live demo illustrates this.
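Putting the pieces together, the heuristic above could be sketched as a small function mapping the volume shape to viewer choices. The input convention [x, y, z, t] and the returned object are assumptions for illustration; the comments point at the NiiVue settings quoted earlier in the thread.

```javascript
// Sketch of the heuristic described above: 4D images (fMRI, DWI) get a
// timeline, 3D volumes get a volume rendering. `shape` is assumed to be
// [x, y, z] or [x, y, z, t]; the returned object is illustrative and would
// be translated into nv1.opts / nv1.graph settings.
function layoutFor(shape) {
  const is4D = shape.length >= 4 && shape[3] > 1;
  return {
    showTimeline: is4D,  // cf. nv1.graph.autoSizeMultiplanar
    showRender: !is4D,   // cf. nv1.opts.multiplanarShowRender
  };
}
```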
