Hello,

This is not an issue report but a question: how does ClearVolume handle time-lapse data internally? Does it load only the currently active time-point into RAM and then onto the graphics card?
I am asking because we currently use ImageJ's virtual stacks for large data sets, where a single time-point is not a big issue (~2-4 GB) but all time-points together (100 time-points) would be. Internal logic in ClearVolume that only "requests" the currently active time-point, i.e. only calls imp.getProcessor() for the slices of that time-point, would therefore make it compatible with big data in a very easy way.
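For illustration only, here is a minimal sketch of what such per-time-point extraction could look like on the ImageJ side. The class and method names (TimepointExtractor, copyTimepoint) are made up for this example, it assumes a 16-bit, single-channel hyperstack, and only the ij.* calls are existing ImageJ API:

```java
import ij.ImagePlus;
import ij.ImageStack;
import ij.process.ImageProcessor;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

public class TimepointExtractor {

    /**
     * Copies only the slices belonging to one time-point (1-based frame index)
     * of a (virtual) stack into a direct buffer. Assumes a 16-bit,
     * single-channel image. With a virtual stack, stack.getProcessor(index)
     * is what triggers the actual read from disk, so no other time-point
     * ever has to reside in RAM.
     *
     * Note: a single ByteBuffer is limited to 2 GiB, so time-points larger
     * than that would have to be split across several buffers.
     */
    public static ByteBuffer copyTimepoint(final ImagePlus imp, final int frame) {
        final int w = imp.getWidth();
        final int h = imp.getHeight();
        final int nSlices = imp.getNSlices();
        final ImageStack stack = imp.getStack();

        final ByteBuffer buffer =
                ByteBuffer.allocateDirect(w * h * nSlices * 2)
                          .order(ByteOrder.nativeOrder());
        final ShortBuffer shorts = buffer.asShortBuffer();

        for (int z = 1; z <= nSlices; z++) {
            // 1-based flat stack index for (channel = 1, slice = z, frame)
            final int index = imp.getStackIndex(1, z, frame);
            final ImageProcessor ip = stack.getProcessor(index);
            shorts.put((short[]) ip.getPixels());
        }
        return buffer;
    }
}
```

Because a virtual stack only reads a slice from disk when getProcessor(index) is called, the memory footprint stays at one time-point (~2-4 GB in our case) no matter how many time-points the series contains.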
The way this works at the moment is that only a single timepoint is submitted to the GPU at any given moment. You are right that a 2-4 GiB image does not pose a large problem.
However, you also need to take transfer latency into consideration, which becomes an issue with larger volumes, as they are handled as textures and are processed by the GPU on arrival. A while ago, @tpietzsch and I did some optimisations of how the images are transferred from imglib2 to ClearVolume, which yielded quite a substantial improvement, but we have not yet had time to test this really in depth. In case you'd like to have a look, the code is in the LoadOptimizationsAndProfiling branch of imglib2-clearvolume, which you can find here.
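To get a feeling for whether that transfer latency is acceptable for ~2-4 GB time-points, something like the following timing loop could be wrapped around the extraction step. This is only a sketch: it reuses the hypothetical copyTimepoint helper from above, and the VolumeSink/uploadToRenderer placeholder stands in for whatever entry point actually receives the buffer; it is not a real ClearVolume method.

```java
import ij.ImagePlus;
import java.nio.ByteBuffer;

public class TimepointPlayback {

    /** Placeholder for the actual hand-off to the volume renderer. */
    interface VolumeSink {
        void uploadToRenderer(ByteBuffer volume, int width, int height, int depth);
    }

    /**
     * Steps through all time-points, keeping only one of them in RAM at a
     * time and reporting how long the disk read + copy + upload takes per
     * time-point, i.e. the latency the viewer would see when stepping
     * through the series.
     */
    public static void play(final ImagePlus imp, final VolumeSink sink) {
        final int nFrames = imp.getNFrames();
        for (int t = 1; t <= nFrames; t++) {
            final long start = System.nanoTime();

            // Only this call touches the virtual stack, so only the
            // current time-point is ever read into RAM.
            final ByteBuffer volume = TimepointExtractor.copyTimepoint(imp, t);
            sink.uploadToRenderer(volume, imp.getWidth(), imp.getHeight(), imp.getNSlices());

            final double ms = (System.nanoTime() - start) / 1e6;
            System.out.printf("time-point %d: %.1f ms%n", t, ms);
        }
    }
}
```

Timing the copy and the upload separately would also show whether the imglib2-to-ClearVolume transfer path (the part the LoadOptimizationsAndProfiling branch optimises) or the disk read of the virtual stack dominates for a given data set.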