[Bug]: Reading units table is very slow when expanding electrodes table #237
Comments
@rly here's the issue we discussed!
@alejoe91 Is the file available to share via Dandi or elsewhere? If not, we can make a dummy file.
@alejoe91 I was able to reproduce with my own file:
@alejoe91 Found the source. Exploring ideas to make it quicker.
@alejoe91 So I found an issue with my toy example. The time is being inflated by overhead with the spike times, which reveals a separate optimization issue to tackle in terms of resolving vector index info. That said, with that removed, my timing does show that Zarr takes twice as long, but I do not see a 10-minute wait; rather, it is a difference of 4 seconds (Zarr) vs 2 seconds (HDF5). I assume this is just down to the differences in our files. Still, Zarr shouldn't take twice as long. I can continue working on this, but we would need to test it against your work to make sure the fix is impactful.
@alejoe91 You mention that it "copies the entire dynamic table region into the dataframe." That raises the question: what would you want to see in the to_dataframe() view of the units table? Would you want the electrodes column to just be a column of indices into the electrodes table? Would you want an arg in to_dataframe to toggle this?
Hi @mavaylon1 Thank you so much for working on this! The new times are perfectly acceptable :) There's already an option not to expand the electrodes, using index=True.
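For reference, a minimal sketch of the two call styles being discussed (it assumes the file has already been read into nwbfile; per the comments here, index=True keeps DynamicTableRegion columns such as electrodes as row indices instead of expanding them):

```python
# minimal sketch; assumes `nwbfile` has already been read from the file
units = nwbfile.units

# default: DynamicTableRegion columns such as `electrodes` are expanded into
# nested dataframes, which is the slow path discussed in this issue
df_expanded = units.to_dataframe()

# index=True: keep `electrodes` as integer row indices into the electrodes
# table, which skips the expansion and is much faster
df_indexed = units.to_dataframe(index=True)
```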
@alejoe91 Is it possible for me to get access to the nwbfile? Those times aren't new times (sadly). I was saying that the example I created to mimic your issue was being inflated in time by other issues that I will make a ticket for. I created a new example to focus on the electrodes, but those are the times I am getting (800 units and 400 electrodes).
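For context, a rough sketch of the kind of toy file described here (the exact construction used is an assumption; the counts match the 800 units and 400 electrodes mentioned, and names/paths are illustrative):

```python
from datetime import datetime
from uuid import uuid4

import numpy as np
from pynwb import NWBFile
from hdmf_zarr.nwb import NWBZarrIO

# build a toy file with 400 electrodes and 800 units, where each unit holds a
# DynamicTableRegion reference into the electrodes table
nwbfile = NWBFile(
    session_description="toy units/electrodes file",
    identifier=str(uuid4()),
    session_start_time=datetime.now().astimezone(),
)
device = nwbfile.create_device(name="probe")
group = nwbfile.create_electrode_group(
    name="shank0", description="toy group", location="unknown", device=device
)
for _ in range(400):
    nwbfile.add_electrode(group=group, location="unknown")

rng = np.random.default_rng(0)
for i in range(800):
    nwbfile.add_unit(
        spike_times=np.sort(rng.uniform(0, 3600, size=100)),
        electrodes=[i % 400],  # index into the electrodes table
    )

with NWBZarrIO("toy_units.nwb.zarr", mode="w") as io:
    io.write(nwbfile)
```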
Yep, here's a link to a test file where the issue is quite relevant because it has over 1000 units! It's still uploading, so give it a few minutes!
@alejoe91 What is the file size? I tried downloading it to my large-memory workstation via gdown, but it has too many files. I can download it locally, but I need to know the size of the Zarr file. Drive doesn't show it; do you know the size?
@alejoe91 Ryan and I talked on Friday. We have one idea to try. I am implementing that now and will test with your file once we figure out where to put it due to file size.
It should be 248.5 MB
@alejoe91 Can't seem to read the file. Let me investigate. It has some kind of dateutil parser issue.
Maybe it's a zarr version issue? What zarr/hdmf-zarr versions are you using?
@alejoe91 I see that your subject information is all empty. Somewhere this leads to these values being set to 0 (I looked at the builder), which then causes issues in the ObjectMapper because the type is wrong. As for versioning, we do have different versions, but I can't recall anything on the release milestones that would explain this. I will use your versions to see if I get the same issue.
Let me get back to you on this. I'll double check the file and that the upload was correct.
Ok, not 10 minutes, but it still takes a while: I'm just running this:
and getting:
Note that there is subject info!
Okay, so I tried with your env and it still doesn't work. I see in that image that you do actually have species info. I don't think the file is correct. @alejoe91
I think that the upload probably didn't finish. I'll try to re-upload tomorrow and send you an updated link.
Try putting it in a tar first. @alejoe91
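Since a Zarr store is a directory of many small files, packing it into a single archive makes the transfer easier. A minimal sketch using Python's tarfile (the file names are illustrative):

```python
import tarfile

# pack the Zarr directory store into a single archive before uploading
# (paths here are illustrative)
with tarfile.open("units_file.nwb.zarr.tar", "w") as tar:
    tar.add("units_file.nwb.zarr", arcname="units_file.nwb.zarr")
```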
@alejoe91 It looks to be good now. I will continue. Can I also have the HDF5 version you were using (I want to avoid differences in exporting)? I want to compare chunks and compression. Yes, I can go ahead with our idea, and it may help. However, it would be useful to see why there is a difference. From my digging, there are a lot of nested loops, which would slow things down when index is False. However, the question remains why HDF5 is faster. As for the file you provided, I can reproduce.
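A minimal sketch of the kind of chunk/compression comparison mentioned above (the file paths and the internal dataset path are assumptions; hdmf-zarr 0.9.0 uses zarr v2, where arrays expose .chunks and .compressor):

```python
import h5py
import zarr

# paths and the internal dataset path are illustrative
zarr_root = zarr.open("units_file.nwb.zarr", mode="r")
z_spikes = zarr_root["units/spike_times"]
print("zarr chunks:", z_spikes.chunks, "compressor:", z_spikes.compressor)

with h5py.File("units_file.nwb", "r") as f:
    h_spikes = f["units/spike_times"]
    print("hdf5 chunks:", h_spikes.chunks, "compression:", h_spikes.compression)
```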
If this is related to references, then the speed difference is probably due to the fact that we need to resolve references manually in Zarr.
@alejoe91 With my update it is 9.9 seconds. With the old method it is still running, so far at 2+ minutes and counting. I do see some other areas to fix as well, but a fix seems to have been found. Will keep you updated.
@alejoe91 Try this: hdmf-dev/hdmf#1248
@alejoe91 How is it on your end?
Testing now
Well this just got even weirder. I will escalate this to a group discussion because I don't know how to approach this now that it is faster for me but slower for you. @rly
I'll do some more testing on my end and get back to you!
@mavaylon1 I think I was using the wrong branch, sorry for the confusion! It seems that the fix speeds things up (hdmf opt_get branch vs. hdmf 4.0.0)!!!!
And here's the dataset I showed yesterday, which went from >400s to ~8s!
@alejoe91 Okay great!!! Let me finish the PR and we will get it merged ASAP.
@alejoe91 The fix has been merged into dev. The next bug release is the first week of March. Are you okay with just using the dev branch for now?
Sure, no problem at all! Thanks for the help on this @mavaylon1!!!
What happened?
We're mainly using NWB-Zarr and found that reading the units table and rendering it as a dataframe is prohibitively slow. This is mainly due to the electrodes column, which copies the entire dynamic table region into the dataframe. To give some numbers, reading a dataset with ~758 units takes over ~10 minutes. When index=True, reading time goes down to ~6s.
To investigate this performance issue, we also ran the same tests with the same file saved as HDF5, and here are the results (see steps to reproduce).
In general, Zarr is slower, but this could be because everything is compressed by default in Zarr, while no compression is applied in HDF5.
This barplot shows the reading time for each column in the units table, obtained with:
load_times_hdf5_zarr.pdf
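A minimal sketch of how per-column load times like these can be collected; this is an illustrative snippet, not the exact code used for the plot, and it assumes the file has already been read into nwbfile:

```python
import time

# time how long each column of the units table takes to materialize
# (assumes `nwbfile` has already been read, as in the steps below)
units = nwbfile.units
for colname in units.colnames:
    t0 = time.time()
    _ = units[colname][:]  # pull the full column, resolving references/indices
    print(f"{colname}: {time.time() - t0:.2f} s")
```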
Steps to Reproduce
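A sketch of the comparison described above, with placeholder paths for the Zarr and HDF5 copies of the same file (the exact reproduction script is an assumption):

```python
import time

from pynwb import NWBHDF5IO
from hdmf_zarr.nwb import NWBZarrIO

# placeholder paths; the same dataset saved once as Zarr and once as HDF5
for label, io_cls, path in [
    ("zarr", NWBZarrIO, "units_file.nwb.zarr"),
    ("hdf5", NWBHDF5IO, "units_file.nwb"),
]:
    with io_cls(path, mode="r") as io:
        nwbfile = io.read()
        t0 = time.time()
        df = nwbfile.units.to_dataframe()  # expands the electrodes column
        print(f"{label}: {time.time() - t0:.1f} s for {len(df)} units")
```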
Traceback
Operating System
Linux
Python Executable
Conda
Python Version
3.9
Package Versions
pynwb 2.8.2
hdmf 3.14.5
hdmf_zarr 0.9.0