I have a dataset with 487409 samples and 571622 SNPs. Is it possible to load this entire dataset with seqminer? I tried requesting 2000 GB of memory on Compute Canada, but it didn't work, even though 487409 × 571622 × 4 bytes per float is about 1,114.45 GB. Theoretically this should fit, so seqminer is using a lot more memory than it theoretically should. I found readBGENToMatrixByRange is more memory-efficient than readBGENToListByRange, but readBGENToMatrixByRange still doesn't work.
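For reference, 487409 × 571622 × 4 bytes is roughly 1.11 TB, but a base R numeric matrix stores 8-byte doubles, so holding the full dosage matrix in one object would already need about 2.2 TB before any parsing overhead. A workaround worth trying is to read the file in chunks rather than all at once. Below is a minimal sketch, not a tested solution: the file path and the region strings are hypothetical, and it assumes the corresponding .bgen.bgi index is available so that readBGENToMatrixByRange can do range queries.

```r
## Sketch: process the BGEN file chunk by chunk so only one region's
## genotype matrix is held in memory at a time.
library(seqminer)

bgen.file <- "chr1.bgen"                                   # hypothetical path
regions   <- c("1:1-50000000", "1:50000001-100000000")     # hypothetical regions

for (r in regions) {
  chunk <- readBGENToMatrixByRange(bgen.file, r)
  geno  <- chunk[[1]]          # one genotype matrix per requested range
  ## ... compute per-chunk summaries or write results to disk here ...
  rm(chunk, geno)              # release the chunk before reading the next one
  gc()
}
```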