On machines with very large amounts of RAM, the Java VM appears to reserve more non-heap memory: the non-heap footprint seems to grow roughly linearly with the heap size, with an offset of around 4GB on our cluster.
The way hugeseq has been invoking Java jobs causes an instant failure because the VM cannot allocate memory for its non-heap parts. (Giving the job 12GB and the heap 11GB did not leave enough room for the VM.)
There ought to be an option telling hugeseq how large a gap to leave between the heap size and the total job memory, so we can tweak it when the defaults fail. It could be an input to the Python script, or even a value in the modulefile; a rough sketch follows below.
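As a rough sketch of what that could look like (the option name --java-overhead-gb, the HUGESEQ_JAVA_OVERHEAD_GB environment variable, and the function names here are all hypothetical, not part of hugeseq's current interface), the Python wrapper could derive -Xmx from the job's total memory minus a configurable overhead gap:

```python
import argparse
import os
import subprocess


def build_java_command(jar, total_mem_gb, overhead_gb, java_args=()):
    """Build a java invocation whose -Xmx heap leaves `overhead_gb` of the
    job's total memory free for the JVM's non-heap allocations."""
    heap_gb = total_mem_gb - overhead_gb
    if heap_gb <= 0:
        raise ValueError(
            "total memory of %dG leaves no heap after reserving %dG of "
            "non-heap overhead" % (total_mem_gb, overhead_gb)
        )
    return ["java", "-Xmx%dg" % heap_gb, "-jar", jar] + list(java_args)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--total-mem-gb", type=int, required=True,
                        help="memory the scheduler gave the whole job")
    parser.add_argument("--java-overhead-gb", type=int,
                        # fall back to a value exported by the modulefile,
                        # then to a 4G default
                        default=int(os.environ.get("HUGESEQ_JAVA_OVERHEAD_GB", "4")),
                        help="gap to leave between total job memory and -Xmx")
    parser.add_argument("jar")
    parser.add_argument("java_args", nargs="*")
    args = parser.parse_args()

    cmd = build_java_command(args.jar, args.total_mem_gb,
                             args.java_overhead_gb, args.java_args)
    subprocess.check_call(cmd)


if __name__ == "__main__":
    main()
```

With something like this, a 12GB job would launch java with -Xmx8g rather than -Xmx11g, and the gap could be tuned per cluster either on the command line or by exporting the variable from the modulefile.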