Coredump for OCRSearch feature extraction #359
And what did the core dump say?
Unfortunately, there is no further information beyond a generic "core dumped" message. I checked both the user's home and the working directory, and none of the usual error logs appeared. My assumption is that some native library had an issue, since a proper Java crash would have produced a log file.
Yes, that's usually the only way to trigger a core dump. It should still dump something, though. If it were killed by the OS before it was able to do so, you would get a different message.
Probably the same issue as #273, where we did not have sufficient information to reproduce the bug. Thanks for the information!
I'm observing a similar issue, though I'm not sure whether it is the same as described here or in #273. It seems to me that the extraction process loads objects into memory one by one; as soon as their combined size exceeds the available memory, the process goes OOM.
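To illustrate the suspected failure mode, here is a minimal, self-contained sketch (hypothetical class and variable names, not Cineast's actual code) of a decoder/extractor pipeline connected by an unbounded queue: production outpaces consumption, so heap usage grows until the JVM dies with an OutOfMemoryError.

```java
// Minimal sketch of the suspected failure mode (hypothetical names, NOT
// Cineast's actual code): an unbounded queue between decoder and extractor
// lets segments pile up in memory until the heap is exhausted.
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class UnboundedPipelineSketch {

    // Hypothetical stand-in for a decoded segment held fully in memory.
    static final class Segment {
        final byte[] frames = new byte[16 * 1024 * 1024]; // ~16 MB per segment
    }

    public static void main(String[] args) throws InterruptedException {
        Queue<Segment> queue = new ConcurrentLinkedQueue<>(); // no capacity bound

        // Decoder: produces segments as fast as it can decode them.
        Thread decoder = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                queue.add(new Segment());
            }
        });
        decoder.setDaemon(true);
        decoder.start();

        // Extractor: consumes far more slowly (OCR is expensive per segment).
        while (true) {
            Segment s = queue.poll();
            if (s != null) {
                Thread.sleep(200); // simulated extraction cost for s
            }
            // Production outpaces consumption and nothing applies
            // backpressure, so heap usage grows until OutOfMemoryError.
        }
    }
}
```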
If this is indeed the problem, possible workarounds would include:
Can you check whether one or both of these measures resolve your issue?
This is expected behavior in this case, and what you would want to happen. It only tells you that the feature extraction is slower than the decoder; if you had more compute resources, you could increase throughput. When you are compute-limited (or, in this case, memory-limited), you'd want the pipeline to slow down rather than try to consume more resources than are available. Were you able to run your extraction without anything crashing?
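The slowdown described above is the classic bounded-queue producer/consumer pattern. As a hedged illustration (hypothetical names, not Cineast's actual implementation): the decoder blocks on put() whenever the extractor falls behind, so memory use stays capped at the queue capacity instead of growing without bound.

```java
// Sketch of the backpressure behavior described above (hypothetical names,
// not Cineast's actual implementation): a bounded queue makes the decoder
// block when the extractor falls behind, capping memory use.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedPipelineSketch {

    static final class Segment {
        final byte[] frames = new byte[16 * 1024 * 1024]; // ~16 MB per segment
    }

    public static void main(String[] args) throws InterruptedException {
        // At most 8 segments (~128 MB) are ever held in memory at once.
        BlockingQueue<Segment> queue = new ArrayBlockingQueue<>(8);

        Thread decoder = new Thread(() -> {
            try {
                while (true) {
                    queue.put(new Segment()); // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        decoder.setDaemon(true);
        decoder.start();

        // Extractor: the slow stage. The decoder can only run ahead by the
        // queue capacity, so the pipeline slows down instead of going OOM.
        while (true) {
            Segment s = queue.take();
            Thread.sleep(200); // simulated extraction cost for s
        }
    }
}
```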
The video resolution is already limited to 640x480. The affected instance has 32 CPU cores, 64 GB RAM, and a GPU with 4352 CUDA cores and 11 GB of VRAM. What parameters would you recommend to run at full capacity?
Whatever works 😉
In case others have this issue as well, I'm documenting it here:
Using the OCRSearch feature in an extraction configured as follows:
After roughly 7000 segments, a core dump stopped the extraction, which was executed using:
This occurred on Ubuntu 20.04.5 LTS, using OpenJDK
with 40 cores of type Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz
GPU:
NVIDIA GeForce RTX 2080 Ti, nvidia-driver 450.51.06 and CUDA 11.0.228