INSFV Memory Usage #17874
-
Hi all, quick one: a colleague pointed out that he was seeing high memory usage in INSFV, so I took @ChrocheMisawa's problem as posted in #17760 and cranked up the refinement. I'm seeing huge memory increases: roughly 1e6 elements costing 200 GB of memory! That doesn't seem right; that's huge. A post on the OpenFOAM forum (an apples-to-elephants comparison, admittedly) suggests 1e6 elements needs ~1 GB of memory, so even accounting for the AD cost, this seems expensive. Thanks, Andy
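In case it helps anyone reproduce, here's a sketch of the kind of refinement sweep I mean (app and input names are placeholders, not the actual files from #17760, and `/usr/bin/time -v` is GNU time on Linux):

```sh
# Sweep uniform refinement levels from the command line and record the
# peak resident set size at each level. "navier_stokes-opt" and
# "lid_driven.i" are placeholder names for the real app and input file.
for r in 0 1 2 3; do
  echo "=== uniform_refine = $r ==="
  /usr/bin/time -v ./navier_stokes-opt -i lid_driven.i Mesh/uniform_refine=$r \
    2>&1 | grep "Maximum resident set size"
done
```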
-
Are you running these tests with a pre-split (distributed) mesh?
There is some information on presplitting in the MOOSE documentation.
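Roughly, the workflow looks like this (a sketch only; `app-opt` and `input.i` are placeholders, and the flags are worth double-checking against the mesh splitting docs):

```sh
# Pre-split the mesh once into 16 pieces (the split itself can run in
# parallel); "app-opt" and "input.i" are placeholder names.
mpiexec -n 4 ./app-opt -i input.i --split-mesh 16 --split-file mesh

# Solve on 16 ranks, loading the pre-split mesh so each rank only ever
# holds its own piece of the mesh, which keeps per-rank memory down.
mpiexec -n 16 ./app-opt -i input.i --use-split --split-file mesh
```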
-
I should've pointed out this is in serial.
-
I think it's GMRES that is building a very large Krylov basis in this example. How many linear iterations is this case doing per nonlinear iteration?
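One quick way to check, since MOOSE forwards PETSc options from the command line (placeholder app/input names here):

```sh
# Print the residual at every linear iteration and the convergence
# reason, to see how large the Krylov basis grows before a restart.
./app-opt -i input.i -ksp_monitor -ksp_converged_reason

# If the basis is the culprit, capping the GMRES restart (PETSc's
# default is 30) directly bounds that memory, at the cost of more
# restarts and possibly slower convergence.
./app-opt -i input.i -ksp_gmres_restart 10
```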
-
I like that you said this was a quick one 😆 So yea, @GiudGiud is working on this in #18009 and #18012. You did point out that the OpenFOAM to MOOSE comparison is apples to elephants, and that's good, because after eliminating the caching, @GiudGiud's profiles show that at least 60% of the remaining memory allocation comes out of PETSc. 34% of that is the matrix alone, and there's a pretty good chance that the referenced OpenFOAM case is using explicit time integration. Some of the remaining PETSc memory will be that minimum of 30 GMRES vectors (perhaps more if you increase the restart) unless you are using a different `-ksp_type`. You will still have a mass matrix with explicit time integration…

Really appreciate this discussion as it will seriously cut down on our FV memory usage. It would be good to do some more serious profile comparisons to OpenFOAM moving forward, where we use the same solution methods, because we want to be in the same ballpark for sure. I would be shocked if we ever "beat" them, but yea, in the same ballpark is where I want us to be.
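To put rough numbers on the GMRES vectors, here's my back-of-the-envelope (assuming a 3D velocity-pressure system at about 4 DOFs per cell, which is my assumption, with 8-byte scalars and the default restart of 30 from above):

$$30 \;\text{vectors} \times 4\times10^{6} \;\text{DOFs} \times 8 \;\text{B} \approx 0.96\ \text{GB}$$

So the GMRES basis alone is already on the order of the whole ~1 GB OpenFOAM footprint quoted above, before we even count the AD Jacobian.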