-
The parsing operations should be relatively straightforward to parallelize. MontePy is currently CPU-bound for our applications and could take advantage of multiprocessing. The more difficult and more interesting question is what makes the parsing so slow. The first thing for the development team to do is #137: figure out where the time is being spent. I've done some preliminary profiling on one of my test reactor models with a fairly complicated geometry and about 1000 depleted materials, with all their fission products and such. Calls to …
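To make the profiling in #137 concrete, a minimal sketch along these lines shows where the parse time goes. The input path is a placeholder, and `montepy.read_input` is assumed to be the top-level parsing entry point; `cProfile` and `pstats` are standard library.

```python
import cProfile
import pstats

import montepy  # assumes MontePy is installed

# Profile a single parse of a representative model (path is a placeholder).
profiler = cProfile.Profile()
profiler.enable()
problem = montepy.read_input("test_reactor.imcnp")
profiler.disable()

# Print the 20 functions with the largest cumulative time, to see whether
# the time is spent in lexing, parsing, or object construction.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(20)
```

Dumping the stats to a file and inspecting them with a viewer such as snakeviz also works, but the sorted print is usually enough to spot a hot spot.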
-
I'm really disappointed you are seeing ~5 hours for parsing; that is untenable for adoption and needs to be fixed before MontePy is viable for something on the scale of ITER. Side note: are there any public versions of the ITER model available? It would be nice if we could do some profiling of this model on our own.
It sounds like you (@dodu94) are serious about pursuing MontePy for F4E. There might be some opportunities for collaboration on this development. P.S. I'll hop over to #137 to add my 2 cents.
-
I started playing around with profiling and building big models with MontePy, and it is going really slowly. I think performance is a top priority for at least the next month.
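For anyone who wants to reproduce this kind of test without a proprietary model, a sketch like the one below generates a synthetic MCNP input with many cells and material cards (concentric spherical shells, one material per cell) and times a parse of it. The nuclide list, the cell/surface layout, and the `montepy.read_input` call are illustrative assumptions, not the actual models discussed above.

```python
import time

import montepy  # assumes MontePy is installed

N = 1000  # number of cells/materials, similar in scale to the reactor model above

lines = ["Synthetic stress-test model"]
# Cell cards: concentric spherical shells, one material per cell.
lines.append("1 1 -1.0 -1 imp:n=1")
for i in range(2, N + 1):
    lines.append(f"{i} {i} -1.0 {i - 1} -{i} imp:n=1")
lines.append(f"{N + 1} 0 {N} imp:n=0")  # graveyard cell
lines.append("")
# Surface cards: one sphere per shell.
for i in range(1, N + 1):
    lines.append(f"{i} so {float(i):.1f}")
lines.append("")
# Material cards: a few nuclides each (real depleted materials have far more).
for i in range(1, N + 1):
    lines.append(f"m{i} 1001.80c 2 8016.80c 1 26056.80c 0.01")

with open("stress_test.imcnp", "w") as handle:
    handle.write("\n".join(lines) + "\n")

start = time.perf_counter()
problem = montepy.read_input("stress_test.imcnp")  # assumed entry point
print(f"Parsed {N} cells in {time.perf_counter() - start:.1f} s")
```

Scaling N up (and padding the material cards) gives a rough feel for how parse time grows with model size.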
-
Hello everyone,
first of all, I would like to thank the devs for the work they have done over the last month or so to promptly resolve all the issues I opened. I was finally able to parse (at least without errors) MCNP models created with our typical workflows here at Fusion For Energy.
As you may remember, my plan is to eventually replace numjuggler with montepy as the parsing engine of F4Enix, but I am afraid that until parsing performance improves this may be out of the question. In fact, I measured roughly a 30x increase in parsing time going from F4Enix (which is already about 2x slower than numjuggler) to montepy.
Even if comparing numjuggler to montepy is not entirely fair, since the latter is a much more complete and robust parser, this still means that for inputs like E-lite the parsing time would go from roughly 10 minutes to something like 5 hours, which is not viable. (I did not have time to test this myself; I just extrapolated from tests I performed on smaller models.)
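For clarity, the extrapolation is just a multiplication of the measured slowdown factors; the baseline numbers below are the approximate figures from this post, not new measurements.

```python
# Rough extrapolation of the E-lite parsing time from the factors quoted above
# (all numbers are approximate and were measured on smaller models).
f4enix_minutes = 10        # approximate F4Enix parse time for E-lite
montepy_slowdown = 30      # measured montepy vs. F4Enix slowdown on small models
montepy_minutes = f4enix_minutes * montepy_slowdown
print(f"Estimated montepy parse time for E-lite: {montepy_minutes / 60:.0f} hours")  # ~5 hours
```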
I would like to start this discussion to:
Once again, thanks for all the work,
Davide