
Need for initialisation? #152

Open
hiker opened this issue Dec 9, 2024 · 2 comments

Comments

@hiker

hiker commented Dec 9, 2024

Having to make an explicit init call to Vernier, and an exit call, is pretty annoying. Could this potentially be avoided?

For non-MPI code, you could declare a global static C++ object, whose constructor would get called before main, and its destructor at the end. For MPI code, could you hook into MPI_Init() and MPI_Finalize()?

Admittedly, I didn't look too closely at the parameters you need at init time, or at how to support both approaches in one library ;)

@mo-mglover
Collaborator

In a coupled model scenario, Vernier would be running under one or more separate models simultaneously. The init method provides an opportunity to give Vernier an explicit MPI communicator handle, which itself would be the result of duplicating and/or splitting MPI_COMM_WORLD by client code.

I have a change in preparation which will add a tag (string) to the init argument list, which would feed through into the Vernier profile filenames; we want differently named files for different models.

In practical terms, it's two calls in top-level model code; once coded, they're there.
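The two points above (an explicit communicator plus a filename tag) might look like this in client code. This is a hedged sketch: the MPI calls are real, but `vernier_init` and its signature are assumptions for illustration, not Vernier's actual interface, and the stub body only exists so the sketch links without the library.

```cpp
#include <mpi.h>
#include <cstdio>

// HYPOTHETICAL Vernier-style entry point taking an explicit
// communicator and a filename tag. Stub body for illustration only.
void vernier_init(MPI_Comm /*comm*/, const char* tag) {
    std::printf("profiler init with tag '%s'\n", tag);
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // A coupled model typically partitions MPI_COMM_WORLD: each
    // component model gets its own communicator via MPI_Comm_split
    // (or MPI_Comm_dup for a single model).
    int world_rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int colour = world_rank % 2;   // e.g. two component models
    MPI_Comm model_comm;
    MPI_Comm_split(MPI_COMM_WORLD, colour, world_rank, &model_comm);

    // Each component initialises the profiler on its own communicator,
    // with a tag that distinguishes the output files.
    vernier_init(model_comm, colour == 0 ? "modelA" : "modelB");

    // ... model run ...

    MPI_Comm_free(&model_comm);
    MPI_Finalize();
    return 0;
}
```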

@hiker
Author

hiker commented Dec 10, 2024

> In a coupled model scenario, Vernier would be running under one or more separate models simultaneously. The init method provides an opportunity to give Vernier an explicit MPI communicator handle, which itself would be the result of duplicating and/or splitting MPI_COMM_WORLD by client code.

Well, hooking into MPI_Init gives you access to MPI_COMM_WORLD, and I would assume that's all you need? Even if the coupled models are not all using Vernier, you would only get empty files.

> I have a change in prep. which will add a tag (string) to the init argument list, which would feed through into the Vernier profile filenames; we want differently named files for different models.

That's a very good idea.

> In practical terms, it's two calls in top-level model code; once coded, they're there.

I disagree with this, at least in the context of LFRic. We are interested in running detailed performance tests nightly (hoping that at some stage we can revive our dashboard to show the results :) ). But we can't add these init/finalise calls to the lfric-core repo (since not every site would want to use Vernier, or use it in every run), unless we add preprocessor directives (which is at the very least ugly, and I'm not even sure LFRic would want to do that). Not to mention that for testing purposes I run quite a few different (small) apps, so I would need to manually modify all of them any time I want to measure performance.

Unless of course we add a manual patching step - but that's really ugly :)

Compared with a tool like TAU (which hooks into the various startup methods), where I don't need to do anything, this is a discouraging manual overhead.
