Unify obs and test configurations between FV3-JEDI and MPAS-JEDI #255

Open
wants to merge 31 commits into develop

Conversation

@SamuelDegelia-NOAA (Contributor) commented Dec 18, 2024

Description

The goal of this PR is to unify the obs and test configurations between FV3-JEDI and MPAS-JEDI. New obs are generated by running the offline domain check with the FV3-JEDI grid (the smaller of the two grids). These new obs are staged in $RDAS_DATA/fix/expr_data/obs_2024052700 and linked into the fix directories for both test cases.

I also removed the older obs_ctest directory since it was confusing to have separate directories for the full obs and the obs used for the ctests. Many of those files overlapped anyway. Now there is just one obs directory that should be the same for both test cases.

Several other small changes are included in this PR to better unify the configuration between the two test cases and GSI:

  • ATMS obs are added to the FV3-JEDI tests to match the MPAS-JEDI obs list. A previous issue with the field names for soil temperature and moisture needed by CRTM has been resolved with updates to the staged file Data/fieldmetadata/tlei-gfs-restart.yaml.
  • The time window was increased for the GETKF tests to allow ATMS obs to pass QC for the smaller domain.
  • niter and gradient norm reduction are now consistent between both cases for the Ens3Dvar test.
  • The &analysisDate variable is now set correctly for FV3-JEDI to better match wind obs counts compared to MPAS-JEDI (important for the temporal thinning filter).
  • The same localization radii are now used for FV3-JEDI, MPAS-JEDI, and GSI. BUMP was rerun for both FV3-JEDI and MPAS-JEDI. See here for a summary of the new localization radii and their units.
  • The same number of MPI tasks (160) is now used for both sets of ctests, and the bumploc files are updated for MPAS-JEDI for this purpose.
  • Since GSI validation is an important part of RDASApp at the moment, I added a few of the important fix files into the repo for better tracking. These are under RDASApp/rrfs-test/gsi_fix.

Lastly, a small utility, rrfs-test/ush/print_ctest_runtime.py, is added that can be used to print the actual runtime of the ctests (instead of the runtime plus wait time included in the normal ctest output).
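For reference, here is a minimal, hypothetical sketch of how such a runtime report could be produced; it is not the actual implementation of print_ctest_runtime.py. It assumes each ctest writes its output into its own run directory and estimates the true runtime from the span of file modification times there, which excludes the batch-queue wait that inflates the times reported by ctest.

```python
#!/usr/bin/env python3
# Hypothetical sketch only -- the real rrfs-test/ush/print_ctest_runtime.py may work differently.
# Idea: estimate each test's true runtime from the oldest and newest file modification
# times in its run directory, rather than from ctest's wall clock (which includes queue wait).
import sys
from pathlib import Path


def runtime_seconds(rundir: Path) -> float:
    """Span between the oldest and newest file mtime under a test's run directory."""
    mtimes = [p.stat().st_mtime for p in rundir.rglob("*") if p.is_file()]
    if not mtimes:
        return 0.0
    return max(mtimes) - min(mtimes)


if __name__ == "__main__":
    # usage (paths are assumptions): python print_ctest_runtime.py <build>/rrfs-test/rundir-*
    for d in sys.argv[1:]:
        print(f"{Path(d).name:<45s} {runtime_seconds(Path(d)):8.2f} sec")
```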

Issue(s) addressed

#246

Dependencies (if applicable)

List the other PRs that this PR is dependent on:
None

Checklist

  • I have performed a self-review of my own code.
  • I have run rrfs tests before creating the PR (if applicable).
  • I have staged the relevant data on all supported machines.

SamuelDegelia-NOAA and others added 25 commits December 4, 2024 19:30
@SamuelDegelia-NOAA SamuelDegelia-NOAA requested review from guoqing-noaa, delippi, hu5970 and ShunLiu-NOAA and removed request for guoqing-noaa December 18, 2024 14:50
@SamuelDegelia-NOAA (Contributor, Author) commented:

Ctest results from Hera:

[Samuel.Degelia@hfe06 rrfs-test]$ ctest -j8
Test project /scratch1/BMC/zrtrr/Samuel.Degelia/RDASApp_unify_fresh2/RDASApp/build/rrfs-test
    Start 2: rrfs_fv3jedi_2024052700_getkf_observer
    Start 5: rrfs_mpasjedi_2024052700_getkf_observer
    Start 1: rrfs_fv3jedi_2024052700_Ens3Dvar
    Start 4: rrfs_mpasjedi_2024052700_Ens3Dvar
    Start 7: rrfs_mpasjedi_2024052700_bumploc
    Start 8: rrfs_bufr2ioda_msonet
1/8 Test #8: rrfs_bufr2ioda_msonet .....................   Passed   32.57 sec
2/8 Test #7: rrfs_mpasjedi_2024052700_bumploc ..........   Passed  210.26 sec
3/8 Test #5: rrfs_mpasjedi_2024052700_getkf_observer ...   Passed  259.36 sec
    Start 6: rrfs_mpasjedi_2024052700_getkf_solver
4/8 Test #2: rrfs_fv3jedi_2024052700_getkf_observer ....   Passed  262.80 sec
    Start 3: rrfs_fv3jedi_2024052700_getkf_solver
5/8 Test #1: rrfs_fv3jedi_2024052700_Ens3Dvar ..........   Passed  286.87 sec
6/8 Test #4: rrfs_mpasjedi_2024052700_Ens3Dvar .........   Passed  383.32 sec
7/8 Test #3: rrfs_fv3jedi_2024052700_getkf_solver ......   Passed  534.06 sec
8/8 Test #6: rrfs_mpasjedi_2024052700_getkf_solver .....   Passed  863.33 sec

100% tests passed, 0 tests failed out of 8

Label Time Summary:
mpi            = 2832.56 sec*proc (8 tests)
rdas-bundle    = 2832.56 sec*proc (8 tests)
script         = 2832.56 sec*proc (8 tests)

Total Test time (real) = 1123.28 sec

@SamuelDegelia-NOAA (Contributor, Author) commented Dec 18, 2024

Here is a comparison of the analyses from FV3-JEDI and MPAS-JEDI:

FV3-JEDI: [image: increment_airTemperature_fv3]

MPAS-JEDI: [image: increment_airTemperature_mpas]

The structure of the increments is generally consistent. There are some regions where increments are about 1-2 K larger in MPAS-JEDI compared to FV3-JEDI.

@delippi (Collaborator) left a comment:

This PR is quite large. It's generally better to keep PRs smaller when possible to make reviews easier. One of my main concerns is with the large yaml files under testinput. Are those files truly necessary? It seems like you already have templates in place, which could significantly reduce the amount of committed code. In a sense, committing unnecessarily long files is similar to committing binaries: every update to these files creates a new snapshot that GitHub must store. Over time, this can increase repository size and impact the time required for cloning and pulling.

@SamuelDegelia-NOAA (Contributor, Author) commented:
> This PR is quite large. It's generally better to keep PRs smaller when possible to make reviews easier. One of my main concerns is with the large yaml files under testinput. Are those files truly necessary? It seems like you already have templates in place, which could significantly reduce the amount of committed code. In a sense, committing unnecessarily long files is similar to committing binaries: every update to these files creates a new snapshot that GitHub must store. Over time, this can increase repository size and impact the time required for cloning and pulling.

I agree that this PR is very large - sorry about that! I just kept finding little things that needed to be changed to match the configuration between GSI/FV3-JEDI/MPAS-JEDI. I kept adding those changes to the PR, but I probably should have stopped somewhere and made individual PRs. One thing I struggled with is that the test references have to be updated for each change in this PR, so if we made individual PRs there could be conflicts. Also, updating the MPI tasks for each ctest meant rerunning BUMP for each solver, so it made sense to include the localization changes at the same time. But still, I agree that this became too large.

Would it still be helpful to go ahead and split this up?

I could have individual PRs for

  1. Adding ATMS to FV3-JEDI
  2. Updating a few yaml options (niter, gradient norm reduction, &analysisDate)
  3. Updating localization radii + using 160 ntasks
  4. Adding GSI fix files to repo

@SamuelDegelia-NOAA (Contributor, Author) commented:
Also, in regard to committing the super yamls, this is something we discussed in #184 and #187. It was suggested to commit the super yamls into the repo (instead of creating them at runtime or during the build) so that other developers can use them as templates.

But I agree that as these yamls become larger, PRs like this will have lots of changes to them, which makes them very hard to review. These changes also show up in both the templates and the super yamls. I am not fully sure of the solution here, but maybe we should revisit the choice to commit the super yamls. Tagging @guoqing-noaa and @ShunLiu-NOAA here for potential thoughts.

@delippi (Collaborator) commented Dec 19, 2024

> Also, in regard to committing the super yamls, this is something we discussed in #184 and #187. It was suggested to commit the super yamls into the repo (instead of creating them at runtime or during the build) so that other developers can use them as templates.
>
> But I agree that as these yamls become larger, PRs like this will have lots of changes to them, which makes them very hard to review. These changes also show up in both the templates and the super yamls. I am not fully sure of the solution here, but maybe we should revisit the choice to commit the super yamls. Tagging @guoqing-noaa and @ShunLiu-NOAA here for potential thoughts.

I’d like to revisit this discussion, please, as I may have missed it earlier. I don’t think we should commit any of the large "super" yamls to RDASApp or JCB. We should maintain only a single copy of each file to keep the repository manageable. Committing these large files makes the PR unnecessarily large, harder to maintain, and prone to error.

If the argument for including them is that they can serve as templates for other developers, that doesn’t make sense. Developers should reference the validated_yamls instead, as those are expected to be the most up-to-date versions and should be used as examples for developing their own work.

Additionally, relying on these super yamls in ctest introduces a risk of missing updates or using outdated versions, which can cause issues down the line. For example, if the super yamls continue using "DRIPCG," that outdated configuration could propagate to other PRs and test results, perpetuating the problem.

@guoqing-noaa (Collaborator) commented:
I would like to clarify that Git treats text files and binary files very differently.

For binary files, Git cannot track delta changes; it can only save a complete copy for each commit, which increases the repo size quickly.

For text files, Git uses a delta encoding mechanism to store only the differences (or "deltas") between file versions. For example, take a 1 MB text file, which is already very large for text (a 22,000-line YAML file is about 572 KB). If we modify only 50 lines, then when we commit, Git stores only the changes (the modified 50 lines) and references the original file for the unchanged parts. Furthermore, Git compresses text files very efficiently in its storage.
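As an aside (not part of this PR), the effect described above can be checked with `git count-objects -v`: after committing a small edit to a large text file and repacking, the packed size grows only slightly. Below is a rough Python sketch of that check; it assumes it is run from inside a throwaway test repository and that you make the edit and commit yourself between the two measurements.

```python
#!/usr/bin/env python3
# Illustration only: show how little a packed repository grows after a small text edit.
import subprocess


def size_pack_kb(repo: str = ".") -> int:
    """Return the size-pack value (KiB) reported by `git count-objects -v`."""
    out = subprocess.run(["git", "count-objects", "-v"], cwd=repo,
                         capture_output=True, text=True, check=True).stdout
    sizes = dict(line.split(": ") for line in out.strip().splitlines())
    return int(sizes["size-pack"])


if __name__ == "__main__":
    repo = "."  # assumption: run inside a test repository, not RDASApp itself
    subprocess.run(["git", "gc", "--quiet"], cwd=repo, check=True)  # pack so delta compression applies
    before = size_pack_kb(repo)
    # ... edit a few lines of a large YAML file and `git commit` it here ...
    subprocess.run(["git", "gc", "--quiet"], cwd=repo, check=True)
    after = size_pack_kb(repo)
    print(f"packed size grew by {after - before} KiB")
```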

@delippi (Collaborator) commented Dec 20, 2024

@guoqing-noaa thanks for adding that clarification. That makes sense that binaries are treated differently in that way. I think there are valid reasons for doing this either way, and I hope we can all discuss and come to some consensus on how we should handle this.

I would say that my main concern is that we should keep just one copy of each yaml part, versus having as many as nine different copies of the same thing to maintain.

@SamuelDegelia-NOAA (Contributor, Author) commented:
I'll add a bit to the discussion about committing the super yamls. If we commit these files, it can make PRs like this one hard to review, since lots of extra changes show up. For example, adding ATMS obs back to the ctests for FV3-JEDI caused many large changes to rrfs-test/testinput/rrfs_fv3jedi_2024052700_Ens3Dvar.yaml, when really those additions just come from the existing file rrfs-test/validated_yamls/templates/obtype_config/atms_npp_qc_bc.yaml. Plus, updates like those in this PR show up in both the super yaml and the templates, which makes the PR look larger than it is (even though this is still a large PR...).

However, I just realized there is a downside to creating the super yamls during build.sh. Currently, our super yamls lag behind recent updates to the templates in obtype_config because the gen_yaml_${DYCORE}_ctest.sh scripts have not been run in a while. This was intentional, so that not everyone needed to learn how to edit the gen yaml scripts and update the ctest reference files when they made changes to the templates. But if we instead ran this script during the build, then the ctests would be updated any time the templates in obtype_config change, and users would therefore have to update the test reference files as part of their PRs too.

At this point, I am thinking it might be worth it to just leave the super yamls as they are in the repo and not dynamically update them for the ctests. We wanted to increase the number of obs for the ctests for more realistic testing, but these yamls already have a large number of obs and are useful for that purpose. So maybe we can keep the ctests as is and only update them and their associated super yamls if there are updates to the test cases themselves (and not the obs space).
