
Support test coverage analysis #355

Open
hendrikvanantwerpen opened this issue Nov 23, 2023 · 0 comments
We should have a coverage command that tells us

  1. whether we test all TSG rules, and
  2. whether all tests are relevant.

A sketch of how we could do this:

  • Add tracing support to TSG so we can track which statements are being executed. The rest of the analysis becomes more convenient if we can also control execution from the tracer, e.g., by returning a value indicating that the current statement should be skipped.

  • Run the test suite against the full set of TSG rules. All tests should succeed. Record which tests trigger which stanzas and statements.

    • If a stanza or statement is never executed, report it as untested.
  • Rerun the test suite against modified TSG rules, where a single edge or attr statement is skipped. Optimize by running only the tests that actually hit the omitted statement.

    • If none of the tests fail, report the statement as untested/irrelevant. Otherwise, record which test(s) failed.
    • If the stack graph construction fails, report the skipped statement as untestable. This could happen if groups of attributes that are required together are split over separate attr statements.
  • Report any test that never failed for any of the modified runs as irrelevant.
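The steps above amount to a mutation-style coverage analysis. A minimal sketch in Rust, assuming hypothetical types for recorded test runs and a stand-in for rerunning a test with one statement omitted (a real implementation would re-evaluate the TSG rules via the tracer):

```rust
use std::collections::HashSet;

// Hypothetical model: each TSG statement has a numeric id; each test
// records which statements it executed during the baseline run.
#[derive(Clone)]
struct TestRun {
    name: &'static str,
    executed: HashSet<usize>,
}

// Stand-in for rerunning one test with `skipped` omitted from the rules.
// Here a test fails exactly when it relied on the skipped statement; a
// real runner would re-execute the rules and check the test assertions.
fn passes_without(test: &TestRun, skipped: usize) -> bool {
    !test.executed.contains(&skipped)
}

// Returns (untested statement ids, irrelevant test names).
fn coverage_report(
    all_statements: &[usize],
    tests: &[TestRun],
) -> (Vec<usize>, Vec<&'static str>) {
    // Step 1: statements never executed by any test are untested.
    let executed: HashSet<usize> =
        tests.iter().flat_map(|t| t.executed.iter().copied()).collect();
    let untested: Vec<usize> = all_statements
        .iter()
        .copied()
        .filter(|s| !executed.contains(s))
        .collect();

    // Step 2: a test is relevant if it fails for at least one skipped
    // statement. Only rerun tests that actually hit the omitted statement.
    let mut relevant: HashSet<&str> = HashSet::new();
    for &stmt in all_statements {
        for t in tests.iter().filter(|t| t.executed.contains(&stmt)) {
            if !passes_without(t, stmt) {
                relevant.insert(t.name);
            }
        }
    }
    let irrelevant: Vec<&'static str> = tests
        .iter()
        .map(|t| t.name)
        .filter(|n| !relevant.contains(n))
        .collect();

    (untested, irrelevant)
}

fn main() {
    let tests = vec![
        TestRun { name: "resolves_local", executed: [1, 2].into_iter().collect() },
        TestRun { name: "noop", executed: HashSet::new() },
    ];
    let (untested, irrelevant) = coverage_report(&[1, 2, 3], &tests);
    println!("untested: {:?}", untested);     // statement 3 is never hit
    println!("irrelevant: {:?}", irrelevant); // "noop" never fails a mutant
}
```

The untestable case from above would be an extra outcome of the rerun: if stack graph construction itself errors when the statement is skipped, that statement is reported as untestable rather than untested.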
