Support for ordered test suites? (e.g. run the smoke test suite first, then the test suite for each component, and finally the performance test suite) #1278

If a previous test suite doesn't pass, the remaining suites can be skipped to save time (e.g. if the smoke tests fail, there is no need to run the performance tests).
Comments
what you’re describing is the default behavior. if you run:
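(a representative invocation; the exact command isn't preserved in this thread)

```bash
# run every suite found under the current directory, one suite per package;
# by default ginkgo stops as soon as one suite fails
# (pass --keep-going to keep running the remaining suites instead)
ginkgo -r
```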
it will stop after the first suite that fails.
@onsi this is how we run tests:
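(the original command is not preserved here; a hypothetical single-suite invocation with a label filter, consistent with how the suite is described later in the thread, might look like this)

```bash
# hypothetical: run the one suite under ./test, selecting specs by label
ginkgo -v --label-filter="smoke" ./test
```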
that looks like you're just running one test suite. how many suites do you have and how have you organized them?
btw this section of the docs: https://onsi.github.io/ginkgo/#mental-model-go-modules-packages-and-tests describes how in go and ginkgo each suite is a separate package
@onsi Thanks for the doc. we have many test suites inside the ./test directory. is it possible to run the smoke test suite first, then the rest of the test suites? We would like to have a single test run so that only one test report is generated.
The reason I was asking is because the answer depends on how your test suites are laid out on disk. In any event, assuming your suites are organized into different directories, you have a couple of options; for instance, you can run the smoke suite's directory in one `ginkgo` invocation, followed by the remaining suite directories in a second invocation (see the sketch below). If your test suites are not in separate directories like this (which sometimes happens when users define tests in different directories but then combine them into a single suite by importing the directories), then neither of these approaches will work.
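A minimal sketch of that kind of layout and two-step run; the directory names and the `--skip-package` flag are illustrative choices, not taken verbatim from the original comment:

```bash
# hypothetical layout: each directory is its own package and therefore its own suite
#   test/smoke/  test/component_a/  test/component_b/  test/performance/

# run the smoke suite on its own first; stop here if it fails
ginkgo ./test/smoke || exit 1

# then run every other suite under ./test, skipping the smoke package;
# by default ginkgo aborts after the first failing suite (add --keep-going to run them all)
ginkgo -r --skip-package=smoke ./test
```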
Our test directory is flat (e.g. A.test, B.test, smoke.test are all files, not directories). Can we still order the tests? I'm thinking of moving the smoke tests into their own directory so they run first, but that still wouldn't be perfect, since the smoke tests would be run twice.
got it - so what you have is a single suite and you are wanting to run different subsets of the suite with a single invocation of ginkgo. that is not possible currently. i typically run a check in my `BeforeSuite` so the whole suite fails fast if the environment isn't healthy.
Thanks @onsi. We run a single test suite with different types of tests selected by test labels. Is the best practice to split it into multiple test suites? If so, do we still get only one test result file, so only one test report is generated? Also a side question: what is the difference between "--flake-attempts" and "--repeat"? We would like to use these flags to cure some test flakes.
yes, if you run a single invocation of ginkgo then ginkgo will concatenate the results into a single file, unless you use `--keep-separate-reports`. As for whether it is better to split up a single test suite into multiple suites, or to use test labels... I think that really depends on what works better for you. The only caveat is that if you want to run different sets of tests with different label filters, those would be different invocations of `ginkgo`.
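For reference, a sketch of the reporting behaviour being described; the file and directory names are illustrative:

```bash
# a single recursive run aggregates every suite's results into one report file
# in ./reports (pass --keep-separate-reports to get one file per suite instead)
ginkgo -r --junit-report=report.xml --json-report=report.json --output-dir=./reports ./test

# selecting a different subset of specs by label means another invocation, e.g.
ginkgo -r --label-filter="performance" ./test
```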
thanks, this is very helpful! wondering whether the test report contains flake info. Currently we are generating a pdf report, but it only shows pass/fail/skip for each test case. There is no "flake" state for a test case.
how are you generating the pdf report?
we are using the junit2html python package
ginkgo’s json format includes the number of attempts for each spec (line 161 in 52065f1).
the junit format does not support adding structured information for each spec, so the number of attempts must be inferred from the output of the spec.
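If it helps, one way to pull the attempt counts out of the JSON report; the field names (`SpecReports`, `NumAttempts`, `State`, `LeafNodeText`) are from recent Ginkgo v2 releases, so check the `types` package for your version:

```bash
# list each spec with its end state and how many times it was attempted;
# anything with attempts > 1 that ended up passing is a flake
jq '.[].SpecReports[] | {spec: .LeafNodeText, state: .State, attempts: .NumAttempts}' report.json
```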
does it mean junit2html cannot be used to generate a report with flake info? Please recommend a pdf-generating tool that works best with ginkgo test output.
i have no recommendation for you, sorry. this is the first time i’ve seen a request for a pdf version of the report. the junit format does not support adding additional data. have you inspected the output of a flakey spec to see what does get emitted to the pdf? you should be getting a lot of detail from the test run, which will include each attempt. note that a flakey spec can ultimately pass or fail - which is why "flake" is not a valid end state but rather an annotation on the spec. generating an html version of the json report would not be hard - though making it look pretty could be an undertaking.
got it. the reason we use junit is because it is an industry standard, so we can find tools to generate reports from it. does the json output comply with a standard such that it can be visualized and read easily? Here is a sample of the pdf report generated by junit2html (screenshot attached).
looks like we can indeed see the flake info in the junit report's log. There is no prominent flake indicator at the top of the report, though.
Correct - again, the junit format does not support such things. Perhaps you can step back and share the problem you are trying to solve so I can help. Is your goal to identify flaky tests and fix them? Is your goal to track whether a test suite is becoming more flakey over time? Is your goal to understand whether some environments are flakier than others?
yup, basically we would like to run the test suite in a zone to ensure our components are healthy. But a zone's health is impacted by many factors (host infra, network, etc.), and any of them can cause a test to fail some of the time. We would like to add flake attempts to rule out environmental issues.
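For completeness, a sketch of the two flags mentioned earlier (the values are illustrative):

```bash
# allow up to 3 total attempts per failing spec; a spec that eventually passes
# is reported as passed and annotated as flaky rather than failed
ginkgo -r --flake-attempts=3 ./test

# the complementary tool: run the whole suite 4 extra times and fail if any
# repetition fails - useful for hunting flaky specs rather than papering over them
ginkgo -r --repeat=4 ./test
```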
In case there's interest, please take a look at the new proposal for managing shared resources and having finer-grained control over parallelism here: #1292
Thanks! this proposal would be very helpful in k8s-related test scenarios.