report failed assertions in return value of `run` function #66
Comments
This sounds like the idea that @sovelten has been pushing forward that test frameworks should avoid being effectful until the very last moment possible. As in, they should decouple evaluating tests from reporting them. I think implementing this here might be a nice way to see whether it is a worthwhile idea to push forward in an actual test framework, either by adapting an existing one or writing a new one. This should be relatively straightforward to implement. I want to cc @plexus on this because he has a bunch of experience running and reporting tests via the kaocha work. I've dug into some of the kaocha code for midje and clojure.test and I can only imagine that having the run/report split would make writing a test runner much easier. What do you think of this idea @plexus?
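A minimal sketch of that split, with made-up names (`eval-assertion`, `report!`) rather than anything from state-flow or matcher-combinators: evaluation returns plain data, and the effectful reporting happens only at the end.

```clojure
;; Hypothetical sketch: evaluation is pure, reporting is deferred.
(defn eval-assertion
  "Compare expected and actual; return a result map instead of reporting."
  [expected actual]
  {:pass?    (= expected actual)
   :expected expected
   :actual   actual})

(defn report!
  "The only effectful step: consume result data at the very end."
  [results]
  (doseq [{:keys [pass? expected actual]} results]
    (println (if pass? "PASS" "FAIL") "expected" expected "actual" actual)))

(comment
  (report! [(eval-assertion 1 1) (eval-assertion 1 2)]))
```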
In Kaocha we basically have both. There's the functional view, where a test run means transforming a test config into a test result. While that's happening we emit events which reporters can listen to in order to provide real-time feedback. That's the theory: it's just data in, data out, but because users need some kind of progress reporting during a test run we have these imperative events as a layer of observability on top of that. It gets murky though, because all you get from typical clojure.test assertions are the events they emit; they don't return data as such, just side effects. So we then have to record these events and use them to build up the test result data structure, which is kind of backwards, but we do this so API consumers (e.g. plugins) can pretend everything is just data in, data out.
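Roughly this shape, as a sketch only (these are not Kaocha's actual functions): the run itself is config in, result out, with events emitted on the side for real-time feedback.

```clojure
;; Hypothetical sketch of "data in, data out" with an imperative event layer.
(defn emit!
  "Observability hook; reporters listen here. println stands in for a reporter."
  [event]
  (println "event:" (:type event) (:id event)))

(defn run-test
  "Transform one test into a result map, emitting begin/end events as it goes."
  [{:keys [id f]}]
  (emit! {:type :begin :id id})
  (let [result (try
                 {:id id :status (if (f) :pass :fail)}
                 (catch Exception e
                   {:id id :status :error :error e}))]
    (emit! {:type :end :id id :status (:status result)})
    result))

(defn run-config
  "A test config is just data; so is the result."
  [{:keys [tests]}]
  {:results (mapv run-test tests)})

(comment
  (run-config {:tests [{:id :a :f (constantly true)}
                       {:id :b :f (constantly false)}]}))
```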
I think this is coupling a lot of things that shouldn't be coupled. If you're using I think a cleaner solution is to decouple the test running. As mentioned by @plexus, Kaocha already solves that in a way. Capturing match results is coupling
That is already the case, so perhaps we need to think about decoupling that.
yeah, I'm already not a fan of the
Can you elaborate?
Besides what I said about it not being related at all to

```clojure
;; note that the possibly side-effectful form appears (and is evaluated)
;; again in whichever branch is taken
(if (state/state? (possibly-side-effectful-f))
  (match-probe (possibly-side-effectful-f) {} nil)
  (state/return (possibly-side-effectful-f)))
```
I've been tempted recently to set the default retry count to 0; it generally isn't useful in the single-service integration test context
Do you find yourself writing a bit more var binding boilerplate to get results of subflows with the
That's an undocumented feature, not a bug :)
There is
a better docstring would help a lot. Another problem is that
I like the idea of making that explicit, so we'd have
I do, but I'd rather have that boilerplate and make things more explicit. It's weird for me to match some monadic value with a non-monadic one without any hints from the matcher function (and I didn't even know that was a feature before this discussion, btw). I like the idea of
I've created a PoC on #75 that I believe is a cleaner solution to this. And as I said, I'd rather have
StateFlow core is not coupled with matcher-combinators, as there is nothing in the core that uses or relies on matcher-combinators. What is coupled with matcher-combinators, though, is StateFlow's current clojure.test support, and it is entirely optional whether to use it.

Should the test helpers be moved to their own library? I'm not sure; it looks like a lot of hassle right now, with more disadvantages than benefits. As long as they are not part of the core, it seems ok to me. It is entirely possible, and should be ok, to build on state-flow core as a foundation for another testing library. But if we can come up with solutions that fit everyone's particular tastes, that is much better.

As @philomates said, what I would like to eventually see is moving away from clojure.test altogether, as it is just too stateful. In this scenario we would have a different match-like macro that would push test reporting information into the state as metadata. The test report can later be accessed and properly reported by the test runner after the flow has run. Each flow can be seen as a single test; if so, there is no need for actual reporting while the flow is running. But if someone actually wants that, then emitting events like @plexus described seems like the way to go.
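A very rough sketch of that direction, with hypothetical names (`report-match`, `:test-reports`) and the flow state modeled as a plain map, just to make the shape concrete:

```clojure
;; Hypothetical sketch: a match-like step records report data in the state
;; instead of going through clojure.test's reporting machinery.
(defn report-match
  "Compare expected and actual; conj a report entry onto the state."
  [state expected actual]
  (update state :test-reports (fnil conj [])
          {:pass?    (= expected actual)
           :expected expected
           :actual   actual}))

(defn failures
  "After the flow has run, the runner inspects the final state."
  [final-state]
  (remove :pass? (:test-reports final-state)))

(comment
  (failures (-> {}
                (report-match 1 1)
                (report-match 1 2))))
;; => ({:pass? false, :expected 1, :actual 2})
```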
Regarding the specific suggestion, as long as the solution is within But ideally I would actually go towards a solution that gets rid of clojure.test, if that is indeed possible. At least for the individual assertions.
Well, that's technically true, but the library is still served as one. If I want to use the cljtest support with another matcher library, there will still be the requirement for matcher-combinators, along with possible incompatibilities and all the fun stuff that comes with mismatched dependency versions.
I agree. clojure.test does not have a very friendly implementation, and everyone who wants to build on it has to hack their way around the crazy reporting stuff. If StateFlow is a test framework, it makes sense for it to have its own assertion system. And implementing a kaocha extension (instead of another specific test runner) also allows us to use the tooling around it 👌
When a `match?` assertion fails, the failure info is printed to the repl, and the return value of `run` doesn't include any information about it. This is fine if you're typing in the repl, but if you're using a dev environment in which you type in source files and send expressions to a connected repl, things get a bit disconnected.
I'm not sure about the format yet, but I think we should report assertion failures in the state, e.g.
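Something like the following made-up shape, say under a `:failures` key in the state (the key and fields are illustrative, not an existing format):

```clojure
;; Illustrative only: failures accumulated alongside the rest of the flow state.
{:system   {:db :some-db-component}
 :failures [{:expected         {:status 200}
             :actual           {:status 500}
             :flow-description "creates a user"}]}
```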
Then, perhaps, the `run` function could inspect `state` for failures and include them in the return, possibly wrapping them in a `Failure` object to align with runtime exceptions.
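As a sketch of that last step, assuming failures are stored under a `:failures` key and using a made-up `Failure` record and `run-and-check` wrapper (not existing state-flow API):

```clojure
;; Hypothetical sketch: after running, surface accumulated failures in the
;; return value instead of only printing them.
(defrecord Failure [failures])

(defn run-and-check
  "run-fn is assumed to return [return-value final-state]; wrap any
   accumulated failures so callers see them in the result."
  [run-fn flow initial-state]
  (let [[ret final-state] (run-fn flow initial-state)
        failures          (:failures final-state)]
    [(if (seq failures) (->Failure failures) ret)
     final-state]))
```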