Any suggestions for numerical tolerancing? #18
Comments
My original thinking on this sort of thing is that if there's an existing command line tool that can do this, that'd be preferable to adding new syntax. That said, you aren't the first person to request this functionality, and as you've pointed out, there might not be a good tool for this that reads well in a Cram test. I think I might be open to adding a new matching mode for this, but I haven't given it a ton of thought as to how it'd work, or how it would combine with the existing keywords.

Another route we could go down is adding plugin support to Cram. I've seen this approach work well for Mercurial, which also supports extensions, but I do worry that allowing the syntax to be customized could lead to Cram tests that are hard to decipher between code bases. Though maybe that wouldn't be a big deal if the extension support were purely limited to adding extra matchers. Thoughts?

Yet another thing we could do is make a completely separate tool for this sort of thing, totally standalone from Cram, and mention it in Cram's docs. That might be the most Unix-y way of solving this problem.

Anyway, do you have a preferred approach here? Any thoughts on what might work best for you?
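As a concrete illustration of the existing-tool route (a sketch only; the numbers are made up and awk is just one option), a test could normalize the numeric part of a line before Cram compares it:

  $ echo "distance: 42.4986 km" | awk '{printf "%s %.2f %s\n", $1, $2, $3}'
  distance: 42.50 km

Which also shows the drawback mentioned above: the awk incantation buries what the test actually cares about.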
Yes, I'll continue browsing for separate tools that can filter for numbers in text and reduce the number of significant digits, for example. Typically it's either one particular number that we'd like to range-check, or a whole table of numbers that we'd need to deal with on a column-by-column basis.
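For the table case, one stopgap (a sketch only; the values and per-column precisions are invented) is to round each column to a fixed number of significant digits before comparison, e.g. three for the first column and six for the second:

  $ printf '0.0312499 1851.9634\n' | awk '{printf "%.3g %.6g\n", $1, $2}'
  0.0312 1851.96

Rounding can still flip the last digit when a value sits near a boundary, so it only approximates a tolerance rather than implementing one.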
You could probably use the
Using
I can see some Cram support for testing small amounts of numerical output mixed with text being very useful, as it allows the expected program output to exist as reference data in the test script, in contrast to piping through a regex and then into a separate program like numdiff. In the same way that

For testing large numeric tables, I think a tool like

Anyway, for testing a small amount of mixed text and numeric data a Cram builtin seems really useful. Could extra validation be somehow applied to the regex match groups? For example:
The idea here is that the
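One way to approximate that kind of check with today's tools (a rough stand-in: ./compute-distance is a hypothetical command printing a line like "distance: 42.4986 km", and the bounds are invented) is to move the range check into the command itself so the expected output becomes deterministic:

  $ ./compute-distance | awk '{ if ($2 > 42.4 && $2 < 42.6) print "distance ok"; else print "out of range: " $2 }'
  distance ok

The price is that the reference value no longer appears literally in the test, which is exactly what validating a match group inside Cram would preserve.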
By the way, Julia's
We're using Cram quite successfully for a collection of command-line tools concerned with processing geospatial data. In some cases we match numerical data such as an XYZ position, the distance between two locations, or the size of files written. I've started looking around for tools that might help when we'd like to tolerate some relative or absolute variation without failing the test.
http://www.nongnu.org/numdiff/
numdiff needs a reference input file and seems to apply the same criteria to every number in the output.
http://www.math.utah.edu/~beebe/software/ndiff/
ndiff also needs a pair of input files.
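Until something better exists, a relative-tolerance check can at least be written inline without a reference file (a sketch with invented values: 1852.00 as the reference and 0.1% as the allowed deviation):

  $ echo "1852.31" | awk -v ref=1852.00 '{ d = ($1 - ref) / ref; if (d < 0) d = -d; if (d < 0.001) print "within 0.1% of reference"; else print "relative error " d }'
  within 0.1% of reference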
It seems to me that Cram might support something similar to the (re) suffix for the purpose of testing numerical aspects of the matched output. Does anyone have a suitable solution for this kind of situation?
As an example, say we are monitoring a web service. We fetch a page and check the HTTP headers, including the size of the response. We know the size will vary, but if it's within some sane range, we ought to consider it a positive test result.
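A sanity check like that can already be expressed in a Cram test, though not prettily (page.html and the byte bounds are invented, and the page is assumed to have been fetched by an earlier step):

  $ size=$(wc -c < page.html)
  $ [ "$size" -ge 4096 ] && [ "$size" -le 65536 ] && echo "size within expected range"
  size within expected range

A tolerance-aware keyword would let the same intent sit next to the header output instead of in a separate shell test.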