Help comparing benchmarks across virtualenvs #370
Comments
The issue here is that when you run with …
The fact that …
Yes, that is not a supported use case currently.
To support storing results when using pre-existing virtualenvs, the following questions would need to be answered: …
How about a …
That's the easiest option to implement, sounds good to me.
What if we just extended …
gh-352 allows … Or, maybe we could add a new environment type similar to ExistingEnvironment, with the difference that it does the build/install/uninstall cycle similarly to Virtualenv. Or, maybe change Virtualenv to accept a path to a Python executable instead of a version number, in which case it would only do the build/install/uninstall parts and skip the environment creation parts.
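For illustration, the last idea might look roughly like this from the command line (a sketch only; the interpreter paths are invented, and passing a path this way is the proposal under discussion rather than a documented asv interface):

```
# Hypothetical: point asv at the interpreter of an existing virtualenv instead of
# a version number, so it skips environment creation and only performs the
# build/install/uninstall cycle inside that environment.
asv run --python=/path/to/mkl-venv/bin/python
asv run --python=/path/to/atlas-venv/bin/python
```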
I encountered the same use case. I made a PR to enable this option (#794). Please let me know if that's what you had in mind. |
Thanks - yes. If the changes allow me to specify an arbitrary label, and therefore save the results even if on the same machine, then yes, that is what I needed.
I am particularly interested in comparing across Python implementations, i.e. when using …
Sorry if this is the wrong place to ask. Sorry too that I am sure I am missing something obvious.
I am using asv on Windows: numpy/numpy#5479
I'm comparing performance of numpy in two virtualenvs, one with MKL linking, the other with ATLAS linking.
I told asv machine that I have two machines, one called 'mike-mkl' and the other 'mike-atlas', identical other than the names. Then I ran the benchmarks once in the MKL virtualenv and once in the ATLAS virtualenv.
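Roughly along these lines (a sketch: the exact flags are an assumption based on asv run's --machine option, not the commands actually used):

```
# Inside the MKL virtualenv:
asv run --machine mike-mkl

# Inside the ATLAS virtualenv:
asv run --machine mike-atlas
```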
I was expecting this to store the two sets of benchmarks, but it appears from the results/benchmarks.json file size and the asv preview output that they overwrite each other. What is the right way to record / show these benchmarks relative to one another?
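For context, the output referred to above is the report produced by the usual publish/preview pair (standard asv commands, shown here only to make the workflow concrete):

```
# Collate everything under results/ into the HTML report, then serve it locally
asv publish
asv preview
```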