What problem is this trying to solve?
It's a common scenario to want to run performance regression testing against a feature branch, to make sure it hasn't introduced any issues compared to the main branch. You can do this relatively easily in the speedcurve.com web app using the bookmark/compare feature, but it's cumbersome to do this comparison in other environments, e.g. as part of a build process.
What is the proposed solution?
All of the building blocks already exist to do this in the CLI. You could set up a site in SpeedCurve with two URLs: one for the main branch and one for feature branches:
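For example (the site name and URLs below are made up):

```
Site: Checkout
  URL "main":           https://www.example.com/checkout
  URL "feature-branch": https://feature-env.example.com/checkout
```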
When a new feature branch has been built and deployed to an environment, you can update the feature-branch URL to point to that environment and kick off a deploy:
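As a rough sketch of that step in a build script (the SPEEDCURVE_API_KEY environment variable and the way the URL gets updated are assumptions here, not confirmed CLI behaviour):

```sh
# Assumes the CLI can pick up the API key from the environment.
export SPEEDCURVE_API_KEY="..."

# 1. Point the feature-branch URL at the newly built environment.
#    (Assumption: this is done through the SpeedCurve dashboard or its URL
#    settings API; the CLI doesn't necessarily expose it today.)

# 2. Kick off a deploy so SpeedCurve runs tests against both URLs.
speedcurve deploy
```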
(It probably works better if you run the deploy command once for each URL. I think we'd need to add a --json flag to the deploy command so that you can more easily grab the test IDs for each deploy.)
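A sketch of how that could be consumed in a build script; the --json flag doesn't exist yet, so the output shape and the jq paths below are guesses:

```sh
# Run one deploy per URL and capture the resulting test IDs from the proposed
# --json output. (How each deploy would be scoped to a single URL is left
# open here, and the field names are only a guess at the output shape.)
main_test_ids=$(speedcurve deploy --json | jq -r '.tests[].id')
feature_test_ids=$(speedcurve deploy --json | jq -r '.tests[].id')
```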
You'd then run the proposed new command to compare the two sets of results. It could have the concept of a failure threshold, where the command exits with a non-zero status if any metric got worse by more than a certain percentage.
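To make that concrete, one possible shape for the command could be the following; the command name, flags, and threshold semantics are all hypothetical:

```sh
# Compare the feature-branch tests against the main-branch tests and exit
# with a non-zero status if any metric regressed by more than 5%.
speedcurve compare \
  --base "$main_test_ids" \
  --compare "$feature_test_ids" \
  --fail-threshold 5
```

Exiting non-zero on regression means the check can be dropped straight into a CI pipeline step without any extra wrapping.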