Add pytest runner mode
This commit adds a new runner mode to stestr that enables users to run
test suites using pytest instead of unittest. This is an opt-in feature:
for compatibility reasons we'll always default to unittest, since all
existing stestr users have been running their tests with it. A pytest
plugin that adds a subunit output mode is now bundled with stestr. When
the user specifies running tests with pytest, stestr calls out to pytest
just as it does with the integrated unittest extension today, and sets
the appropriate flags to enable subunit output.

To facilitate this new feature, pytest is added to the stestr
requirements list. I debated making it an optional dependency, but to
make things easier for the significantly larger pytest user base (pytest
is downloaded ~900x more per month) it seemed simpler to just make it a
hard requirement. I'm still not 100% sure about this decision, though,
so if there is sufficient pushback we can start it out as an optional
dependency.

Co-Authored-By: Joe Gordon <[email protected]>

Closes: #354
mtreinish committed Nov 5, 2023
1 parent 11584e7 commit ff1e971
Showing 6 changed files with 298 additions and 7 deletions.
30 changes: 29 additions & 1 deletion doc/source/MANUAL.rst
@@ -68,6 +68,7 @@ A full example config file is::
test_path=./project/tests
top_dir=./
group_regex=([^\.]*\.)*
runner=pytest


The ``group_regex`` option is used to provide a scheduler
@@ -77,7 +78,10 @@ You can also specify the ``parallel_class=True`` instead of
group_regex to group tests in the stestr scheduler together by
class. Since this is a common use case this enables that without
needing to memorize the complicated regex for ``group_regex`` to do
this. The ``runner`` argument is used to specify the test runner to use. By
default a runner based on Python's standard library ``unittest`` module is
used. However, if you'd prefer to use ``pytest`` as your runner, you can set
it as the ``runner`` argument in the config file.

There is also an option to specify all the options in the config file via the
CLI. This way you can run stestr directly without having to write a config file
@@ -137,6 +141,8 @@ providing configs in TOML format, the configuration directives
**must** be located in a ``[tool.stestr]`` section, and the filename
**must** have a ``.toml`` extension.
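
To make the TOML variant concrete, a config selecting the pytest runner might
look like the following (hypothetical file, mirroring the INI example above):

```toml
# stestr.toml -- directives must live under [tool.stestr]
[tool.stestr]
test_path = "./project/tests"
top_dir = "./"
runner = "pytest"
```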



Running tests
-------------

@@ -166,6 +172,28 @@ Additionally you can specify a specific class or method within that file using
will skip discovery and directly call the test runner on the test method in the
specified test class.

Test runners
''''''''''''

By default ``stestr`` runs tests using the Python standard library
``unittest`` module's runner; stestr includes a test runner that emits the
subunit protocol, which it relies on internally to handle live results from
parallel workers. However, there is an alternative runner available that
leverages ``pytest``, a popular test runner and testing library alternative
to the standard library's ``unittest`` module. The ``stestr`` project bundles
a ``pytest`` plugin that adds real-time subunit output to pytest. As a test
suite author, the ``pytest`` plugin enables you to write your test suite
using pytest's test library instead of ``unittest``. There are two ways to
specify your test runner. The first is the ``--pytest`` flag on
``stestr run``, which tells stestr to use ``pytest`` as the runner instead of
``unittest`` for that run; this is good for a/b comparisons between the test
runners and for general experimentation with different runners. The other is
to set the ``runner`` field in your project's config file to either
``pytest`` or ``unittest`` (although ``unittest`` is always the default, so
you should never need to set it explicitly). This is the more natural fit,
because if your test suite is written using pytest it won't be compatible
with the unittest-based runner.
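
To illustrate that incompatibility, here are the two styles side by side in a
hypothetical test module (illustrative only, not part of stestr itself); the
plain-function pytest style is what the unittest-based runner cannot collect:

```python
# test_styles.py -- illustrative only, not shipped with stestr.
import unittest


# unittest style: discovered and run by stestr's default runner.
class TestMathUnittest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


# pytest style: a plain function using a bare assert. pytest collects
# this, but unittest discovery will not find it, since it is not a
# TestCase subclass.
def test_addition_pytest():
    assert 1 + 1 == 2
```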

Running with pdb
''''''''''''''''

1 change: 1 addition & 0 deletions requirements.txt
@@ -10,3 +10,4 @@ PyYAML>=3.10.0 # MIT
voluptuous>=0.8.9 # BSD License
tomlkit>=0.11.6 # MIT
extras>=1.0.0
pytest>=2.3 # MIT
2 changes: 2 additions & 0 deletions setup.cfg
@@ -46,6 +46,8 @@ stestr.cm =
history_list = stestr.commands.history:HistoryList
history_show = stestr.commands.history:HistoryShow
history_remove = stestr.commands.history:HistoryRemove
pytest11 =
stestr_subunit = stestr.pytest_subunit

[extras]
sql =
13 changes: 13 additions & 0 deletions stestr/commands/run.py
@@ -244,6 +244,12 @@ def get_parser(self, prog_name):
help="If set, show non-text attachments. This is "
"generally only useful for debug purposes.",
)
parser.add_argument(
"--pytest",
action="store_true",
dest="pytest",
help="If set, use pytest as the test runner instead of unittest",
)
return parser

def take_action(self, parsed_args):
@@ -335,6 +341,7 @@ def take_action(self, parsed_args):
all_attachments=all_attachments,
show_binary_attachments=args.show_binary_attachments,
pdb=args.pdb,
pytest=args.pytest,
)

# Always output slowest test info if requested, regardless of other
@@ -396,6 +403,7 @@ def run_command(
all_attachments=False,
show_binary_attachments=True,
pdb=False,
pytest=False,
):
"""Function to execute the run command
@@ -460,6 +468,8 @@ def run_command(
:param str pdb: Takes in a single test_id to bypasses test
discover and just execute the test specified without launching any
additional processes. A file name may be used in place of a test name.
:param bool pytest: Set to true to use pytest as the test runner instead of
the stestr stdlib based unittest runner
:return return_code: The exit code for the command. 0 for success and > 0
for failures.
@@ -645,6 +655,7 @@ def run_tests():
top_dir=top_dir,
test_path=test_path,
randomize=random,
pytest=pytest,
)
if isolated:
result = 0
@@ -669,6 +680,7 @@ def run_tests():
randomize=random,
test_path=test_path,
top_dir=top_dir,
pytest=pytest,
)

run_result = _run_tests(
@@ -724,6 +736,7 @@ def run_tests():
randomize=random,
test_path=test_path,
top_dir=top_dir,
pytest=pytest,
)
if not _run_tests(cmd, until_failure):
# If the test was filtered, it won't have been run.
41 changes: 35 additions & 6 deletions stestr/config_file.py
@@ -39,6 +39,7 @@ class TestrConf:
top_dir = None
parallel_class = False
group_regex = None
runner = None

def __init__(self, config_file, section="DEFAULT"):
self.config_file = str(config_file)
@@ -59,6 +60,7 @@ def _load_from_configparser(self):
self.group_regex = parser.get(
self.section, "group_regex", fallback=self.group_regex
)
self.runner = parser.get(self.section, "runner", fallback=self.runner)

def _load_from_toml(self):
with open(self.config_file) as f:
@@ -68,6 +70,7 @@ def _load_from_toml(self):
self.top_dir = root.get("top_dir", self.top_dir)
self.parallel_class = root.get("parallel_class", self.parallel_class)
self.group_regex = root.get("group_regex", self.group_regex)
self.runner = root.get("runner", self.runner)

@classmethod
def load_from_file(cls, config):
@@ -113,6 +116,7 @@ def get_run_command(
exclude_regex=None,
randomize=False,
parallel_class=None,
pytest=False,
):
"""Get a test_processor.TestProcessorFixture for this config file
@@ -158,6 +162,8 @@
stestr scheduler by class. If both this and the corresponding
config file option which includes `group-regex` are set, this value
will be used.
:param bool pytest: Set to true to use pytest as the test runner instead of
the stestr stdlib based unittest runner
:returns: a TestProcessorFixture object for the specified config file
and any arguments passed into this function
@@ -198,12 +204,35 @@
if os.path.exists('"%s"' % python):
python = '"%s"' % python

command = (
'%s -m stestr.subunit_runner.run discover -t "%s" "%s" '
"$LISTOPT $IDOPTION" % (python, top_dir, test_path)
)
listopt = "--list"
idoption = "--load-list $IDFILE"
if not pytest and self.runner is not None:
if self.runner == "pytest":
pytest = True
elif self.runner == "unittest":
pytest = False
else:
raise RuntimeError(
"Specified runner argument value: {self.runner} in config file is not "
"valid. Only pytest or unittest can be specified in the config file."
)
if pytest:
command = (
'%s -m pytest --subunit --rootdir="%s" "%s" '
"$LISTOPT $IDOPTION"
% (
python,
top_dir,
test_path,
)
)
listopt = "--co"
idoption = "--load-list $IDFILE"
else:
command = (
'%s -m stestr.subunit_runner.run discover -t "%s" "%s" '
"$LISTOPT $IDOPTION" % (python, top_dir, test_path)
)
listopt = "--list"
idoption = "--load-list $IDFILE"
# If the command contains $IDOPTION read that command from config
# Use a group regex if one is defined
if parallel_class or self.parallel_class:
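
The selection logic in ``get_run_command`` boils down to: the ``--pytest``
CLI flag wins outright; otherwise the config file's ``runner`` value is
consulted and validated, defaulting to unittest when unset. A standalone
sketch of that precedence (a hypothetical helper, not stestr's actual API):

```python
def use_pytest(cli_pytest, config_runner=None):
    """Return True when tests should run under pytest.

    The CLI flag takes precedence; otherwise fall back to the config
    file's ``runner`` value, defaulting to the unittest runner.
    """
    if cli_pytest:
        return True
    if config_runner is None:
        return False
    if config_runner == "pytest":
        return True
    if config_runner == "unittest":
        return False
    # Mirrors the config-file validation above: any other value is an error.
    raise RuntimeError(
        f"Specified runner argument value: {config_runner} in config file "
        "is not valid. Only pytest or unittest can be specified."
    )
```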