50 add performance tests #57

Merged · 10 commits · May 28, 2024
38 changes: 38 additions & 0 deletions .github/workflows/ci.yml
@@ -8,6 +8,8 @@ on:
permissions:
checks: write
contents: write
# the deployments permission is needed to publish the GitHub Pages benchmark site
deployments: write
pull-requests: write


@@ -85,3 +87,39 @@ jobs:
echo "Coverage Tests - ${{ steps.coverageComment.outputs.tests }}"
echo "Coverage Time - ${{ steps.coverageComment.outputs.time }}"
echo "Not Success Test Info - ${{ steps.coverageComment.outputs.notSuccessTestInfo }}"

- name: Run benchmarks
run: |
pytest -m benchmark --benchmark-json=./output.json

- name: Download previous benchmark data
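# restores ./cache/benchmark-data.json so later steps can compare against the previous baseline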
uses: actions/cache@v4
with:
path: ./cache
key: ${{ runner.os }}-benchmark

- name: Publish benchmark results
uses: benchmark-action/github-action-benchmark@v1
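# publishes only on pushes; pull requests are handled by the comment-only step below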
if: github.event_name != 'pull_request'
with:
tool: 'pytest'
auto-push: true
comment-always: true
output-file-path: output.json
github-token: ${{ secrets.GITHUB_TOKEN }}
comment-on-alert: true
save-data-file: true
summary-always: true

- name: Comment on benchmark results without publishing
uses: benchmark-action/github-action-benchmark@v1
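# comment-only pass: compares against the cached data file instead of pushing to gh-pages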
with:
tool: 'pytest'
auto-push: false
github-token: ${{ secrets.GITHUB_TOKEN }}
comment-always: true
output-file-path: output.json
comment-on-alert: false
save-data-file: true
summary-always: true
external-data-json-path: ./cache/benchmark-data.json
1 change: 1 addition & 0 deletions README.md
@@ -401,6 +401,7 @@ Since the `EDTFField` and the `_earliest` and `_latest` field values are set aut

### Running tests
- From `python-edtf`, run the unit tests: `pytest`
- From `python-edtf`, run the benchmarks: `pytest -m benchmark` (results are published [here](https://ixc.github.io/python-edtf/dev/bench/); a minimal example is sketched below)
- From `python-edtf/edtf_django_tests`, run the integration tests: `python manage.py test edtf_integration`
- To run CI locally, use `act`, e.g. `act pull_request` or `act --pull=false --container-architecture linux/amd64`. Some steps may require a GitHub PAT: `act pull_request --container-architecture linux/amd64 --pull=false -s GITHUB_TOKEN=<your PAT>`
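
A benchmark is an ordinary pytest function marked `benchmark` that takes pytest-benchmark's `benchmark` fixture and hands it a callable. A minimal sketch of what such a test looks like (the test name and sample input here are illustrative, not from the suite):

```python
import pytest

from edtf import parse_edtf


@pytest.mark.benchmark
def test_benchmark_example(benchmark):
    # pytest-benchmark calls the function repeatedly and records timing statistics
    benchmark(parse_edtf, "2001-02-03")
```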

2 changes: 2 additions & 0 deletions dev-requirements.txt
@@ -1,5 +1,7 @@
-r requirements.txt # Include all main requirements
django>=4.2,<5.0
pytest
pytest-benchmark
pytest-django
ruff
pre-commit
24 changes: 24 additions & 0 deletions edtf/natlang/tests.py
@@ -185,3 +185,27 @@ def test_natlang(input_text, expected_output):
assert (
result == expected_output
), f"Failed for input: {input_text} - expected {expected_output}, got {result}"


@pytest.mark.benchmark
@pytest.mark.parametrize(
"input_text,expected_output",
[
("23rd Dynasty", None),
("January 2008", "2008-01"),
("ca1860", "1860~"),
("uncertain: approx 1862", "1862%"),
("January", "XXXX-01"),
("Winter 1872", "1872-24"),
("before approx January 18 1928", "/1928-01-18~"),
("birthday in 1872", "1872"),
("1270 CE", "1270"),
("2nd century bce", "-01XX"),
("1858/1860", "[1858, 1860]"),
],
)
def test_benchmark_natlang(benchmark, input_text, expected_output):
"""
Benchmark selected natural language conversions
"""
benchmark(text_to_edtf, input_text)
10 changes: 10 additions & 0 deletions edtf/parser/grammar.py
@@ -1,3 +1,12 @@
# ruff: noqa: E402 I001

# It's recommended to `enablePackrat()` immediately after importing pyparsing
# https://github.com/pyparsing/pyparsing/wiki/Performance-Tips
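# packrat caches intermediate parse results (memoization), trading memory for speed on recursive grammars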

import pyparsing

pyparsing.ParserElement.enablePackrat()

from pyparsing import (
Combine,
NotAny,
@@ -13,6 +22,7 @@
)
from pyparsing import Literal as L


from edtf.parser.edtf_exceptions import EDTFParseException

# (* ************************** Level 0 *************************** *)
21 changes: 21 additions & 0 deletions edtf/parser/tests.py
@@ -216,6 +216,20 @@
("2001-34", ("2001-04-01", "2001-06-30")),
)

BENCHMARK_EXAMPLES = (
"2001-02-03",
"2008-12",
"2008",
"-0999",
"2004-01-01T10:10:10+05:00",
"-2005/-1999-02",
"/2006",
"?2004-%06",
"[1667, 1760-12]",
"Y3388E2S3",
"2001-29",
)
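# a spread of level 0-2 features: simple dates, intervals, sets, qualified and signed dates, exponential years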

BAD_EXAMPLES = (
# parentheses are not used for group qualification in the 2018 spec
None,
@@ -340,3 +354,10 @@ def test_comparisons():
assert d4 == d5
assert d1 < d5
assert d1 > d6


@pytest.mark.benchmark
@pytest.mark.parametrize("test_input", BENCHMARK_EXAMPLES)
def test_benchmark_parser(benchmark, test_input):
"""Benchmark parsing of selected EDTF strings."""
benchmark(parse, test_input)
Comment on lines +359 to +363 (Contributor): Seems very smooth to integrate this

8 changes: 6 additions & 2 deletions pyproject.toml
@@ -39,6 +39,7 @@ test = [
"django>=4.2,<5.0",
"pytest",
"pytest-django",
"pytest-benchmark",
"ruff",
"pre-commit",
"coverage",
@@ -81,8 +82,11 @@ legacy_tox_ini = """
python_files = ["tests.py", "test_*.py", "*_test.py", "*_tests.py"]
python_classes = ["Test*", "*Tests"]
python_functions = ["test_*"]
addopts = "--ignore=edtf_django_tests/ --cov=edtf"
plugins = ["pytest_cov"]
markers = [
"benchmark: mark a test as a benchmark",
]
addopts = "--ignore=edtf_django_tests/ --cov=edtf -m 'not benchmark'"
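# benchmarks are deselected by default; run them explicitly with -m benchmark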
plugins = ["pytest_cov", "pytest_benchmark"]

[tool.coverage.run]
# we run the edtf_integration tests but only care about them testing fields.py in the main package