[QA] VizroAI UI tests #882

Merged: 57 commits, merged on Jan 27, 2025

Commits (57)
ea31d69
component library tests
l0uden Nov 13, 2024
c656536
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 13, 2024
40a293e
failed artifacts and slack notifications
l0uden Nov 14, 2024
3283d74
branch in notification
l0uden Nov 14, 2024
3cff909
delete screenshot
l0uden Nov 14, 2024
8fe3812
add screenshot
l0uden Nov 14, 2024
ab843c8
fix screenshot url
l0uden Nov 14, 2024
f9de666
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/comp…
l0uden Nov 14, 2024
d74645f
changelog
l0uden Nov 14, 2024
2af4dce
vizroAI UI tests
l0uden Nov 18, 2024
aeb063f
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 18, 2024
8ad3e4e
added PR branch for tests
l0uden Nov 18, 2024
9ff0bbb
fix for runner name
l0uden Nov 18, 2024
4d826f0
requirements for tests env
l0uden Nov 18, 2024
73ba50a
run app under hatch
l0uden Nov 18, 2024
1bc78c8
return headless mode
l0uden Nov 18, 2024
f5a67bf
test failure
l0uden Nov 18, 2024
fbbb2a0
test failure
l0uden Nov 18, 2024
5b6c718
test success
l0uden Nov 18, 2024
bb0b680
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Dec 12, 2024
ea7ef47
fix merging main
l0uden Dec 12, 2024
2ffdbe6
tests refactor
l0uden Dec 13, 2024
41b6a97
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 13, 2024
aaa805d
delete data csv
l0uden Dec 13, 2024
5f0f3ce
Merge branch 'qa/vizro_ai_ui_tests' of https://github.com/mckinsey/vi…
l0uden Dec 13, 2024
255b847
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Dec 13, 2024
efc1acc
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 13, 2024
d68d17d
test failed
l0uden Dec 13, 2024
4905fe7
Merge branch 'qa/vizro_ai_ui_tests' of https://github.com/mckinsey/vi…
l0uden Dec 13, 2024
f85aa78
test success
l0uden Dec 13, 2024
ad807f9
small refactoring
l0uden Dec 24, 2024
f2792be
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Dec 24, 2024
84be043
changelog
l0uden Dec 24, 2024
ebc2465
temp uv fix
l0uden Dec 24, 2024
530d3fe
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 24, 2024
01dcdc6
temp uv fix
l0uden Dec 24, 2024
7c78959
Merge branch 'qa/vizro_ai_ui_tests' of https://github.com/mckinsey/vi…
l0uden Dec 24, 2024
4f93588
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 24, 2024
986872b
get conftest back to integration
l0uden Dec 24, 2024
725b7b2
Merge branch 'qa/vizro_ai_ui_tests' of https://github.com/mckinsey/vi…
l0uden Dec 24, 2024
c2f13f4
Propose rename for future
antonymilne Jan 16, 2025
dde3430
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Jan 17, 2025
91e88ab
refactor tests structure and naming
l0uden Jan 17, 2025
30acedc
changes wait port
l0uden Jan 17, 2025
d6282c1
make dashboard_ui run with dash_duo
l0uden Jan 21, 2025
d313f0e
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Jan 21, 2025
4e8d43e
move conftest with reset managers to vizro_ai_ui only
l0uden Jan 21, 2025
6bc8b30
fix test-vizro-ai-ui.yml
l0uden Jan 21, 2025
b28f5f9
linting
l0uden Jan 21, 2025
56e6812
webhook secret
l0uden Jan 21, 2025
e1c99b0
delete unused code from conftest.py
l0uden Jan 22, 2025
eea1c51
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Jan 23, 2025
868ad70
slack notification fix
l0uden Jan 23, 2025
04aee84
move test steps to dash internal test methods
l0uden Jan 24, 2025
5e97ce1
delete score word from dashboard tests
l0uden Jan 24, 2025
b9769ee
additional verification
l0uden Jan 27, 2025
45df962
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Jan 27, 2025
Changes from all commits

@@ -7,16 +7,16 @@ runs:
- name: Copy failed screenshots
shell: bash
run: |
mkdir /home/runner/work/vizro/vizro/vizro-core/failed_screenshots/
cd /home/runner/work/vizro/vizro/vizro-core/
mkdir ${{ env.PROJECT_PATH }}failed_screenshots/
cd ${{ env.PROJECT_PATH }}
cp *.png failed_screenshots

- name: Archive production artifacts
uses: actions/upload-artifact@v4
with:
name: Failed screenshots
path: |
/home/runner/work/vizro/vizro/vizro-core/failed_screenshots/*.png
${{ env.PROJECT_PATH }}failed_screenshots/*.png

- name: Send custom JSON data to Slack
id: slack

@@ -43,3 +43,4 @@ jobs:
env:
TESTS_NAME: Vizro e2e component library tests
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
PROJECT_PATH: /home/runner/work/vizro/vizro/vizro-core/

@@ -1,4 +1,4 @@
name: Score tests for VizroAI
name: e2e dashboard tests for VizroAI

defaults:
run:
@@ -12,9 +12,9 @@ env:
FORCE_COLOR: 1

jobs:
test-score-vizro-ai-fork:
test-e2e-dashboard-vizro-ai-fork:
if: ${{ github.event.pull_request.head.repo.fork }}
name: test-score-vizro-ai on Py${{ matrix.config.python-version }} ${{ matrix.config.label }}
name: test-e2e-dashboard-vizro-ai on Py${{ matrix.config.python-version }} ${{ matrix.config.label }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
@@ -38,9 +38,9 @@ jobs:
- name: Passed fork step
run: echo "Success!"

test-score-vizro-ai:
test-e2e-dashboard-vizro-ai:
if: ${{ ! github.event.pull_request.head.repo.fork }}
name: test-score-vizro-ai on Py${{ matrix.config.python-version }} ${{ matrix.config.label }}
name: test-e2e-dashboard-vizro-ai on Py${{ matrix.config.python-version }} ${{ matrix.config.label }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
@@ -72,19 +72,19 @@ jobs:
- name: Show dependency tree
run: hatch run ${{ matrix.config.hatch-env }}:pip tree

- name: Run vizro-ai score tests with PyPI vizro
run: hatch run ${{ matrix.config.hatch-env }}:test-score
- name: Run vizro-ai e2e dashboard tests with PyPI vizro
run: hatch run ${{ matrix.config.hatch-env }}:test-e2e-dashboard
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
VIZRO_TYPE: pypi
BRANCH: ${{ github.head_ref }}
PYTHON_VERSION: ${{ matrix.config.python-version }}

- name: Run vizro-ai score tests with local vizro
- name: Run vizro-ai e2e dashboard tests with local vizro
run: |
hatch run ${{ matrix.config.hatch-env }}:pip install ../vizro-core
hatch run ${{ matrix.config.hatch-env }}:test-score
hatch run ${{ matrix.config.hatch-env }}:test-e2e-dashboard
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
@@ -99,7 +99,7 @@ jobs:
with:
payload: |
{
"text": "Vizro-ai ${{ matrix.config.hatch-env }} score tests build result: ${{ job.status }}\nBranch: ${{ github.head_ref }}\n${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
"text": "Vizro-ai ${{ matrix.config.hatch-env }} e2e dashboard tests build result: ${{ job.status }}\nBranch: ${{ github.head_ref }}\n${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
@@ -111,10 +111,10 @@ jobs:
with:
name: Report-${{ matrix.config.python-version }}-${{ matrix.config.label }}
path: |
/home/runner/work/vizro/vizro/vizro-ai/tests/score/reports/report*.csv
/home/runner/work/vizro/vizro/vizro-ai/tests/e2e/reports/report*.csv

test-score-vizro-ai-report:
needs: test-score-vizro-ai
test-e2e-dashboard-vizro-ai-report:
needs: test-e2e-dashboard-vizro-ai
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

@@ -1,4 +1,4 @@
name: Integration tests for VizroAI
name: e2e plot tests for VizroAI

defaults:
run:
@@ -20,9 +20,9 @@ env:
FORCE_COLOR: 1

jobs:
test-integration-vizro-ai-fork:
test-e2e-plot-vizro-ai-fork:
if: ${{ github.event.pull_request.head.repo.fork }}
name: test-integration-vizro-ai on Py${{ matrix.config.python-version }} ${{ matrix.config.label }}
name: test-e2e-plot-vizro-ai on Py${{ matrix.config.python-version }} ${{ matrix.config.label }}
runs-on: ${{ matrix.config.os }}
strategy:
fail-fast: false
@@ -69,9 +69,9 @@ jobs:
- name: Passed fork step
run: echo "Success!"

test-integration-vizro-ai:
test-e2e-plot-vizro-ai:
if: ${{ ! github.event.pull_request.head.repo.fork }}
name: test-integration-vizro-ai on Py${{ matrix.config.python-version }} ${{ matrix.config.label }}
name: test-e2e-plot-vizro-ai on Py${{ matrix.config.python-version }} ${{ matrix.config.label }}
runs-on: ${{ matrix.config.os }}
strategy:
fail-fast: false
@@ -126,17 +126,17 @@ jobs:
- name: Show dependency tree
run: hatch run ${{ matrix.config.hatch-env }}:pip tree

- name: Run vizro-ai integration tests with PyPI vizro
run: hatch run ${{ matrix.config.hatch-env }}:test-integration
- name: Run vizro-ai e2e plot tests with PyPI vizro
run: hatch run ${{ matrix.config.hatch-env }}:test-e2e-plot
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
VIZRO_TYPE: pypi

- name: Run vizro-ai integration tests with local vizro
- name: Run vizro-ai e2e plot tests with local vizro
run: |
hatch run ${{ matrix.config.hatch-env }}:pip install ../vizro-core
hatch run ${{ matrix.config.hatch-env }}:test-integration
hatch run ${{ matrix.config.hatch-env }}:test-e2e-plot
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}
@@ -149,7 +149,7 @@ jobs:
with:
payload: |
{
"text": "Vizro-ai ${{ matrix.config.hatch-env }} integration tests build result: ${{ job.status }}\nBranch: ${{ github.head_ref }}\n${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
"text": "Vizro-ai ${{ matrix.config.hatch-env }} e2e plot tests build result: ${{ job.status }}\nBranch: ${{ github.head_ref }}\n${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

68 changes: 68 additions & 0 deletions .github/workflows/test-vizro-ai-ui.yml
@@ -0,0 +1,68 @@
name: tests for VizroAI UI

defaults:
run:
working-directory: vizro-ai

on:
push:
branches: [main]
pull_request:
branches:
- main
paths:
- "vizro-ai/**"
- "!vizro-ai/docs/**"

env:
PYTHONUNBUFFERED: 1
FORCE_COLOR: 1
PYTHON_VERSION: "3.12"

jobs:
test-vizro-ai-ui-fork:
if: ${{ github.event.pull_request.head.repo.fork }}
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4

- name: Passed fork step
run: echo "Success!"

test-vizro-ai-ui:
if: ${{ ! github.event.pull_request.head.repo.fork }}
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4

- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}

- name: Install Hatch
run: pip install hatch

- name: Show dependency tree
run: hatch run pip tree

- name: Run VizroAI UI tests
run: hatch run test-vizro-ai-ui
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}

- name: Send custom JSON data to Slack
id: slack
uses: slackapi/[email protected]
if: failure()
with:
payload: |
{
"text": "VizroAI UI tests build result: ${{ job.status }}\nBranch: ${{ github.head_ref }}\n${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
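
For orientation, here is a hedged sketch of the kind of headless browser test that "hatch run test-vizro-ai-ui" executes. The Dash app, the "#banner" selector, and the assertions are illustrative assumptions rather than the repository's actual test code; only the dash_duo fixture from dash[testing], which the commit history above references, is the real mechanism.

# Hypothetical headless UI test in the style run by test-vizro-ai-ui.
# The app and the "#banner" selector are stand-ins, not the real dashboard_ui app.
from dash import Dash, html


def test_banner_is_rendered(dash_duo):
    app = Dash(__name__)
    app.layout = html.Div(html.A("Made with vizro", id="banner"))
    dash_duo.start_server(app)  # serves the app and opens a headless browser session
    dash_duo.wait_for_text_to_equal("#banner", "Made with vizro", timeout=10)
    assert dash_duo.get_logs() == []  # no errors in the browser console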

4 changes: 0 additions & 4 deletions .github/workflows/vizro-qa-tests-trigger.yml
@@ -20,7 +20,6 @@ jobs:
matrix:
include:
- label: integration tests
- label: vizro-ai ui tests
steps:
- name: Passed fork step
run: echo "Success!"
@@ -34,7 +33,6 @@ jobs:
matrix:
include:
- label: integration tests
- label: vizro-ai ui tests
steps:
- uses: actions/checkout@v4
- name: Tests trigger
@@ -44,8 +42,6 @@ jobs:

if [ "${{ matrix.label }}" == "integration tests" ]; then
export INPUT_WORKFLOW_FILE_NAME=${{ secrets.VIZRO_QA_INTEGRATION_TESTS_WORKFLOW }}
elif [ "${{ matrix.label }}" == "vizro-ai ui tests" ]; then
export INPUT_WORKFLOW_FILE_NAME=${{ secrets.VIZRO_QA_VIZRO_AI_UI_TESTS_WORKFLOW }}
fi
export INPUT_GITHUB_TOKEN=${{ secrets.VIZRO_SVC_PAT }}
export INPUT_REF=main # because we should send existent branch to dispatch workflow

@@ -0,0 +1,48 @@
<!--
A new scriv changelog fragment.

Uncomment the section that is right (remove the HTML comment wrapper).
-->

<!--
### Highlights ✨

- A bullet item for the Highlights ✨ category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Removed

- A bullet item for the Removed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Added

- A bullet item for the Added category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Changed

- A bullet item for the Changed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Deprecated

- A bullet item for the Deprecated category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Fixed

- A bullet item for the Fixed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Security

- A bullet item for the Security category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->

20 changes: 10 additions & 10 deletions vizro-ai/examples/dashboard_ui/app.py
@@ -277,15 +277,15 @@ def update_model_dropdown(value):
return available_models, default_model


app = Vizro().build(dashboard)
app.dash.layout.children.append(
dbc.NavLink(
["Made with ", html.Img(src=get_asset_url("logo.svg"), id="banner", alt="Vizro logo"), "vizro"],
href="https://github.com/mckinsey/vizro",
target="_blank",
className="anchor-container",
)
)
server = app.dash.server
if __name__ == "__main__":
app = Vizro().build(dashboard)
app.dash.layout.children.append(
dbc.NavLink(
["Made with ", html.Img(src=get_asset_url("logo.svg"), id="banner", alt="Vizro logo"), "vizro"],
href="https://github.com/mckinsey/vizro",
target="_blank",
className="anchor-container",
)
)
server = app.dash.server
app.run()
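
The app.py hunk above moves the Vizro().build() call, the navbar tweak, and the server binding under the __main__ guard, so importing the module no longer builds or serves the dashboard. A minimal sketch of the resulting pattern, with generic models standing in for the example app's real layout:

# Illustrative pattern only; not the dashboard_ui example itself.
import vizro.models as vm
from vizro import Vizro

dashboard = vm.Dashboard(pages=[vm.Page(title="Demo", components=[vm.Card(text="Hello")])])

if __name__ == "__main__":
    app = Vizro().build(dashboard)  # built only when executed as a script
    server = app.dash.server        # WSGI handle created on demand, not at import time
    app.run()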

5 changes: 3 additions & 2 deletions vizro-ai/hatch.toml
@@ -50,14 +50,15 @@ prep-release = [
]
pypath = "hatch run python -c 'import sys; print(sys.executable)'"
test = "pytest tests {args}"
test-integration = "pytest -vs --reruns 1 tests/integration --headless {args}"
test-score = "pytest -vs --reruns 1 tests/score --headless {args}"
test-e2e-dashboard = "pytest -vs tests/e2e/test_dashboard.py --headless {args}"
test-e2e-plot = "pytest -vs --reruns 1 tests/e2e/test_plot.py --headless {args}"
test-unit = "pytest tests/unit {args}"
test-unit-coverage = [
"coverage run -m pytest tests/unit {args}",
"- coverage combine",
"coverage report"
]
test-vizro-ai-ui = "pytest -vs tests/vizro_ai_ui/test_vizro_ai_ui.py --headless"

[envs.docs]
dependencies = [

7 changes: 6 additions & 1 deletion vizro-ai/pyproject.toml
@@ -66,8 +66,13 @@ filterwarnings = [
# Ignore LLMchian deprecation warning:
"ignore:.*The class `LLMChain` was deprecated in LangChain 0.1.17",
# Ignore warning for Pydantic v1 API and Python 3.13:
"ignore:Failing to pass a value to the 'type_params' parameter of 'typing.ForwardRef._evaluate' is deprecated:DeprecationWarning"
"ignore:Failing to pass a value to the 'type_params' parameter of 'typing.ForwardRef._evaluate' is deprecated:DeprecationWarning",
# Ignore deprecation warning until this is solved: https://github.com/plotly/dash/issues/2590:
"ignore:HTTPResponse.getheader():DeprecationWarning",
# Happens during dash_duo teardown in vizro_ai_ui tests. Not affecting functionality:
"ignore:Exception in thread"
]
pythonpath = ["../tools/tests"]

[tool.ruff]
extend = "../pyproject.toml"

File renamed without changes.

@@ -55,7 +55,7 @@ def logic(  # noqa: PLR0912, PLR0915
config: json config of the expected dashboard

"""
report_dir = "tests/score/reports"
report_dir = "tests/e2e/reports"
os.makedirs(report_dir, exist_ok=True)

app = Vizro().build(dashboard).dash

12 changes: 12 additions & 0 deletions vizro-ai/tests/vizro_ai_ui/conftest.py
@@ -0,0 +1,12 @@
import pytest
from vizro import Vizro


@pytest.fixture(autouse=True)
def reset_managers():
# this ensures that the managers are reset before and after each test
# the reset BEFORE all tests is important because at pytest test collection, fixtures are evaluated and hence
# the model_manager may be populated with models from other tests
Vizro._reset()
yield
Vizro._reset()
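
As a hedged illustration of why the autouse fixture matters (these tests are hypothetical, not the repository's): consecutive tests that each build a Vizro app depend on the Vizro._reset() calls, since models registered during one build could otherwise still sit in the managers when the next test runs.

# Hypothetical tests that rely on the autouse reset_managers fixture above.
import vizro.models as vm
from vizro import Vizro


def test_build_first_dashboard():
    dashboard = vm.Dashboard(pages=[vm.Page(title="Page", components=[vm.Card(text="a")])])
    Vizro().build(dashboard)


def test_build_second_dashboard_starts_clean():
    # Reuses the same page title; without the fixture's reset, models left over
    # from the previous test could collide with the ones registered here.
    dashboard = vm.Dashboard(pages=[vm.Page(title="Page", components=[vm.Card(text="a")])])
    Vizro().build(dashboard)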