diff --git a/.github/img/frontend-architecture.png b/.github/img/frontend-architecture.png index 7eb7ce5bc..99c4373fe 100644 Binary files a/.github/img/frontend-architecture.png and b/.github/img/frontend-architecture.png differ diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md index 4abdfce6c..d4bcf569c 100644 --- a/ARCHITECTURE.md +++ b/ARCHITECTURE.md @@ -65,7 +65,7 @@ The `localStorage` state is updated automatically on every Redux store update, v ![Kedro-Viz data flow diagram](/.github/img/frontend-architecture.png) -Kedro-Viz currently utilizes two different methods of data ingestion: the Redux setup for the pipeline and flowchart-view related components, and GraphQL via Apollo Client for the experiment tracking components. +Kedro-Viz currently uses a single method of data ingestion: the Redux setup for the pipeline and flowchart-view-related components. On initialisation for the Redux setup, Kedro-Viz [manually normalises pipeline data](/src/store/normalize-data.js), in order to [make immutable state updates as performant as possible](https://redux.js.org/recipes/structuring-reducers/normalizing-state-shape). @@ -73,10 +73,6 @@ Next, it [initialises the Redux data store](https://github.com/kedro-org/kedro-v During preparation, the initial state is separated into two parts: pipeline and non-pipeline state. This is because the non-pipeline state should persist for the session duration, even if the pipeline state is reset/overwritten - i.e. if the user selects a new top-level pipeline. -Kedro run data used for experiment tracking are stored in a SQLite database that is generated automatically once [experiment tracking is enabled in your Kedro project](https://kedro.readthedocs.io/en/stable/08_logging/02_experiment_tracking.html). By default, the session store database sits under the `/data` directory of your Kedro project as `session_store.db`.
On loading Kedro-Viz from the Kedro project, the Kedro-Viz backend will consume the run data stored in the database and serve the data via the GraphQL endpoint via GraphQL query requests from the Apollo client on the front end. - -The server also allows updates to the database for certain fields of the run data (name, notes, etc.) via mutations. - ## React components React components are all to be found in `/src/components/`. The top-level React component for the standalone app is `Container`, which includes some extra code (e.g. global styles and data loading) that aren't included in the component library. The entry-point component for the library (as set by the `main` property in package.json) is `App`. @@ -107,18 +103,6 @@ Selectors can be found in `/src/selectors/`. We use [Reselect](https://github.co We have used Kedro-Viz to visualize the selector dependency graph - [visit the demo to see it in action](https://demo.kedro.org/?data=selectors). -## Apollo - -The `src/apollo` directory contains all the related setup for ingesting data from the GraphQL endpoint for the experiment tracking features. This includes the schema that defines all query and mutation types, the config that sets up the Apollo Client to be used within React components, and other files containing helper functions, such as mocks to generate random data for the mock server. - -The GraphQL schema is defined on the backend by Strawberry and automatically converted to GraphQL SDL (schema definition language) with `make schema-fix`. A CI check ensures that the resulting `schema.graphql` and below visualization are always in sync with the backend definition. - -![Kedro-Viz GraphQL schema](.github/img/schema.graphql.png) - -You can see documentation for the schema and run mock queries using GraphiQL, the GraphQL integrated development environment. 
This is possible without launching the full backend server: run `make strawberry-server` and then go to [http://127.0.0.1:8000/graphql](http://127.0.0.1:8000/graphql). - -⚠️ When a query supplies an ordered argument, the backend response must maintain the same ordering. For example, a the response to a query that calls for `runIds = [2, 3, 1]` should respond with runs in that same order. - -## Utils The `/src/utils/` directory contains miscellaneous reusable utility functions. @@ -153,4 +137,4 @@ The app uses [redux-watch](https://github.com/ExodusMovement/redux-watch) with a ![Kedro-Viz backend architecture](/.github/img/backend-architecture.png) -The backend of Kedro-Viz serves as the data provider and API layer that interacts with Kedro projects and manages data access for visualisations in the frontend. It offers both REST and GraphQL APIs to support data retrieval for the frontend, allowing access to pipeline structures, node-specific details, and experiment tracking data. Key components include the `DataAccessManager`, which interfaces with data `Repositories` to fetch and structure data. The CLI enables users launch with Kedro-Viz from the command line, while deploy and build options enables seamless sharing of pipeline visualisations on any static website hosting platform. +The backend of Kedro-Viz serves as the data provider and API layer that interacts with Kedro projects and manages data access for visualisations in the frontend. It offers a REST API to support data retrieval for the frontend, allowing access to pipeline structures and node-specific details. Key components include the `DataAccessManager`, which interfaces with data `Repositories` to fetch and structure data. The CLI enables users to launch Kedro-Viz from the command line, while the deploy and build options enable seamless sharing of pipeline visualisations on any static website hosting platform.
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 8b8fda0c1..2be58f6b3 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -210,23 +210,6 @@ make run PROJECT_PATH=/new-kedro-project > **Note**: Once the backend development server is launched at port 4142, the local app will always pull data from that server. To prevent this, you can comment out the proxy setting in `package.json` and restart the dev server at port 4141. -#### Launch the development server with the `SQLiteSessionStore` - -Kedro-Viz provides a `SQLiteSessionStore` that users can use in their project to enable experiment tracking functionality. If you want to use this session store with the development server, make sure you don't use a relative path when specifying the store's location in `settings.py`. For example, `demo-project` specifies the local `data` directory within a project as the session store's location as follows: - -```python -from kedro_viz.integrations.kedro.sqlite_store import SQLiteStore -SESSION_STORE_ARGS = {"path": str(Path(__file__).parents[2] / "data")} -``` - -Owing to this coupling between the project settings and Kedro-Viz, if you wish to execute any Kedro commands on `demo-project` (including `kedro run`), you will need to install the Kedro-Viz Python package. To install your local development version of the package, run: - -```bash -pip3 install -e package -``` - -Since Kedro 0.18, a session can only contain one run. In Kedro-Viz, once a session has been retrieved from the store we always use the terminology "run" rather than "session", e.g. `run_id` rather than `session_id`. 
- ## Testing guidelines - Scope out major journeys from acceptance criteria from the ticket for manual end-to-end testing diff --git a/docs/source/experiment_tracking.md b/docs/source/experiment_tracking.md deleted file mode 100644 index 3a6f42add..000000000 --- a/docs/source/experiment_tracking.md +++ /dev/null @@ -1,360 +0,0 @@ -# Experiment tracking in Kedro-Viz - -```{important} -Starting from version 8.0.0 of Kedro-Viz, Experiment Tracking is exclusively supported for users with kedro-datasets version 2.1.0 or higher. -``` - -Experiment tracking is the process of saving all the metadata related to an experiment each time you run it. It enables you to compare different runs of a machine-learning model as part of the experimentation process. - -The metadata you store may include: - -* Scripts used for running the experiment -* Environment configuration files -* Versions of the data used for training and evaluation -* Evaluation metrics -* Model weights -* Plots and other visualisations - -You can use Kedro-Viz experiment tracking to store and access results, and to share them with others for comparison. Storage can be local or remote, such as cloud storage on AWS S3. - -The experiment tracking demo enables you to explore the experiment tracking capabilities of Kedro-Viz. - -![](./images/experiment-tracking_demo.gif) - -## Kedro versions supporting experiment tracking -Kedro has always supported parameter versioning (as part of your codebase with a version control system like `git`) and Kedro’s dataset versioning capabilities enabled you to [snapshot models, datasets and plots](https://docs.kedro.org/en/stable/data/data_catalog.html#dataset-versioning). - -Kedro-Viz version 4.1.1 introduced metadata capture, visualisation, discovery and comparison, enabling you to access, edit and [compare your experiments](#access-run-data-and-compare-runs) and additionally [track how your metrics change over time](#view-and-compare-metrics-data). 
- -Kedro-Viz version 5.0 also supports the [display and comparison of plots, such as Plotly and Matplotlib](./preview_plotly_datasets.md). Support for metric plots (timeseries and parallel coords) was added to Kedro-Viz version 5.2.1. - -Kedro-Viz version 6.2 includes support for collaborative experiment tracking using a cloud storage solution. This means that multiple users can store their experiment data in a centralized remote storage, such as AWS S3, and access it through Kedro-Viz. - -## When should I use experiment tracking in Kedro? - -The choice of experiment tracking tool depends on your use case and choice of complementary tools, such as MLflow and Neptune: - -- **Kedro** - If you need experiment tracking, are looking for improved metrics visualisation and want a lightweight tool to work alongside existing functionality in Kedro. Kedro does not support a model registry. -- **MLflow** - You can combine MLflow with Kedro by using [`kedro-mlflow`](https://kedro-mlflow.readthedocs.io/en/stable/) if you require experiment tracking, model registry and/or model serving capabilities or have access to Managed MLflow within the Databricks ecosystem. -- **Neptune** - If you require experiment tracking and model registry functionality, improved visualisation of metrics and support for collaborative data science, you may consider [`kedro-neptune`](https://docs.neptune.ai/integrations/kedro/) for your workflow. - -{doc}`We support a growing list of integrations`. - -## Set up a project - -This section describes the steps necessary to set up experiment tracking and access logged metrics, using the {doc}`spaceflights tutorial` with a version of Kedro equal to or higher than 0.18.4, and a version of Kedro-Viz equal to or higher than 5.2. - -There are three steps to enable experiment tracking features with Kedro-Viz. 
We illustrate how to: - -- [Set up a session store to capture experiment metadata](#set-up-the-session-store) -- [Set up experiment tracking datasets to list the metrics to track](#set-up-experiment-tracking-datasets) -- [Modify your nodes and pipelines to output those metrics](#modify-your-nodes-and-pipelines-to-log-metrics) - -### Install Kedro and Kedro-Viz -To use this tutorial code, you must already have {doc}`installed Kedro` and [Kedro-Viz](./kedro-viz_visualisation.md). You can confirm the versions you have installed by running `kedro info` - -```{note} -The example code uses a version of Kedro-Viz `>6.2.0`. -``` - -Create a new project using the spaceflights starter. From the terminal run: - -```bash -kedro new --starter=spaceflights-pandas -``` - -Feel free to name your project as you like, but this guide assumes the project is named `Spaceflights`. - -### Install the dependencies for the project - -Once you have created the project, to run project-specific Kedro commands, you must navigate to the directory in which it has been created: - -```bash -cd spaceflights -``` -Install the project's dependencies: - -```bash -pip install -r src/requirements.txt -``` - -## Set up the session store - -In the domain of experiment tracking, each pipeline run is considered a session. A session store records all related metadata for each pipeline run, from logged metrics to other run-related data such as timestamp, `git` username and branch. The session store is a [SQLite](https://www.sqlite.org/index.html) database that is generated during your first pipeline run after it has been set up in your project. 
- -### Local storage -To set up the session store locally, go to the `src/spaceflights/settings.py` file and add the following: - -```python -from kedro_viz.integrations.kedro.sqlite_store import SQLiteStore -from pathlib import Path - -SESSION_STORE_CLASS = SQLiteStore -SESSION_STORE_ARGS = {"path": str(Path(__file__).parents[2] / "data")} -``` - -This specifies the creation of the `SQLiteStore` under the `data` subfolder, using the `SQLiteStore` setup from your installed Kedro-Viz plugin - -This step is crucial to enable experiment tracking features on Kedro-Viz, as it is the database used to serve all run data to the Kedro-Viz front-end. Once this step is complete, you can either proceed to [set up the tracking datasets](#set-up-experiment-tracking-datasets) or [set up your nodes and pipelines to log metrics](#modify-your-nodes-and-pipelines-to-log-metrics); these two activities are interchangeable, but both should be completed to get a working experiment tracking setup. - -```{note} -Starting from Kedro-Viz 9.2.0, if the user does not provide `SESSION_STORE_ARGS` in the project settings, a default directory `.viz` will be created at the root of your Kedro project and used for `SQLiteStore`. -``` - -## Collaborative experiment tracking - -```{note} -To use collaborative experiment tracking, ensure that your installed version of Kedro-Viz is `>=6.2.0`. -``` - -For collaborative experiment tracking, Kedro-Viz saves your experiments as SQLite database files on a central cloud storage. To ensure that all users have a unique filename, set up your `KEDRO_SQLITE_STORE_USERNAME` in the environment variables. By default, Kedro-Viz will take your computer user name if this is not specified. - -> Note: In Kedro-Viz version 6.2, the only way to set up credentials for accessing your cloud storage is through environment variables. 
- -```bash -export KEDRO_SQLITE_STORE_USERNAME="your_unique__username" - -``` - -Now specify a remote path in the `SESSION_STORE_ARGS` variable, which links to your cloud storage. - - -```python -from kedro_viz.integrations.kedro.sqlite_store import SQLiteStore -from pathlib import Path - -SESSION_STORE_CLASS = SQLiteStore -SESSION_STORE_ARGS = { - "path": str(Path(__file__).parents[2] / "data"), - "remote_path": "s3://my-bucket-name/path/to/experiments", -} -``` - -Finally, ensure you have the necessary credentials set up as shown below: - -```bash -export AWS_ACCESS_KEY_ID="your_access_key_id" -export AWS_SECRET_ACCESS_KEY="your_secret_access_key" -export AWS_REGION="your_aws_region" - -``` - -## Set up experiment tracking datasets - -There are two types of tracking datasets: {py:class}`tracking.MetricsDataset ` and {py:class}`tracking.JSONDataset `. The `tracking.MetricsDataset` should be used for tracking numerical metrics, and the `tracking.JSONDataset` can be used for tracking any other JSON-compatible data like boolean or text-based data. - -Set up two datasets to log the columns used in the companies dataset (`companies_columns`) and experiment metrics for the data science pipeline (`metrics`) like the coefficient of determination (`r2 score`), max error (`me`) and mean absolute error (`mae`) by adding the following in the `conf/base/catalog.yml` file: - -```yaml -metrics: - type: tracking.MetricsDataset - filepath: data/09_tracking/metrics.json - -companies_columns: - type: tracking.JSONDataset - filepath: data/09_tracking/companies_columns.json -``` - -## Modify your nodes and pipelines to log metrics - -Now that you have set up the tracking datasets to log experiment tracking data, next ensure that the data is returned from your nodes. 
- -Set up the data to be logged for the metrics dataset - under `nodes.py` of your `data_science` pipeline (`src/spaceflights/pipelines/data_science/nodes.py`), add three different metrics to your `evaluate_model` function to log `r2_score`, `mae` and `me` and return these 3 metrics as key-value pairs. - -The new `evaluate_model` function should look like this: - -```python -from sklearn.metrics import mean_absolute_error, max_error - - -def evaluate_model( - regressor: LinearRegression, X_test: pd.DataFrame, y_test: pd.Series -) -> Dict[str, float]: - """Calculates and logs the coefficient of determination. - - Args: - regressor: Trained model. - X_test: Testing data of independent features. - y_test: Testing data for price. - """ - y_pred = regressor.predict(X_test) - score = r2_score(y_test, y_pred) - mae = mean_absolute_error(y_test, y_pred) - me = max_error(y_test, y_pred) - logger = logging.getLogger(__name__) - logger.info("Model has a coefficient R^2 of %.3f on test data.", score) - return {"r2_score": score, "mae": mae, "max_error": me} -``` - -Next, ensure that the dataset is also specified as an output of your `evaluate_model` node. In the `src/spaceflights/pipelines/data_science/pipeline.py` file, specify the `output` of your `evaluate_model` to be the `metrics` dataset. Note that the output dataset must exactly match the name of the tracking dataset specified in the catalog file. - -The node of the `evaluate_model` on the pipeline should look like this: - -```python -node( - func=evaluate_model, - inputs=["regressor", "X_test", "y_test"], - name="evaluate_model_node", - outputs="metrics", -) -``` - -Repeat the same steps to set up the `companies_column` dataset. For this dataset, log the column that contains the list of companies as outlined in the `companies.csv` file under the `data/01_raw` directory. 
Modify the `preprocess_companies` node under the `data_processing` pipeline (`src/spaceflights/pipelines/data_processing/nodes.py`) to return the data under a key-value pair, as shown below: - -```python -from typing import Tuple, Dict - - -def preprocess_companies(companies: pd.DataFrame) -> Tuple[pd.DataFrame, Dict]: - """Preprocesses the data for companies. - - Args: - companies: Raw data. - Returns: - Preprocessed data, with `company_rating` converted to a float and - `iata_approved` converted to boolean. - """ - companies["iata_approved"] = _is_true(companies["iata_approved"]) - companies["company_rating"] = _parse_percentage(companies["company_rating"]) - return companies, {"columns": companies.columns.tolist(), "data_type": "companies"} -``` - -Again, you must ensure that the dataset is also specified as an output on the `pipeline.py` file under the `data_processing` pipeline (`src/spaceflights/pipelines/data_processing/pipeline.py`), as follows: - -```python -node( - func=preprocess_companies, - inputs="companies", - outputs=["preprocessed_companies", "companies_columns"], - name="preprocess_companies_node", -) -``` - -Having set up both datasets, you can now generate your first set of experiment tracking data! - -## Generate the run data - -The beauty of native experiment tracking in Kedro is that all tracked data is generated and stored each time you do a Kedro run. Hence, to generate the data, you need only execute: - -```bash -kedro run -``` - -After the run completes, under `data/09_tracking`, you can now see two folders, `companies_column.json` and `metrics.json`. On performing a pipeline run after setting up the tracking datasets, Kedro generates a folder with the dataset name for each tracked dataset. 
Each folder of the tracked dataset contains folders named by the timestamp of each pipeline run to store the saved metrics of the dataset, and each future pipeline run generates a new timestamp folder with the JSON file of the saved metrics under the folder of its subsequent tracked dataset. - -You can also see the `session_store.db` generated from your first pipeline run after enabling experiment tracking, which is used to store all the generated run metadata, alongside the tracking dataset, to be used for exposing experiment tracking to Kedro-Viz. - -![](./images/experiment-tracking-folder.png) - -Execute `kedro run` a few times in a row to generate a larger set of experiment data. You can also play around with setting up different tracking datasets, and check the logged data via the generated JSON data files. - -## Access run data and compare runs - -Here comes the fun part of accessing your run data on Kedro-Viz. Having generated some run data, execute the following command: - -```bash -kedro viz run -``` - -When you open the Kedro-Viz web app, you see an experiment tracking icon on the left-hand side of the screen. - -![](./images/experiment-tracking-icon.png) - -Click the icon to go to the experiment tracking page (you can also access the page from your browser at `http://127.0.0.1:4141/experiment-tracking`), where you can see the sets of experiment data generated from all previous runs: - -![](./images/experiment-tracking-runs-list.png) - -You can now access, compare and pin your runs by toggling the `Compare runs` button: - -![](./images/experiment-tracking-compare-runs.png) - -## View and compare plots - -In this section, we illustrate how to compare Matplotlib plots across experimental runs (functionality available since Kedro-Viz version 5.0). 
- -### Update the dependencies - -Update the `src/requirements.txt` file in your Kedro project by adding the following dataset to enable Matplotlib for your project: - -```text -kedro-datasets[matplotlib.MatplotlibWriter]~=1.1 -seaborn~=0.12.1 -``` - -And install the requirements with: - -```bash -pip install -r src/requirements.txt -``` - -### Add a plotting node - -Add a new node to the `data_processing` nodes (`src/spaceflights/pipelines/data_processing/nodes.py`): - -```python -import matplotlib.pyplot as plt -import seaborn as sn - - -def create_confusion_matrix(companies: pd.DataFrame): - actuals = [0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1] - predicted = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1] - data = {"y_Actual": actuals, "y_Predicted": predicted} - df = pd.DataFrame(data, columns=["y_Actual", "y_Predicted"]) - confusion_matrix = pd.crosstab( - df["y_Actual"], df["y_Predicted"], rownames=["Actual"], colnames=["Predicted"] - ) - sn.heatmap(confusion_matrix, annot=True) - return plt -``` - -And now add this node to the `data_processing` pipeline (`src/spaceflights/pipelines/data_processing/pipeline.py`): - -```python -from .nodes import create_confusion_matrix - -node( - func=create_confusion_matrix, - inputs="companies", - outputs="confusion_matrix", -), -``` - -In the catalog (`conf/base/catalog.yml`) add the `confusion_matrix` data definition, making sure to set the versioned flag to `true` within the project catalog to include the plot in experiment tracking: - -```yaml -confusion_matrix: - type: matplotlib.MatplotlibWriter - filepath: data/09_tracking/confusion_matrix.png - versioned: true -``` - -After running the pipeline with `kedro run`, the plot is saved and you can see it in the experiment tracking panel when you execute `kedro viz run`. Clicking on a plot expands it. When in comparison view, expanding a plot shows all the plots in that view for side-by-side comparison. 
- -![](./images/experiment-tracking-plots-comparison.png) - -![](./images/experiment-tracking-plots-comparison-expanded.png) - -## View and compare metrics data - -From Kedro-Viz `>=5.2.1` experiment tracking also supports the display and comparison of metrics data through two chart types: time series and parallel coordinates. - -Time series displays one metric per graph, showing how the metric value has changed over time. - -Parallel coordinates displays all metrics on a single graph, with each vertical line representing one metric with its own scale. The metric values are positioned along those vertical lines and connected across each axis. - -When in comparison view, comparing runs highlights your selections on the respective chart types, improving readability even in the event there is a multitude of data points. - -```{note} -The following graphic is taken from the [Kedro-Viz experiment tracking demo](https://demo.kedro.org/) (it is not a visualisation from the example code you created above). -``` - -![](./images/experiment-tracking-metrics-comparison.gif) - -Additionally, you can monitor the changes to metrics over time from the pipeline visualisation tab which you can access by following the icon on the left-hand side of the screen. - -![](./images/pipeline_visualisation_icon.png) - -Clicking on any `MetricsDataset` node opens a side panel displaying how the metric value has changed over time: - -![](./images/pipeline_show_metrics.gif) \ No newline at end of file diff --git a/docs/source/index.md b/docs/source/index.md index ea10570cf..4db6c326b 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -6,7 +6,7 @@

-Kedro-Viz is an interactive development tool for visualising data science pipelines built with [Kedro](https://github.com/kedro-org/kedro). Kedro-Viz also enables users to view and compare different experiment runs within their Kedro project. +Kedro-Viz is an interactive development tool for visualising data science pipelines built with [Kedro](https://github.com/kedro-org/kedro). Kedro-Viz features include: @@ -18,7 +18,6 @@ Kedro-Viz features include: 🎨 Rich metadata side panel to display parameters, plots, etc. 📊 Support for all types of [Plotly charts](https://plotly.com/javascript/). ♻️ Autoreload on code change. -🧪 Support for experiment tracking and comparing runs in a Kedro project. Take a look at the live demo for a preview of Kedro-Viz. @@ -30,7 +29,6 @@ kedro-viz_visualisation share_kedro_viz preview_datasets slice_a_pipeline -experiment_tracking ``` ```{toctree} diff --git a/docs/source/kedro-viz_visualisation.md b/docs/source/kedro-viz_visualisation.md index 18379f45d..744af9c43 100644 --- a/docs/source/kedro-viz_visualisation.md +++ b/docs/source/kedro-viz_visualisation.md @@ -89,7 +89,6 @@ Some of the known limitations while using `--lite` flag: * If the datasets are not resolved, they will be defaulted to a custom dataset `UnavailableDataset`. * The flowchart will not show the layers information for the datasets. -* Experiment Tracking will not work if the pre-requisite of having kedro-datasets version 2.1.0 and above is not met. ## Automatic visualisation updates diff --git a/package/README.md b/package/README.md index 781fff39c..d7d060581 100644 --- a/package/README.md +++ b/package/README.md @@ -204,34 +204,6 @@ Options: --include-previews A flag to include preview for all the datasets -h, --help Show this message and exit. ``` - -### Experiment Tracking usage - -To enable [experiment tracking](https://docs.kedro.org/en/stable/experiment_tracking/index.html) in Kedro-Viz, you need to add the Kedro-Viz `SQLiteStore` to your Kedro project. 
- -This can be done by adding the below code to `settings.py` in the `src` folder of your Kedro project. - -```python -from kedro_viz.integrations.kedro.sqlite_store import SQLiteStore -from pathlib import Path -SESSION_STORE_CLASS = SQLiteStore -SESSION_STORE_ARGS = {"path": str(Path(__file__).parents[2] / "data")} -``` - -Once the above set-up is complete, tracking datasets can be used to track relevant data for Kedro runs. More information on how to use tracking datasets can be found in the [experiment tracking documentation](https://docs.kedro.org/en/stable/experiment_tracking/index.html) - -**Notes:** - -- Experiment Tracking is only available for Kedro-Viz >= 4.0.2 and Kedro >= 0.17.5 -- Prior to Kedro 0.17.6, when using tracking datasets, you will have to explicitly mark the datasets as `versioned` for it to show up properly in Kedro-Viz experiment tracking tab. From Kedro >= 0.17.6, this is done automatically: - -```yaml -train_evaluation.r2_score_linear_regression: - type: tracking.MetricsDataset - filepath: ${base_location}/09_tracking/linear_score.json - versioned: true -``` - ### Standalone React component usage To use Kedro-Viz as a standalone React component, you can follow the example below. However, please note that Kedro-Viz does not support server-side rendering (SSR). If you're using Next.js or another SSR framework, you should be aware of this limitation.