diff --git a/docs/source/guide/explanation/additional_features/class_incremental_sampler.rst b/docs/source/guide/explanation/additional_features/class_incremental_sampler.rst
new file mode 100644
index 00000000000..6c570695e19
--- /dev/null
+++ b/docs/source/guide/explanation/additional_features/class_incremental_sampler.rst
@@ -0,0 +1,58 @@
+Class-Incremental Sampler
+===========================
+
+This sampler creates an effective batch by controlling the ratio of old and new classes in it.
+By default, the square root of (number of old data / number of new data) is used as the ratio of old data.
+
+.. tab-set::
+
+    .. tab-item:: API
+
+        .. code-block:: python
+
+            from otx.algo.samplers.class_incremental_sampler import ClassIncrementalSampler
+
+            dataset = OTXDataset(...)
+            class_incr_sampler = ClassIncrementalSampler(
+                dataset=dataset,
+                batch_size=32,
+                old_classes=["car", "truck"],
+                new_classes=["bus"],
+            )
+
+    .. tab-item:: CLI
+
+        .. code-block:: shell
+
+            (otx) ...$ otx train ... \
+                       --data.config.train_subset.sampler.class_path otx.algo.samplers.class_incremental_sampler.ClassIncrementalSampler \
+                       --data.config.train_subset.sampler.init_args.old_classes '[car,truck]' \
+                       --data.config.train_subset.sampler.init_args.new_classes '[bus]'
+
+
+Balanced Sampler
+===========================
+
+This sampler also creates an effective batch.
+It helps ensure balanced sampling by class based on the distribution of class labels during supervised learning.
+
+
+.. tab-set::
+
+    .. tab-item:: API
+
+        .. code-block:: python
+
+            from otx.algo.samplers.balanced_sampler import BalancedSampler
+
+            dataset = OTXDataset(...)
+            balanced_sampler = BalancedSampler(
+                dataset=dataset,
+            )
+
+    .. tab-item:: CLI
+
+        .. code-block:: shell
+
+            (otx) ...$ otx train ... \
+                       --data.config.train_subset.sampler.class_path otx.algo.samplers.balanced_sampler.BalancedSampler
diff --git a/docs/source/guide/explanation/additional_features/index.rst b/docs/source/guide/explanation/additional_features/index.rst
index d7fa4855d47..f0e7f1f370d 100644
--- a/docs/source/guide/explanation/additional_features/index.rst
+++ b/docs/source/guide/explanation/additional_features/index.rst
@@ -13,3 +13,4 @@ Additional Features
    xai
    fast_data_loading
    tiling
+   class_incremental_sampler
diff --git a/docs/source/guide/get_started/api_tutorial.rst b/docs/source/guide/get_started/api_tutorial.rst
index 83fa6f52ca5..ab8718006b5 100644
--- a/docs/source/guide/get_started/api_tutorial.rst
+++ b/docs/source/guide/get_started/api_tutorial.rst
@@ -1,10 +1,10 @@
-OpenVINO™ Training Extensions API Quick-Start
+:octicon:`code-square;1em` API Quick-Guide
 ==============================================
 
 Besides CLI functionality, The OpenVINO™ Training Extension provides APIs that help developers to integrate OpenVINO™ Training Extensions models into their projects.
 This tutorial intends to show how to create a dataset, model and use all of the CLI functionality through APIs.
 
-For demonstration purposes we will use the Object Detection SSD model with `WGISD `_ public dataset as we did for the :doc:`CLI tutorial <../tutorials/base/how_to_train/detection>`.
+For demonstration purposes we will use the Object Detection ATSS model with `WGISD `_ public dataset as we did for the :doc:`CLI tutorial <../tutorials/base/how_to_train/detection>`.
 
 .. note::
 
@@ -19,17 +19,17 @@ with `WGISD dataset `_.
   ..
code-block:: shell - cd data - git clone https://github.com/thsant/wgisd.git - cd wgisd - git checkout 6910edc5ae3aae8c20062941b1641821f0c30127 + cd data + git clone https://github.com/thsant/wgisd.git + cd wgisd + git checkout 6910edc5ae3aae8c20062941b1641821f0c30127 2. We need to rename annotations to be distinguished by OpenVINO™ Training Extensions Datumaro manager: .. code-block:: shell - mv data images && mv coco_annotations annotations && mv annotations/train_bbox_instances.json instances_train.json && mv annotations/test_bbox_instances.json instances_val.json + mv data images && mv coco_annotations annotations && mv annotations/train_bbox_instances.json instances_train.json && mv annotations/test_bbox_instances.json instances_val.json Now it is all set to use this dataset inside OpenVINO™ Training Extensions @@ -37,7 +37,7 @@ Now it is all set to use this dataset inside OpenVINO™ Training Extensions Quick Start with auto-configuration ************************************ -Once the dataset is ready, we can immediately start training with the model and data pipeline recommended by OTX through auto-configuration. +Once the dataset is ready, we can immediately start training with the model and data pipeline recommended by OpenVINO™ Training Extension through auto-configuration. The following code snippet demonstrates how to use the auto-configuration feature: .. code-block:: python @@ -50,7 +50,7 @@ The following code snippet demonstrates how to use the auto-configuration featur .. note:: - If dataset supports multiple Task types, this will default to the Task type detected by OTX. + If dataset supports multiple Task types, this will default to the Task type detected by OpenVINO™ Training Extension. If you want to specify a specific Task type, you need to specify it like below: .. code-block:: python @@ -65,35 +65,55 @@ The following code snippet demonstrates how to use the auto-configuration featur Check Available Model Recipes ********************************** -If you want to use other models offered by OTX besides the ones provided by Auto-Configuration, you can get a list of available models in OTX as shown below. +If you want to use other models offered by OpenVINO™ Training Extension besides the ones provided by Auto-Configuration, you can get a list of available models in OpenVINO™ Training Extension as shown below. -.. code-block:: python +.. tab-set:: + + .. tab-item:: List of available model names + + .. code-block:: python + + from otx.engine.utils.api import list_models + + model_lists = list_models(task="DETECTION") + print(model_lists) + + ''' + [ + 'yolox_tiny_tile', + 'yolox_x', + 'yolox_l_tile', + 'yolox_x_tile', 'yolox_l', + 'atss_r50_fpn', + 'ssd_mobilenetv2', + 'yolox_s', + 'yolox_tiny', + 'openvino_model', + 'atss_mobilenetv2', + 'yolox_s_tile', + 'rtmdet_tiny', + 'atss_mobilenetv2_tile', + 'atss_resnext101', + 'ssd_mobilenetv2_tile', + ] + ''' + + .. tab-item:: Print available configuration information + + .. 
code-block:: python + + from otx.engine.utils.api import list_models - from otx.engine.utils.api import list_models - - model_lists = list_models(task="DETECTION") - print(model_lists) - - ''' - [ - 'yolox_tiny_tile', - 'yolox_x', - 'yolox_l_tile', - 'yolox_x_tile', 'yolox_l', - 'atss_r50_fpn', - 'ssd_mobilenetv2', - 'yolox_s', - 'yolox_tiny', - 'openvino_model', - 'atss_mobilenetv2', - 'yolox_s_tile', - 'rtmdet_tiny', - 'atss_mobilenetv2_tile', - 'atss_resnext101', - 'ssd_mobilenetv2_tile', - ] - ''' + model_lists = list_models(task="DETECTION", print_table=True) + ''' + ┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ + ┃ Task ┃ Model Name ┃ Recipe Path ┃ + ┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ + │ DETECTION │ yolox_tiny │ src/otx/recipe/detection/yolox_tiny.yaml │ + │ ... │ │ │ + └───────────┴───────────────────────┴────────────────────────────────────────────────────────────────┘ + ''' .. note:: @@ -170,7 +190,7 @@ The current default setting is ``auto``. In addition, the ``Engine`` constructor can be associated with the Trainer's constructor arguments to control the Trainer's functionality. Refer `lightning.Trainer `_. -4. Using the OTX configuration we can configure the Engine. +4. Using the OpenVINO™ Training Extension configuration we can configure the Engine. .. code-block:: python @@ -190,7 +210,7 @@ Training Create an output model and start actual training: -1. Below is an example using the ``atss_mobilenetv2`` model provided by OTX. +1. Below is an example using the ``atss_mobilenetv2`` model provided by OpenVINO™ Training Extension. .. code-block:: python @@ -212,7 +232,7 @@ Create an output model and start actual training: .. note:: - This can use callbacks provided by OTX and several training techniques. + This can use callbacks provided by OpenVINO™ Training Extension and several training techniques. However, in this case, no arguments are specified for train. 3. If you want to specify the model, you can do so as shown below: @@ -435,7 +455,7 @@ The default value for ``export_precision`` is ``FP32``. .. code-block:: python - engine.export(precision="FP16") + engine.export(export_precision="FP16") **** diff --git a/docs/source/guide/get_started/cli_commands.rst b/docs/source/guide/get_started/cli_commands.rst index 7779e49b6f3..8f7d5c71f4d 100644 --- a/docs/source/guide/get_started/cli_commands.rst +++ b/docs/source/guide/get_started/cli_commands.rst @@ -1,4 +1,4 @@ -OpenVINO™ Training Extensions CLI Usage +:octicon:`terminal;1em` CLI Guide ========================================== All possible OpenVINO™ Training Extensions CLI commands are presented below along with some general examples of how to run specific functionality. There are :doc:`dedicated tutorials <../tutorials/base/how_to_train/index>` in our documentation with life-practical examples on specific datasets for each task. @@ -6,8 +6,8 @@ All possible OpenVINO™ Training Extensions CLI commands are presented below al .. note:: To run CLI commands you need to prepare a dataset. Each task requires specific data formats. To know more about which formats are supported by each task, refer to :doc:`explanation section <../explanation/algorithms/index>` in the documentation. - Also, by default, the OTX CLI is written using jsonargparse, see jsonargparse or LightningCLI. 
- `Jsonargparse Documentation _` + Also, by default, the OpenVINO™ Training Extensions CLI is written using jsonargparse, see jsonargparse or LightningCLI. + Please refer `Jsonargparse Documentation `_ ***** Help @@ -198,7 +198,7 @@ Users can also pre-generate a config file with an example like the one below. Find ***** -``otx find`` lists model templates and backbones available for the given task. Specify the task name with ``--task`` option. Use ``--pattern`` to find the model name from OTX. +``otx find`` lists model templates and backbones available for the given task. Specify the task name with ``--task`` option. Use ``--pattern`` to find the model name from OpenVINO™ Training Extensions. .. code-block:: shell @@ -304,29 +304,48 @@ The results will be saved in ``./otx-workspace/`` folder by default. The output ``otx train`` receives ``--config`` as a argument. ``config`` can be a path to the specific ``*.yaml`` file. Also, the path to data root should be passed to the CLI to start training. +.. tab-set:: -Example of the command line to start training using Auto-Configuration: + .. tab-item:: Auto-Configuration -.. code-block:: shell + Example of the command line to start training using Auto-Configuration: - (otx) ...$ otx train --data_root --task + .. code-block:: shell -You can use the recipe configuration provided by OTX. The corresponding configuration file can be found via ``otx find``. + (otx) ...$ otx train --data_root --task -.. code-block:: shell + .. tab-item:: With Configuration + + You can use the recipe configuration provided by OpenVINO™ Training Extensions. The corresponding configuration file can be found via ``otx find``. + + .. code-block:: shell + + (otx) ...$ otx train --config --data_root + + .. tab-item:: With Custom Model + + You can also use a custom model and data module. The model and data module can be passed as a class path or a configuration file. + + .. code-block:: shell + + (otx) ...$ otx train --model --task --data_root + + For example, if you want to use the ``otx.algo.detection.atss.ATSS`` model class, you can train it as shown below. + + .. code-block:: shell - (otx) ...$ otx train --config --data_root + (otx) ...$ otx train --model otx.algo.detection.atss.ATSS --model.variant mobilenetv2 --task DETECTION ... .. note:: - You also can visualize the training using ``Tensorboard`` as these logs are located in ``/tensorboard``. + You also can visualize the training using ``Tensorboard`` as these logs are located in ``/tensorboard``. .. note:: - ``--data.config.mem_cache_size`` provides in-memory caching for decoded images in main memory. - If the batch size is large, such as for classification tasks, or if your dataset contains high-resolution images, - image decoding can account for a non-negligible overhead in data pre-processing. - This option can be useful for maximizing GPU utilization and reducing model training time in those cases. - If your machine has enough main memory, we recommend increasing this value as much as possible. - For example, you can cache approximately 10,000 of ``500x375~500x439`` sized images with ``--data.config.mem_cache_size 8GB``. + ``--data.config.mem_cache_size`` provides in-memory caching for decoded images in main memory. + If the batch size is large, such as for classification tasks, or if your dataset contains high-resolution images, + image decoding can account for a non-negligible overhead in data pre-processing. 
+ This option can be useful for maximizing GPU utilization and reducing model training time in those cases. + If your machine has enough main memory, we recommend increasing this value as much as possible. + For example, you can cache approximately 10,000 of ``500x375~500x439`` sized images with ``--data.config.mem_cache_size 8GB``. It is also possible to start training by omitting the template and just passing the paths to dataset roots, then the :doc:`auto-configuration <../explanation/additional_features/auto_configuration>` will be enabled. Based on the dataset, OpenVINO™ Training Extensions will choose the task type and template with the best accuracy/speed trade-off. @@ -340,7 +359,7 @@ For example, that is how you can change the max epochs and the batch size for th .. note:: ``train``, ``test`` works based on ``lightning.Tranier``. You can change the Trainer component with the arguments of train and test. You can find more arguments in this documentation. - `Trainer _` + `Trainer `_ ********** Exporting @@ -352,7 +371,7 @@ The command below performs exporting to the ``{work_dir}/`` path. .. code-block:: shell - (otx) ...$ otx export ... --checkpoint + (otx) ...$ otx export ... --checkpoint The command results in ``exported_model.xml``, ``exported_model.bin``. @@ -360,7 +379,7 @@ To use the exported model as an input for ``otx explain``, please dump additiona .. code-block:: shell - (otx) ...$ otx export ... --checkpoint --explain True + (otx) ...$ otx export ... --checkpoint --explain True .. note:: @@ -387,8 +406,8 @@ Command example for optimizing OpenVINO™ model (.xml) with OpenVINO™ PTQ: .. code-block:: shell - (otx) ...$ otx optimize ... --checkpoint \ - --data_root \ + (otx) ...$ otx optimize ... --checkpoint \ + --data_root Thus, to use PTQ pass the path to exported IR (.xml) model. @@ -419,7 +438,7 @@ The command below will evaluate the trained model on the provided dataset: .. note:: - It is possible to pass both PyTorch weights ``.pth`` or OpenVINO™ IR ``openvino.xml`` to ``--checkpoint`` option. + It is possible to pass both PyTorch weights ``.ckpt`` or OpenVINO™ IR ``exported_model.xml`` to ``--checkpoint`` option. .. note:: @@ -449,13 +468,13 @@ The command below will generate saliency maps (heatmaps with red colored areas o .. note:: - It is possible to pass both PyTorch weights ``.pth`` or OpenVINO™ IR ``openvino.xml`` to ``--load-weights`` option. + It is possible to pass both PyTorch weights ``.ckpt`` or OpenVINO™ IR ``exported_model.xml`` to ``--load-weights`` option. By default, the model is exported to the OpenVINO™ IR format without extra feature information needed for the ``explain`` function. To use OpenVINO™ IR model in ``otx explain``, please first export it with ``--explain`` parameter: .. code-block:: shell - (otx) ...$ otx export ... --checkpoint \ + (otx) ...$ otx export ... --checkpoint \ --explain True (otx) ...$ otx explain ... --checkpoint outputs/openvino/with_features \ @@ -477,7 +496,7 @@ If we run a typical Training example, will have a folder like the one below as o 20240000_000001/ # Deliverables from OTX CLI Second-Trial -OTX considers the folder with ``.latest`` to be the root of the entire Workspace. +OpenVINO™ Training Extensions considers the folder with ``.latest`` to be the root of the entire Workspace. ``.latest`` soft-links to the most recently trained output folder. 
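+The example below is illustrative only; the exact sub-folders and checkpoint names inside ``.latest`` depend on the task and the run.
+It shows how the symlink can be used to reach the most recent outputs without remembering the timestamped folder names:
+
+.. code-block:: shell
+
+    # Inspect the most recent training outputs through the symlink.
+    (otx) ...$ ls otx-workspace/.latest/
+
+    # Reuse artifacts from the latest run, for example by passing its checkpoint to another command.
+    (otx) ...$ otx test ... --checkpoint otx-workspace/.latest/<path-to-checkpoint>.ckpt
+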
Case 1: If a user specifies an output ``work_dir`` (An already existing workspace) diff --git a/docs/source/guide/get_started/installation.rst b/docs/source/guide/get_started/installation.rst index fd4ce0d08f6..dd2ed9b352a 100644 --- a/docs/source/guide/get_started/installation.rst +++ b/docs/source/guide/get_started/installation.rst @@ -1,5 +1,5 @@ -Installation -============ +:octicon:`package` Installation +==================================== ************** Prerequisites @@ -15,54 +15,46 @@ The current version of OpenVINO™ Training Extensions was tested in the followi Install OpenVINO™ Training Extensions for users *********************************************** -1. Clone the training_extensions -repository with the following command: +1. Install OpenVINO™ Training Extensions package: -.. code-block:: shell - - git clone https://github.com/openvinotoolkit/training_extensions.git - cd training_extensions - git checkout develop - -2. Set up a -virtual environment. - -.. code-block:: shell +* A local source in development mode - # Create virtual env. - python -m venv .otx +.. tab-set:: - # Activate virtual env. - source .otx/bin/activate + .. tab-item:: PyPI -3. Install OpenVINO™ Training Extensions package from either: + .. code-block:: shell -* A local source in development mode + pip install otx -.. code-block:: shell + .. tab-item:: Source - pip install -e . + .. code-block:: shell -* PyPI + # Clone the training_extensions repository with the following command: + git clone https://github.com/openvinotoolkit/training_extensions.git + cd training_extensions -.. code-block:: shell + # Set up a virtual environment. + python -m venv .otx + source .otx/bin/activate - pip install otx + pip install -e . -4. Install PyTorch & Requirements for training according to your system environment. +2. Install PyTorch & Requirements for training according to your system environment. .. code-block:: shell otx install -v -[Optional] Refer to the `official installation guide `_ +[Optional] Refer to the `torch official installation guide `_ .. note:: Currently, only torch==2.1.1 was fully validated. (older versions are not supported due to security issues). -5. Once the package is installed in the virtual environment, you can use full +3. Once the package is installed in the virtual environment, you can use full OpenVINO™ Training Extensions command line functionality. **************************************************** @@ -141,7 +133,7 @@ Troubleshooting 1. If you have problems when you try to use ``pip install`` command, please update pip version by following command: -.. code-block:: +.. code-block:: shell python -m pip install --upgrade pip @@ -155,3 +147,10 @@ please use pip with proxy call as demonstrated by command below: .. code-block:: shell python -m pip install --proxy http://:@: + +4. If you're facing a problem with CLI side of the OTX, please check the help message of the command by using ``--help`` option. +If you still want to see more ``jsonargparse``-related messages, you can set the environment variables like below. + +.. code-block:: shell + + export JSONARGPARSE_DEBUG=1 # 0: Off, 1: On diff --git a/docs/source/guide/get_started/introduction.rst b/docs/source/guide/get_started/introduction.rst index 073e9f83f2d..3cfe0be5c40 100644 --- a/docs/source/guide/get_started/introduction.rst +++ b/docs/source/guide/get_started/introduction.rst @@ -1,8 +1,8 @@ .. raw:: html -
- Logo -
+
+ Logo +
Introduction ============ @@ -11,7 +11,7 @@ Introduction The CLI commands of the framework allows users to train, infer, optimize and deploy models easily and quickly even with low expertise in the deep learning field. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on `PyTorch `_ and `OpenVINO™ toolkit `_. -OpenVINO™ Training Extensions provides a **“model template”** for every supported task type, which consolidates necessary information to build a model. Model templates are validated on various datasets and serve one-stop shop for obtaining the best models in general. If you are an experienced user, you can configure your own model based on `torchvision `_, `pytorchcv `_, `mmcv `_ and `OpenVINO Model Zoo (OMZ) `_ frameworks. +OpenVINO™ Training Extensions provide **`recipe `_** for every supported task type, which consolidates necessary information to build a model. Model templates are validated on various datasets and serve one-stop shop for obtaining the best models in general. If you are an experienced user, you can configure your own model based on `torchvision `_, `pytorchcv `_, `mmcv `_ and `OpenVINO Model Zoo (OMZ) `_ frameworks. Furthermore, OpenVINO™ Training Extensions provides :doc:`automatic configuration <../explanation/additional_features/auto_configuration>` of task types and hyperparameters. The framework will identify the most suitable model template based on your dataset, and choose the best hyperparameter configuration. The development team is continuously extending functionalities to make training as simple as possible so that single CLI command can obtain accurate, efficient and robust models ready to be integrated into your project. @@ -27,10 +27,11 @@ OpenVINO™ Training Extensions supports the following computer vision tasks: - **Instance segmentation** including tiling algorithm support - **Action recognition** including action classification and detection - **Anomaly recognition** tasks including anomaly classification, detection and segmentation +- **Visual Prompting** tasks including segment anything model, zero-shot visual prompting OpenVINO™ Training Extensions supports the :doc:`following learning methods <../explanation/algorithms/index>`: -- **Supervised**, incremental training, which includes class incremental scenario and contrastive learning for classification and semantic segmentation tasks +- **Supervised**, incremental training, which includes class incremental scenario. OpenVINO™ Training Extensions will provide the :doc:`following features <../explanation/additional_features/index>` in coming releases: @@ -46,85 +47,110 @@ Documentation content 1. :octicon:`light-bulb` **Quick start guide**: - .. grid:: - :gutter: 1 +.. grid:: + :gutter: 1 - .. grid-item-card:: Installation Guide - :link: installation - :link-type: doc - :text-align: center + .. grid-item-card:: :octicon:`package` Installation Guide + :link: installation + :link-type: doc + :text-align: center - .. grid-item-card:: API Quick-Start - :link: api_tutorial - :link-type: doc - :text-align: center + Learn more about how to install OpenVINO™ Training Extensions - .. grid-item-card:: CLI Commands - :link: cli_commands - :link-type: doc - :text-align: center + .. grid-item-card:: :octicon:`code-square` API Quick-Guide + :link: api_tutorial + :link-type: doc + :text-align: center + + Learn more about how to use OpenVINO™ Training Extensions Python API. + + .. 
grid-item-card:: :octicon:`terminal` CLI Guide + :link: cli_commands + :link-type: doc + :text-align: center + + Learn more about how to use OpenVINO™ Training Extensions CLI commands 2. :octicon:`book` **Tutorials**: - .. grid:: 1 2 2 3 - :margin: 1 1 0 0 - :gutter: 1 - - .. grid-item-card:: Classification - :link: ../tutorials/base/how_to_train/classification - :link-type: doc - :text-align: center - - .. grid-item-card:: Detection - :link: ../tutorials/base/how_to_train/detection - :link-type: doc - :text-align: center - - .. grid-item-card:: Instance Segmentation - :link: ../tutorials/base/how_to_train/instance_segmentation - :link-type: doc - :text-align: center - - .. grid-item-card:: Semantic Segmentation - :link: ../tutorials/base/how_to_train/semantic_segmentation - :link-type: doc - :text-align: center - - .. grid-item-card:: Anomaly Task - :link: ../tutorials/base/how_to_train/anomaly_detection - :link-type: doc - :text-align: center - - .. grid-item-card:: Action Classification - :link: ../tutorials/base/how_to_train/action_classification - :link-type: doc - :text-align: center - - .. grid-item-card:: Action Detection - :link: ../tutorials/base/how_to_train/action_detection - :link-type: doc - :text-align: center - - .. grid-item-card:: Visual Prompting - :text-align: center - - .. grid-item-card:: Advanced - :link: ../tutorials/advanced/index - :link-type: doc - :text-align: center +.. grid:: 1 2 2 3 + :margin: 1 1 0 0 + :gutter: 1 + + .. grid-item-card:: Classification + :link: ../tutorials/base/how_to_train/classification + :link-type: doc + :text-align: center + + Learn how to train a classification model + + .. grid-item-card:: Detection + :link: ../tutorials/base/how_to_train/detection + :link-type: doc + :text-align: center + + Learn how to train a detection model. + + .. grid-item-card:: Instance Segmentation + :link: ../tutorials/base/how_to_train/instance_segmentation + :link-type: doc + :text-align: center + + Learn how to train an instance segmentation model + + .. grid-item-card:: Semantic Segmentation + :link: ../tutorials/base/how_to_train/semantic_segmentation + :link-type: doc + :text-align: center + + Learn how to train a semantic segmentation model + + .. grid-item-card:: Anomaly Task + :link: ../tutorials/base/how_to_train/anomaly_detection + :link-type: doc + :text-align: center + + Learn how to train an anomaly detection model + + .. grid-item-card:: Action Classification + :link: ../tutorials/base/how_to_train/action_classification + :link-type: doc + :text-align: center + + Learn how to train an action classification model + + .. grid-item-card:: Action Detection + :link: ../tutorials/base/how_to_train/action_detection + :link-type: doc + :text-align: center + + Learn how to train an action detection model + + .. grid-item-card:: Visual Prompting + :link: ../tutorials/base/how_to_train/visual_prompting + :link-type: doc + :text-align: center + + Learn how to train a visual prompting model + + .. grid-item-card:: Advanced + :link: ../tutorials/advanced/index + :link-type: doc + :text-align: center + + Learn how to use advanced features of OpenVINO™ Training Extensions 3. **Explanation section**: - This section consists of an algorithms explanation and describes additional features that are supported by OpenVINO™ Training Extensions. - :ref:`Algorithms ` section includes a description of all supported algorithms: +This section consists of an algorithms explanation and describes additional features that are supported by OpenVINO™ Training Extensions. 
:ref:`Algorithms ` section includes a description of all supported algorithms:
 
   1. Explanation of the task and main supervised training pipeline.
   2. Description of the supported datasets formats for each task.
   3. Available templates and models.
   4. Incremental learning approach.
-  5. Semi-supervised and Self-supervised algorithms.
 
-  :ref:`Additional Features ` section consists of:
+:ref:`Additional Features ` section consists of:
 
   1. Overview of model optimization algorithms.
   2. Hyperparameters optimization functionality (HPO).
@@ -132,8 +158,8 @@ Documentation content
 
 4. **Reference**:
 
-   This section gives an overview of the OpenVINO™ Training Extensions code base. There source code for Entities, classes and functions can be found.
+This section gives an overview of the OpenVINO™ Training Extensions code base. The source code for Entities, classes and functions can be found there.
 
 5. **Release Notes**:
 
-   There can be found a description of new and previous releases.
+Here you can find a description of new and previous releases.
diff --git a/docs/source/guide/release_notes/index.rst b/docs/source/guide/release_notes/index.rst
index 133b7350c9e..a1653700ac9 100644
--- a/docs/source/guide/release_notes/index.rst
+++ b/docs/source/guide/release_notes/index.rst
@@ -2,7 +2,12 @@ Releases
 ########
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 1
+
+
+v2.0.0 (1Q24)
+-------------
+
 
 v1.5.0 (4Q23)
 -------------
diff --git a/docs/source/guide/tutorials/advanced/index.rst b/docs/source/guide/tutorials/advanced/index.rst
index dd781303b4a..1e65ac463b5 100644
--- a/docs/source/guide/tutorials/advanced/index.rst
+++ b/docs/source/guide/tutorials/advanced/index.rst
@@ -3,7 +3,6 @@ Advanced Tutorials
 
 .. toctree::
    :maxdepth: 1
-   :hidden:
 
    configuration
diff --git a/docs/source/guide/tutorials/base/explain.rst b/docs/source/guide/tutorials/base/explain.rst
index 20bee0eb974..25ed0b9a3d3 100644
--- a/docs/source/guide/tutorials/base/explain.rst
+++ b/docs/source/guide/tutorials/base/explain.rst
@@ -1,4 +1,4 @@
-How to explain the model behavior
+XAI Tutorial
 =================================
 
 This guide explains the model behavior, which is trained through :doc:`previous stage `.
diff --git a/docs/source/guide/tutorials/base/how_to_train/classification.rst b/docs/source/guide/tutorials/base/how_to_train/classification.rst
index 7e6e4a801cd..f522b1aa9e8 100644
--- a/docs/source/guide/tutorials/base/how_to_train/classification.rst
+++ b/docs/source/guide/tutorials/base/how_to_train/classification.rst
@@ -101,8 +101,6 @@ The list of supported templates for classification is available with the command
 
    The characteristics and detailed comparison of the models could be found in :doc:`Explanation section <../../../explanation/algorithms/classification/multi_class_classification>`.
 
-   You also can modify the architecture of supported models with various backbones. To do that, please refer to the :doc:`advanced tutorial for model customization <../../advanced/backbones>`.
-
 .. tab-set::
 
    .. tab-item:: CLI
@@ -149,7 +147,7 @@ The list of supported templates for classification is available with the command
        ]
        '''
 
-2. On this step we will prepare custom configuration
+1.
On this step we will prepare custom configuration with: - all necessary configs for otx_efficientnet_b0 diff --git a/docs/source/guide/tutorials/base/how_to_train/index.rst b/docs/source/guide/tutorials/base/how_to_train/index.rst index 2b159611e81..7d224cb46cf 100644 --- a/docs/source/guide/tutorials/base/how_to_train/index.rst +++ b/docs/source/guide/tutorials/base/how_to_train/index.rst @@ -1,5 +1,5 @@ -How to train, validate, export and optimize the model -================================================================ +Training to deployment tutorials +================================= .. grid:: 1 2 2 3 :margin: 1 1 0 0 @@ -10,41 +10,57 @@ How to train, validate, export and optimize the model :link-type: doc :text-align: center + Learn how to train a classification model + .. grid-item-card:: Detection :link: detection :link-type: doc :text-align: center + Learn how to train a detection model. + .. grid-item-card:: Instance Segmentation :link: instance_segmentation :link-type: doc :text-align: center + Learn how to train an instance segmentation model + .. grid-item-card:: Semantic Segmentation :link: semantic_segmentation :link-type: doc :text-align: center + Learn how to train a semantic segmentation model + .. grid-item-card:: Anomaly Task :link: anomaly_detection :link-type: doc :text-align: center + Learn how to train an anomaly detection model + .. grid-item-card:: Action Classification :link: action_classification :link-type: doc :text-align: center + Learn how to train an action classification model + .. grid-item-card:: Action Detection :link: action_detection :link-type: doc :text-align: center + Learn how to train an action detection model + .. grid-item-card:: Visual Prompting :link: visual_prompting :link-type: doc :text-align: center + Learn how to train a visual prompting model + .. toctree:: :maxdepth: 1 :hidden: diff --git a/docs/source/guide/tutorials/base/how_to_train/semantic_segmentation.rst b/docs/source/guide/tutorials/base/how_to_train/semantic_segmentation.rst index 07a9731940b..a326f385868 100644 --- a/docs/source/guide/tutorials/base/how_to_train/semantic_segmentation.rst +++ b/docs/source/guide/tutorials/base/how_to_train/semantic_segmentation.rst @@ -72,8 +72,6 @@ The list of supported templates for semantic segmentation is available with the The characteristics and detailed comparison of the models could be found in :doc:`Explanation section <../../../explanation/algorithms/segmentation/semantic_segmentation>`. - We also can modify the architecture of supported models with various backbones, please refer to the :doc:`advanced tutorial for model customization <../../advanced/backbones>`. - .. code-block:: (otx) ...$ otx find --task segmentation diff --git a/docs/source/guide/tutorials/base/index.rst b/docs/source/guide/tutorials/base/index.rst index a852f3f6517..21922f9302d 100644 --- a/docs/source/guide/tutorials/base/index.rst +++ b/docs/source/guide/tutorials/base/index.rst @@ -4,16 +4,20 @@ Base Tutorials .. grid:: :gutter: 1 - .. grid-item-card:: Train to Export Model + .. grid-item-card:: Training to deployment tutorials :link: how_to_train/index :link-type: doc :text-align: center - .. grid-item-card:: Explain Model + Learn how to train a model and deploy it to use in OpenVINO™ Training Extensions. + + .. grid-item-card:: XAI Tutorial :link: explain :link-type: doc :text-align: center + Learn how to explain a model using OpenVINO™ Training Extensions. + .. 
toctree::
   :maxdepth: 2
   :hidden:
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 40a5d5809a3..a14536bc2d4 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -12,4 +12,4 @@ Indices and tables
 
 * :ref:`genindex`
 * :ref:`modindex`
-* :ref:`search`
+* :ref:`search`
\ No newline at end of file