
Remove unneeded *.py tutorials and edit the tutorials readme #1002

Merged
merged 8 commits into from
Mar 21, 2024
26 changes: 26 additions & 0 deletions tutorials/README.md
@@ -0,0 +1,26 @@
# MCT Tutorials
Explore the Model Compression Toolkit (MCT) through our tutorials,
covering compression techniques for Keras and PyTorch models.
Access interactive Jupyter notebooks for hands-on learning.


## Getting started
Learn how to quickly quantize pre-trained models using MCT's post-training quantization technique for both Keras and PyTorch models.
- [Keras MobileNetV2 post-training quantization](notebooks/keras/ptq/example_keras_imagenet.ipynb)
- [PyTorch MobileNetV2 post-training quantization](notebooks/pytorch/ptq/example_pytorch_quantization_mnist.ipynb)
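
To give a feel for what these notebooks walk through, below is a minimal post-training quantization sketch. It assumes the `model_compression_toolkit` package (imported as `mct`) and the `mct.ptq.keras_post_training_quantization` / `mct.ptq.pytorch_post_training_quantization` entry points of recent MCT releases; the random calibration data is only a placeholder, so treat the notebooks above as the authoritative, version-matched reference.

```python
# Minimal PTQ sketch (assumed recent MCT API; see the notebooks for exact, tested usage).
import numpy as np
import model_compression_toolkit as mct
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2

def representative_data_gen():
    # Yields a few calibration batches; random tensors stand in for real preprocessed images.
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

float_model = MobileNetV2(weights='imagenet')

# Post-training quantization returns the quantized model plus quantization metadata.
quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
    float_model, representative_data_gen)

# The PyTorch flow is analogous, e.g.
# mct.ptq.pytorch_post_training_quantization(torch_model, representative_data_gen).
```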

## MCT Features
This set of tutorials covers all the quantization tools provided by MCT.
The notebooks in this section demonstrate how to configure and run simple and advanced post-training quantization methods.
This includes fine-tuning PTQ (Post-Training Quantization) configurations, exporting models,
and exploring advanced compression techniques.
These techniques are essential for further optimizing models and achieving superior performance in deployment scenarios.
- [MCT notebooks](notebooks/MCT_notebooks.md)
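
As a rough illustration of the configuration and export steps mentioned above, the sketch below continues the Getting Started example. The `mct.core.CoreConfig` / `mct.core.QuantizationConfig` and `mct.exporter.keras_export_model` names follow recent MCT releases and are assumptions here; the linked notebooks show the exact parameters each version supports.

```python
# Sketch: tuning the PTQ configuration and exporting the quantized model
# (assumed recent MCT API; float_model and representative_data_gen as in the Getting Started sketch).
import model_compression_toolkit as mct

core_config = mct.core.CoreConfig(
    quantization_config=mct.core.QuantizationConfig(
        activation_error_method=mct.core.QuantizationErrorMethod.MSE,  # threshold search criterion
        weights_error_method=mct.core.QuantizationErrorMethod.MSE,
        weights_bias_correction=True))

quantized_model, _ = mct.ptq.keras_post_training_quantization(
    float_model, representative_data_gen, core_config=core_config)

# Export the quantized model to a standalone file for deployment or further conversion.
mct.exporter.keras_export_model(model=quantized_model, save_model_path='./quantized_model.keras')
```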

## Quantization for Sony-IMX500 deployment

This section provides several guides on quantizing pre-trained models to meet specific constraints for deployment on the
[Sony-IMX500](https://developer.sony.com/imx500/) processing platform.
We will cover various tasks and demonstrate the necessary steps to achieve efficient quantization for optimal
deployment performance.

Collaborator (review comment): This is not a tutorial but an introduction; the future tense here is mistaken.
- [IMX500 notebooks](notebooks/IMX500_notebooks.md)
12 changes: 12 additions & 0 deletions tutorials/notebooks/IMX500_notebooks.md
@@ -0,0 +1,12 @@
# Sony-IMX500 Notebooks

Here we provide examples of quantizing pre-trained models for deployment on the Sony-IMX500 processing platform.
We will cover various tasks and demonstrate the necessary steps to achieve efficient quantization for optimal
deployment performance.
Collaborator (review comment): highlight that the exported models from these tutorials are ready to be deployed on IMX500! (plug-and-play)


| Task | Model | Source Repository | Notebook |
|-----------------------------------------------------------------|----------------|---------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
| Classification | MobileNetV2 | [Keras Applications](https://keras.io/api/applications/) | [Keras notebook](model_optimization/tutorials/notebooks/keras/ptq/example_keras_imagenet.ipynb) |
| Object Detection | YOLOv8n | [Ultralytics](https://github.com/ultralytics/ultralytics) | [Keras notebook](model_optimization/tutorials/notebooks/keras/ptq/keras_yolov8n_for_imx500.ipynb) |
| Semantic Segmentation | DeepLabV3-Plus | [bonlime's repo](https://github.com/bonlime/keras-deeplab-v3-plus) | [Keras notebook](model_optimization/tutorials/notebooks/keras/ptq/keras_deeplabv3plus_for_imx500.ipynb) |
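
The notebooks above share the same basic recipe: load the pre-trained model from its source repository, calibrate MCT's post-training quantization with task-representative data, and export the quantized model for IMX500 deployment. The sketch below outlines that recipe for the classification row; the API names, calibration folder, and preprocessing are illustrative assumptions, so follow the notebook for the exact steps.

```python
# Illustrative outline of the IMX500 notebook flow for MobileNetV2 classification
# (assumed recent MCT API; calibration directory and preprocessing are placeholders).
import tensorflow as tf
import model_compression_toolkit as mct
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

CALIBRATION_DIR = './calibration_images'  # hypothetical folder of a few hundred representative images

calib_ds = tf.keras.utils.image_dataset_from_directory(
    CALIBRATION_DIR, labels=None, image_size=(224, 224), batch_size=1, shuffle=True)

def representative_data_gen():
    for images in calib_ds.take(32):
        yield [preprocess_input(tf.cast(images, tf.float32)).numpy()]

float_model = MobileNetV2(weights='imagenet')
quantized_model, _ = mct.ptq.keras_post_training_quantization(float_model, representative_data_gen)

# Export the quantized model; per these tutorials the exported model is intended for
# IMX500 deployment (see the notebooks for the deployment steps).
mct.exporter.keras_export_model(model=quantized_model, save_model_path='./mobilenetv2_imx500.keras')
```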

@@ -1,30 +1,11 @@
# Tutorials

## Table of Contents
- [Introduction](#introduction)
- [Keras Tutorials](#keras-tutorials)
- [Post-Training Quantization](#keras-ptq)
- [Gradient-Based Post-Training Quantization](#keras-gptq)
- [Quantization-Aware Training](#keras-qat)
- [Structured Pruning](#keras-pruning)
- [Export Quantized Models](#keras-export)
- [Debug Tools](#keras-debug)
- [Pytorch Tutorials](#pytorch-tutorials)
- [Quick-Start with Torchvision](#pytorch-quickstart-torchvision)
- [Post-Training Quantization](#pytorch-ptq)
- [Quantization-Aware Training](#pytorch-qat)
- [Structured Pruning](#pytorch-pruning)
- [Data Generation](#pytorch-data-generation)
- [Export Quantized Models](#pytorch-export)

## Introduction
Dive into the Model-Compression-Toolkit (MCT) with our collection of tutorials, covering a wide
range of compression techniques for Keras and Pytorch models. We provide
both Python scripts and interactive Jupyter notebooks for an
engaging and hands-on experience.


## Keras Tutorials
# MCT Features
This tutorial set introduces the various quantization tools offered by MCT.
The notebooks included here illustrate the setup and usage of both basic and advanced post-training quantization methods.
You'll learn how to refine PTQ (Post-Training Quantization) settings, export models, and explore advanced compression
techniques such as GPTQ (Gradient-Based Post-Training Quantization), mixed-precision quantization, and more.
These techniques are essential for further optimizing models and achieving superior performance in deployment scenarios.
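
For readers who want a concrete taste of GPTQ before opening the notebooks, here is a rough, self-contained sketch. The `mct.gptq.get_keras_gptq_config` and `mct.gptq.keras_gradient_post_training_quantization` names are taken from recent MCT releases and may differ across versions, so the GPTQ notebook remains the authoritative reference.

```python
# Rough GPTQ sketch (assumed recent MCT API; random calibration data keeps it self-contained).
import numpy as np
import model_compression_toolkit as mct
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2

def representative_data_gen():
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

float_model = MobileNetV2(weights='imagenet')

# GPTQ fine-tunes the quantized weights with a gradient-based optimization over the
# representative data, typically recovering accuracy that plain PTQ loses.
gptq_config = mct.gptq.get_keras_gptq_config(n_epochs=5)
quantized_model, quantization_info = mct.gptq.keras_gradient_post_training_quantization(
    float_model, representative_data_gen, gptq_config=gptq_config)
```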

### Keras Tutorials

<details id="keras-ptq">
<summary>Post-Training Quantization (PTQ)</summary>
@@ -85,7 +66,7 @@ engaging and hands-on experience.

</details>

## Pytorch Tutorials
### Pytorch Tutorials


<details id="pytorch-quickstart-torchvision">
@@ -117,6 +98,7 @@ engaging and hands-on experience.
| Tutorial | Included Features |
|-----------------------------------------------------------------------------------|--------------|
| [QAT on MNIST](pytorch/qat/example_pytorch_qat.py) | &#x2705; QAT |
</details>

</details>

@@ -148,4 +130,3 @@ engaging and hands-on experience.
| [Exporter Usage](pytorch/export/example_pytorch_export.ipynb) | &#x2705; Export |

</details>

129 changes: 0 additions & 129 deletions tutorials/notebooks/keras/gptq/example_keras_mobilenet_gptq.py

This file was deleted.
