Commit: v0.6.0

See https://github.com/quic/ai-hub-models/releases/v0.6.0 for the changelog.

Signed-off-by: QAIHM Team <[email protected]>
qaihm-bot committed May 20, 2024
1 parent c2116ba commit 49f7592
Showing 475 changed files with 22,303 additions and 6,273 deletions.
2 changes: 2 additions & 0 deletions .pre-commit-config.yaml
@@ -37,6 +37,7 @@ repos:
        args: [--allow-multiple-documents]
      - id: trailing-whitespace
        exclude: '\.diff$'
+       args: [--markdown-linebreak-ext=md]
      - id: check-added-large-files
        args: ['--maxkb=1024']
      - id: check-merge-conflict
@@ -56,6 +57,7 @@ repos:
    rev: v0.9.0.6
    hooks:
      - id: shellcheck
+       exclude: '\.yml$'
  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
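
These hooks run on every commit once installed. To exercise the updated configuration locally, the standard pre-commit workflow applies (stock pre-commit commands, nothing repo-specific):

```
pip install pre-commit
pre-commit install
pre-commit run --all-files
```
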
31 changes: 26 additions & 5 deletions README.md
@@ -12,12 +12,21 @@ memory etc.) and ready to deploy on Qualcomm® devices.
* Access the models through [Hugging Face](https://huggingface.co/qualcomm).
* [Sign up](https://myaccount.qualcomm.com/signup) to run these models on hosted Qualcomm® devices.

+Supported **Python package host machine** operating systems:
+- Linux (x86, ARM)
+- Windows (x86)
+- Windows (ARM: only via x86 Python, not ARM-native Python)
+- MacOS (x86, ARM)

Supported runtimes
* [TensorFlow Lite](https://www.tensorflow.org/lite)
* [Qualcomm AI Engine Direct](https://www.qualcomm.com/developer/artificial-intelligence#overview)
* [ONNX](https://onnxruntime.ai/docs/execution-providers/QNN-ExecutionProvider.html)

-Supported operating systems:
-* Android 11+
+Models can be deployed on:
+* Android
+* Windows
+* Linux

Supported compute units
* CPU, GPU, NPU (includes [Hexagon DSP](https://developer.qualcomm.com/software/hexagon-dsp-sdk/dsp-processor), [HTP](https://developer.qualcomm.com/hardware/qualcomm-innovators-development-kit/ai-resources-overview/ai-hardware-cores-accelerators))
Expand All @@ -28,12 +37,13 @@ Supported precision

Supported chipsets
* [Snapdragon 845](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-845-mobile-platform), [Snapdragon 855/855+](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-855-mobile-platform), [Snapdragon 865/865+](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-865-plus-5g-mobile-platform), [Snapdragon 888/888+](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-888-5g-mobile-platform)
-* [Snapdragon 8 Gen 1](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-gen-1-mobile-platform), [Snapdragon 8 Gen 2](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-gen-2-mobile-platform), [Snapdragon 8 Gen 3](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-gen-3-mobile-platform)
+* [Snapdragon 8 Gen 1](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-gen-1-mobile-platform), [Snapdragon 8 Gen 2](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-gen-2-mobile-platform), [Snapdragon 8 Gen 3](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-gen-3-mobile-platform), [Snapdragon X Elite](https://www.qualcomm.com/products/mobile/snapdragon/pcs-and-tablets/snapdragon-x-elite)

Select supported devices
* Samsung Galaxy S21 Series, Galaxy S22 Series, Galaxy S23 Series, Galaxy S24 Series
* Xiaomi 12, 13
* Google Pixel 3, 4, 5
+* Snapdragon X Elite CRD (Compute Reference Device)

and many more.

@@ -261,6 +271,8 @@ Qualcomm® AI Hub Models is licensed under BSD-3. See the [LICENSE file](../LICE
| | | | |
| **Image Classification**
| [ConvNext-Tiny](https://aihub.qualcomm.com/models/convnext_tiny) | [qai_hub_models.models.convnext_tiny](qai_hub_models/models/convnext_tiny/README.md) | ✔️ | ✔️ | ✔️
+| [ConvNext-Tiny-w8a16-Quantized](qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md) | [qai_hub_models.models.convnext_tiny_w8a16_quantized](qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md) | ✔️ | ✔️ | ✔️
+| [ConvNext-Tiny-w8a8-Quantized](qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md) | [qai_hub_models.models.convnext_tiny_w8a8_quantized](qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md) | ✔️ | ✔️ | ✔️
| [DenseNet-121](https://aihub.qualcomm.com/models/densenet121) | [qai_hub_models.models.densenet121](qai_hub_models/models/densenet121/README.md) | ✔️ | ✔️ | ✔️
| [EfficientNet-B0](https://aihub.qualcomm.com/models/efficientnet_b0) | [qai_hub_models.models.efficientnet_b0](qai_hub_models/models/efficientnet_b0/README.md) | ✔️ | ✔️ | ✔️
| [GoogLeNet](https://aihub.qualcomm.com/models/googlenet) | [qai_hub_models.models.googlenet](qai_hub_models/models/googlenet/README.md) | ✔️ | ✔️ | ✔️
@@ -321,7 +333,8 @@ Qualcomm® AI Hub Models is licensed under BSD-3. See the [LICENSE file](../LICE
| [DeepLabV3-Plus-MobileNet](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet) | [qai_hub_models.models.deeplabv3_plus_mobilenet](qai_hub_models/models/deeplabv3_plus_mobilenet/README.md) | ✔️ | ✔️ | ✔️
| [DeepLabV3-Plus-MobileNet-Quantized](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet_quantized) | [qai_hub_models.models.deeplabv3_plus_mobilenet_quantized](qai_hub_models/models/deeplabv3_plus_mobilenet_quantized/README.md) | ✔️ | ✔️ | ✔️
| [DeepLabV3-ResNet50](https://aihub.qualcomm.com/models/deeplabv3_resnet50) | [qai_hub_models.models.deeplabv3_resnet50](qai_hub_models/models/deeplabv3_resnet50/README.md) | ✔️ | ✔️ | ✔️
-| [FCN_ResNet50](https://aihub.qualcomm.com/models/fcn_resnet50) | [qai_hub_models.models.fcn_resnet50](qai_hub_models/models/fcn_resnet50/README.md) | ✔️ | ✔️ | ✔️
+| [FCN-ResNet50](https://aihub.qualcomm.com/models/fcn_resnet50) | [qai_hub_models.models.fcn_resnet50](qai_hub_models/models/fcn_resnet50/README.md) | ✔️ | ✔️ | ✔️
+| [FCN-ResNet50-Quantized](https://aihub.qualcomm.com/models/fcn_resnet50_quantized) | [qai_hub_models.models.fcn_resnet50_quantized](qai_hub_models/models/fcn_resnet50_quantized/README.md) | ✔️ | ✔️ | ✔️
| [FFNet-122NS-LowRes](https://aihub.qualcomm.com/models/ffnet_122ns_lowres) | [qai_hub_models.models.ffnet_122ns_lowres](qai_hub_models/models/ffnet_122ns_lowres/README.md) | ✔️ | ✔️ | ✔️
| [FFNet-40S](https://aihub.qualcomm.com/models/ffnet_40s) | [qai_hub_models.models.ffnet_40s](qai_hub_models/models/ffnet_40s/README.md) | ✔️ | ✔️ | ✔️
| [FFNet-40S-Quantized](https://aihub.qualcomm.com/models/ffnet_40s_quantized) | [qai_hub_models.models.ffnet_40s_quantized](qai_hub_models/models/ffnet_40s_quantized/README.md) | ✔️ | ✔️ | ✔️
@@ -347,6 +360,8 @@ Qualcomm® AI Hub Models is licensed under BSD-3. See the [LICENSE file](../LICE
| [MediaPipe-Hand-Detection](https://aihub.qualcomm.com/models/mediapipe_hand) | [qai_hub_models.models.mediapipe_hand](qai_hub_models/models/mediapipe_hand/README.md) | ✔️ | ✔️ | ✔️
| [YOLOv8-Detection](https://aihub.qualcomm.com/models/yolov8_det) | [qai_hub_models.models.yolov8_det](qai_hub_models/models/yolov8_det/README.md) | ✔️ | ✔️ | ✔️
| [YOLOv8-Detection-Quantized](https://aihub.qualcomm.com/models/yolov8_det_quantized) | [qai_hub_models.models.yolov8_det_quantized](qai_hub_models/models/yolov8_det_quantized/README.md) | ✔️ | ✔️ | ✔️
+| [Yolo-NAS](https://aihub.qualcomm.com/models/yolonas) | [qai_hub_models.models.yolonas](qai_hub_models/models/yolonas/README.md) | ✔️ | ✔️ | ✔️
+| [Yolo-NAS-Quantized](https://aihub.qualcomm.com/models/yolonas_quantized) | [qai_hub_models.models.yolonas_quantized](qai_hub_models/models/yolonas_quantized/README.md) | ✔️ | ✔️ | ✔️
| [Yolo-v6](https://aihub.qualcomm.com/models/yolov6) | [qai_hub_models.models.yolov6](qai_hub_models/models/yolov6/README.md) | ✔️ | ✔️ | ✔️
| [Yolo-v7](https://aihub.qualcomm.com/models/yolov7) | [qai_hub_models.models.yolov7](qai_hub_models/models/yolov7/README.md) | ✔️ | ✔️ | ✔️
| [Yolo-v7-Quantized](https://aihub.qualcomm.com/models/yolov7_quantized) | [qai_hub_models.models.yolov7_quantized](qai_hub_models/models/yolov7_quantized/README.md) | ✔️ | ✔️ | ✔️
@@ -356,6 +371,10 @@ Qualcomm® AI Hub Models is licensed under BSD-3. See the [LICENSE file](../LICE
| [LiteHRNet](https://aihub.qualcomm.com/models/litehrnet) | [qai_hub_models.models.litehrnet](qai_hub_models/models/litehrnet/README.md) | ✔️ | ✔️ | ✔️
| [MediaPipe-Pose-Estimation](https://aihub.qualcomm.com/models/mediapipe_pose) | [qai_hub_models.models.mediapipe_pose](qai_hub_models/models/mediapipe_pose/README.md) | ✔️ | ✔️ | ✔️
| [OpenPose](https://aihub.qualcomm.com/models/openpose) | [qai_hub_models.models.openpose](qai_hub_models/models/openpose/README.md) | ✔️ | ✔️ | ✔️
+| [Posenet-Mobilenet](qai_hub_models/models/posenet_mobilenet/README.md) | [qai_hub_models.models.posenet_mobilenet](qai_hub_models/models/posenet_mobilenet/README.md) | ✔️ | ✔️ | ✔️
+| | | | |
+| **Depth Estimation**
+| [Midas-V2](qai_hub_models/models/midas/README.md) | [qai_hub_models.models.midas](qai_hub_models/models/midas/README.md) | ✔️ | ✔️ | ✔️

### Audio

@@ -386,7 +405,9 @@ Qualcomm® AI Hub Models is licensed under BSD-3. See the [LICENSE file](../LICE
| | | | |
| **Image Generation**
| [ControlNet](https://aihub.qualcomm.com/models/controlnet_quantized) | [qai_hub_models.models.controlnet_quantized](qai_hub_models/models/controlnet_quantized/README.md) | ✔️ | ✔️ | ✔️
-| [Stable-Diffusion](https://aihub.qualcomm.com/models/stable_diffusion_quantized) | [qai_hub_models.models.stable_diffusion_quantized](qai_hub_models/models/stable_diffusion_quantized/README.md) | ✔️ | ✔️ | ✔️
+| [Riffusion](qai_hub_models/models/riffusion_quantized/README.md) | [qai_hub_models.models.riffusion_quantized](qai_hub_models/models/riffusion_quantized/README.md) | ✔️ | ✔️ | ✔️
+| [Stable-Diffusion-v1.5](https://aihub.qualcomm.com/models/stable_diffusion_v1_5_quantized) | [qai_hub_models.models.stable_diffusion_v1_5_quantized](qai_hub_models/models/stable_diffusion_v1_5_quantized/README.md) | ✔️ | ✔️ | ✔️
+| [Stable-Diffusion-v2.1](qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md) | [qai_hub_models.models.stable_diffusion_v2_1_quantized](qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md) | ✔️ | ✔️ | ✔️
| | | | |
| **Text Generation**
| [Baichuan-7B](https://aihub.qualcomm.com/models/baichuan_7b_quantized) | [qai_hub_models.models.baichuan_7b_quantized](qai_hub_models/models/baichuan_7b_quantized/README.md) | ✔️ | ✔️ | ✔️
2 changes: 1 addition & 1 deletion apps/android/ImageClassification/README.md
@@ -85,5 +85,5 @@ Also, you can use AI-HUB Model name as mentioned in models directory, to directl
You can also select a model from the list menu shown while running build_apk.py, without specifying the model name and path.

```
-python build_apk.py -q "<QNN_SDK_PATH>"
+python build_apk.py -q "<QNN_SDK_PATH>"
```
2 changes: 1 addition & 1 deletion qai_hub_models/_version.py
@@ -2,4 +2,4 @@
# Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
# ---------------------------------------------------------------------
__version__ = "0.5.1"
__version__ = "0.6.0"
25 changes: 11 additions & 14 deletions qai_hub_models/datasets/bsd300.py
@@ -32,39 +32,36 @@ class BSD300Dataset(BaseDataset):

    def __init__(self, scaling_factor=4):
        self.bsd_path = BSD300_ASSET.path(extracted=True)
-       self.images_path = os.path.join(self.bsd_path, "images/train")
+       self.images_path = self.bsd_path / "images" / "train"
        BaseDataset.__init__(self, self.bsd_path)
        self.scaling_factor = scaling_factor

    def _validate_data(self) -> bool:
-       images_path = os.path.join(self.dataset_path, "images/train")
-
        # Check image path exists
-       if not os.path.exists(images_path):
+       if not self.images_path.exists():
            return False

        # Ensure the correct number of images are there
-       files = os.listdir(images_path)
-       images = [f for f in files if ".jpg" in f]
+       images = [f for f in self.images_path.iterdir() if ".jpg" in f.name]
        if len(images) != DATASET_LENGTH:
            return False

        return True

    def _prepare_data(self):
        # Rename images to be more friendly to enumeration
-       directory = os.path.join(self.dataset_path, "images/train")
-       files = os.listdir(directory)
-       for i, filename in enumerate(files):
-           if filename.endswith(".jpg"):
+       # directory = os.path.join(self.dataset_path, "images/train")
+       # files = os.listdir(directory)
+       for i, filepath in enumerate(self.images_path.iterdir()):
+           if filepath.name.endswith(".jpg"):
                # Open the image and convert it to png
                try:
-                   with Image.open(os.path.join(directory, filename)) as img:
-                       img.save(os.path.join(directory, f"img_{i + 1:03d}_HR.jpg"))
+                   with Image.open(filepath) as img:
+                       img.save(self.images_path / f"img_{i + 1:03d}_HR.jpg")
                    # delete the old image
-                   os.remove(os.path.join(directory, filename))
+                   os.remove(filepath)
                except ValueError:
-                   print(f"File {filename} does not exist!")
+                   print(f"File {filepath} does not exist!")

    def __len__(self):
        return DATASET_LENGTH
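
A minimal usage sketch of the migrated class (names as in this diff; the item format returned by `__getitem__` is defined elsewhere in the file, so only construction and length are shown):

```
from qai_hub_models.datasets.bsd300 import BSD300Dataset

# Construction triggers BaseDataset.download_data(), which fetches the
# BSD300 asset and runs _prepare_data() to rename images into the
# enumerable img_XXX_HR.jpg scheme.
dataset = BSD300Dataset(scaling_factor=4)
print(len(dataset))  # fixed at DATASET_LENGTH
```
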
11 changes: 6 additions & 5 deletions qai_hub_models/datasets/common.py
@@ -7,6 +7,7 @@
import os
import shutil
from abc import ABC, abstractmethod
+from pathlib import Path
from typing import final

from torch.utils.data import Dataset
@@ -17,17 +18,17 @@ class BaseDataset(Dataset, ABC):
    Base class to be extended by Datasets used in this repo for quantizing models.
    """

-   def __init__(self, dataset_path: str):
-       self.dataset_path = dataset_path
+   def __init__(self, dataset_path: str | Path):
+       self.dataset_path = Path(dataset_path)
        self.download_data()

    @final
    def download_data(self) -> None:
        if self._validate_data():
            return
-       if os.path.exists(self.dataset_path):
+       if self.dataset_path.exists():
            # Data is corrupted, delete and re-download
-           if os.path.isdir(self.dataset_path):
+           if self.dataset_path.is_dir():
                shutil.rmtree(self.dataset_path)
            else:
                os.remove(self.dataset_path)
@@ -49,4 +50,4 @@ def _validate_data(self) -> bool:
"""
Validates data downloaded on disk. By default just checks that folder exists.
"""
return os.path.exists(self.dataset_path)
return self.dataset_path.exists()
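
The pattern this file establishes (accept `str | Path`, normalize to `Path` once in `__init__`, then use `Path` methods throughout) is what the dataset classes above migrate toward. Below is a minimal, self-contained sketch of the same idiom with illustrative names that are not from this repo; note that the `str | Path` annotation requires Python 3.10+ unless `from __future__ import annotations` is in effect:

```
from __future__ import annotations

import shutil
from pathlib import Path


class CachedFolder:
    """Toy example of normalizing str | Path input to a single Path attribute."""

    def __init__(self, root: str | Path):
        self.root = Path(root)  # accept either type, store exactly one

    def reset(self) -> None:
        # Path objects interoperate directly with shutil/os APIs.
        if self.root.is_dir():
            shutil.rmtree(self.root)
        self.root.mkdir(parents=True, exist_ok=True)
```
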
94 changes: 94 additions & 0 deletions qai_hub_models/datasets/imagenet.py
@@ -0,0 +1,94 @@
# ---------------------------------------------------------------------
# Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
# ---------------------------------------------------------------------
import os
import subprocess

from torchvision.datasets import ImageNet

from qai_hub_models.datasets.common import BaseDataset
from qai_hub_models.utils.asset_loaders import CachedWebDatasetAsset
from qai_hub_models.utils.image_processing import IMAGENET_TRANSFORM

IMAGENET_FOLDER_NAME = "imagenet"
IMAGENET_VERSION = 1

IMAGENET_ASSET = CachedWebDatasetAsset(
    "https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar",
    IMAGENET_FOLDER_NAME,
    IMAGENET_VERSION,
    "ILSVRC2012_img_val.tar",
)
DEVKIT_NAME = "ILSVRC2012_devkit_t12.tar.gz"
DEVKIT_ASSET = CachedWebDatasetAsset(
    f"https://image-net.org/data/ILSVRC/2012/{DEVKIT_NAME}",
    IMAGENET_FOLDER_NAME,
    IMAGENET_VERSION,
    DEVKIT_NAME,
)
VAL_PREP_ASSET = CachedWebDatasetAsset(
    "https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh",
    IMAGENET_FOLDER_NAME,
    IMAGENET_VERSION,
    "valprep.sh",
)


class ImagenetDataset(BaseDataset, ImageNet):
    """
    Wrapper class for using the Imagenet validation dataset: https://www.image-net.org/
    """

    def __init__(self):
        """
        A direct download link for the validation set is not available.
        Users should download the validation dataset manually and pass the local filepath
        as an argument here. After this is done once, it will be symlinked to an
        internal location and doesn't need to be passed again.

        input_data_path: Local filepath to imagenet validation set.
        """
        BaseDataset.__init__(self, IMAGENET_ASSET.path().parent)
        ImageNet.__init__(
            self,
            root=self.dataset_path,
            split="val",
            transform=IMAGENET_TRANSFORM,
        )

    def _validate_data(self) -> bool:
        val_path = self.dataset_path / "val"
        if not (self.dataset_path / DEVKIT_NAME).exists():
            print("Missing Devkit.")
            return False

        subdirs = [filepath for filepath in val_path.iterdir() if filepath.is_dir()]
        if len(subdirs) != 1000:
            print(f"Expected 1000 subdirectories but got {len(subdirs)}")
            return False

        total_images = 0
        for subdir in subdirs:
            total_images += len(list(subdir.iterdir()))

        if total_images != 50000:
            print(f"Expected 50000 images but got {total_images}")
            return False
        return True

    def _download_data(self) -> None:
        val_path = self.dataset_path / "val"
        os.makedirs(val_path, exist_ok=True)

        IMAGENET_ASSET.fetch(extract=True)
        DEVKIT_ASSET.fetch()
        VAL_PREP_ASSET.fetch()

        os.rename(VAL_PREP_ASSET.path(), val_path / VAL_PREP_ASSET.path().name)
        for filepath in self.dataset_path.iterdir():
            if filepath.name.endswith(".JPEG"):
                os.rename(filepath, val_path / filepath.name)

        print("Moving images to appropriate class folder. This may take a few minutes.")
        subprocess.call(f"sh {VAL_PREP_ASSET.path().name}", shell=True, cwd=val_path)