diff --git a/qai_hub_models/models/aotgan/README.md b/qai_hub_models/models/aotgan/README.md index 0b577a8a..a842f7c6 100644 --- a/qai_hub_models/models/aotgan/README.md +++ b/qai_hub_models/models/aotgan/README.md @@ -5,7 +5,7 @@ AOT-GAN is a machine learning model that can erase and in-paint parts of a given input image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of AOT-GAN found [here](https://github.com/researchmm/AOT-GAN-for-Inpainting). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/aotgan). diff --git a/qai_hub_models/models/baichuan2_7b_quantized/README.md b/qai_hub_models/models/baichuan2_7b_quantized/README.md index b38a0ff0..068d7acd 100644 --- a/qai_hub_models/models/baichuan2_7b_quantized/README.md +++ b/qai_hub_models/models/baichuan2_7b_quantized/README.md @@ -5,7 +5,7 @@ Baichuan2-7B is a family of LLMs. It achieves state-of-the-art performance for its size on standard authoritative Chinese and English benchmarks (C-EVAL/MMLU). It uses 4-bit weights and 16-bit activations, making it suitable for on-device deployment. For the prompt and output length specified below, the time to first token is Baichuan2-PromptProcessor-Quantized's latency and the average time per additional token is Baichuan2-TokenGenerator-Quantized's latency. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Baichuan2-7B found [here](https://github.com/baichuan-inc/Baichuan-7B/). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/baichuan2_7b_quantized). diff --git a/qai_hub_models/models/beit/README.md b/qai_hub_models/models/beit/README.md index 714e53be..2d5df4ef 100644 --- a/qai_hub_models/models/beit/README.md +++ b/qai_hub_models/models/beit/README.md @@ -5,7 +5,7 @@ Beit is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Beit found [here](https://github.com/microsoft/unilm/tree/master/beit). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/beit). diff --git a/qai_hub_models/models/conditional_detr_resnet50/README.md b/qai_hub_models/models/conditional_detr_resnet50/README.md index be018e21..fdeb381b 100644 --- a/qai_hub_models/models/conditional_detr_resnet50/README.md +++ b/qai_hub_models/models/conditional_detr_resnet50/README.md @@ -5,7 +5,7 @@ DETR is a machine learning model that can detect objects (trained on the COCO dataset). -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Conditional-DETR-ResNet50 found [here](https://github.com/huggingface/transformers/tree/main/src/transformers/models/conditional_detr). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/conditional_detr_resnet50). diff --git a/qai_hub_models/models/controlnet_quantized/README.md b/qai_hub_models/models/controlnet_quantized/README.md index 8370f32c..a27fbc56 100644 --- a/qai_hub_models/models/controlnet_quantized/README.md +++ b/qai_hub_models/models/controlnet_quantized/README.md @@ -5,7 +5,7 @@ On-device, high-resolution image synthesis from text and image prompts. ControlNet guides Stable Diffusion with a provided input image to generate accurate images from a given input prompt. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ControlNet found [here](https://github.com/lllyasviel/ControlNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/controlnet_quantized). diff --git a/qai_hub_models/models/convnext_base/README.md b/qai_hub_models/models/convnext_base/README.md index 26a794d1..c09feedf 100644 --- a/qai_hub_models/models/convnext_base/README.md +++ b/qai_hub_models/models/convnext_base/README.md @@ -5,7 +5,7 @@ ConvNextBase is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ConvNext-Base found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/convnext_base). diff --git a/qai_hub_models/models/convnext_tiny/README.md b/qai_hub_models/models/convnext_tiny/README.md index a090cc3f..43d14188 100644 --- a/qai_hub_models/models/convnext_tiny/README.md +++ b/qai_hub_models/models/convnext_tiny/README.md @@ -5,7 +5,7 @@ ConvNextTiny is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ConvNext-Tiny found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/convnext_tiny). diff --git a/qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md b/qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md index d7c7199b..0bbbedff 100644 --- a/qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md +++ b/qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md @@ -5,7 +5,7 @@ ConvNextTiny is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
-{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ConvNext-Tiny-w8a16-Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/convnext_tiny_w8a16_quantized). diff --git a/qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md b/qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md index bb667ecb..baf8d30d 100644 --- a/qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md +++ b/qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md @@ -5,7 +5,7 @@ ConvNextTiny is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ConvNext-Tiny-w8a8-Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/convnext_tiny_w8a8_quantized). diff --git a/qai_hub_models/models/ddrnet23_slim/README.md b/qai_hub_models/models/ddrnet23_slim/README.md index 26acb767..d5fb3a37 100644 --- a/qai_hub_models/models/ddrnet23_slim/README.md +++ b/qai_hub_models/models/ddrnet23_slim/README.md @@ -5,7 +5,7 @@ DDRNet23Slim is a machine learning model that segments an image into semantic classes, specifically designed for road-based scenes. It is designed for self-driving car applications. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DDRNet23-Slim found [here](https://github.com/chenjun2hao/DDRNet.pytorch). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ddrnet23_slim). diff --git a/qai_hub_models/models/deeplabv3_plus_mobilenet/README.md b/qai_hub_models/models/deeplabv3_plus_mobilenet/README.md index 83612f64..a2a68b1c 100644 --- a/qai_hub_models/models/deeplabv3_plus_mobilenet/README.md +++ b/qai_hub_models/models/deeplabv3_plus_mobilenet/README.md @@ -5,7 +5,7 @@ DeepLabV3 is designed for semantic segmentation at multiple scales, trained on various datasets. It uses MobileNet as a backbone. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DeepLabV3-Plus-MobileNet found [here](https://github.com/jfzhang95/pytorch-deeplab-xception). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet).
diff --git a/qai_hub_models/models/deeplabv3_plus_mobilenet_quantized/README.md b/qai_hub_models/models/deeplabv3_plus_mobilenet_quantized/README.md index 5d888797..a85b92fb 100644 --- a/qai_hub_models/models/deeplabv3_plus_mobilenet_quantized/README.md +++ b/qai_hub_models/models/deeplabv3_plus_mobilenet_quantized/README.md @@ -5,7 +5,7 @@ DeepLabV3 Quantized is designed for semantic segmentation at multiple scales, trained on various datasets. It uses MobileNet as a backbone. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DeepLabV3-Plus-MobileNet-Quantized found [here](https://github.com/jfzhang95/pytorch-deeplab-xception). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet_quantized). diff --git a/qai_hub_models/models/deeplabv3_resnet50/README.md b/qai_hub_models/models/deeplabv3_resnet50/README.md index 7852ddd8..0703473a 100644 --- a/qai_hub_models/models/deeplabv3_resnet50/README.md +++ b/qai_hub_models/models/deeplabv3_resnet50/README.md @@ -5,7 +5,7 @@ DeepLabV3 is designed for semantic segmentation at multiple scales, trained on the COCO dataset. It uses ResNet50 as a backbone. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DeepLabV3-ResNet50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/deeplabv3.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/deeplabv3_resnet50). diff --git a/qai_hub_models/models/densenet121/README.md b/qai_hub_models/models/densenet121/README.md index dd2c1346..b16138e8 100644 --- a/qai_hub_models/models/densenet121/README.md +++ b/qai_hub_models/models/densenet121/README.md @@ -5,7 +5,7 @@ Densenet is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DenseNet-121 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/densenet121). diff --git a/qai_hub_models/models/densenet121_quantized/README.md b/qai_hub_models/models/densenet121_quantized/README.md index e9abe17b..abc9ba45 100644 --- a/qai_hub_models/models/densenet121_quantized/README.md +++ b/qai_hub_models/models/densenet121_quantized/README.md @@ -5,7 +5,7 @@ Densenet is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DenseNet-121-Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/densenet121_quantized). diff --git a/qai_hub_models/models/detr_resnet101/README.md b/qai_hub_models/models/detr_resnet101/README.md index d56ce509..f94ba79d 100644 --- a/qai_hub_models/models/detr_resnet101/README.md +++ b/qai_hub_models/models/detr_resnet101/README.md @@ -5,7 +5,7 @@ DETR is a machine learning model that can detect objects (trained on the COCO dataset). -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DETR-ResNet101 found [here](https://github.com/facebookresearch/detr). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/detr_resnet101). diff --git a/qai_hub_models/models/detr_resnet101_dc5/README.md b/qai_hub_models/models/detr_resnet101_dc5/README.md index 01764933..f19a8cd3 100644 --- a/qai_hub_models/models/detr_resnet101_dc5/README.md +++ b/qai_hub_models/models/detr_resnet101_dc5/README.md @@ -5,7 +5,7 @@ DETR is a machine learning model that can detect objects (trained on the COCO dataset). -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DETR-ResNet101-DC5 found [here](https://github.com/facebookresearch/detr). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/detr_resnet101_dc5). diff --git a/qai_hub_models/models/detr_resnet50/README.md b/qai_hub_models/models/detr_resnet50/README.md index 37c8d678..1208cc28 100644 --- a/qai_hub_models/models/detr_resnet50/README.md +++ b/qai_hub_models/models/detr_resnet50/README.md @@ -5,7 +5,7 @@ DETR is a machine learning model that can detect objects (trained on the COCO dataset). -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DETR-ResNet50 found [here](https://github.com/facebookresearch/detr). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/detr_resnet50). diff --git a/qai_hub_models/models/detr_resnet50_dc5/README.md b/qai_hub_models/models/detr_resnet50_dc5/README.md index f10d45e8..a4e08939 100644 --- a/qai_hub_models/models/detr_resnet50_dc5/README.md +++ b/qai_hub_models/models/detr_resnet50_dc5/README.md @@ -5,7 +5,7 @@ DETR is a machine learning model that can detect objects (trained on the COCO dataset). -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of DETR-ResNet50-DC5 found [here](https://github.com/facebookresearch/detr). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/detr_resnet50_dc5).
diff --git a/qai_hub_models/models/efficientnet_b0/README.md b/qai_hub_models/models/efficientnet_b0/README.md index ef0e88ef..48791758 100644 --- a/qai_hub_models/models/efficientnet_b0/README.md +++ b/qai_hub_models/models/efficientnet_b0/README.md @@ -5,7 +5,7 @@ EfficientNetB0 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of EfficientNet-B0 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/efficientnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/efficientnet_b0). diff --git a/qai_hub_models/models/efficientnet_b4/README.md b/qai_hub_models/models/efficientnet_b4/README.md index a3a1fb37..06994841 100644 --- a/qai_hub_models/models/efficientnet_b4/README.md +++ b/qai_hub_models/models/efficientnet_b4/README.md @@ -5,7 +5,7 @@ EfficientNetB4 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of EfficientNet-B4 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/efficientnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/efficientnet_b4). diff --git a/qai_hub_models/models/efficientnet_v2_s/README.md b/qai_hub_models/models/efficientnet_v2_s/README.md index ed25b56c..820ebb98 100644 --- a/qai_hub_models/models/efficientnet_v2_s/README.md +++ b/qai_hub_models/models/efficientnet_v2_s/README.md @@ -5,7 +5,7 @@ EfficientNetV2-s is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of EfficientNet-V2-s found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/efficientnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/efficientnet_v2_s). diff --git a/qai_hub_models/models/efficientvit_b2_cls/README.md b/qai_hub_models/models/efficientvit_b2_cls/README.md index b160c25f..4fe18f13 100644 --- a/qai_hub_models/models/efficientvit_b2_cls/README.md +++ b/qai_hub_models/models/efficientvit_b2_cls/README.md @@ -5,7 +5,7 @@ EfficientViT is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of EfficientViT-b2-cls found [here](https://github.com/CVHub520/efficientvit). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/efficientvit_b2_cls). diff --git a/qai_hub_models/models/efficientvit_l2_cls/README.md b/qai_hub_models/models/efficientvit_l2_cls/README.md index f43bbef1..82791029 100644 --- a/qai_hub_models/models/efficientvit_l2_cls/README.md +++ b/qai_hub_models/models/efficientvit_l2_cls/README.md @@ -5,7 +5,7 @@ EfficientViT is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of EfficientViT-l2-cls found [here](https://github.com/CVHub520/efficientvit). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/efficientvit_l2_cls). diff --git a/qai_hub_models/models/efficientvit_l2_seg/README.md b/qai_hub_models/models/efficientvit_l2_seg/README.md index ff8da639..eae3d54d 100644 --- a/qai_hub_models/models/efficientvit_l2_seg/README.md +++ b/qai_hub_models/models/efficientvit_l2_seg/README.md @@ -5,7 +5,7 @@ EfficientViT is a machine learning model that can segment images from the Cityscapes dataset. It has lightweight and hardware-efficient operations and thus delivers significant speedup on diverse hardware platforms. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of EfficientViT-l2-seg found [here](https://github.com/CVHub520/efficientvit). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/efficientvit_l2_seg). diff --git a/qai_hub_models/models/esrgan/README.md b/qai_hub_models/models/esrgan/README.md index 347cb914..e0de9c1f 100644 --- a/qai_hub_models/models/esrgan/README.md +++ b/qai_hub_models/models/esrgan/README.md @@ -5,7 +5,7 @@ ESRGAN is a machine learning model that upscales an image with minimal loss in quality. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ESRGAN found [here](https://github.com/xinntao/ESRGAN/). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/esrgan). diff --git a/qai_hub_models/models/face_attrib_net/README.md b/qai_hub_models/models/face_attrib_net/README.md index 39f4f7bf..78897d1d 100644 --- a/qai_hub_models/models/face_attrib_net/README.md +++ b/qai_hub_models/models/face_attrib_net/README.md @@ -5,7 +5,7 @@ Facial feature extraction and additional attributes including liveness, eyeclose, mask and glasses detection for face recognition. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FaceAttribNet found [here](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/face_attrib_net/model.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/face_attrib_net).
diff --git a/qai_hub_models/models/face_det_lite/README.md b/qai_hub_models/models/face_det_lite/README.md index ed98fc9d..b690a549 100644 --- a/qai_hub_models/models/face_det_lite/README.md +++ b/qai_hub_models/models/face_det_lite/README.md @@ -5,7 +5,7 @@ face_det_lite is a machine learning model that detects faces in images. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Lightweight-Face-Detection found [here](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/face_det_lite/model.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/face_det_lite). diff --git a/qai_hub_models/models/facemap_3dmm/README.md b/qai_hub_models/models/facemap_3dmm/README.md index a946da06..9ade5ff9 100644 --- a/qai_hub_models/models/facemap_3dmm/README.md +++ b/qai_hub_models/models/facemap_3dmm/README.md @@ -5,7 +5,7 @@ Real-time 3D facial landmark detection optimized for mobile and edge. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Facial-Landmark-Detection found [here](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/facemap_3dmm/model.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/facemap_3dmm). diff --git a/qai_hub_models/models/facemap_3dmm_quantized/README.md b/qai_hub_models/models/facemap_3dmm_quantized/README.md index dde4b218..8367d859 100644 --- a/qai_hub_models/models/facemap_3dmm_quantized/README.md +++ b/qai_hub_models/models/facemap_3dmm_quantized/README.md @@ -5,7 +5,7 @@ Real-time 3D facial landmark detection optimized for mobile and edge. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Facial-Landmark-Detection-Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/facemap_3dmm_quantized). diff --git a/qai_hub_models/models/fastsam_s/README.md b/qai_hub_models/models/fastsam_s/README.md index a298c883..4b9bf095 100644 --- a/qai_hub_models/models/fastsam_s/README.md +++ b/qai_hub_models/models/fastsam_s/README.md @@ -5,7 +5,7 @@ The Fast Segment Anything Model (FastSAM) is a novel, real-time CNN-based solution for the Segment Anything task. This task is designed to segment any object within an image based on various possible user interaction prompts. The model performs competitively despite significantly reduced computation, making it a practical choice for a variety of vision tasks. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FastSam-S found [here](https://github.com/CASIA-IVA-Lab/FastSAM). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/fastsam_s).
diff --git a/qai_hub_models/models/fastsam_x/README.md b/qai_hub_models/models/fastsam_x/README.md index bbb571e5..15c6f0b8 100644 --- a/qai_hub_models/models/fastsam_x/README.md +++ b/qai_hub_models/models/fastsam_x/README.md @@ -5,7 +5,7 @@ The Fast Segment Anything Model (FastSAM) is a novel, real-time CNN-based solution for the Segment Anything task. This task is designed to segment any object within an image based on various possible user interaction prompts. The model performs competitively despite significantly reduced computation, making it a practical choice for a variety of vision tasks. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FastSam-X found [here](https://github.com/CASIA-IVA-Lab/FastSAM). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/fastsam_x). diff --git a/qai_hub_models/models/fcn_resnet50/README.md b/qai_hub_models/models/fcn_resnet50/README.md index 4df80006..a957db01 100644 --- a/qai_hub_models/models/fcn_resnet50/README.md +++ b/qai_hub_models/models/fcn_resnet50/README.md @@ -5,7 +5,7 @@ FCN_ResNet50 is a machine learning model that can segment images from the COCO dataset. It uses ResNet50 as a backbone. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FCN-ResNet50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/fcn.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/fcn_resnet50). diff --git a/qai_hub_models/models/fcn_resnet50_quantized/README.md b/qai_hub_models/models/fcn_resnet50_quantized/README.md index 2f275166..1de829d2 100644 --- a/qai_hub_models/models/fcn_resnet50_quantized/README.md +++ b/qai_hub_models/models/fcn_resnet50_quantized/README.md @@ -5,7 +5,7 @@ FCN_ResNet50 is a quantized machine learning model that can segment images from the COCO dataset. It uses ResNet50 as a backbone. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FCN-ResNet50-Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/fcn.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/fcn_resnet50_quantized). diff --git a/qai_hub_models/models/ffnet_122ns_lowres/README.md b/qai_hub_models/models/ffnet_122ns_lowres/README.md index bca0a9e8..3171227f 100644 --- a/qai_hub_models/models/ffnet_122ns_lowres/README.md +++ b/qai_hub_models/models/ffnet_122ns_lowres/README.md @@ -5,7 +5,7 @@ FFNet-122NS-LowRes is a "fuss-free network" that segments street scene images with per-pixel classes like road, sidewalk, and pedestrian. Trained on the Cityscapes dataset. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FFNet-122NS-LowRes found [here](https://github.com/Qualcomm-AI-research/FFNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ffnet_122ns_lowres). diff --git a/qai_hub_models/models/ffnet_40s/README.md b/qai_hub_models/models/ffnet_40s/README.md index 1a82f4bb..c4ffde5e 100644 --- a/qai_hub_models/models/ffnet_40s/README.md +++ b/qai_hub_models/models/ffnet_40s/README.md @@ -5,7 +5,7 @@ FFNet-40S is a "fuss-free network" that segments street scene images with per-pixel classes like road, sidewalk, and pedestrian. Trained on the Cityscapes dataset. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FFNet-40S found [here](https://github.com/Qualcomm-AI-research/FFNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ffnet_40s). diff --git a/qai_hub_models/models/ffnet_40s_quantized/README.md b/qai_hub_models/models/ffnet_40s_quantized/README.md index b7d0f17e..4c0642c6 100644 --- a/qai_hub_models/models/ffnet_40s_quantized/README.md +++ b/qai_hub_models/models/ffnet_40s_quantized/README.md @@ -5,7 +5,7 @@ FFNet-40S-Quantized is a "fuss-free network" that segments street scene images with per-pixel classes like road, sidewalk, and pedestrian. Trained on the Cityscapes dataset. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FFNet-40S-Quantized found [here](https://github.com/Qualcomm-AI-research/FFNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ffnet_40s_quantized). diff --git a/qai_hub_models/models/ffnet_54s/README.md b/qai_hub_models/models/ffnet_54s/README.md index e425f713..e1094463 100644 --- a/qai_hub_models/models/ffnet_54s/README.md +++ b/qai_hub_models/models/ffnet_54s/README.md @@ -5,7 +5,7 @@ FFNet-54S is a "fuss-free network" that segments street scene images with per-pixel classes like road, sidewalk, and pedestrian. Trained on the Cityscapes dataset. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FFNet-54S found [here](https://github.com/Qualcomm-AI-research/FFNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ffnet_54s). diff --git a/qai_hub_models/models/ffnet_54s_quantized/README.md b/qai_hub_models/models/ffnet_54s_quantized/README.md index 1e4681f9..d7c3857d 100644 --- a/qai_hub_models/models/ffnet_54s_quantized/README.md +++ b/qai_hub_models/models/ffnet_54s_quantized/README.md @@ -5,7 +5,7 @@ FFNet-54S-Quantized is a "fuss-free network" that segments street scene images with per-pixel classes like road, sidewalk, and pedestrian. Trained on the Cityscapes dataset. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FFNet-54S-Quantized found [here](https://github.com/Qualcomm-AI-research/FFNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ffnet_54s_quantized). diff --git a/qai_hub_models/models/ffnet_78s/README.md b/qai_hub_models/models/ffnet_78s/README.md index 925b00ee..be752796 100644 --- a/qai_hub_models/models/ffnet_78s/README.md +++ b/qai_hub_models/models/ffnet_78s/README.md @@ -5,7 +5,7 @@ FFNet-78S is a "fuss-free network" that segments street scene images with per-pixel classes like road, sidewalk, and pedestrian. Trained on the Cityscapes dataset. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FFNet-78S found [here](https://github.com/Qualcomm-AI-research/FFNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ffnet_78s). diff --git a/qai_hub_models/models/ffnet_78s_lowres/README.md b/qai_hub_models/models/ffnet_78s_lowres/README.md index 71d78605..8b6465e7 100644 --- a/qai_hub_models/models/ffnet_78s_lowres/README.md +++ b/qai_hub_models/models/ffnet_78s_lowres/README.md @@ -5,7 +5,7 @@ FFNet-78S-LowRes is a "fuss-free network" that segments street scene images with per-pixel classes like road, sidewalk, and pedestrian. Trained on the Cityscapes dataset. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FFNet-78S-LowRes found [here](https://github.com/Qualcomm-AI-research/FFNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ffnet_78s_lowres). diff --git a/qai_hub_models/models/ffnet_78s_quantized/README.md b/qai_hub_models/models/ffnet_78s_quantized/README.md index 43805122..de1f32d6 100644 --- a/qai_hub_models/models/ffnet_78s_quantized/README.md +++ b/qai_hub_models/models/ffnet_78s_quantized/README.md @@ -5,7 +5,7 @@ FFNet-78S-Quantized is a "fuss-free network" that segments street scene images with per-pixel classes like road, sidewalk, and pedestrian. Trained on the Cityscapes dataset. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of FFNet-78S-Quantized found [here](https://github.com/Qualcomm-AI-research/FFNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ffnet_78s_quantized). diff --git a/qai_hub_models/models/foot_track_net/README.md b/qai_hub_models/models/foot_track_net/README.md index 94cec09f..c8f3c92f 100644 --- a/qai_hub_models/models/foot_track_net/README.md +++ b/qai_hub_models/models/foot_track_net/README.md @@ -5,7 +5,7 @@ Real-time multiple person detection with accurate feet localization optimized for mobile and edge. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Person-Foot-Detection found [here](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/foot_track_net/model.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/foot_track_net). diff --git a/qai_hub_models/models/foot_track_net_quantized/README.md b/qai_hub_models/models/foot_track_net_quantized/README.md index 050294e8..bf5a7e45 100644 --- a/qai_hub_models/models/foot_track_net_quantized/README.md +++ b/qai_hub_models/models/foot_track_net_quantized/README.md @@ -5,7 +5,7 @@ Real-time multiple person detection with accurate feet localization optimized for mobile and edge. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Person-Foot-Detection-Quantized found [here](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/foot_track_net_quantized/model.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/foot_track_net_quantized). diff --git a/qai_hub_models/models/gear_guard_net/README.md b/qai_hub_models/models/gear_guard_net/README.md index 9f686ed7..13732e91 100644 --- a/qai_hub_models/models/gear_guard_net/README.md +++ b/qai_hub_models/models/gear_guard_net/README.md @@ -5,7 +5,7 @@ Detect whether a person is wearing personal protective equipment (PPE) in real time. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of PPE-Detection found [here](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/gear_guard_net/model.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/gear_guard_net). diff --git a/qai_hub_models/models/gear_guard_net_quantized/README.md b/qai_hub_models/models/gear_guard_net_quantized/README.md index fe5f83b1..bb7ffb5c 100644 --- a/qai_hub_models/models/gear_guard_net_quantized/README.md +++ b/qai_hub_models/models/gear_guard_net_quantized/README.md @@ -5,7 +5,7 @@ Detect whether a person is wearing personal protective equipment (PPE) in real time. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of PPE-Detection-Quantized found [here](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/gear_guard_net_quantized/model.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/gear_guard_net_quantized). diff --git a/qai_hub_models/models/googlenet/README.md b/qai_hub_models/models/googlenet/README.md index 76efc69a..3e627a45 100644 --- a/qai_hub_models/models/googlenet/README.md +++ b/qai_hub_models/models/googlenet/README.md @@ -5,7 +5,7 @@ GoogLeNet is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of GoogLeNet found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/googlenet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/googlenet). diff --git a/qai_hub_models/models/googlenet_quantized/README.md b/qai_hub_models/models/googlenet_quantized/README.md index 747c6636..ea8bbbc4 100644 --- a/qai_hub_models/models/googlenet_quantized/README.md +++ b/qai_hub_models/models/googlenet_quantized/README.md @@ -5,7 +5,7 @@ GoogLeNet is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of GoogLeNetQuantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/googlenet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/googlenet_quantized). diff --git a/qai_hub_models/models/hrnet_pose/README.md b/qai_hub_models/models/hrnet_pose/README.md index 15d24853..1f88fe38 100644 --- a/qai_hub_models/models/hrnet_pose/README.md +++ b/qai_hub_models/models/hrnet_pose/README.md @@ -5,7 +5,7 @@ HRNet performs pose estimation in high-resolution representations. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of HRNetPose found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/hrnet_posenet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/hrnet_pose). diff --git a/qai_hub_models/models/hrnet_pose_quantized/README.md b/qai_hub_models/models/hrnet_pose_quantized/README.md index 8c91680e..62f3cfd9 100644 --- a/qai_hub_models/models/hrnet_pose_quantized/README.md +++ b/qai_hub_models/models/hrnet_pose_quantized/README.md @@ -5,7 +5,7 @@ HRNet performs pose estimation in high-resolution representations. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of HRNetPoseQuantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/hrnet_posenet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/hrnet_pose_quantized). diff --git a/qai_hub_models/models/huggingface_wavlm_base_plus/README.md b/qai_hub_models/models/huggingface_wavlm_base_plus/README.md index 37c3e68c..d4a2ea44 100644 --- a/qai_hub_models/models/huggingface_wavlm_base_plus/README.md +++ b/qai_hub_models/models/huggingface_wavlm_base_plus/README.md @@ -5,7 +5,7 @@ HuggingFaceWavLMBasePlus is a real-time speech processing backbone based on Microsoft's WavLM model. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of HuggingFace-WavLM-Base-Plus found [here](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-base-plus/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/huggingface_wavlm_base_plus). diff --git a/qai_hub_models/models/ibm_granite_3b_code_instruct/README.md b/qai_hub_models/models/ibm_granite_3b_code_instruct/README.md index c5036921..4425bcc2 100644 --- a/qai_hub_models/models/ibm_granite_3b_code_instruct/README.md +++ b/qai_hub_models/models/ibm_granite_3b_code_instruct/README.md @@ -5,7 +5,7 @@ Granite-3B-Code-Instruct-2K is a 3B parameter model fine-tuned from Granite-3B-Code-Base-2K on a combination of permissively licensed instruction data to enhance instruction-following capabilities, including logical reasoning and problem-solving skills. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of IBM-Granite-3B-Code-Instruct found [here](https://huggingface.co/ibm-granite/granite-3b-code-instruct-2k). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/ibm_granite_3b_code_instruct). diff --git a/qai_hub_models/models/inception_v3/README.md b/qai_hub_models/models/inception_v3/README.md index b38829f8..8fbc0be5 100644 --- a/qai_hub_models/models/inception_v3/README.md +++ b/qai_hub_models/models/inception_v3/README.md @@ -5,7 +5,7 @@ InceptionNetV3 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Inception-v3 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/inception_v3). diff --git a/qai_hub_models/models/inception_v3_quantized/README.md b/qai_hub_models/models/inception_v3_quantized/README.md index b04e2249..067fc73b 100644 --- a/qai_hub_models/models/inception_v3_quantized/README.md +++ b/qai_hub_models/models/inception_v3_quantized/README.md @@ -5,7 +5,7 @@ InceptionNetV3 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. This model is post-training quantized to int8 using samples from Google's Open Images dataset. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Inception-v3-Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/inception_v3_quantized). diff --git a/qai_hub_models/models/indus_1b_quantized/README.md b/qai_hub_models/models/indus_1b_quantized/README.md index f474206c..23573b32 100644 --- a/qai_hub_models/models/indus_1b_quantized/README.md +++ b/qai_hub_models/models/indus_1b_quantized/README.md @@ -5,7 +5,7 @@ Indus is currently a 1.2 billion parameter model that has been supervised fine-tuned for Hindi and its dialects.
-{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of IndusQ-1.1B found [here](https://huggingface.co/nickmalhotra/ProjectIndus). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/indus_1b_quantized). diff --git a/qai_hub_models/models/jais_6p7b_chat_quantized/README.md b/qai_hub_models/models/jais_6p7b_chat_quantized/README.md index da60bb5f..f1e75235 100644 --- a/qai_hub_models/models/jais_6p7b_chat_quantized/README.md +++ b/qai_hub_models/models/jais_6p7b_chat_quantized/README.md @@ -5,7 +5,7 @@ JAIS 6.7B is a bilingual large language model (LLM) for both Arabic and English developed by Inception, a G42 company in partnership with MBZUAI and Cerebras. This is a 6.7 billion parameter LLM, trained on a dataset containing 141 billion Arabic tokens and 339 billion English/code tokens. The model is based on a transformer-based decoder-only (GPT-3) architecture and uses SwiGLU non-linearity. It implements ALiBi position embeddings, enabling the model to extrapolate to long sequence lengths, providing improved context handling and model precision. The JAIS family of models is a comprehensive series of bilingual English-Arabic LLMs. These models are optimized to excel in Arabic while having strong English capabilities. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of JAIS-6p7b-Chat found [here](https://huggingface.co/inceptionai/jais-family-6p7b). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/jais_6p7b_chat_quantized). diff --git a/qai_hub_models/models/lama_dilated/README.md b/qai_hub_models/models/lama_dilated/README.md index 696b2f3d..5b413502 100644 --- a/qai_hub_models/models/lama_dilated/README.md +++ b/qai_hub_models/models/lama_dilated/README.md @@ -5,7 +5,7 @@ LaMa-Dilated is a machine learning model that can erase and in-paint parts of a given input image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of LaMa-Dilated found [here](https://github.com/advimman/lama). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/lama_dilated). diff --git a/qai_hub_models/models/litehrnet/README.md b/qai_hub_models/models/litehrnet/README.md index ba1924c7..30d38e6c 100644 --- a/qai_hub_models/models/litehrnet/README.md +++ b/qai_hub_models/models/litehrnet/README.md @@ -5,7 +5,7 @@ LiteHRNet is a machine learning model that detects human pose and returns a location and confidence for each of 17 joints. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of LiteHRNet found [here](https://github.com/HRNet/Lite-HRNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/litehrnet).
diff --git a/qai_hub_models/models/llama_v2_7b_chat_quantized/README.md b/qai_hub_models/models/llama_v2_7b_chat_quantized/README.md index 94fa9334..5b7ad695 100644 --- a/qai_hub_models/models/llama_v2_7b_chat_quantized/README.md +++ b/qai_hub_models/models/llama_v2_7b_chat_quantized/README.md @@ -5,7 +5,7 @@ Llama 2 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations) and part of the model is quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output length specified below, the time to first token is Llama-PromptProcessor-Quantized's latency and the average time per additional token is Llama-TokenGenerator-KVCache-Quantized's latency. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Llama-v2-7B-Chat found [here](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/llama_v2_7b_chat_quantized). diff --git a/qai_hub_models/models/llama_v3_1_8b_chat_quantized/README.md b/qai_hub_models/models/llama_v3_1_8b_chat_quantized/README.md index f8642a13..39061f03 100644 --- a/qai_hub_models/models/llama_v3_1_8b_chat_quantized/README.md +++ b/qai_hub_models/models/llama_v3_1_8b_chat_quantized/README.md @@ -5,7 +5,7 @@ Llama 3 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations) and part of the model is quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output length specified below, the time to first token is Llama-PromptProcessor-Quantized's latency and the average time per additional token is Llama-TokenGenerator-Quantized's latency. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Llama-v3.1-8B-Chat found [here](https://github.com/meta-llama/llama3/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/llama_v3_1_8b_chat_quantized). diff --git a/qai_hub_models/models/llama_v3_2_3b_chat_quantized/README.md b/qai_hub_models/models/llama_v3_2_3b_chat_quantized/README.md index 5cf5092c..7091ba4d 100644 --- a/qai_hub_models/models/llama_v3_2_3b_chat_quantized/README.md +++ b/qai_hub_models/models/llama_v3_2_3b_chat_quantized/README.md @@ -5,7 +5,7 @@ Llama 3 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations) and part of the model is quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output length specified below, the time to first token is Llama-PromptProcessor-Quantized's latency and the average time per additional token is Llama-TokenGenerator-Quantized's latency.
-{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Llama-v3.2-3B-Chat found [here](https://github.com/meta-llama/llama3/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/llama_v3_2_3b_chat_quantized). diff --git a/qai_hub_models/models/llama_v3_8b_chat_quantized/README.md b/qai_hub_models/models/llama_v3_8b_chat_quantized/README.md index 9ee03499..46029810 100644 --- a/qai_hub_models/models/llama_v3_8b_chat_quantized/README.md +++ b/qai_hub_models/models/llama_v3_8b_chat_quantized/README.md @@ -5,7 +5,7 @@ Llama 3 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations) and part of the model is quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output length specified below, the time to first token is Llama-PromptProcessor-Quantized's latency and the average time per additional token is Llama-TokenGenerator-Quantized's latency. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Llama-v3-8B-Chat found [here](https://github.com/meta-llama/llama3/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized). diff --git a/qai_hub_models/models/mediapipe_face/README.md b/qai_hub_models/models/mediapipe_face/README.md index a982de1f..41e3c2bc 100644 --- a/qai_hub_models/models/mediapipe_face/README.md +++ b/qai_hub_models/models/mediapipe_face/README.md @@ -5,7 +5,7 @@ Designed for sub-millisecond processing, this model predicts bounding boxes and pose skeletons (left eye, right eye, nose tip, mouth, left eye tragion, and right eye tragion) of faces in an image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MediaPipe-Face-Detection found [here](https://github.com/zmurez/MediaPipePyTorch/). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/mediapipe_face). diff --git a/qai_hub_models/models/mediapipe_face_quantized/README.md b/qai_hub_models/models/mediapipe_face_quantized/README.md index 62b49410..fef4ff5d 100644 --- a/qai_hub_models/models/mediapipe_face_quantized/README.md +++ b/qai_hub_models/models/mediapipe_face_quantized/README.md @@ -5,7 +5,7 @@ Designed for sub-millisecond processing, this model predicts bounding boxes and pose skeletons (left eye, right eye, nose tip, mouth, left eye tragion, and right eye tragion) of faces in an image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MediaPipe-Face-Detection-Quantized found [here](https://github.com/zmurez/MediaPipePyTorch/). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mediapipe_face_quantized). diff --git a/qai_hub_models/models/mediapipe_hand/README.md b/qai_hub_models/models/mediapipe_hand/README.md index 4872c7f2..ab0398b9 100644 --- a/qai_hub_models/models/mediapipe_hand/README.md +++ b/qai_hub_models/models/mediapipe_hand/README.md @@ -5,7 +5,7 @@ The MediaPipe Hand Landmark Detector is a machine learning pipeline that predicts bounding boxes and pose skeletons of hands in an image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MediaPipe-Hand-Detection found [here](https://github.com/zmurez/MediaPipePyTorch/). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mediapipe_hand). diff --git a/qai_hub_models/models/mediapipe_pose/README.md b/qai_hub_models/models/mediapipe_pose/README.md index 9e194785..5d43fc82 100644 --- a/qai_hub_models/models/mediapipe_pose/README.md +++ b/qai_hub_models/models/mediapipe_pose/README.md @@ -5,7 +5,7 @@ The MediaPipe Pose Landmark Detector is a machine learning pipeline that predicts bounding boxes and pose skeletons of poses in an image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MediaPipe-Pose-Estimation found [here](https://github.com/zmurez/MediaPipePyTorch/). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mediapipe_pose). diff --git a/qai_hub_models/models/mediapipe_selfie/README.md b/qai_hub_models/models/mediapipe_selfie/README.md index 577a80d2..b4fb10b1 100644 --- a/qai_hub_models/models/mediapipe_selfie/README.md +++ b/qai_hub_models/models/mediapipe_selfie/README.md @@ -5,7 +5,7 @@ Light-weight model that segments a person from the background in square or landscape selfie and video conference imagery. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MediaPipe-Selfie-Segmentation found [here](https://github.com/google/mediapipe/tree/master/mediapipe/modules/selfie_segmentation). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mediapipe_selfie). diff --git a/qai_hub_models/models/midas/README.md b/qai_hub_models/models/midas/README.md index 11d05c53..edb9b3fe 100644 --- a/qai_hub_models/models/midas/README.md +++ b/qai_hub_models/models/midas/README.md @@ -5,7 +5,7 @@ Midas is designed for estimating depth at each point in an image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Midas-V2 found [here](https://github.com/isl-org/MiDaS). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/midas). 
diff --git a/qai_hub_models/models/midas_quantized/README.md b/qai_hub_models/models/midas_quantized/README.md index d621dd14..a1c83362 100644 --- a/qai_hub_models/models/midas_quantized/README.md +++ b/qai_hub_models/models/midas_quantized/README.md @@ -5,7 +5,7 @@ Midas is designed for estimating depth at each point in an image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Midas-V2-Quantized found [here](https://github.com/isl-org/MiDaS). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/midas_quantized). diff --git a/qai_hub_models/models/mistral_3b_quantized/README.md b/qai_hub_models/models/mistral_3b_quantized/README.md index 0da98d67..0a42ffda 100644 --- a/qai_hub_models/models/mistral_3b_quantized/README.md +++ b/qai_hub_models/models/mistral_3b_quantized/README.md @@ -5,7 +5,7 @@ Mistral 3B model is Mistral AI's first generation edge model, optimized for optimal performance on Snapdragon platforms. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Mistral-3B found [here](https://github.com/mistralai/mistral-inference). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mistral_3b_quantized). diff --git a/qai_hub_models/models/mistral_7b_instruct_v0_3_quantized/README.md b/qai_hub_models/models/mistral_7b_instruct_v0_3_quantized/README.md index 8c0d000b..7812e88f 100644 --- a/qai_hub_models/models/mistral_7b_instruct_v0_3_quantized/README.md +++ b/qai_hub_models/models/mistral_7b_instruct_v0_3_quantized/README.md @@ -5,7 +5,7 @@ Mistral AI's first open source dense model released September 2023. Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine‑tuned version of the Mistral‑7B‑v0.3. It has an extended vocabulary and supports the v3 Tokenizer, enhancing language understanding and generation. Additionally function calling is enabled. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Mistral-7B-Instruct-v0.3 found [here](https://github.com/mistralai/mistral-inference). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mistral_7b_instruct_v0_3_quantized). diff --git a/qai_hub_models/models/mnasnet05/README.md b/qai_hub_models/models/mnasnet05/README.md index b3a93a9e..832137fc 100644 --- a/qai_hub_models/models/mnasnet05/README.md +++ b/qai_hub_models/models/mnasnet05/README.md @@ -5,7 +5,7 @@ MNASNet05 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MNASNet05 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mnasnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. 
More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mnasnet05). diff --git a/qai_hub_models/models/mobile_vit/README.md b/qai_hub_models/models/mobile_vit/README.md index 1070a8bc..904b0885 100644 --- a/qai_hub_models/models/mobile_vit/README.md +++ b/qai_hub_models/models/mobile_vit/README.md @@ -5,7 +5,7 @@ MobileVit is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Mobile_Vit found [here](https://github.com/apple/ml-cvnets). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mobile_vit). diff --git a/qai_hub_models/models/mobilenet_v2/README.md b/qai_hub_models/models/mobilenet_v2/README.md index 97d67f50..51bd1e54 100644 --- a/qai_hub_models/models/mobilenet_v2/README.md +++ b/qai_hub_models/models/mobilenet_v2/README.md @@ -5,7 +5,7 @@ MobileNetV2 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MobileNet-v2 found [here](https://github.com/tonylins/pytorch-mobilenet-v2/tree/master). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mobilenet_v2). diff --git a/qai_hub_models/models/mobilenet_v2_quantized/README.md b/qai_hub_models/models/mobilenet_v2_quantized/README.md index 1ac4107f..fc66d6b4 100644 --- a/qai_hub_models/models/mobilenet_v2_quantized/README.md +++ b/qai_hub_models/models/mobilenet_v2_quantized/README.md @@ -5,7 +5,7 @@ MobileNetV2 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MobileNet-v2-Quantized found [here](https://github.com/tonylins/pytorch-mobilenet-v2/tree/master). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mobilenet_v2_quantized). diff --git a/qai_hub_models/models/mobilenet_v3_large/README.md b/qai_hub_models/models/mobilenet_v3_large/README.md index aa9a6f0b..5ff8af70 100644 --- a/qai_hub_models/models/mobilenet_v3_large/README.md +++ b/qai_hub_models/models/mobilenet_v3_large/README.md @@ -5,7 +5,7 @@ MobileNet-v3-Large is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MobileNet-v3-Large found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py). 
This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mobilenet_v3_large). diff --git a/qai_hub_models/models/mobilenet_v3_large_quantized/README.md b/qai_hub_models/models/mobilenet_v3_large_quantized/README.md index 6c747499..dc1b57b7 100644 --- a/qai_hub_models/models/mobilenet_v3_large_quantized/README.md +++ b/qai_hub_models/models/mobilenet_v3_large_quantized/README.md @@ -5,7 +5,7 @@ MobileNet-v3-Large is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MobileNet-v3-Large-Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mobilenet_v3_large_quantized). diff --git a/qai_hub_models/models/mobilenet_v3_small/README.md b/qai_hub_models/models/mobilenet_v3_small/README.md index 0f389417..a69b441b 100644 --- a/qai_hub_models/models/mobilenet_v3_small/README.md +++ b/qai_hub_models/models/mobilenet_v3_small/README.md @@ -5,7 +5,7 @@ MobileNetV3Small is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of MobileNet-v3-Small found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mobilenet_v3_small). diff --git a/qai_hub_models/models/openai_clip/README.md b/qai_hub_models/models/openai_clip/README.md index cc179b2f..4e73f0be 100644 --- a/qai_hub_models/models/openai_clip/README.md +++ b/qai_hub_models/models/openai_clip/README.md @@ -5,7 +5,7 @@ Contrastive Language-Image Pre-Training (CLIP) uses a ViT like transformer to get visual features and a causal language model to get the text features. Both the text and visual features can then be used for a variety of zero-shot learning tasks. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of OpenAI-Clip found [here](https://github.com/openai/CLIP/). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/openai_clip). diff --git a/qai_hub_models/models/openpose/README.md b/qai_hub_models/models/openpose/README.md index 62ebe8ec..e5fd0aeb 100644 --- a/qai_hub_models/models/openpose/README.md +++ b/qai_hub_models/models/openpose/README.md @@ -5,7 +5,7 @@ OpenPose is a machine learning model that estimates body and hand pose in an image and returns location and confidence for each of 19 joints. 
-{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of OpenPose found [here](https://github.com/CMU-Perceptual-Computing-Lab/openpose). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/openpose). diff --git a/qai_hub_models/models/plamo_1b_quantized/README.md b/qai_hub_models/models/plamo_1b_quantized/README.md index 4fbae336..f830a757 100644 --- a/qai_hub_models/models/plamo_1b_quantized/README.md +++ b/qai_hub_models/models/plamo_1b_quantized/README.md @@ -5,7 +5,7 @@ PLaMo-1B is the first small language model (SLM) in the PLaMo™ Lite series from Preferred Networks (PFN), designed to power AI applications for edge devices including mobile, automotive, and robots across various industrial sectors. This model builds on the advancements of PLaMo-100B, a 100-billion parameter large language model (LLM) developed from the ground up by PFN’s subsidiary Preferred Elements (PFE). Leveraging high-quality Japanese and English text data generated by PLaMo-100B, PLaMo-1B has been pre-trained on a total of 4 trillion tokens. As a result, it delivers exceptional performance in Japanese benchmarks, outperforming other SLMs with similar parameter sizes. In evaluations such as Jaster 0-shot and 4-shot, PLaMo-1B has demonstrated performance on par with larger LLMs, making it a highly efficient solution for edge-based AI tasks. -{source_repo_details}This repository contains scripts for optimized on-device +This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/plamo_1b_quantized). diff --git a/qai_hub_models/models/posenet_mobilenet/README.md b/qai_hub_models/models/posenet_mobilenet/README.md index c55c003c..094261dc 100644 --- a/qai_hub_models/models/posenet_mobilenet/README.md +++ b/qai_hub_models/models/posenet_mobilenet/README.md @@ -5,7 +5,7 @@ Posenet performs pose estimation on human images. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Posenet-Mobilenet found [here](https://github.com/rwightman/posenet-pytorch). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/posenet_mobilenet). diff --git a/qai_hub_models/models/posenet_mobilenet_quantized/README.md b/qai_hub_models/models/posenet_mobilenet_quantized/README.md index fa114867..f17ff37e 100644 --- a/qai_hub_models/models/posenet_mobilenet_quantized/README.md +++ b/qai_hub_models/models/posenet_mobilenet_quantized/README.md @@ -5,7 +5,7 @@ Posenet performs pose estimation on human images. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Posenet-Mobilenet-Quantized found [here](https://github.com/rwightman/posenet-pytorch). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/posenet_mobilenet_quantized). 
diff --git a/qai_hub_models/models/quicksrnetlarge/README.md b/qai_hub_models/models/quicksrnetlarge/README.md index c68bcece..96fe6c0e 100644 --- a/qai_hub_models/models/quicksrnetlarge/README.md +++ b/qai_hub_models/models/quicksrnetlarge/README.md @@ -5,7 +5,7 @@ QuickSRNet Large is designed for upscaling images on mobile platforms to sharpen in real-time. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of QuickSRNetLarge found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/quicksrnetlarge). diff --git a/qai_hub_models/models/quicksrnetlarge_quantized/README.md b/qai_hub_models/models/quicksrnetlarge_quantized/README.md index 91e5d259..081fa8ee 100644 --- a/qai_hub_models/models/quicksrnetlarge_quantized/README.md +++ b/qai_hub_models/models/quicksrnetlarge_quantized/README.md @@ -5,7 +5,7 @@ QuickSRNet Large is designed for upscaling images on mobile platforms to sharpen in real-time. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of QuickSRNetLarge-Quantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/quicksrnetlarge_quantized). diff --git a/qai_hub_models/models/quicksrnetmedium/README.md b/qai_hub_models/models/quicksrnetmedium/README.md index 89724b8e..0f097b9b 100644 --- a/qai_hub_models/models/quicksrnetmedium/README.md +++ b/qai_hub_models/models/quicksrnetmedium/README.md @@ -5,7 +5,7 @@ QuickSRNet Medium is designed for upscaling images on mobile platforms to sharpen in real-time. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of QuickSRNetMedium found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/quicksrnetmedium). diff --git a/qai_hub_models/models/quicksrnetmedium_quantized/README.md b/qai_hub_models/models/quicksrnetmedium_quantized/README.md index f3cde50b..1db1a984 100644 --- a/qai_hub_models/models/quicksrnetmedium_quantized/README.md +++ b/qai_hub_models/models/quicksrnetmedium_quantized/README.md @@ -5,7 +5,7 @@ QuickSRNet Medium is designed for upscaling images on mobile platforms to sharpen in real-time. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of QuickSRNetMedium-Quantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/quicksrnetmedium_quantized). 
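The QuickSRNet entries above are positioned as real-time mobile upscalers. As a back-of-the-envelope check, the sketch below relates a scale factor and a target frame rate to the output resolution and the per-frame latency budget; the 4x factor, input size, and 30 FPS target are assumptions for illustration, not values taken from these READMEs.

```python
# Output size and per-frame latency budget for a real-time image upscaler.
# The scale factor, input resolution, and frame rate are illustrative assumptions.

def upscaled_size(width: int, height: int, scale: int) -> tuple[int, int]:
    """Output resolution after super-resolution by an integer scale factor."""
    return width * scale, height * scale

def frame_budget_ms(target_fps: float) -> float:
    """Maximum per-frame inference time that still sustains the target frame rate."""
    return 1000.0 / target_fps

if __name__ == "__main__":
    w, h = upscaled_size(640, 360, scale=4)  # assumed 640x360 input, 4x upscaling
    print(f"Output: {w}x{h}")
    print(f"Budget at 30 FPS: {frame_budget_ms(30):.1f} ms per frame")
```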
diff --git a/qai_hub_models/models/quicksrnetsmall/README.md b/qai_hub_models/models/quicksrnetsmall/README.md index ac8194ca..b63293de 100644 --- a/qai_hub_models/models/quicksrnetsmall/README.md +++ b/qai_hub_models/models/quicksrnetsmall/README.md @@ -5,7 +5,7 @@ QuickSRNet Small is designed for upscaling images on mobile platforms to sharpen in real-time. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of QuickSRNetSmall found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/quicksrnetsmall). diff --git a/qai_hub_models/models/quicksrnetsmall_quantized/README.md b/qai_hub_models/models/quicksrnetsmall_quantized/README.md index 115e0737..727b969e 100644 --- a/qai_hub_models/models/quicksrnetsmall_quantized/README.md +++ b/qai_hub_models/models/quicksrnetsmall_quantized/README.md @@ -5,7 +5,7 @@ QuickSRNet Small is designed for upscaling images on mobile platforms to sharpen in real-time. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of QuickSRNetSmall-Quantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/quicksrnetsmall_quantized). diff --git a/qai_hub_models/models/qwen2_7b_instruct_quantized/README.md b/qai_hub_models/models/qwen2_7b_instruct_quantized/README.md index e2e00104..6d1346c2 100644 --- a/qai_hub_models/models/qwen2_7b_instruct_quantized/README.md +++ b/qai_hub_models/models/qwen2_7b_instruct_quantized/README.md @@ -5,7 +5,7 @@ Qwen2-7B-Instruct is a state-of-the-art multilingual language model with 7.07 billion parameters, excelling in language understanding, generation, coding, and mathematics. AI Hub provides four QNN context binaries (shared weights) that can be deployed on Snapdragon 8 Elite with Genie SDK. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Qwen2-7B-Instruct found [here](https://github.com/QwenLM/Qwen2.5). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/qwen2_7b_instruct_quantized). diff --git a/qai_hub_models/models/real_esrgan_general_x4v3/README.md b/qai_hub_models/models/real_esrgan_general_x4v3/README.md index b9dabe38..57e97513 100644 --- a/qai_hub_models/models/real_esrgan_general_x4v3/README.md +++ b/qai_hub_models/models/real_esrgan_general_x4v3/README.md @@ -5,7 +5,7 @@ Real-ESRGAN is a machine learning model that upscales an image with minimal loss in quality. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Real-ESRGAN-General-x4v3 found [here](https://github.com/xinntao/Real-ESRGAN/tree/master). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices.
More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/real_esrgan_general_x4v3). diff --git a/qai_hub_models/models/real_esrgan_x4plus/README.md b/qai_hub_models/models/real_esrgan_x4plus/README.md index 99838be1..66c8c5f5 100644 --- a/qai_hub_models/models/real_esrgan_x4plus/README.md +++ b/qai_hub_models/models/real_esrgan_x4plus/README.md @@ -5,7 +5,7 @@ Real-ESRGAN is a machine learning model that upscales an image with minimal loss in quality. The implementation is a derivative of the Real-ESRGAN-x4plus architecture, a larger and more powerful version compared to the Real-ESRGAN-general-x4v3 architecture. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Real-ESRGAN-x4plus found [here](https://github.com/xinntao/Real-ESRGAN). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/real_esrgan_x4plus). diff --git a/qai_hub_models/models/regnet/README.md b/qai_hub_models/models/regnet/README.md index 54c2c7f1..b1b34f0b 100644 --- a/qai_hub_models/models/regnet/README.md +++ b/qai_hub_models/models/regnet/README.md @@ -5,7 +5,7 @@ RegNet is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of RegNet found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/regnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/regnet). diff --git a/qai_hub_models/models/regnet_quantized/README.md b/qai_hub_models/models/regnet_quantized/README.md index 67d52298..7cbb09b0 100644 --- a/qai_hub_models/models/regnet_quantized/README.md +++ b/qai_hub_models/models/regnet_quantized/README.md @@ -5,7 +5,7 @@ RegNet is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of RegNetQuantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/regnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/regnet_quantized). diff --git a/qai_hub_models/models/resnet101/README.md b/qai_hub_models/models/resnet101/README.md index 156475ef..78e7c4e9 100644 --- a/qai_hub_models/models/resnet101/README.md +++ b/qai_hub_models/models/resnet101/README.md @@ -5,7 +5,7 @@ ResNet101 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNet101 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). 
This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet101). diff --git a/qai_hub_models/models/resnet101_quantized/README.md b/qai_hub_models/models/resnet101_quantized/README.md index 0e5574b1..2d50f46d 100644 --- a/qai_hub_models/models/resnet101_quantized/README.md +++ b/qai_hub_models/models/resnet101_quantized/README.md @@ -5,7 +5,7 @@ ResNet101 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNet101Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet101_quantized). diff --git a/qai_hub_models/models/resnet18/README.md b/qai_hub_models/models/resnet18/README.md index 6929c3c6..865f20d9 100644 --- a/qai_hub_models/models/resnet18/README.md +++ b/qai_hub_models/models/resnet18/README.md @@ -5,7 +5,7 @@ ResNet18 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNet18 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet18). diff --git a/qai_hub_models/models/resnet18_quantized/README.md b/qai_hub_models/models/resnet18_quantized/README.md index 549efa42..f7c9634c 100644 --- a/qai_hub_models/models/resnet18_quantized/README.md +++ b/qai_hub_models/models/resnet18_quantized/README.md @@ -5,7 +5,7 @@ ResNet18 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNet18Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet18_quantized). diff --git a/qai_hub_models/models/resnet50/README.md b/qai_hub_models/models/resnet50/README.md index f24b2ab4..d00cd8ce 100644 --- a/qai_hub_models/models/resnet50/README.md +++ b/qai_hub_models/models/resnet50/README.md @@ -5,7 +5,7 @@ ResNet50 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. 
-{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNet50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet50). diff --git a/qai_hub_models/models/resnet50_quantized/README.md b/qai_hub_models/models/resnet50_quantized/README.md index 30716929..58995765 100644 --- a/qai_hub_models/models/resnet50_quantized/README.md +++ b/qai_hub_models/models/resnet50_quantized/README.md @@ -5,7 +5,7 @@ ResNet50 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNet50Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet50_quantized). diff --git a/qai_hub_models/models/resnext101/README.md b/qai_hub_models/models/resnext101/README.md index d4b86ca2..908bcdef 100644 --- a/qai_hub_models/models/resnext101/README.md +++ b/qai_hub_models/models/resnext101/README.md @@ -5,7 +5,7 @@ ResNeXt101 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNeXt101 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnext101). diff --git a/qai_hub_models/models/resnext101_quantized/README.md b/qai_hub_models/models/resnext101_quantized/README.md index 658fa1f1..c26311a6 100644 --- a/qai_hub_models/models/resnext101_quantized/README.md +++ b/qai_hub_models/models/resnext101_quantized/README.md @@ -5,7 +5,7 @@ ResNeXt101 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNeXt101Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnext101_quantized). diff --git a/qai_hub_models/models/resnext50/README.md b/qai_hub_models/models/resnext50/README.md index f13fff9e..dd60173b 100644 --- a/qai_hub_models/models/resnext50/README.md +++ b/qai_hub_models/models/resnext50/README.md @@ -5,7 +5,7 @@ ResNeXt50 is a machine learning model that can classify images from the Imagenet dataset. 
It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNeXt50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/resnext50). diff --git a/qai_hub_models/models/resnext50_quantized/README.md b/qai_hub_models/models/resnext50_quantized/README.md index 0b85050b..fe202596 100644 --- a/qai_hub_models/models/resnext50_quantized/README.md +++ b/qai_hub_models/models/resnext50_quantized/README.md @@ -5,7 +5,7 @@ ResNeXt50 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of ResNeXt50Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/resnext50_quantized). diff --git a/qai_hub_models/models/riffusion_quantized/README.md b/qai_hub_models/models/riffusion_quantized/README.md index 3c1edcfc..7982d4f1 100644 --- a/qai_hub_models/models/riffusion_quantized/README.md +++ b/qai_hub_models/models/riffusion_quantized/README.md @@ -5,7 +5,7 @@ Generates high resolution spectrogram images from text prompts using a latent diffusion model. This model uses CLIP ViT-L/14 as text encoder, U-Net based latent denoising, and VAE based decoder to generate the final image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Riffusion found [here](https://github.com/CompVis/stable-diffusion/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/riffusion_quantized). diff --git a/qai_hub_models/models/sam/README.md b/qai_hub_models/models/sam/README.md index 473460ac..f77931cd 100644 --- a/qai_hub_models/models/sam/README.md +++ b/qai_hub_models/models/sam/README.md @@ -5,7 +5,7 @@ Transformer-based encoder-decoder where prompts specify what to segment in an image, thereby allowing segmentation without the need for additional training. The image encoder generates embeddings and the lightweight decoder operates on the embeddings for point- and mask-based image segmentation. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Segment-Anything-Model found [here](https://github.com/facebookresearch/segment-anything). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/sam).
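The Segment-Anything description above separates a heavy image encoder, run once per image, from a lightweight decoder that runs once per prompt on the shared embedding. The sketch below only illustrates that control flow; `encode_image` and `decode_prompt` are hypothetical stand-ins, not functions from the segment-anything package or from qai_hub_models.

```python
# Control-flow sketch: embed the image once, then decode many point prompts cheaply.
# encode_image / decode_prompt are hypothetical placeholders, not real package APIs.
from typing import Any, Callable, Sequence

def segment_with_prompts(image: Any,
                         points: Sequence[tuple[int, int]],
                         encode_image: Callable[[Any], Any],
                         decode_prompt: Callable[[Any, tuple[int, int]], Any]) -> list[Any]:
    embedding = encode_image(image)  # expensive: one encoder pass per image
    # cheap: one lightweight decoder pass per point prompt, reusing the embedding
    return [decode_prompt(embedding, point) for point in points]

if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs without any model weights.
    masks = segment_with_prompts(
        image="image.jpg",
        points=[(10, 20), (200, 150)],
        encode_image=lambda img: f"embedding({img})",
        decode_prompt=lambda emb, pt: f"mask@{pt}",
    )
    print(masks)
```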
diff --git a/qai_hub_models/models/sesr_m5/README.md b/qai_hub_models/models/sesr_m5/README.md index 29b27ffa..751643ce 100644 --- a/qai_hub_models/models/sesr_m5/README.md +++ b/qai_hub_models/models/sesr_m5/README.md @@ -5,7 +5,7 @@ SESR M5 performs efficient on-device upscaling of images. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of SESR-M5 found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/sesr). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/sesr_m5). diff --git a/qai_hub_models/models/sesr_m5_quantized/README.md b/qai_hub_models/models/sesr_m5_quantized/README.md index 429a58fc..31e4a8e1 100644 --- a/qai_hub_models/models/sesr_m5_quantized/README.md +++ b/qai_hub_models/models/sesr_m5_quantized/README.md @@ -5,7 +5,7 @@ SESR M5 performs efficient on-device upscaling of images. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of SESR-M5-Quantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/sesr). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/sesr_m5_quantized). diff --git a/qai_hub_models/models/shufflenet_v2/README.md b/qai_hub_models/models/shufflenet_v2/README.md index 091e4bad..d09c1327 100644 --- a/qai_hub_models/models/shufflenet_v2/README.md +++ b/qai_hub_models/models/shufflenet_v2/README.md @@ -5,7 +5,7 @@ ShufflenetV2 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Shufflenet-v2 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/shufflenetv2.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/shufflenet_v2). diff --git a/qai_hub_models/models/shufflenet_v2_quantized/README.md b/qai_hub_models/models/shufflenet_v2_quantized/README.md index 6ce74ebd..2754ef59 100644 --- a/qai_hub_models/models/shufflenet_v2_quantized/README.md +++ b/qai_hub_models/models/shufflenet_v2_quantized/README.md @@ -5,7 +5,7 @@ ShufflenetV2 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Shufflenet-v2Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/shufflenetv2.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/shufflenet_v2_quantized). 
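Several of the classifiers in these hunks (Shufflenet-v2 here, and the ResNet/ResNeXt/MobileNet entries earlier) are torchvision implementations whose READMEs note they can also serve as backbones. A minimal torchvision-only sketch of that backbone reuse is below; it is illustrative and entirely separate from the optimized on-device export scripts these READMEs describe.

```python
# Using a torchvision classifier as a feature-extraction backbone.
# Plain torchvision sketch; not the qai_hub_models export path.
import torch
import torchvision.models as models

# weights=None builds the architecture without downloading pretrained weights;
# recent torchvision also accepts e.g. weights=models.ResNet50_Weights.DEFAULT.
classifier = models.resnet50(weights=None)

# Drop the final fully connected layer to keep the pooled feature extractor.
backbone = torch.nn.Sequential(*list(classifier.children())[:-1])
backbone.eval()

with torch.no_grad():
    features = backbone(torch.randn(1, 3, 224, 224))  # -> shape (1, 2048, 1, 1)

print(features.shape)
```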
diff --git a/qai_hub_models/models/sinet/README.md b/qai_hub_models/models/sinet/README.md index f13fc9f1..f04e4a37 100644 --- a/qai_hub_models/models/sinet/README.md +++ b/qai_hub_models/models/sinet/README.md @@ -5,7 +5,7 @@ SINet is a machine learning model that is designed to segment people from close-up portrait images in real time. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of SINet found [here](https://github.com/clovaai/ext_portrait_segmentation). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/sinet). diff --git a/qai_hub_models/models/squeezenet1_1/README.md b/qai_hub_models/models/squeezenet1_1/README.md index 8268bf62..ab4dea0f 100644 --- a/qai_hub_models/models/squeezenet1_1/README.md +++ b/qai_hub_models/models/squeezenet1_1/README.md @@ -5,7 +5,7 @@ SqueezeNet is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of SqueezeNet-1_1 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/squeezenet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/squeezenet1_1). diff --git a/qai_hub_models/models/squeezenet1_1_quantized/README.md b/qai_hub_models/models/squeezenet1_1_quantized/README.md index 5b1aabf5..624b8dde 100644 --- a/qai_hub_models/models/squeezenet1_1_quantized/README.md +++ b/qai_hub_models/models/squeezenet1_1_quantized/README.md @@ -5,7 +5,7 @@ SqueezeNet is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of SqueezeNet-1_1Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/squeezenet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/squeezenet1_1_quantized). diff --git a/qai_hub_models/models/stable_diffusion_v1_5_quantized/README.md b/qai_hub_models/models/stable_diffusion_v1_5_quantized/README.md index eae8ac60..60360247 100644 --- a/qai_hub_models/models/stable_diffusion_v1_5_quantized/README.md +++ b/qai_hub_models/models/stable_diffusion_v1_5_quantized/README.md @@ -5,7 +5,7 @@ Generates high resolution images from text prompts using a latent diffusion model. This model uses CLIP ViT-L/14 as text encoder, U-Net based latent denoising, and VAE based decoder to generate the final image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Stable-Diffusion-v1.5 found [here](https://github.com/CompVis/stable-diffusion/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. 
More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/stable_diffusion_v1_5_quantized). diff --git a/qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md b/qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md index 295e96d6..13d05ceb 100644 --- a/qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md +++ b/qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md @@ -5,7 +5,7 @@ Generates high resolution images from text prompts using a latent diffusion model. This model uses CLIP ViT-L/14 as text encoder, U-Net based latent denoising, and VAE based decoder to generate the final image. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Stable-Diffusion-v2.1 found [here](https://github.com/CompVis/stable-diffusion/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/stable_diffusion_v2_1_quantized). diff --git a/qai_hub_models/models/swin_base/README.md b/qai_hub_models/models/swin_base/README.md index d29d3bd1..b5dccd3d 100644 --- a/qai_hub_models/models/swin_base/README.md +++ b/qai_hub_models/models/swin_base/README.md @@ -5,7 +5,7 @@ SwinBase is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Swin-Base found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/swin_base). diff --git a/qai_hub_models/models/swin_small/README.md b/qai_hub_models/models/swin_small/README.md index a93b950c..a5a901c2 100644 --- a/qai_hub_models/models/swin_small/README.md +++ b/qai_hub_models/models/swin_small/README.md @@ -5,7 +5,7 @@ SwinSmall is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Swin-Small found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/swin_small). diff --git a/qai_hub_models/models/swin_tiny/README.md b/qai_hub_models/models/swin_tiny/README.md index d7387050..37d322e8 100644 --- a/qai_hub_models/models/swin_tiny/README.md +++ b/qai_hub_models/models/swin_tiny/README.md @@ -5,7 +5,7 @@ SwinTiny is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. 
-{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Swin-Tiny found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/swin_tiny). diff --git a/qai_hub_models/models/trocr/README.md b/qai_hub_models/models/trocr/README.md index 2d941f74..ece303c3 100644 --- a/qai_hub_models/models/trocr/README.md +++ b/qai_hub_models/models/trocr/README.md @@ -5,7 +5,7 @@ End-to-end text recognition approach with pre-trained image transformer and text transformer models for both image understanding and wordpiece-level text generation. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of TrOCR found [here](https://huggingface.co/microsoft/trocr-small-stage1). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/trocr). diff --git a/qai_hub_models/models/unet_segmentation/README.md b/qai_hub_models/models/unet_segmentation/README.md index 20cd52b1..0db24dca 100644 --- a/qai_hub_models/models/unet_segmentation/README.md +++ b/qai_hub_models/models/unet_segmentation/README.md @@ -5,7 +5,7 @@ UNet is a machine learning model that produces a segmentation mask for an image. The most basic use case will label each pixel in the image as being in the foreground or the background. More advanced usage will assign a class label to each pixel. This version of the model was trained on the data from Kaggle's Carvana Image Masking Challenge (see https://www.kaggle.com/c/carvana-image-masking-challenge) and is used for vehicle segmentation. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Unet-Segmentation found [here](https://github.com/milesial/Pytorch-UNet). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/unet_segmentation). diff --git a/qai_hub_models/models/vit/README.md b/qai_hub_models/models/vit/README.md index 09a84118..469bad06 100644 --- a/qai_hub_models/models/vit/README.md +++ b/qai_hub_models/models/vit/README.md @@ -5,7 +5,7 @@ VIT is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of VIT found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/vit). 
diff --git a/qai_hub_models/models/vit_quantized/README.md b/qai_hub_models/models/vit_quantized/README.md index 90fcc7f6..ec8937b7 100644 --- a/qai_hub_models/models/vit_quantized/README.md +++ b/qai_hub_models/models/vit_quantized/README.md @@ -5,7 +5,7 @@ VIT is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of VITQuantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/vit_quantized). diff --git a/qai_hub_models/models/whisper_base_en/README.md b/qai_hub_models/models/whisper_base_en/README.md index 23bd5f06..2674652e 100644 --- a/qai_hub_models/models/whisper_base_en/README.md +++ b/qai_hub_models/models/whisper_base_en/README.md @@ -5,7 +5,7 @@ OpenAI’s Whisper ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. Specifically, it excels in long-form transcription, capable of accurately transcribing audio clips up to 30 seconds long. Time to the first token is the encoder's latency, while time to each additional token is decoder's latency, where we assume a mean decoded length specified below. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Whisper-Base-En found [here](https://github.com/openai/whisper/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/whisper_base_en). diff --git a/qai_hub_models/models/whisper_small_en/README.md b/qai_hub_models/models/whisper_small_en/README.md index 9224b34a..e0b6f4a0 100644 --- a/qai_hub_models/models/whisper_small_en/README.md +++ b/qai_hub_models/models/whisper_small_en/README.md @@ -5,7 +5,7 @@ OpenAI’s Whisper ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. Specifically, it excels in long-form transcription, capable of accurately transcribing audio clips up to 30 seconds long. Time to the first token is the encoder's latency, while time to each additional token is decoder's latency, where we assume a mean decoded length specified below. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Whisper-Small-En found [here](https://github.com/openai/whisper/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/whisper_small_en). 
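The Whisper entries above frame latency as the encoder's time to first token plus the decoder's per-token latency over an assumed mean decoded length. The sketch below turns that into an estimated transcription time and real-time factor for a 30-second clip; every number in it is a hypothetical placeholder, not a measured figure from these READMEs.

```python
# Estimated transcription latency for a Whisper-style ASR model:
# one encoder pass, then one decoder step per decoded token.
# All numbers are illustrative placeholders, not measured AI Hub results.

def transcription_time_s(encoder_latency_s: float,
                         decoder_latency_s: float,
                         decoded_tokens: int) -> float:
    """Encoder latency plus one decoder step per decoded token."""
    return encoder_latency_s + decoder_latency_s * decoded_tokens

if __name__ == "__main__":
    audio_s = 30.0  # Whisper transcribes clips up to 30 seconds long
    total = transcription_time_s(encoder_latency_s=0.3,   # assumed
                                 decoder_latency_s=0.01,  # assumed
                                 decoded_tokens=80)       # assumed mean decoded length
    print(f"~{total:.2f} s to transcribe {audio_s:.0f} s of audio")
    print(f"Real-time factor: {total / audio_s:.3f}")
```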
diff --git a/qai_hub_models/models/whisper_tiny_en/README.md b/qai_hub_models/models/whisper_tiny_en/README.md index 738b2ba3..260bf754 100644 --- a/qai_hub_models/models/whisper_tiny_en/README.md +++ b/qai_hub_models/models/whisper_tiny_en/README.md @@ -5,7 +5,7 @@ OpenAI’s Whisper ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. Specifically, it excels in long-form transcription, capable of accurately transcribing audio clips up to 30 seconds long. Time to the first token is the encoder's latency, while time to each additional token is decoder's latency, where we assume a mean decoded length specified below. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of Whisper-Tiny-En found [here](https://github.com/openai/whisper/tree/main). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/whisper_tiny_en). diff --git a/qai_hub_models/models/wideresnet50/README.md b/qai_hub_models/models/wideresnet50/README.md index 211980c5..aae9101d 100644 --- a/qai_hub_models/models/wideresnet50/README.md +++ b/qai_hub_models/models/wideresnet50/README.md @@ -5,7 +5,7 @@ WideResNet50 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of WideResNet50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/wideresnet50). diff --git a/qai_hub_models/models/wideresnet50_quantized/README.md b/qai_hub_models/models/wideresnet50_quantized/README.md index 6950b4fb..fe99b978 100644 --- a/qai_hub_models/models/wideresnet50_quantized/README.md +++ b/qai_hub_models/models/wideresnet50_quantized/README.md @@ -5,7 +5,7 @@ WideResNet50 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. -{source_repo_details}This repository contains scripts for optimized on-device +This is based on the implementation of WideResNet50-Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py). This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/wideresnet50_quantized). diff --git a/qai_hub_models/models/xlsr/README.md b/qai_hub_models/models/xlsr/README.md index 84535328..94cd2af8 100644 --- a/qai_hub_models/models/xlsr/README.md +++ b/qai_hub_models/models/xlsr/README.md @@ -5,7 +5,7 @@ XLSR is designed for lightweight real-time upscaling of images. 
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of XLSR found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/xlsr). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/xlsr).
diff --git a/qai_hub_models/models/xlsr_quantized/README.md b/qai_hub_models/models/xlsr_quantized/README.md
index c5624b58..3e62031f 100644
--- a/qai_hub_models/models/xlsr_quantized/README.md
+++ b/qai_hub_models/models/xlsr_quantized/README.md
@@ -5,7 +5,7 @@
XLSR is designed for lightweight real-time upscaling of images.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of XLSR-Quantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/xlsr). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/xlsr_quantized).
diff --git a/qai_hub_models/models/yolonas/README.md b/qai_hub_models/models/yolonas/README.md
index 7b19fc1e..224426d0 100644
--- a/qai_hub_models/models/yolonas/README.md
+++ b/qai_hub_models/models/yolonas/README.md
@@ -5,7 +5,7 @@
YoloNAS is a machine learning model that predicts bounding boxes and classes of objects in an image.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of Yolo-NAS found [here](https://github.com/Deci-AI/super-gradients). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolonas).
diff --git a/qai_hub_models/models/yolonas_quantized/README.md b/qai_hub_models/models/yolonas_quantized/README.md
index d3862c42..9b1c51f7 100644
--- a/qai_hub_models/models/yolonas_quantized/README.md
+++ b/qai_hub_models/models/yolonas_quantized/README.md
@@ -5,7 +5,7 @@
YoloNAS is a machine learning model that predicts bounding boxes and classes of objects in an image. This model is post-training quantized to int8 using samples from the COCO dataset.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of Yolo-NAS-Quantized found [here](https://github.com/Deci-AI/super-gradients). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolonas_quantized).
diff --git a/qai_hub_models/models/yolov11_det/README.md b/qai_hub_models/models/yolov11_det/README.md
index fa32ff8f..7929df22 100644
--- a/qai_hub_models/models/yolov11_det/README.md
+++ b/qai_hub_models/models/yolov11_det/README.md
@@ -5,7 +5,7 @@
Ultralytics YOLOv11 is a machine learning model that predicts bounding boxes and classes of objects in an image.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of YOLOv11-Detection found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/detect). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolov11_det).
diff --git a/qai_hub_models/models/yolov6/README.md b/qai_hub_models/models/yolov6/README.md
index d5da57f2..93a01796 100644
--- a/qai_hub_models/models/yolov6/README.md
+++ b/qai_hub_models/models/yolov6/README.md
@@ -5,7 +5,7 @@
YoloV6 is a machine learning model that predicts bounding boxes and classes of objects in an image.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of Yolo-v6 found [here](https://github.com/meituan/YOLOv6/). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolov6).
diff --git a/qai_hub_models/models/yolov7/README.md b/qai_hub_models/models/yolov7/README.md
index 1c64ca33..ce694cfa 100644
--- a/qai_hub_models/models/yolov7/README.md
+++ b/qai_hub_models/models/yolov7/README.md
@@ -5,7 +5,7 @@
YoloV7 is a machine learning model that predicts bounding boxes and classes of objects in an image.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of Yolo-v7 found [here](https://github.com/WongKinYiu/yolov7/). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolov7).
diff --git a/qai_hub_models/models/yolov7_quantized/README.md b/qai_hub_models/models/yolov7_quantized/README.md
index a75a4667..37eb10fc 100644
--- a/qai_hub_models/models/yolov7_quantized/README.md
+++ b/qai_hub_models/models/yolov7_quantized/README.md
@@ -5,7 +5,7 @@
YoloV7 is a machine learning model that predicts bounding boxes and classes of objects in an image. This model is post-training quantized to int8 using samples from the COCO dataset.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of Yolo-v7-Quantized found [here](https://github.com/WongKinYiu/yolov7/). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolov7_quantized).
diff --git a/qai_hub_models/models/yolov8_det/README.md b/qai_hub_models/models/yolov8_det/README.md
index 48707437..b1f4a293 100644
--- a/qai_hub_models/models/yolov8_det/README.md
+++ b/qai_hub_models/models/yolov8_det/README.md
@@ -5,7 +5,7 @@
Ultralytics YOLOv8 is a machine learning model that predicts bounding boxes and classes of objects in an image.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of YOLOv8-Detection found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/detect). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolov8_det).
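The YOLO detection entries above are based on the upstream Ultralytics implementation, which returns per-image boxes, class ids, and confidences. As a rough illustration only, here is a minimal sketch of that reference detector; it assumes `pip install ultralytics` and a local `image.jpg`, and it is not one of this repository's export scripts.

```python
# Minimal sketch of the upstream Ultralytics YOLOv8 detector (assumes
# `pip install ultralytics` and a local `image.jpg`; not a script from this repo).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")       # nano variant; weights download on first use
results = model("image.jpg")     # one Results object per input image
for box in results[0].boxes:
    # class id, confidence score, and corner coordinates for each detection
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```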
diff --git a/qai_hub_models/models/yolov8_det_quantized/README.md b/qai_hub_models/models/yolov8_det_quantized/README.md
index 8066e76a..0810b3b1 100644
--- a/qai_hub_models/models/yolov8_det_quantized/README.md
+++ b/qai_hub_models/models/yolov8_det_quantized/README.md
@@ -5,7 +5,7 @@
Ultralytics YOLOv8 is a machine learning model that predicts bounding boxes and classes of objects in an image. This model is post-training quantized to int8 using samples from the COCO dataset.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of YOLOv8-Detection-Quantized found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/detect). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolov8_det_quantized).
diff --git a/qai_hub_models/models/yolov8_seg/README.md b/qai_hub_models/models/yolov8_seg/README.md
index 7ea43e98..9e2c8b75 100644
--- a/qai_hub_models/models/yolov8_seg/README.md
+++ b/qai_hub_models/models/yolov8_seg/README.md
@@ -5,7 +5,7 @@
Ultralytics YOLOv8 is a machine learning model that predicts bounding boxes, segmentation masks and classes of objects in an image.
-{source_repo_details}This repository contains scripts for optimized on-device
+This is based on the implementation of YOLOv8-Segmentation found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment). This repository contains scripts for optimized on-device
export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolov8_seg).
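For the yolov8_seg entry, the upstream segmentation variant adds per-instance masks on top of the detection outputs. A minimal sketch under the same assumptions as above (`pip install ultralytics`, local `image.jpg`; not a script from this repository):

```python
# Minimal sketch of the upstream Ultralytics YOLOv8 segmentation variant
# (assumes `pip install ultralytics` and a local `image.jpg`; not a repo script).
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")   # segmentation variant of the nano model
results = model("image.jpg")
boxes = results[0].boxes         # same box/class/confidence outputs as detection
masks = results[0].masks         # per-instance masks, or None if nothing was detected
print(len(boxes), None if masks is None else tuple(masks.data.shape))
```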