diff --git a/qai_hub_models/_version.py b/qai_hub_models/_version.py
index 63b309e7..cc0bf8d3 100644
--- a/qai_hub_models/_version.py
+++ b/qai_hub_models/_version.py
@@ -2,4 +2,4 @@
 # Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
 # SPDX-License-Identifier: BSD-3-Clause
 # ---------------------------------------------------------------------
-__version__ = "0.11.3"
+__version__ = "0.11.4"
diff --git a/qai_hub_models/models/aotgan/README.md b/qai_hub_models/models/aotgan/README.md
index 89ec5bcb..709849c3 100644
--- a/qai_hub_models/models/aotgan/README.md
+++ b/qai_hub_models/models/aotgan/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of AOT-GAN found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/aotgan).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -50,7 +49,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/researchmm/AOT-GAN-for-Inpainting)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/baichuan_7b_quantized/README.md b/qai_hub_models/models/baichuan_7b_quantized/README.md
index 7a79da7a..6e9ee724 100644
--- a/qai_hub_models/models/baichuan_7b_quantized/README.md
+++ b/qai_hub_models/models/baichuan_7b_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Baichuan-7B found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/baichuan_7b_quantized).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -27,7 +26,7 @@ a hosted Qualcomm® device.
 * [Source Model Implementation](https://github.com/baichuan-inc/Baichuan-7B/)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/controlnet_quantized/README.md b/qai_hub_models/models/controlnet_quantized/README.md
index 6e992553..d37819e2 100644
--- a/qai_hub_models/models/controlnet_quantized/README.md
+++ b/qai_hub_models/models/controlnet_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of ControlNet found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/controlnet_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/lllyasviel/ControlNet)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/convnext_tiny/README.md b/qai_hub_models/models/convnext_tiny/README.md
index 0c961ecb..67aed5c6 100644
--- a/qai_hub_models/models/convnext_tiny/README.md
+++ b/qai_hub_models/models/convnext_tiny/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of ConvNext-Tiny found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/convnext_tiny).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -50,7 +49,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md b/qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md
index a62f06f9..0f5910ed 100644
--- a/qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md
+++ b/qai_hub_models/models/convnext_tiny_w8a16_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of ConvNext-Tiny-w8a16-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/convnext_tiny_w8a16_quantized).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md b/qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md
index d913e14e..2cc33cf1 100644
--- a/qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md
+++ b/qai_hub_models/models/convnext_tiny_w8a8_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of ConvNext-Tiny-w8a8-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/convnext_tiny_w8a8_quantized).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/ddrnet23_slim/README.md b/qai_hub_models/models/ddrnet23_slim/README.md
index a72d09f7..b076dea2 100644
--- a/qai_hub_models/models/ddrnet23_slim/README.md
+++ b/qai_hub_models/models/ddrnet23_slim/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of DDRNet23-Slim found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/ddrnet23_slim).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -50,7 +49,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/chenjun2hao/DDRNet.pytorch)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/deeplabv3_plus_mobilenet/README.md b/qai_hub_models/models/deeplabv3_plus_mobilenet/README.md
index 96ebfd8f..a8ce7dc5 100644
--- a/qai_hub_models/models/deeplabv3_plus_mobilenet/README.md
+++ b/qai_hub_models/models/deeplabv3_plus_mobilenet/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of DeepLabV3-Plus-MobileNet found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -50,7 +49,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/jfzhang95/pytorch-deeplab-xception)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/deeplabv3_plus_mobilenet_quantized/README.md b/qai_hub_models/models/deeplabv3_plus_mobilenet_quantized/README.md
index 79770236..c371f6c3 100644
--- a/qai_hub_models/models/deeplabv3_plus_mobilenet_quantized/README.md
+++ b/qai_hub_models/models/deeplabv3_plus_mobilenet_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of DeepLabV3-Plus-MobileNet-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet_quantized).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/jfzhang95/pytorch-deeplab-xception)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/deeplabv3_resnet50/README.md b/qai_hub_models/models/deeplabv3_resnet50/README.md
index c7cf9fab..2149f272 100644
--- a/qai_hub_models/models/deeplabv3_resnet50/README.md
+++ b/qai_hub_models/models/deeplabv3_resnet50/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of DeepLabV3-ResNet50 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/deeplabv3_resnet50).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -50,7 +49,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/deeplabv3.py)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/densenet121/README.md b/qai_hub_models/models/densenet121/README.md
index a4221d1c..8da6afd9 100644
--- a/qai_hub_models/models/densenet121/README.md
+++ b/qai_hub_models/models/densenet121/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of DenseNet-121 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/densenet121).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -50,7 +49,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/detr_resnet101/README.md b/qai_hub_models/models/detr_resnet101/README.md
index 662a86c6..7b1057a5 100644
--- a/qai_hub_models/models/detr_resnet101/README.md
+++ b/qai_hub_models/models/detr_resnet101/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of DETR-ResNet101 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/detr_resnet101).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/facebookresearch/detr)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/detr_resnet101_dc5/README.md b/qai_hub_models/models/detr_resnet101_dc5/README.md
index e8c9e777..8a40c445 100644
--- a/qai_hub_models/models/detr_resnet101_dc5/README.md
+++ b/qai_hub_models/models/detr_resnet101_dc5/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of DETR-ResNet101-DC5 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/detr_resnet101_dc5).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/facebookresearch/detr)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/detr_resnet50/README.md b/qai_hub_models/models/detr_resnet50/README.md
index df378aea..db07b7cf 100644
--- a/qai_hub_models/models/detr_resnet50/README.md
+++ b/qai_hub_models/models/detr_resnet50/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of DETR-ResNet50 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/detr_resnet50).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/facebookresearch/detr)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/detr_resnet50_dc5/README.md b/qai_hub_models/models/detr_resnet50_dc5/README.md
index 0e3471c6..e0be8280 100644
--- a/qai_hub_models/models/detr_resnet50_dc5/README.md
+++ b/qai_hub_models/models/detr_resnet50_dc5/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of DETR-ResNet50-DC5 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/detr_resnet50_dc5).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/facebookresearch/detr)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/efficientnet_b0/README.md b/qai_hub_models/models/efficientnet_b0/README.md
index 56096dcb..b2ffa91b 100644
--- a/qai_hub_models/models/efficientnet_b0/README.md
+++ b/qai_hub_models/models/efficientnet_b0/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of EfficientNet-B0 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/efficientnet_b0).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -50,7 +49,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/efficientnet.py)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/esrgan/README.md b/qai_hub_models/models/esrgan/README.md
index 7b22d043..524a2fa3 100644
--- a/qai_hub_models/models/esrgan/README.md
+++ b/qai_hub_models/models/esrgan/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of ESRGAN found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/esrgan).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -50,7 +49,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/xinntao/ESRGAN/)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/fastsam_s/README.md b/qai_hub_models/models/fastsam_s/README.md
index 516dc401..a2e3760d 100644
--- a/qai_hub_models/models/fastsam_s/README.md
+++ b/qai_hub_models/models/fastsam_s/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of FastSam-S found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/fastsam_s).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/CASIA-IVA-Lab/FastSAM)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/fastsam_x/README.md b/qai_hub_models/models/fastsam_x/README.md
index 0c34311d..b6890348 100644
--- a/qai_hub_models/models/fastsam_x/README.md
+++ b/qai_hub_models/models/fastsam_x/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of FastSam-X found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/fastsam_x).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/CASIA-IVA-Lab/FastSAM)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/fcn_resnet50/README.md b/qai_hub_models/models/fcn_resnet50/README.md
index dba323b0..5f781abd 100644
--- a/qai_hub_models/models/fcn_resnet50/README.md
+++ b/qai_hub_models/models/fcn_resnet50/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of FCN-ResNet50 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/fcn_resnet50).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -50,7 +49,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/fcn.py)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/fcn_resnet50_quantized/README.md b/qai_hub_models/models/fcn_resnet50_quantized/README.md
index 3ed8a452..f2a318f8 100644
--- a/qai_hub_models/models/fcn_resnet50_quantized/README.md
+++ b/qai_hub_models/models/fcn_resnet50_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of FCN-ResNet50-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/fcn_resnet50_quantized).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/segmentation/fcn.py)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/ffnet_122ns_lowres/README.md b/qai_hub_models/models/ffnet_122ns_lowres/README.md
index f6d57fa7..4bf440e2 100644
--- a/qai_hub_models/models/ffnet_122ns_lowres/README.md
+++ b/qai_hub_models/models/ffnet_122ns_lowres/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of FFNet-122NS-LowRes found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/ffnet_122ns_lowres).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.

@@ -55,7 +54,7 @@ script requires access to
 Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/Qualcomm-AI-research/FFNet)

 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/ffnet_40s/README.md b/qai_hub_models/models/ffnet_40s/README.md
index 0bc90d39..f9ee034a 100644
--- a/qai_hub_models/models/ffnet_40s/README.md
+++ b/qai_hub_models/models/ffnet_40s/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of FFNet-40S found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/ffnet_40s).

-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/Qualcomm-AI-research/FFNet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/ffnet_40s_quantized/README.md b/qai_hub_models/models/ffnet_40s_quantized/README.md index b730ceb5..5237d76f 100644 --- a/qai_hub_models/models/ffnet_40s_quantized/README.md +++ b/qai_hub_models/models/ffnet_40s_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of FFNet-40S-Quantized found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/ffnet_40s_quantized). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/Qualcomm-AI-research/FFNet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. 
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/ffnet_54s/README.md b/qai_hub_models/models/ffnet_54s/README.md index 4122507a..90096232 100644 --- a/qai_hub_models/models/ffnet_54s/README.md +++ b/qai_hub_models/models/ffnet_54s/README.md @@ -10,8 +10,7 @@ This is based on the implementation of FFNet-54S found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/ffnet_54s). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/Qualcomm-AI-research/FFNet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/ffnet_54s_quantized/README.md b/qai_hub_models/models/ffnet_54s_quantized/README.md index 5ab17ab3..1f6912e8 100644 --- a/qai_hub_models/models/ffnet_54s_quantized/README.md +++ b/qai_hub_models/models/ffnet_54s_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of FFNet-54S-Quantized found export suitable to run on Qualcomm® devices. 
More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/ffnet_54s_quantized). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/Qualcomm-AI-research/FFNet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/ffnet_78s/README.md b/qai_hub_models/models/ffnet_78s/README.md index c3d9f2d0..0f2d79dc 100644 --- a/qai_hub_models/models/ffnet_78s/README.md +++ b/qai_hub_models/models/ffnet_78s/README.md @@ -10,8 +10,7 @@ This is based on the implementation of FFNet-78S found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/ffnet_78s). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. 
* [Source Model Implementation](https://github.com/Qualcomm-AI-research/FFNet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/ffnet_78s_lowres/README.md b/qai_hub_models/models/ffnet_78s_lowres/README.md index 306f938a..d39d8172 100644 --- a/qai_hub_models/models/ffnet_78s_lowres/README.md +++ b/qai_hub_models/models/ffnet_78s_lowres/README.md @@ -10,8 +10,7 @@ This is based on the implementation of FFNet-78S-LowRes found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/ffnet_78s_lowres). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/Qualcomm-AI-research/FFNet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). 
diff --git a/qai_hub_models/models/ffnet_78s_quantized/README.md b/qai_hub_models/models/ffnet_78s_quantized/README.md index eaaccda1..f2c25927 100644 --- a/qai_hub_models/models/ffnet_78s_quantized/README.md +++ b/qai_hub_models/models/ffnet_78s_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of FFNet-78S-Quantized found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/ffnet_78s_quantized). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/Qualcomm-AI-research/FFNet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/googlenet/README.md b/qai_hub_models/models/googlenet/README.md index 214ae1f8..bf12b13f 100644 --- a/qai_hub_models/models/googlenet/README.md +++ b/qai_hub_models/models/googlenet/README.md @@ -10,8 +10,7 @@ This is based on the implementation of GoogLeNet found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/googlenet). 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/googlenet.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/googlenet_quantized/README.md b/qai_hub_models/models/googlenet_quantized/README.md index 91e33b0b..49c02f9f 100644 --- a/qai_hub_models/models/googlenet_quantized/README.md +++ b/qai_hub_models/models/googlenet_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of GoogLeNetQuantized found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/googlenet_quantized). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. 
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/googlenet.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/hrnet_pose/README.md b/qai_hub_models/models/hrnet_pose/README.md index d858ca38..2fa47408 100644 --- a/qai_hub_models/models/hrnet_pose/README.md +++ b/qai_hub_models/models/hrnet_pose/README.md @@ -10,8 +10,7 @@ This is based on the implementation of HRNetPose found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/hrnet_pose). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/hrnet_posenet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). 
diff --git a/qai_hub_models/models/hrnet_pose_quantized/README.md b/qai_hub_models/models/hrnet_pose_quantized/README.md index 11dac69c..0e936653 100644 --- a/qai_hub_models/models/hrnet_pose_quantized/README.md +++ b/qai_hub_models/models/hrnet_pose_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of HRNetPoseQuantized found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/hrnet_pose_quantized). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/hrnet_posenet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/huggingface_wavlm_base_plus/README.md b/qai_hub_models/models/huggingface_wavlm_base_plus/README.md index 570e2312..04bf447b 100644 --- a/qai_hub_models/models/huggingface_wavlm_base_plus/README.md +++ b/qai_hub_models/models/huggingface_wavlm_base_plus/README.md @@ -10,8 +10,7 @@ This is based on the implementation of HuggingFace-WavLM-Base-Plus found export suitable to run on Qualcomm® devices. 
More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/huggingface_wavlm_base_plus). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-base-plus/tree/main) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/inception_v3/README.md b/qai_hub_models/models/inception_v3/README.md index 65bf345a..81e30641 100644 --- a/qai_hub_models/models/inception_v3/README.md +++ b/qai_hub_models/models/inception_v3/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Inception-v3 found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/inception_v3). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. 
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/inception_v3_quantized/README.md b/qai_hub_models/models/inception_v3_quantized/README.md index b326f00a..87f38606 100644 --- a/qai_hub_models/models/inception_v3_quantized/README.md +++ b/qai_hub_models/models/inception_v3_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Inception-v3-Quantized found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/inception_v3_quantized). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. 
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/lama_dilated/README.md b/qai_hub_models/models/lama_dilated/README.md index a418710e..685dab86 100644 --- a/qai_hub_models/models/lama_dilated/README.md +++ b/qai_hub_models/models/lama_dilated/README.md @@ -10,8 +10,7 @@ This is based on the implementation of LaMa-Dilated found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/lama_dilated). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/advimman/lama) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/litehrnet/README.md b/qai_hub_models/models/litehrnet/README.md index d44fd6a6..c6f890e7 100644 --- a/qai_hub_models/models/litehrnet/README.md +++ b/qai_hub_models/models/litehrnet/README.md @@ -10,8 +10,7 @@ This is based on the implementation of LiteHRNet found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/litehrnet). 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/HRNet/Lite-HRNet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/llama_v2_7b_chat_quantized/README.md b/qai_hub_models/models/llama_v2_7b_chat_quantized/README.md index d7443fc1..52871142 100644 --- a/qai_hub_models/models/llama_v2_7b_chat_quantized/README.md +++ b/qai_hub_models/models/llama_v2_7b_chat_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Llama-v2-7B-Chat found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/llama_v2_7b_chat_quantized). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. ## Deploying Llama 2 on-device @@ -109,7 +108,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. 
* [Source Model Implementation](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/llama_v3_8b_chat_quantized/README.md b/qai_hub_models/models/llama_v3_8b_chat_quantized/README.md index 12f2012c..27678cb2 100644 --- a/qai_hub_models/models/llama_v3_8b_chat_quantized/README.md +++ b/qai_hub_models/models/llama_v3_8b_chat_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Llama-v3-8B-Chat found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. ## Deploying Llama 3 on-device @@ -98,7 +97,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/meta-llama/llama3/tree/main) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. 
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/mediapipe_face/README.md b/qai_hub_models/models/mediapipe_face/README.md index a565c33d..1aeb2a40 100644 --- a/qai_hub_models/models/mediapipe_face/README.md +++ b/qai_hub_models/models/mediapipe_face/README.md @@ -10,8 +10,7 @@ This is based on the implementation of MediaPipe-Face-Detection found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mediapipe_face). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/zmurez/MediaPipePyTorch/) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/mediapipe_hand/README.md b/qai_hub_models/models/mediapipe_hand/README.md index 0e49e035..7c170f6a 100644 --- a/qai_hub_models/models/mediapipe_hand/README.md +++ b/qai_hub_models/models/mediapipe_hand/README.md @@ -10,8 +10,7 @@ This is based on the implementation of MediaPipe-Hand-Detection found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mediapipe_hand). 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/zmurez/MediaPipePyTorch/) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/mediapipe_pose/README.md b/qai_hub_models/models/mediapipe_pose/README.md index a63082b8..4df97c19 100644 --- a/qai_hub_models/models/mediapipe_pose/README.md +++ b/qai_hub_models/models/mediapipe_pose/README.md @@ -10,8 +10,7 @@ This is based on the implementation of MediaPipe-Pose-Estimation found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mediapipe_pose). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. 
* [Source Model Implementation](https://github.com/zmurez/MediaPipePyTorch/) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/mediapipe_selfie/README.md b/qai_hub_models/models/mediapipe_selfie/README.md index ec08249a..49115f4e 100644 --- a/qai_hub_models/models/mediapipe_selfie/README.md +++ b/qai_hub_models/models/mediapipe_selfie/README.md @@ -10,8 +10,7 @@ This is based on the implementation of MediaPipe-Selfie-Segmentation found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/mediapipe_selfie). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/google/mediapipe/tree/master/mediapipe/modules/selfie_segmentation) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). 
diff --git a/qai_hub_models/models/midas/README.md b/qai_hub_models/models/midas/README.md
index 8eed4994..d8e6479e 100644
--- a/qai_hub_models/models/midas/README.md
+++ b/qai_hub_models/models/midas/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Midas-V2 found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/midas).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/isl-org/MiDaS)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/midas_quantized/README.md b/qai_hub_models/models/midas_quantized/README.md
index 56c96394..c2a4db5a 100644
--- a/qai_hub_models/models/midas_quantized/README.md
+++ b/qai_hub_models/models/midas_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Midas-V2-Quantized found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/midas_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/isl-org/MiDaS)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/mnasnet05/README.md b/qai_hub_models/models/mnasnet05/README.md
index ab0d56a1..6f322636 100644
--- a/qai_hub_models/models/mnasnet05/README.md
+++ b/qai_hub_models/models/mnasnet05/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of MNASNet05 found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/mnasnet05).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mnasnet.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/mobilenet_v2/README.md b/qai_hub_models/models/mobilenet_v2/README.md
index 7426d634..bf6d9dca 100644
--- a/qai_hub_models/models/mobilenet_v2/README.md
+++ b/qai_hub_models/models/mobilenet_v2/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of MobileNet-v2 found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/mobilenet_v2).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/tonylins/pytorch-mobilenet-v2/tree/master)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/mobilenet_v2_quantized/README.md b/qai_hub_models/models/mobilenet_v2_quantized/README.md
index c2ca082f..9a8a7c06 100644
--- a/qai_hub_models/models/mobilenet_v2_quantized/README.md
+++ b/qai_hub_models/models/mobilenet_v2_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of MobileNet-v2-Quantized found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/mobilenet_v2_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/mobilenetv2)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/mobilenet_v3_large/README.md b/qai_hub_models/models/mobilenet_v3_large/README.md
index 3084f4fb..cbc69327 100644
--- a/qai_hub_models/models/mobilenet_v3_large/README.md
+++ b/qai_hub_models/models/mobilenet_v3_large/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of MobileNet-v3-Large found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/mobilenet_v3_large).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/mobilenet_v3_large_quantized/README.md b/qai_hub_models/models/mobilenet_v3_large_quantized/README.md
index 1feab19d..f1d80ca8 100644
--- a/qai_hub_models/models/mobilenet_v3_large_quantized/README.md
+++ b/qai_hub_models/models/mobilenet_v3_large_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of MobileNet-v3-Large-Quantized found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/mobilenet_v3_large_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/mobilenet_v3_small/README.md b/qai_hub_models/models/mobilenet_v3_small/README.md
index bea9dec8..acbc1178 100644
--- a/qai_hub_models/models/mobilenet_v3_small/README.md
+++ b/qai_hub_models/models/mobilenet_v3_small/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of MobileNet-v3-Small found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/mobilenet_v3_small).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/openai_clip/README.md b/qai_hub_models/models/openai_clip/README.md
index 0455ec79..6bab600a 100644
--- a/qai_hub_models/models/openai_clip/README.md
+++ b/qai_hub_models/models/openai_clip/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of OpenAI-Clip found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/openai_clip).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/openai/CLIP/)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/openpose/README.md b/qai_hub_models/models/openpose/README.md
index 1a2f38c8..1585e5e7 100644
--- a/qai_hub_models/models/openpose/README.md
+++ b/qai_hub_models/models/openpose/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of OpenPose found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/openpose).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/CMU-Perceptual-Computing-Lab/openpose)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/posenet_mobilenet/README.md b/qai_hub_models/models/posenet_mobilenet/README.md
index 8f4ea678..90a83c3b 100644
--- a/qai_hub_models/models/posenet_mobilenet/README.md
+++ b/qai_hub_models/models/posenet_mobilenet/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Posenet-Mobilenet found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/posenet_mobilenet).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/rwightman/posenet-pytorch)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/posenet_mobilenet_quantized/README.md b/qai_hub_models/models/posenet_mobilenet_quantized/README.md
index 00394618..f039d3c5 100644
--- a/qai_hub_models/models/posenet_mobilenet_quantized/README.md
+++ b/qai_hub_models/models/posenet_mobilenet_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Posenet-Mobilenet-Quantized found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/posenet_mobilenet_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/rwightman/posenet-pytorch)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/quicksrnetlarge/README.md b/qai_hub_models/models/quicksrnetlarge/README.md
index 528f3c94..58607804 100644
--- a/qai_hub_models/models/quicksrnetlarge/README.md
+++ b/qai_hub_models/models/quicksrnetlarge/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of QuickSRNetLarge found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/quicksrnetlarge).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/quicksrnetlarge_quantized/README.md b/qai_hub_models/models/quicksrnetlarge_quantized/README.md
index 025a873c..35690b40 100644
--- a/qai_hub_models/models/quicksrnetlarge_quantized/README.md
+++ b/qai_hub_models/models/quicksrnetlarge_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of QuickSRNetLarge-Quantized found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/quicksrnetlarge_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/quicksrnetmedium/README.md b/qai_hub_models/models/quicksrnetmedium/README.md
index 0e95ef93..a3adabe4 100644
--- a/qai_hub_models/models/quicksrnetmedium/README.md
+++ b/qai_hub_models/models/quicksrnetmedium/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of QuickSRNetMedium found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/quicksrnetmedium).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/quicksrnetmedium_quantized/README.md b/qai_hub_models/models/quicksrnetmedium_quantized/README.md
index 2ffc7d9f..ed5b04f5 100644
--- a/qai_hub_models/models/quicksrnetmedium_quantized/README.md
+++ b/qai_hub_models/models/quicksrnetmedium_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of QuickSRNetMedium-Quantized found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/quicksrnetmedium_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/quicksrnetsmall/README.md b/qai_hub_models/models/quicksrnetsmall/README.md
index ada2e6c6..8b3c02c1 100644
--- a/qai_hub_models/models/quicksrnetsmall/README.md
+++ b/qai_hub_models/models/quicksrnetsmall/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of QuickSRNetSmall found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/quicksrnetsmall).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/quicksrnetsmall_quantized/README.md b/qai_hub_models/models/quicksrnetsmall_quantized/README.md
index 8573495f..9eb783fb 100644
--- a/qai_hub_models/models/quicksrnetsmall_quantized/README.md
+++ b/qai_hub_models/models/quicksrnetsmall_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of QuickSRNetSmall-Quantized found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/quicksrnetsmall_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/quicksrnet)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/real_esrgan_general_x4v3/README.md b/qai_hub_models/models/real_esrgan_general_x4v3/README.md
index 11cbbee5..87d2214e 100644
--- a/qai_hub_models/models/real_esrgan_general_x4v3/README.md
+++ b/qai_hub_models/models/real_esrgan_general_x4v3/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Real-ESRGAN-General-x4v3 found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/real_esrgan_general_x4v3).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/xinntao/Real-ESRGAN/tree/master)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/real_esrgan_x4plus/README.md b/qai_hub_models/models/real_esrgan_x4plus/README.md
index 3c6db231..8dd6beec 100644
--- a/qai_hub_models/models/real_esrgan_x4plus/README.md
+++ b/qai_hub_models/models/real_esrgan_x4plus/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Real-ESRGAN-x4plus found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/real_esrgan_x4plus).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/xinntao/Real-ESRGAN)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/regnet/README.md b/qai_hub_models/models/regnet/README.md
index 3caff192..9186826f 100644
--- a/qai_hub_models/models/regnet/README.md
+++ b/qai_hub_models/models/regnet/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of RegNet found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/regnet).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/regnet.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/regnet_quantized/README.md b/qai_hub_models/models/regnet_quantized/README.md
index c20388d0..85fb43e9 100644
--- a/qai_hub_models/models/regnet_quantized/README.md
+++ b/qai_hub_models/models/regnet_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of RegNetQuantized found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/regnet_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/regnet.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/resnet101/README.md b/qai_hub_models/models/resnet101/README.md
index 3557c576..858e0ecd 100644
--- a/qai_hub_models/models/resnet101/README.md
+++ b/qai_hub_models/models/resnet101/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of ResNet101 found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/resnet101).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/resnet101_quantized/README.md b/qai_hub_models/models/resnet101_quantized/README.md
index 4c46a553..acc1f71d 100644
--- a/qai_hub_models/models/resnet101_quantized/README.md
+++ b/qai_hub_models/models/resnet101_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of ResNet101Quantized found
 export suitable to run on Qualcomm® devices.
 
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/resnet101_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/resnet18/README.md b/qai_hub_models/models/resnet18/README.md index 2b9ced95..c78fd6a9 100644 --- a/qai_hub_models/models/resnet18/README.md +++ b/qai_hub_models/models/resnet18/README.md @@ -10,8 +10,7 @@ This is based on the implementation of ResNet18 found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet18). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/resnet18_quantized/README.md b/qai_hub_models/models/resnet18_quantized/README.md index 266febea..705dd764 100644 --- a/qai_hub_models/models/resnet18_quantized/README.md +++ b/qai_hub_models/models/resnet18_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of ResNet18Quantized found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet18_quantized). 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/resnet50/README.md b/qai_hub_models/models/resnet50/README.md index 4ec67961..96ba5cac 100644 --- a/qai_hub_models/models/resnet50/README.md +++ b/qai_hub_models/models/resnet50/README.md @@ -10,8 +10,7 @@ This is based on the implementation of ResNet50 found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet50). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. 
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/resnet50_quantized/README.md b/qai_hub_models/models/resnet50_quantized/README.md index 1e962511..e6e0a463 100644 --- a/qai_hub_models/models/resnet50_quantized/README.md +++ b/qai_hub_models/models/resnet50_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of ResNet50Quantized found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnet50_quantized). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. 
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/resnext101/README.md b/qai_hub_models/models/resnext101/README.md index cf629f9a..1776cdf7 100644 --- a/qai_hub_models/models/resnext101/README.md +++ b/qai_hub_models/models/resnext101/README.md @@ -10,8 +10,7 @@ This is based on the implementation of ResNeXt101 found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnext101). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/resnext101_quantized/README.md b/qai_hub_models/models/resnext101_quantized/README.md index e91fdd6a..3ddf62df 100644 --- a/qai_hub_models/models/resnext101_quantized/README.md +++ b/qai_hub_models/models/resnext101_quantized/README.md @@ -10,8 +10,7 @@ This is based on the implementation of ResNeXt101Quantized found export suitable to run on Qualcomm® devices. 
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnext101_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/resnext50/README.md b/qai_hub_models/models/resnext50/README.md
index 68e67be5..bb1c8865 100644
--- a/qai_hub_models/models/resnext50/README.md
+++ b/qai_hub_models/models/resnext50/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of ResNeXt50 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnext50).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/resnext50_quantized/README.md b/qai_hub_models/models/resnext50_quantized/README.md
index 3ce0b330..6c69089a 100644
--- a/qai_hub_models/models/resnext50_quantized/README.md
+++ b/qai_hub_models/models/resnext50_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of ResNeXt50Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/resnext50_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/riffusion_quantized/README.md b/qai_hub_models/models/riffusion_quantized/README.md
index 9c5b1a50..fedfc292 100644
--- a/qai_hub_models/models/riffusion_quantized/README.md
+++ b/qai_hub_models/models/riffusion_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Riffusion found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/riffusion_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/CompVis/stable-diffusion/tree/main)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/sam/README.md b/qai_hub_models/models/sam/README.md
index a0ba93db..5ad08396 100644
--- a/qai_hub_models/models/sam/README.md
+++ b/qai_hub_models/models/sam/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Segment-Anything-Model found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/sam).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/facebookresearch/segment-anything)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/sesr_m5/README.md b/qai_hub_models/models/sesr_m5/README.md
index 9cec4f6c..35457e96 100644
--- a/qai_hub_models/models/sesr_m5/README.md
+++ b/qai_hub_models/models/sesr_m5/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of SESR-M5 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/sesr_m5).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/sesr)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/sesr_m5_quantized/README.md b/qai_hub_models/models/sesr_m5_quantized/README.md
index f8346830..18ee0ea3 100644
--- a/qai_hub_models/models/sesr_m5_quantized/README.md
+++ b/qai_hub_models/models/sesr_m5_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of SESR-M5-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/sesr_m5_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/sesr)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/shufflenet_v2/README.md b/qai_hub_models/models/shufflenet_v2/README.md
index 6fcab0d3..14b27b6d 100644
--- a/qai_hub_models/models/shufflenet_v2/README.md
+++ b/qai_hub_models/models/shufflenet_v2/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Shufflenet-v2 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/shufflenet_v2).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/shufflenetv2.py)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/shufflenet_v2_quantized/README.md b/qai_hub_models/models/shufflenet_v2_quantized/README.md
index f2663608..f97d918f 100644
--- a/qai_hub_models/models/shufflenet_v2_quantized/README.md
+++ b/qai_hub_models/models/shufflenet_v2_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Shufflenet-v2Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/shufflenet_v2_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/shufflenetv2.py)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/sinet/README.md b/qai_hub_models/models/sinet/README.md
index 82d1c945..388eec6c 100644
--- a/qai_hub_models/models/sinet/README.md
+++ b/qai_hub_models/models/sinet/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of SINet found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/sinet).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/clovaai/ext_portrait_segmentation)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/squeezenet1_1/README.md b/qai_hub_models/models/squeezenet1_1/README.md
index 48b5f7ed..99b00954 100644
--- a/qai_hub_models/models/squeezenet1_1/README.md
+++ b/qai_hub_models/models/squeezenet1_1/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of SqueezeNet-1_1 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/squeezenet1_1).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/squeezenet.py)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/squeezenet1_1_quantized/README.md b/qai_hub_models/models/squeezenet1_1_quantized/README.md
index e7e60338..daed51c8 100644
--- a/qai_hub_models/models/squeezenet1_1_quantized/README.md
+++ b/qai_hub_models/models/squeezenet1_1_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of SqueezeNet-1_1Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/squeezenet1_1_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/squeezenet.py)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/stable_diffusion_v1_5_quantized/README.md b/qai_hub_models/models/stable_diffusion_v1_5_quantized/README.md
index 286aab8d..c4690f12 100644
--- a/qai_hub_models/models/stable_diffusion_v1_5_quantized/README.md
+++ b/qai_hub_models/models/stable_diffusion_v1_5_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Stable-Diffusion-v1.5 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/stable_diffusion_v1_5_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/CompVis/stable-diffusion/tree/main)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/stable_diffusion_v1_5_quantized/perf.yaml b/qai_hub_models/models/stable_diffusion_v1_5_quantized/perf.yaml
index 44922d59..e0524c31 100644
--- a/qai_hub_models/models/stable_diffusion_v1_5_quantized/perf.yaml
+++ b/qai_hub_models/models/stable_diffusion_v1_5_quantized/perf.yaml
@@ -34,11 +34,11 @@ models:
 - name: TextEncoder_Quantized
   performance_metrics:
   - torchscript_onnx_qnn:
-      inference_time: 7014.0
-      throughput: 142.571998859424
+      inference_time: 7012.0
+      throughput: 142.6126640045636
       estimated_peak_memory_range:
-        min: 28672
-        max: 1314216
+        min: 24576
+        max: 1218672
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -46,7 +46,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 569
-      job_id: jgdx1kwep
+      job_id: jqpy80e0g
       job_status: Passed
     reference_device_info:
       name: Samsung Galaxy S23
@@ -55,13 +55,13 @@
       os_name: Android
       manufacturer: Samsung
       chipset: Snapdragon® 8 Gen 2
-    timestamp: '2024-08-01T13:33:40Z'
+    timestamp: '2024-08-16T13:05:16Z'
   - torchscript_onnx_qnn:
-      inference_time: 4776.0
-      throughput: 209.38023450586266
+      inference_time: 4792.0
+      throughput: 208.6811352253756
       estimated_peak_memory_range:
-        min: 12288
-        max: 8121568
+        min: 40960
+        max: 8404032
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -69,7 +69,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 569
-      job_id: j57yrmzl5
+      job_id: j2p0o7q0p
       job_status: Passed
     reference_device_info:
       name: Samsung Galaxy S24
@@ -78,13 +78,13 @@
      os_name: Android
       manufacturer: Samsung
       chipset: Snapdragon® 8 Gen 3
-    timestamp: '2024-08-01T13:33:40Z'
+    timestamp: '2024-08-16T13:05:17Z'
   - torchscript_onnx_qnn:
-      inference_time: 7566.0
-      throughput: 132.17023526301878
+      inference_time: 7546.0
+      throughput: 132.520540683806
       estimated_peak_memory_range:
-        min: 69632
-        max: 69632
+        min: 53248
+        max: 53248
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -92,7 +92,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 569
-      job_id: jp4lr7qv5
+      job_id: j1p8jv9q5
       job_status: Passed
     reference_device_info:
       name: Snapdragon X Elite CRD
@@ -101,15 +101,15 @@ models:
       os_name: Windows
       manufacturer: Qualcomm
       chipset: Snapdragon® X Elite
-    timestamp: '2024-08-01T13:33:40Z'
+    timestamp: '2024-08-16T13:05:17Z'
 - name: VAEDecoder_Quantized
   performance_metrics:
   - torchscript_onnx_qnn:
-      inference_time: 234265.0
-      throughput: 4.268670095831643
+      inference_time: 234171.0
+      throughput: 4.270383608559557
       estimated_peak_memory_range:
-        min: 319488
-        max: 1780344
+        min: 106496
+        max: 2492648
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -117,7 +117,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 170
-      job_id: jpxkoqv15
+      job_id: jogk6mnv5
       job_status: Passed
     reference_device_info:
       name: Samsung Galaxy S23
@@ -126,13 +126,13 @@
       os_name: Android
       manufacturer: Samsung
       chipset: Snapdragon® 8 Gen 2
-    timestamp: '2024-08-01T13:33:41Z'
+    timestamp: '2024-08-16T13:05:17Z'
   - torchscript_onnx_qnn:
-      inference_time: 175746.0
-      throughput: 5.690029929557429
+      inference_time: 175906.0
+      throughput: 5.684854410878537
       estimated_peak_memory_range:
-        min: 299008
-        max: 8284720
+        min: 319488
+        max: 8319392
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -140,7 +140,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 170
-      job_id: j5mnx7rwp
+      job_id: jn5q4okeg
       job_status: Passed
     reference_device_info:
       name: Samsung Galaxy S24
@@ -149,13 +149,13 @@
       os_name: Android
       manufacturer: Samsung
       chipset: Snapdragon® 8 Gen 3
-    timestamp: '2024-08-01T13:33:42Z'
+    timestamp: '2024-08-16T13:05:18Z'
   - torchscript_onnx_qnn:
-      inference_time: 229311.0
-      throughput: 4.360889795953966
+      inference_time: 229305.0
+      throughput: 4.361003903098493
       estimated_peak_memory_range:
-        min: 49152
-        max: 49152
+        min: 40960
+        max: 40960
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -163,7 +163,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 170
-      job_id: jgn6v42r5
+      job_id: j1glwrz2p
       job_status: Passed
     reference_device_info:
       name: Snapdragon X Elite CRD
@@ -172,15 +172,15 @@ models:
       os_name: Windows
       manufacturer: Qualcomm
       chipset: Snapdragon® X Elite
-    timestamp: '2024-08-01T13:33:42Z'
+    timestamp: '2024-08-16T13:05:18Z'
 - name: UNet_Quantized
   performance_metrics:
   - torchscript_onnx_qnn:
-      inference_time: 116253.0
-      throughput: 8.601928552381445
+      inference_time: 116181.0
+      throughput: 8.607259362546372
       estimated_peak_memory_range:
-        min: 372736
-        max: 1712776
+        min: 503808
+        max: 2393528
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -188,7 +188,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 4421
-      job_id: jprv3rk9g
+      job_id: jw56oljn5
       job_status: Passed
     reference_device_info:
       name: Samsung Galaxy S23
@@ -197,13 +197,13 @@
       os_name: Android
       manufacturer: Samsung
       chipset: Snapdragon® 8 Gen 2
-    timestamp: '2024-08-01T13:33:43Z'
+    timestamp: '2024-08-16T13:05:18Z'
   - torchscript_onnx_qnn:
-      inference_time: 81867.0
-      throughput: 12.214933978281847
+      inference_time: 81718.0
+      throughput: 12.237206001125823
       estimated_peak_memory_range:
-        min: 380928
-        max: 7888160
+        min: 397312
+        max: 7905920
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -211,7 +211,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 4421
-      job_id: jpy13le7p
+      job_id: j1p3o23mp
       job_status: Passed
     reference_device_info:
       name: Samsung Galaxy S24
@@ -220,13 +220,13 @@
       os_name: Android
       manufacturer: Samsung
       chipset: Snapdragon® 8 Gen 3
-    timestamp: '2024-08-01T13:33:43Z'
+    timestamp: '2024-08-16T13:05:19Z'
   - torchscript_onnx_qnn:
-      inference_time: 118942.0
-      throughput: 8.40745909771149
+      inference_time: 118773.0
+      throughput: 8.419421922490802
       estimated_peak_memory_range:
-        min: 1134592
-        max: 1134592
+        min: 159744
+        max: 159744
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -234,7 +234,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 4421
-      job_id: jp0z0wy65
+      job_id: jwgodq015
       job_status: Passed
     reference_device_info:
       name: Snapdragon X Elite CRD
@@ -243,4 +243,4 @@ models:
       os_name: Windows
       manufacturer: Qualcomm
       chipset: Snapdragon® X Elite
-    timestamp: '2024-08-01T13:33:44Z'
+    timestamp: '2024-08-16T13:05:19Z'
diff --git a/qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md b/qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md
index 69677b00..ca77cd00 100644
--- a/qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md
+++ b/qai_hub_models/models/stable_diffusion_v2_1_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Stable-Diffusion-v2.1 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/stable_diffusion_v2_1_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/CompVis/stable-diffusion/tree/main)
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/stable_diffusion_v2_1_quantized/perf.yaml b/qai_hub_models/models/stable_diffusion_v2_1_quantized/perf.yaml
index 00082487..99a7e976 100644
--- a/qai_hub_models/models/stable_diffusion_v2_1_quantized/perf.yaml
+++ b/qai_hub_models/models/stable_diffusion_v2_1_quantized/perf.yaml
@@ -34,11 +34,11 @@ models:
 - name: TextEncoder_Quantized
   performance_metrics:
   - torchscript_onnx_qnn:
-      inference_time: 11701.0
-      throughput: 85.46278095889241
+      inference_time: 11633.0
+      throughput: 85.96234849136079
       estimated_peak_memory_range:
-        min: 81920
-        max: 1461936
+        min: 28672
+        max: 1274224
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -46,7 +46,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 1040
-      job_id: j5weeed35
+      job_id: j7gj34m1p
       job_status: Passed
     reference_device_info:
       name: Samsung Galaxy S23
@@ -55,13 +55,13 @@
       os_name: Android
       manufacturer: Samsung
       chipset: Snapdragon® 8 Gen 2
-    timestamp: '2024-08-05T13:55:21Z'
+    timestamp: '2024-08-16T13:07:01Z'
   - torchscript_onnx_qnn:
-      inference_time: 7773.0
-      throughput: 128.65045670912133
+      inference_time: 7759.0
+      throughput: 128.88258796236627
       estimated_peak_memory_range:
         min: 12288
-        max: 8522928
+        max: 7985728
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -69,7 +69,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 1040
-      job_id: jg9lll3wg
+      job_id: jlpe6318g
       job_status: Passed
     reference_device_info:
       name: Samsung Galaxy S24
@@ -78,13 +78,13 @@
       os_name: Android
       manufacturer: Samsung
       chipset: Snapdragon® 8 Gen 3
-    timestamp: '2024-08-05T13:55:21Z'
+    timestamp: '2024-08-16T13:07:02Z'
   - torchscript_onnx_qnn:
-      inference_time: 11798.0
-      throughput: 84.76012883539583
+      inference_time: 11773.0
+      throughput: 84.94011721736176
       estimated_peak_memory_range:
-        min: 24576
-        max: 24576
+        min: 12288
+        max: 12288
       primary_compute_unit: NPU
       precision: int8
       layer_info:
@@ -92,7 +92,7 @@ models:
         layers_on_gpu: 0
         layers_on_cpu: 0
         total_layers: 1040
- job_id: jp1444d8p + job_id: jygzzk94g job_status: Passed reference_device_info: name: Snapdragon X Elite CRD @@ -101,15 +101,15 @@ models: os_name: Windows manufacturer: Qualcomm chipset: Snapdragon® X Elite - timestamp: '2024-08-05T13:55:22Z' + timestamp: '2024-08-16T13:07:02Z' - name: VAEDecoder_Quantized performance_metrics: - torchscript_onnx_qnn: - inference_time: 216248.0 - throughput: 4.624320224926936 + inference_time: 217134.0 + throughput: 4.605451011817587 estimated_peak_memory_range: - min: 0 - max: 1529720 + min: 266240 + max: 1580008 primary_compute_unit: NPU precision: int8 layer_info: @@ -117,7 +117,7 @@ models: layers_on_gpu: 0 layers_on_cpu: 0 total_layers: 170 - job_id: jgdxxxrrp + job_id: jz5wynv4g job_status: Passed reference_device_info: name: Samsung Galaxy S23 @@ -126,13 +126,13 @@ models: os_name: Android manufacturer: Samsung chipset: Snapdragon® 8 Gen 2 - timestamp: '2024-08-05T13:55:23Z' + timestamp: '2024-08-16T13:07:02Z' - torchscript_onnx_qnn: - inference_time: 161702.0 - throughput: 6.184215408591112 + inference_time: 161705.0 + throughput: 6.184100677159024 estimated_peak_memory_range: - min: 286720 - max: 8556832 + min: 303104 + max: 8557744 primary_compute_unit: NPU precision: int8 layer_info: @@ -140,7 +140,7 @@ models: layers_on_gpu: 0 layers_on_cpu: 0 total_layers: 170 - job_id: j57yyyjv5 + job_id: jmg9oe1mg job_status: Passed reference_device_info: name: Samsung Galaxy S24 @@ -149,13 +149,13 @@ models: os_name: Android manufacturer: Samsung chipset: Snapdragon® 8 Gen 3 - timestamp: '2024-08-05T13:55:23Z' + timestamp: '2024-08-16T13:07:03Z' - torchscript_onnx_qnn: - inference_time: 220255.0 - throughput: 4.5401920501237205 + inference_time: 220179.0 + throughput: 4.541759205010469 estimated_peak_memory_range: - min: 40960 - max: 40960 + min: 57344 + max: 57344 primary_compute_unit: NPU precision: int8 layer_info: @@ -163,7 +163,7 @@ models: layers_on_gpu: 0 layers_on_cpu: 0 total_layers: 170 - job_id: jp4lllx85 + job_id: 
jnp1oxln5 job_status: Passed reference_device_info: name: Snapdragon X Elite CRD @@ -172,15 +172,15 @@ models: os_name: Windows manufacturer: Qualcomm chipset: Snapdragon® X Elite - timestamp: '2024-08-05T13:55:24Z' + timestamp: '2024-08-16T13:07:03Z' - name: UNet_Quantized performance_metrics: - torchscript_onnx_qnn: - inference_time: 100062.0 - throughput: 9.993803841618197 + inference_time: 101094.0 + throughput: 9.891783884305696 estimated_peak_memory_range: - min: 520192 - max: 1845960 + min: 466944 + max: 1857256 primary_compute_unit: NPU precision: int8 layer_info: @@ -188,7 +188,7 @@ models: layers_on_gpu: 0 layers_on_cpu: 0 total_layers: 6361 - job_id: jpxkkk735 + job_id: jvgd6l96p job_status: Passed reference_device_info: name: Samsung Galaxy S23 @@ -197,13 +197,13 @@ models: os_name: Android manufacturer: Samsung chipset: Snapdragon® 8 Gen 2 - timestamp: '2024-08-05T13:55:25Z' + timestamp: '2024-08-16T13:07:03Z' - torchscript_onnx_qnn: - inference_time: 72225.0 - throughput: 13.845621322256836 + inference_time: 72620.0 + throughput: 13.770311209033324 estimated_peak_memory_range: - min: 344064 - max: 7878176 + min: 446464 + max: 7997632 primary_compute_unit: NPU precision: int8 layer_info: @@ -211,7 +211,7 @@ models: layers_on_gpu: 0 layers_on_cpu: 0 total_layers: 6361 - job_id: j5mnnnwdp + job_id: jz57o3wng job_status: Passed reference_device_info: name: Samsung Galaxy S24 @@ -220,13 +220,13 @@ models: os_name: Android manufacturer: Samsung chipset: Snapdragon® 8 Gen 3 - timestamp: '2024-08-05T13:55:25Z' + timestamp: '2024-08-16T13:07:04Z' - torchscript_onnx_qnn: - inference_time: 102503.0 - throughput: 9.755812025013903 + inference_time: 102486.0 + throughput: 9.757430283160627 estimated_peak_memory_range: - min: 204800 - max: 204800 + min: 200704 + max: 200704 primary_compute_unit: NPU precision: int8 layer_info: @@ -234,7 +234,7 @@ models: layers_on_gpu: 0 layers_on_cpu: 0 total_layers: 6361 - job_id: jgn6669k5 + job_id: j1p3o22mp job_status: Passed 
reference_device_info: name: Snapdragon X Elite CRD @@ -243,4 +243,4 @@ models: os_name: Windows manufacturer: Qualcomm chipset: Snapdragon® X Elite - timestamp: '2024-08-05T13:55:25Z' + timestamp: '2024-08-16T13:21:40Z' diff --git a/qai_hub_models/models/swin_base/README.md b/qai_hub_models/models/swin_base/README.md index ec878b5d..6ffedb98 100644 --- a/qai_hub_models/models/swin_base/README.md +++ b/qai_hub_models/models/swin_base/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Swin-Base found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/swin_base). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/swin_small/README.md b/qai_hub_models/models/swin_small/README.md index a661caf5..8594f83c 100644 --- a/qai_hub_models/models/swin_small/README.md +++ b/qai_hub_models/models/swin_small/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Swin-Small found export suitable to run on Qualcomm® devices. 
More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/swin_small). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/swin_tiny/README.md b/qai_hub_models/models/swin_tiny/README.md index e0733e34..08b4cf3a 100644 --- a/qai_hub_models/models/swin_tiny/README.md +++ b/qai_hub_models/models/swin_tiny/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Swin-Tiny found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/swin_tiny). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. 
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/swin_transformer.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/trocr/README.md b/qai_hub_models/models/trocr/README.md index 429f2e2f..6051cbb9 100644 --- a/qai_hub_models/models/trocr/README.md +++ b/qai_hub_models/models/trocr/README.md @@ -10,8 +10,7 @@ This is based on the implementation of TrOCR found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/trocr). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://huggingface.co/microsoft/trocr-small-stage1) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). 
diff --git a/qai_hub_models/models/trocr/app.py b/qai_hub_models/models/trocr/app.py index 4aa88b74..99eaf52b 100644 --- a/qai_hub_models/models/trocr/app.py +++ b/qai_hub_models/models/trocr/app.py @@ -4,12 +4,16 @@ # --------------------------------------------------------------------- from __future__ import annotations -from typing import Generator, List +from typing import Generator, List, Tuple +import numpy as np import torch from PIL.Image import Image -from qai_hub_models.models.trocr.model import KVCache, TrOCR +from qai_hub_models.models.trocr.model import TrOCR +from qai_hub_models.utils.model_adapters import TorchNumpyAdapter + +KVCacheNp = Tuple[np.ndarray, ...] class TrOCRApp: @@ -29,8 +33,14 @@ class TrOCRApp: """ def __init__(self, model: TrOCR): + self.model = model self.encoder = model.encoder self.decoder = model.decoder + # Wrap torch Modules so they take np ndarrays as input and return np ndarrays as output. + if isinstance(self.encoder, torch.nn.Module): + self.encoder = TorchNumpyAdapter(self.encoder) + if isinstance(self.decoder, torch.nn.Module): + self.decoder = TorchNumpyAdapter(self.decoder) self.io_processor = model.io_processor self.pad_token_id = model.pad_token_id @@ -38,7 +48,7 @@ def __init__(self, model: TrOCR): self.start_token_id = model.start_token_id self.max_seq_len = model.max_seq_len - def preprocess_image(self, image: Image) -> torch.Tensor: + def preprocess_image(self, image: Image) -> np.ndarray: """Convert a raw image (resize, normalize) into a pyTorch tensor that can be used as input to TrOCR inference. This also converts the image to RGB, which is the expected input channel layout for TrOCR. @@ -46,20 +56,22 @@ def preprocess_image(self, image: Image) -> torch.Tensor: assert ( self.io_processor is not None ), "TrOCR processor must be provided to use type Image as an input." 
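The `TorchNumpyAdapter` wrapping above lets the app drive a torch module and an on-device model through the same numpy-based interface. As an illustration only (the real adapter lives in `qai_hub_models.utils.model_adapters`; the class below is a hypothetical stand-in, not the library's implementation), such an adapter can be sketched as:

```python
import numpy as np
import torch


class NumpyAdapter:
    """Hypothetical stand-in for TorchNumpyAdapter: wraps a torch.nn.Module
    so callers pass and receive numpy arrays instead of torch tensors."""

    def __init__(self, module: torch.nn.Module):
        self.module = module

    def __call__(self, *args: np.ndarray):
        # Convert numpy inputs to tensors, run the module, convert back.
        tensors = [torch.from_numpy(a) for a in args]
        with torch.no_grad():
            out = self.module(*tensors)
        if isinstance(out, torch.Tensor):
            return out.numpy()
        return tuple(o.numpy() for o in out)


# Usage: wrap a toy module and call it with a numpy array.
layer = torch.nn.Linear(3, 2)
adapted = NumpyAdapter(layer)
result = adapted(np.zeros((1, 3), dtype=np.float32))
```

With this shape, `TrOCRApp` can remain agnostic about whether its encoder/decoder are local torch modules or numpy-consuming inference jobs.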
- return self.io_processor(image.convert("RGB"), return_tensors="pt").pixel_values + return self.io_processor( + image.convert("RGB"), return_tensors="pt" + ).pixel_values.numpy() def predict(self, *args, **kwargs): # See predict_text_from_image. return self.predict_text_from_image(*args, **kwargs) def predict_text_from_image( - self, pixel_values_or_image: torch.Tensor | Image, raw_output: bool = False - ) -> torch.Tensor | List[str]: + self, pixel_values_or_image: np.ndarray | Image, raw_output: bool = False + ) -> np.ndarray | List[str]: """ From the provided image or tensor, predict the line of text contained within. Parameters: - pixel_values_or_image: torch.Tensor + pixel_values_or_image: np.ndarray Input PIL image (before pre-processing) or pyTorch tensor (after image pre-processing). raw_output: bool If false, return a list of predicted strings (one for each batch). Otherwise, return a tensor of predicted token IDs. @@ -68,7 +80,7 @@ def predict_text_from_image( The output word / token sequence (representative of the text contained in the input image). The prediction will be a list of strings (one string per batch) if self.io_processor != None and raw_output=False. - Otherwise, a `torch.Tensor` of shape [batch_size, predicted_sequence_length] is returned. It contains predicted token IDs. + Otherwise, a `np.ndarray` of shape [batch_size, predicted_sequence_length] is returned. It contains predicted token IDs. """ gen = self.stream_predicted_text_from_image(pixel_values_or_image, raw_output) _ = last = next(gen) @@ -77,8 +89,8 @@ def predict_text_from_image( return last def stream_predicted_text_from_image( - self, pixel_values_or_image: torch.Tensor | Image, raw_output: bool = False - ) -> Generator[torch.Tensor | List[str], None, None]: + self, pixel_values_or_image: np.ndarray | Image, raw_output: bool = False + ) -> Generator[np.ndarray | List[str], None, None]: """ From the provided image or tensor, predict the line of text contained within. 
The returned generator will produce a single output per decoder iteration. @@ -87,8 +99,8 @@ def stream_predicted_text_from_image( (eg. get the prediction one word at as time as they're predicted, instead of waiting for the entire output sequence to be predicted) Parameters: - pixel_values_or_image: torch.Tensor - Input PIL image (before pre-processing) or pyTorch tensor (after image pre-processing). + pixel_values_or_image: np.ndarray + Input PIL image (before pre-processing) or numpy array (after image pre-processing). raw_output: bool If false, return a list of predicted strings (one for each batch). Otherwise, return a tensor of predicted token IDs. @@ -97,7 +109,7 @@ def stream_predicted_text_from_image( The generator will produce one output for every decoder iteration. The prediction will be a list of strings (one string per batch) if self.io_processor != None and raw_output=False. - Otherwise, a `torch.Tensor` of shape [batch_size, predicted_sequence_length] is returned. It contains predicted token IDs. + Otherwise, a `np.ndarray` of shape [batch_size, predicted_sequence_length] is returned. It contains predicted token IDs. """ if isinstance(pixel_values_or_image, Image): pixel_values = self.preprocess_image(pixel_values_or_image) @@ -105,7 +117,7 @@ def stream_predicted_text_from_image( pixel_values = pixel_values_or_image batch_size = pixel_values.shape[0] - eos_token_id_tensor = torch.tensor([self.eos_token_id], dtype=torch.int32) + eos_token_id_tensor = np.array([self.eos_token_id], dtype=np.int32) # Run encoder kv_cache_cross_attn = self.encoder(pixel_values) @@ -113,16 +125,19 @@ def stream_predicted_text_from_image( # Initial KV Cache initial_attn_cache = get_empty_attn_cache( batch_size, - self.decoder.num_decoder_layers, - self.decoder.decoder_attention_heads, - self.decoder.embeddings_per_head, + # -1 because the current token's projected kv value will be appended to + # the kv_cache. 
+ self.max_seq_len - 1, + self.model.num_decoder_layers, + self.model.decoder_attention_heads, + self.model.embeddings_per_head, ) initial_kv_cache = combine_kv_caches(kv_cache_cross_attn, initial_attn_cache) kv_cache = initial_kv_cache # Prepare decoder input IDs. Shape: [batch_size, 1] initial_input_ids = ( - torch.ones((batch_size, 1), dtype=torch.int32) * self.start_token_id + np.ones((batch_size, 1), dtype=np.int32) * self.start_token_id ) input_ids = initial_input_ids @@ -130,13 +145,14 @@ def stream_predicted_text_from_image( output_ids = input_ids # Keep track of which sequences are already finished. Shape: [batch_size] - unfinished_sequences = torch.ones(batch_size, dtype=torch.int32) + unfinished_sequences = np.ones(batch_size, dtype=np.int32) + decode_pos = np.array([0], dtype=np.int32) while unfinished_sequences.max() != 0 and ( self.max_seq_len is None or output_ids.shape[-1] < self.max_seq_len ): # Get next tokens. Shape: [batch_size] - outputs = self.decoder(input_ids, *kv_cache) + outputs = self.decoder(input_ids, decode_pos, *kv_cache) next_tokens = outputs[0] kv_cache_attn = outputs[1:] @@ -145,29 +161,33 @@ def stream_predicted_text_from_image( 1 - unfinished_sequences ) - input_ids = torch.unsqueeze(next_tokens, -1) - output_ids = torch.cat([output_ids, input_ids], dim=-1) + input_ids = np.expand_dims(next_tokens, -1) + output_ids = np.concatenate([output_ids, input_ids], axis=-1) yield self.io_processor.batch_decode( - output_ids, skip_special_tokens=True + torch.from_numpy(output_ids), skip_special_tokens=True ) if self.io_processor and not raw_output else output_ids # if eos_token was found in one sentence, set sentence to finished if eos_token_id_tensor is not None: - unfinished_sequences = unfinished_sequences.mul( - torch.unsqueeze(next_tokens, -1) - .ne(eos_token_id_tensor.unsqueeze(1)) - .prod(dim=0) - .type(torch.int32) + unfinished_sequences = unfinished_sequences * ( + np.prod( + np.expand_dims(next_tokens, -1) + != 
np.expand_dims(eos_token_id_tensor, 1), + axis=0, + ).astype(np.int32) ) # Re-construct kv cache with new sequence. + # New kv entries are inserted at the back; clip 1 from the front. + kv_cache_attn = [v[:, :, 1:] for v in kv_cache_attn] kv_cache = combine_kv_caches(kv_cache_cross_attn, kv_cache_attn) + decode_pos += 1 def combine_kv_caches( - kv_cache_cross_attn: KVCache, - kv_cache_attn: KVCache, -) -> KVCache: + kv_cache_cross_attn: KVCacheNp, + kv_cache_attn: KVCacheNp, +) -> KVCacheNp: """ Generates full KV Cache from cross attention KV cache and attention KV cache. @@ -188,7 +208,7 @@ def combine_kv_caches( len(tuple) == 4 * number of source model decoder layers. """ # Construct remaining kv cache with a new empty sequence. - kv_cache = [torch.Tensor()] * len(kv_cache_cross_attn) * 2 + kv_cache = [None] * len(kv_cache_cross_attn) * 2 # Combine KV Cache. for i in range(0, len(kv_cache_cross_attn) // 2): @@ -197,21 +217,24 @@ def combine_kv_caches( kv_cache[4 * i + 2] = kv_cache_cross_attn[2 * i] kv_cache[4 * i + 3] = kv_cache_cross_attn[2 * i + 1] + none_list = [v for v in kv_cache if v is None] + assert len(none_list) == 0 return (*kv_cache,) def get_empty_attn_cache( batch_size: int, + max_decode_len: int, num_decoder_layers: int, decoder_attention_heads: int, embeddings_per_head: int, -) -> KVCache: +) -> KVCacheNp: """ Generates empty cross attn KV Cache for use in the first iteration of the decoder. Parameters: batch_size: Batch size. - num_decoder_layers: NUmber of decoder layers in the decoder. + num_decoder_layers: Number of decoder layers in the decoder. decoder_attention_heads: Number of attention heads in the decoder. embeddings_per_head: The count of the embeddings in each decoder attention head. 
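The fixed-size cache bookkeeping in the decode loop above (new kv appended at the back, one entry clipped from the front each step) can be sketched in isolation. The shapes below are toy values for illustration, not the model's real dimensions:

```python
import numpy as np

# Toy dimensions (illustrative only): batch, attention heads,
# fixed cache length, embeddings per head.
batch, heads, max_len, emb = 1, 2, 4, 3
cache = np.zeros((batch, heads, max_len - 1, emb), dtype=np.float32)


def step(cache: np.ndarray, new_kv: np.ndarray) -> np.ndarray:
    # The decoder appends the current token's kv at the back of the cache...
    grown = np.concatenate([cache, new_kv], axis=2)
    # ...and one entry is clipped from the front to keep the size fixed.
    return grown[:, :, 1:]


for t in range(3):
    new_kv = np.full((batch, heads, 1, emb), float(t + 1), dtype=np.float32)
    cache = step(cache, new_kv)
# The cache shape never changes across steps; the newest entries sit at
# the back, and unused slots remain zero until enough tokens are decoded.
```

Keeping the cache shape constant is what allows the decoder to be exported with static input specs instead of a dynamic sequence length.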
@@ -222,23 +245,23 @@ def get_empty_attn_cache( kv_cache = [] for i in range(0, num_decoder_layers): kv_cache.append( - torch.zeros( + np.zeros( ( batch_size, decoder_attention_heads, - 0, + max_decode_len, embeddings_per_head, ) - ) + ).astype(np.float32) ) kv_cache.append( - torch.zeros( + np.zeros( ( batch_size, decoder_attention_heads, - 0, + max_decode_len, embeddings_per_head, ) - ) + ).astype(np.float32) ) return (*kv_cache,) diff --git a/qai_hub_models/models/trocr/model.py b/qai_hub_models/models/trocr/model.py index 5fdfe868..66a5c013 100644 --- a/qai_hub_models/models/trocr/model.py +++ b/qai_hub_models/models/trocr/model.py @@ -22,7 +22,7 @@ HUGGINGFACE_TROCR_MODEL = "microsoft/trocr-small-stage1" MODEL_ID = __name__.split(".")[-2] TROCR_BATCH_SIZE = 1 -TROCR_EXPORT_SEQ_LEN = 1 # -1 TODO(#5428): Dynamic sequence length support. This limits the input size to a seq len of 1. +MAX_DECODE_LEN = 20 MODEL_ASSET_VERSION = 1 DEFAULT_NUM_DECODER_LAYERS = 6 @@ -38,10 +38,13 @@ def __init__( encoder: Callable[[torch.Tensor], KVCache], decoder: Callable[..., Tuple[torch.Tensor, ...]], io_processor: TrOCRProcessor, - pad_token_id: int, - eos_token_id: int, - start_token_id: int, - max_seq_len: int, + pad_token_id: int = 1, + eos_token_id: int = 2, + start_token_id: int = 2, + max_seq_len: int = 20, + num_decoder_layers: int = DEFAULT_NUM_DECODER_LAYERS, + decoder_attention_heads: int = 8, + embeddings_per_head: int = 32, ): self.encoder = encoder self.decoder = decoder @@ -50,6 +53,9 @@ def __init__( self.eos_token_id = eos_token_id self.start_token_id = start_token_id self.max_seq_len = max_seq_len + self.num_decoder_layers = num_decoder_layers + self.decoder_attention_heads = decoder_attention_heads + self.embeddings_per_head = embeddings_per_head @classmethod def from_pretrained(cls, hf_trocr_model: str = HUGGINGFACE_TROCR_MODEL) -> TrOCR: @@ -155,7 +161,7 @@ class TrOCRDecoder(BaseModel): Outputs: (output_ids, Updated Attention KV Cache) """ - def __init__(self, 
decoder: TrOCRForCausalLM): + def __init__(self, decoder: TrOCRForCausalLM, max_decode_len: int = MAX_DECODE_LEN): super().__init__() self.decoder = copy.deepcopy(decoder) # Delete unused layers that exist only to generate initial KV cache. @@ -170,8 +176,25 @@ def __init__(self, decoder: TrOCRForCausalLM): decoder.config.d_model // decoder.config.decoder_attention_heads ) + # Since kv cache is a fixed size, mask out elements + # that correspond to not yet used entries. + # The kv cache for the current token is appended at the last + # index; the oldest entry is then dropped from the front. + # + # Mask values: 1 for non-padded, 0 for padded + # https://github.com/huggingface/transformers/blob/main/src/transformers/models/trocr/modeling_trocr.py#L799 + self.attn_mask = torch.nn.Embedding(max_decode_len, max_decode_len) + attn_mask = torch.zeros([max_decode_len, max_decode_len], dtype=torch.float32) + for c_idx in range(0, max_decode_len): + attn_mask[c_idx, -(c_idx + 1) :] = 1 + self.attn_mask.weight = torch.nn.Parameter(attn_mask) + def forward( - self, input_ids: torch.IntTensor, *kv_cache_args, **kv_cache_kwargs + self, + input_ids: torch.IntTensor, + index: torch.IntTensor, + *kv_cache_args, + **kv_cache_kwargs, ) -> Tuple[torch.Tensor, ...]: """ Generate the next token in the predicted output text sequence. @@ -180,6 +203,9 @@ def forward( input_ids : torch.IntTensor Next token ID in each batch sequence (always shape (batch_size, 1)) + index: torch.IntTensor, shape = (1,) + Index used to select the attn_mask row that masks out the padded kv cache. + kv_cache: Tuple[kv_cache_attn_0_key, kv_cache_attn_0_val, kv_cache_cross_attn_0_key, kv_cache_cross_attn_0_val, kv_cache_attn_1_key, ...] 
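The precomputed mask table above turns the per-step attention mask into a single embedding lookup, which exports cleanly as a gather op. A minimal sketch of the same trick, with an assumed `max_decode_len` of 5 rather than the model's `MAX_DECODE_LEN`:

```python
import torch

max_decode_len = 5  # assumed for illustration; the model uses MAX_DECODE_LEN

# Row i unmasks the last (i + 1) slots (1 = attend, 0 = padded),
# matching a cache whose valid entries accumulate at the back.
mask_table = torch.zeros(max_decode_len, max_decode_len)
for i in range(max_decode_len):
    mask_table[i, -(i + 1):] = 1

# Store the table in an Embedding so the per-step lookup is a plain
# index-select instead of data-dependent mask construction.
attn_mask = torch.nn.Embedding(max_decode_len, max_decode_len)
attn_mask.weight = torch.nn.Parameter(mask_table)

index = torch.tensor([2])  # third decode step
row = attn_mask(index)     # shape (1, max_decode_len)
```

At step 2 the lookup returns a row that attends to the three most recent slots and ignores the still-empty front of the cache.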
@@ -213,10 +239,13 @@ def forward( kv_cache.append((*curr_tuple,)) curr_tuple = [] kv_cache = (*kv_cache,) # type: ignore + attn_mask = self.attn_mask(index) + # (tgt_len,) -> (batch, 1, tgt_len, src_len) # Run decoder outputs = self.decoder( input_ids=input_ids, + attention_mask=attn_mask, encoder_hidden_states=encoder_hidden_states, return_dict=False, use_cache=True, @@ -241,6 +270,7 @@ def get_input_spec( decoder_attention_heads: int = 8, embeddings_per_head: int = 32, num_decoder_layers: int = DEFAULT_NUM_DECODER_LAYERS, + max_decode_len: int = MAX_DECODE_LEN, ) -> InputSpec: """ Returns the input specification (name -> (shape, type). This can be @@ -252,7 +282,7 @@ def get_input_spec( ( batch_size, decoder_attention_heads, - TROCR_EXPORT_SEQ_LEN, + max_decode_len - 1, embeddings_per_head, ), "float32", @@ -268,7 +298,10 @@ def get_input_spec( "float32", ) - decoder_input_specs: InputSpec = {"input_ids": input_ids_spec} + decoder_input_specs: InputSpec = { + "input_ids": input_ids_spec, + "index": ((1,), "int32"), + } for i in range(0, num_decoder_layers): decoder_input_specs[f"kv_{i}_attn_key"] = attn_cache_spec decoder_input_specs[f"kv_{i}_attn_val"] = attn_cache_spec diff --git a/qai_hub_models/models/trocr/test.py b/qai_hub_models/models/trocr/test.py index 9b47bce3..25a4d704 100644 --- a/qai_hub_models/models/trocr/test.py +++ b/qai_hub_models/models/trocr/test.py @@ -34,13 +34,14 @@ def trocr_app(source_huggingface_model: VisionEncoderDecoderModel) -> TrOCRApp: @pytest.fixture(scope="module") -def processed_sample_image(trocr_app: TrOCRApp) -> torch.Tensor: +def processed_sample_image(trocr_app: TrOCRApp) -> np.ndarray: """Huggingface-provided image preprocessing and token decoding.""" return trocr_app.preprocess_image(load_image(DEFAULT_SAMPLE_IMAGE)) def test_predict_text_from_image( - trocr_app: TrOCRApp, processed_sample_image: torch.Tensor + trocr_app: TrOCRApp, + processed_sample_image: np.ndarray, ): """Verify our driver produces the correct 
sentences from a given image input.""" assert trocr_app.predict_text_from_image(processed_sample_image)[0] == IMAGE_TEXT @@ -49,10 +50,12 @@ def test_predict_text_from_image( def test_task( source_huggingface_model: VisionEncoderDecoderModel, trocr_app: TrOCRApp, - processed_sample_image: torch.Tensor, + processed_sample_image: np.ndarray, ): """Verify that raw (numeric) outputs of both networks are the same.""" - source_out = source_huggingface_model.generate(processed_sample_image).numpy() + source_out = source_huggingface_model.generate( + torch.from_numpy(processed_sample_image) + ).numpy() qaihm_out = trocr_app.predict_text_from_image( processed_sample_image, raw_output=True ) diff --git a/qai_hub_models/models/unet_segmentation/README.md b/qai_hub_models/models/unet_segmentation/README.md index 957276c6..f8474142 100644 --- a/qai_hub_models/models/unet_segmentation/README.md +++ b/qai_hub_models/models/unet_segmentation/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Unet-Segmentation found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/unet_segmentation). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/milesial/Pytorch-UNet) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. 
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/vit/README.md b/qai_hub_models/models/vit/README.md index 924b05f0..06e0a6df 100644 --- a/qai_hub_models/models/vit/README.md +++ b/qai_hub_models/models/vit/README.md @@ -10,8 +10,7 @@ This is based on the implementation of VIT found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/vit). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/whisper_base_en/README.md b/qai_hub_models/models/whisper_base_en/README.md index 441351db..9c784e92 100644 --- a/qai_hub_models/models/whisper_base_en/README.md +++ b/qai_hub_models/models/whisper_base_en/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Whisper-Base-En found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/whisper_base_en). 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. * [Source Model Implementation](https://github.com/openai/whisper/tree/main) ## Community -* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI. +* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI. * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com). diff --git a/qai_hub_models/models/whisper_small_en/README.md b/qai_hub_models/models/whisper_small_en/README.md index 5a1422a0..a7227ed0 100644 --- a/qai_hub_models/models/whisper_small_en/README.md +++ b/qai_hub_models/models/whisper_small_en/README.md @@ -10,8 +10,7 @@ This is based on the implementation of Whisper-Small-En found export suitable to run on Qualcomm® devices. More details on model performance accross various devices, can be found [here](https://aihub.qualcomm.com/models/whisper_small_en). -[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on -a hosted Qualcomm® device. +[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device. @@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub. 
 * [Source Model Implementation](https://github.com/openai/whisper/tree/main)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/whisper_tiny_en/README.md b/qai_hub_models/models/whisper_tiny_en/README.md
index 2ce1b0c2..e13d6b04 100644
--- a/qai_hub_models/models/whisper_tiny_en/README.md
+++ b/qai_hub_models/models/whisper_tiny_en/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Whisper-Tiny-En found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/whisper_tiny_en).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/openai/whisper/tree/main)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/wideresnet50/README.md b/qai_hub_models/models/wideresnet50/README.md
index d212e4b8..4e152e5b 100644
--- a/qai_hub_models/models/wideresnet50/README.md
+++ b/qai_hub_models/models/wideresnet50/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of WideResNet50 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/wideresnet50).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/wideresnet50_quantized/README.md b/qai_hub_models/models/wideresnet50_quantized/README.md
index ee7cb919..1f6f16ad 100644
--- a/qai_hub_models/models/wideresnet50_quantized/README.md
+++ b/qai_hub_models/models/wideresnet50_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of WideResNet50-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/wideresnet50_quantized).
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/xlsr/README.md b/qai_hub_models/models/xlsr/README.md
index dc29fc0f..1b42556e 100644
--- a/qai_hub_models/models/xlsr/README.md
+++ b/qai_hub_models/models/xlsr/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of XLSR found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/xlsr).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/xlsr)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/xlsr_quantized/README.md b/qai_hub_models/models/xlsr_quantized/README.md
index 483777d1..d1f27eab 100644
--- a/qai_hub_models/models/xlsr_quantized/README.md
+++ b/qai_hub_models/models/xlsr_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of XLSR-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/xlsr_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/xlsr)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/yolonas/README.md b/qai_hub_models/models/yolonas/README.md
index d6f5aca9..e35a296d 100644
--- a/qai_hub_models/models/yolonas/README.md
+++ b/qai_hub_models/models/yolonas/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Yolo-NAS found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/yolonas).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/Deci-AI/super-gradients)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/yolonas_quantized/README.md b/qai_hub_models/models/yolonas_quantized/README.md
index 61c64bb3..77c14dcf 100644
--- a/qai_hub_models/models/yolonas_quantized/README.md
+++ b/qai_hub_models/models/yolonas_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Yolo-NAS-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/yolonas_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/Deci-AI/super-gradients)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/yolov6/README.md b/qai_hub_models/models/yolov6/README.md
index 97f127f0..82404fcf 100644
--- a/qai_hub_models/models/yolov6/README.md
+++ b/qai_hub_models/models/yolov6/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Yolo-v6 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/yolov6).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -50,7 +49,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/meituan/YOLOv6/)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/yolov7/README.md b/qai_hub_models/models/yolov7/README.md
index d6bbd49d..fe6030f0 100644
--- a/qai_hub_models/models/yolov7/README.md
+++ b/qai_hub_models/models/yolov7/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Yolo-v7 found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/yolov7).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/WongKinYiu/yolov7/)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/yolov7_quantized/README.md b/qai_hub_models/models/yolov7_quantized/README.md
index 2535d8d0..bb4f1089 100644
--- a/qai_hub_models/models/yolov7_quantized/README.md
+++ b/qai_hub_models/models/yolov7_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of Yolo-v7-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/yolov7_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/WongKinYiu/yolov7/)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/yolov8_det/README.md b/qai_hub_models/models/yolov8_det/README.md
index aa52c80d..fc6ab59b 100644
--- a/qai_hub_models/models/yolov8_det/README.md
+++ b/qai_hub_models/models/yolov8_det/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of YOLOv8-Detection found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/yolov8_det).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/detect)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/yolov8_det_quantized/README.md b/qai_hub_models/models/yolov8_det_quantized/README.md
index 75da973b..025830a7 100644
--- a/qai_hub_models/models/yolov8_det_quantized/README.md
+++ b/qai_hub_models/models/yolov8_det_quantized/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of YOLOv8-Detection-Quantized found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/yolov8_det_quantized).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/detect)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
diff --git a/qai_hub_models/models/yolov8_seg/README.md b/qai_hub_models/models/yolov8_seg/README.md
index 75df2424..7f827999 100644
--- a/qai_hub_models/models/yolov8_seg/README.md
+++ b/qai_hub_models/models/yolov8_seg/README.md
@@ -10,8 +10,7 @@ This is based on the implementation of YOLOv8-Segmentation found
 export suitable to run on Qualcomm® devices.
 More details on model performance accross various devices, can be found
 [here](https://aihub.qualcomm.com/models/yolov8_seg).
 
-[Sign up](https://myaccount.qualcomm.com/signup) for early access to run these models on
-a hosted Qualcomm® device.
+[Sign up](https://myaccount.qualcomm.com/signup) to start using Qualcomm AI Hub and run these models on a hosted Qualcomm® device.
 
@@ -55,7 +54,7 @@ script requires access to Deployment instructions for Qualcomm® AI Hub.
 * [Source Model Implementation](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment)
 
 ## Community
-* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
+* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).