
Issue: Android Application Not Running on GPU/NPU in Volterra Dev Kit 2023 #36

Open
ito-xinthe opened this issue Jan 6, 2025 · 1 comment
Labels
question Further information is requested

Comments

@ito-xinthe

Description:

I am facing an issue while trying to run an Android application (Image Classification, Semantic Segmentation, Super Resolution) on the GPU and NPU of the Volterra Dev Kit 2023. Despite enabling the appropriate settings and integrating the QnnDelegate for TensorFlow Lite, there is no evidence that the application is using either the GPU or the NPU.

Environment Details:

  • Device: Volterra Dev Kit 2023 (ARM64 Architecture)
  • OS: Windows 11 (ARM64 Version)
  • Android Subsystem: Windows Subsystem for Android (WSA)
  • TensorFlow Lite Version: 2.12.0
  • QnnDelegate Version: Latest (from Qualcomm SDK)
  • NDK Version: 23.2.8568313
  • CMake Version: 3.22.1

Steps to Reproduce:

  1. Installed Windows Subsystem for Android (WSA) and enabled Developer Mode.
  2. Configured ADB and successfully deployed the application.
  3. Added TensorFlow Lite dependencies and QnnDelegate libraries in the Android project.
  4. Used the following code to add QnnDelegate with GPU backend:
Interpreter.Options options = new Interpreter.Options();
QnnDelegate delegate = new QnnDelegate(new QnnDelegate.Options()
        .setBackend(QnnDelegate.BACKEND_GPU));
options.addDelegate(delegate);
Interpreter tflite = new Interpreter(model, options);
  5. Built and installed the APK on WSA.
  6. Verified inference time and results.
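Since the logs were inconclusive, one sanity check is to time the same model with and without the delegate attached: if the GPU/NPU path were active, median latency should drop noticeably. A minimal sketch of such a harness in plain Java (the `Runnable` stands in for a `tflite.run(input, output)` call so the snippet compiles without the TFLite dependency; the interpreter names in `main` are placeholders):

```java
import java.util.Arrays;

/** Tiny benchmarking helper: run a workload N times and report the median latency. */
public class InferenceTimer {

    /** Returns the median wall-clock time in milliseconds over {@code runs} invocations. */
    public static double medianMillis(Runnable inference, int runs) {
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            inference.run();                       // e.g. () -> tflite.run(input, output)
            samples[i] = System.nanoTime() - start;
        }
        Arrays.sort(samples);
        return samples[runs / 2] / 1_000_000.0;
    }

    public static void main(String[] args) {
        // Compare the same model built twice: once with options.addDelegate(qnnDelegate),
        // once without. A delegate that is silently falling back to CPU will show
        // essentially identical medians.
        double cpuMs = medianMillis(() -> { /* cpuInterpreter.run(input, output) */ }, 10);
        double qnnMs = medianMillis(() -> { /* qnnInterpreter.run(input, output) */ }, 10);
        System.out.printf("CPU: %.2f ms, QNN: %.2f ms%n", cpuMs, qnnMs);
    }
}
```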

Expected Behavior:

  • Application should utilize GPU or NPU as specified by the QnnDelegate.
  • Logs or traces should indicate backend usage (GPU or NPU).

Actual Behavior:

  • The application runs without any errors.
  • No logs or traces confirm whether the GPU or NPU is being utilized.
  • Performance does not reflect acceleration benefits expected from GPU/NPU.

Logs and Observations:

  • Checked adb logcat output; no logs related to GPU or NPU initialization were observed.
  • Ran performance benchmarks, but results matched CPU-only processing.
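In case it helps others debugging the same setup: below is the small helper I used to scan a captured `adb logcat` dump for delegate-related lines. The keyword list is a guess at what QNN/TFLite components might mention, and the snippet is pure Java so it runs anywhere:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

/** Scan logcat output for lines that hint at delegate/accelerator activity. */
public class DelegateLogScanner {

    // Keywords QNN / TFLite delegates are likely to mention; adjust as needed.
    private static final Pattern HINTS =
            Pattern.compile("qnn|tflite|delegate|nnapi|htp|hexagon", Pattern.CASE_INSENSITIVE);

    public static List<String> scan(List<String> logcatLines) {
        return logcatLines.stream()
                .filter(line -> HINTS.matcher(line).find())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
                "01-06 10:00:01 I tflite : Initialized TensorFlow Lite runtime.",
                "01-06 10:00:02 D AudioFlinger: unrelated noise",
                "01-06 10:00:02 I tflite : Replacing 30 out of 31 node(s) with delegate.");
        scan(sample).forEach(System.out::println);  // prints only the two tflite lines
    }
}
```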

Attempts to Resolve:

  • Verified that the required QNN libraries (libQnnCpu.so, libQnnGpu.so) are loaded.
  • Checked compatibility of TensorFlow Lite with the NNAPI and QNN SDK.
  • Enabled NNAPI Fallback in TensorFlow Lite options.
  • Updated dependencies and rebuilt the APK.

Request:

I would like assistance to:

  1. Verify whether the GPU and NPU on the Volterra Dev Kit 2023 are supported by the QnnDelegate.
  2. Provide a method to log or trace backend usage (e.g., GPU/NPU).
  3. Investigate potential compatibility issues with WSA and QnnDelegate.
@mestrona-3 mestrona-3 added the question Further information is requested label Jan 7, 2025
gustavla commented Jan 9, 2025

Hi @ito-xinthe,

  1. Verify whether the GPU and NPU on the Volterra Dev Kit 2023 are supported by the QnnDelegate.

Unfortunately, my colleagues and I on the ai-hub-apps side don't know much about the Volterra Dev Kit or about getting the GPU/NPU to work in that context. You may want to reach out via the Qualcomm Discord (https://discord.gg/TzvP3JhfzX) to see whether anyone there has experience with this.

  2. Provide a method to log or trace backend usage (e.g., GPU/NPU).

The easiest approach is to check the adb logcat output for log messages that indicate how many layers were delegated to a particular delegate. This is also the suggestion in the TFLite delegate FAQ: https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/faq.html

I do not know where to find the ADB logcat in the WSA context.
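When a TFLite delegate is applied, logcat typically contains a message along the lines of "Replacing N out of M node(s) with delegate ..." (the exact wording varies across TFLite versions, so treat the pattern below as an assumption). A small parser to pull those counts out of a captured log:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Extract delegated-node counts from a TFLite log line, if present. */
public class DelegationParser {

    // Matches messages like "Replacing 30 out of 31 node(s) with delegate";
    // the exact wording may differ between TFLite versions.
    private static final Pattern REPLACING =
            Pattern.compile("Replacing (\\d+) out of (\\d+) node");

    /** Returns {delegated, total} if the line reports delegation, else empty. */
    public static Optional<int[]> nodeCounts(String logLine) {
        Matcher m = REPLACING.matcher(logLine);
        if (m.find()) {
            return Optional.of(new int[] { Integer.parseInt(m.group(1)),
                                           Integer.parseInt(m.group(2)) });
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        String line = "I tflite : Replacing 30 out of 31 node(s) with delegate.";
        nodeCounts(line).ifPresent(c ->
                System.out.println(c[0] + "/" + c[1] + " nodes delegated"));
    }
}
```

If no such message appears at all, that is itself a strong hint the delegate never attached and inference is running entirely on the CPU.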

  3. Investigate potential compatibility issues with WSA and QnnDelegate.

WSA falls outside of what we test and are familiar with here at ai-hub-apps. As mentioned above, you may want to try the Qualcomm Discord channel.
