How to make inference on InnerEye-DeepLearning models deployed locally? #819
Hey,

```shell
python InnerEye/Scripts/submit_for_inference.py --image_file Tests/ML/test_data/test_img.nii.gz --model=GlaucomaPublic
```

This requires an Azure subscription, but I want to run the toolkit locally:

```
File "/home/ai/InnerEye-DeepLearning/InnerEye/Azure/azure_config.py", line 205, in get_workspace
    raise ValueError("The values for 'subscription_id' and 'resource_group' were not found. "
ValueError: The values for 'subscription_id' and 'resource_group' were not found. Was the Azure setup completed?
```

Any ideas on how to run inference on InnerEye-DeepLearning models would be highly appreciated.
Replies: 1 comment, 2 replies
Hi! Apologies for the delayed response on this.

The `submit_for_inference.py` script is not for running inference locally; it is only for submitting inference jobs to AzureML, hence the prompting for the `subscription_id` and `resource_group` settings.

To run inference locally, see the "Testing an existing model" docs page. This shows you how to use the `--no-train` and `--local_weights_path` flags to run your model locally in testing mode.

Note that this will run inference on all images in your test set. You can further control which / how many images are passed through the model with the `--restrict_subjects` flag, as described in the debugging and monitoring docs page.
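Putting those flags together, a local test run might look like the sketch below. This is an illustrative command, not a verified one: the `InnerEye/ML/runner.py` entry point, the checkpoint path, and the `--restrict_subjects` value are assumptions for illustration; only the flag names (`--no-train`, `--local_weights_path`, `--restrict_subjects`) come from the reply above.

```shell
# Sketch: run an existing model in testing mode on the local machine
# (no AzureML job submitted). Paths and values are placeholders.
python InnerEye/ML/runner.py \
    --model=GlaucomaPublic \
    --no-train \                               # skip training, run testing/inference only
    --local_weights_path=/path/to/model.ckpt \ # locally stored checkpoint (assumed path)
    --restrict_subjects=2                      # limit how many subjects are passed through
```

Check the "Testing an existing model" docs page for the exact entry point and flag syntax for your InnerEye version, since these have changed between releases.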