dopeuser
Novice

UNet model stuck on Starting inference


I am trying to run the UNet model on the Intel Neural Compute Stick. I successfully converted the model to FP16 and ran it on the CPU. However, as soon as I change the device to MYRIAD, it gets stuck at "Starting inference".

[ INFO ] Creating Inference Engine
[ INFO ] Loading network
[ INFO ] Preparing input blobs
[ WARNING ] Image ..\cat.2000.jpg is resized from (499, 459) to (224, 224)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference

 

I am using the segmentation_demo code. My IR file is attached below.
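For reference, the demo just switches the device name when loading the network. A minimal sketch of the call sequence I am hitting (OpenVINO 2020.4 Python API; file names are placeholders for my attached IR):

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="unet.xml", weights="unet.bin")  # placeholder paths
input_blob = next(iter(net.input_info))

# Loading succeeds on both devices; with device_name="MYRIAD"
# the infer() call below never returns.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # NCHW, matches the IR input shape
result = exec_net.infer(inputs={input_blob: dummy})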




Hi dopeuser,


UNet is not supported by the Intel Neural Compute Stick (MYRIAD device). Check the MYRIAD plugin documentation for the list of supported networks and additional information.
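As a first check, you can ask the plugin which layers it can place on the device. A quick sketch with the 2020.4 Python API (file names are placeholders for your IR):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="unet.xml", weights="unet.bin")  # placeholder paths

# query_network maps each layer the plugin supports to a device name;
# any layer missing from the result is unsupported on MYRIAD.
supported = ie.query_network(network=net, device_name="MYRIAD")
unsupported = [name for name in net.layers if name not in supported]
print("Unsupported layers:", unsupported)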


Regards,

Randall B.


dopeuser
Novice

I tried running this model from the model zoo https://docs.openvinotoolkit.org/2020.4/omz_models_intel_unet_camvid_onnx_0001_description_unet_camv...

It worked fine. I wonder how that can be the case?


Hi dopeuser,


Could you share more information with us so we can test from our end? For example, the commands you used to run your model and to convert it to FP16.


Regards,

Randall B.


dopeuser
Novice
I am using the following command to convert the exported PyTorch graph:
 
python "%openvino_dir%\deployment_tools\model_optimizer\mo.py" --input_model ".\temp\unet.onnx" --log_level=ERROR --input_shape "(1,3, 224,224)" --output_dir converted_unet --input=input.1 --output=Conv_338 --reverse_input_channels --data_type FP16
 
I have attached the exported graph (unet.xml).
 
For testing both the downloaded model from your website and my converted model, I used OpenVINO's Python segmentation demo available at C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\deployment_tools\open_model_zoo\demos\python_demos\segmentation_demo\segmentation_demo.py
 
It fails only when I set device="MYRIAD" for my model; the downloaded model works well on "MYRIAD", and both models work well on the CPU.
 
To freeze the PyTorch model I am using:
import os
import torch

weights = torch.load(modelfname)
inference_model.load_state_dict(weights)
dummy_input = torch.randn(1, 3, 224, 224)
os.makedirs('./temp', exist_ok=True)
# Export the frozen model to ONNX for the Model Optimizer.
torch.onnx.export(inference_model, dummy_input, './temp/unet.onnx', opset_version=11)
 
 
I also tried using opset_version=10.
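I also sanity-checked the exported graph with the onnx package (a quick sketch):

import onnx

# Structural validation of the exported graph.
model = onnx.load("./temp/unet.onnx")
onnx.checker.check_model(model)
print("ONNX export is structurally valid")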
 

Hi dopeuser,


Thanks for your reply. We are currently looking into your issue and will get back to you as soon as possible.


Regards,

Randall B.


dopeuser
Novice

Is there any update on this?


Hi dopeuser,


The engineering team is still checking the UNet model on the Intel NCS2.


Regards,

Randall.



Hello dopeuser,


Thanks for your patience. Additionally, we need your .bin file; could you provide the ONNX model so we can reproduce the issue and check whether all layers are supported on MYRIAD?


Regards,

Randall.


dopeuser
Novice

I have attached the files. Let me know if you need anything else.

Accepted Solution

Hello dopeuser,


Thanks for your patience. The engineering team has checked your issue: U-Net is a heavy model that seems to exceed the memory capacity of the Intel NCS 2. When we checked memory consumption at runtime, your model used almost twice as much as the unet-camvid-onnx-0001 model. Unfortunately, neither of these models is supported on MYRIAD devices.
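As a rough, back-of-the-envelope comparison, the size of each IR's FP16 weights file gives a lower bound on its memory footprint; a sketch (file paths are placeholders for the two IRs discussed above):

import os

# The .bin file holds the FP16 weights; runtime memory use is higher still.
for path in ("converted_unet/unet.bin", "unet-camvid-onnx-0001.bin"):  # placeholder paths
    print(path, round(os.path.getsize(path) / 2**20, 1), "MiB")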


Best regards,

Randall.



dopeuser
Novice

Thanks for your reply.