Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision on Intel® platforms.
6573 Discussions

UNet model stuck on Starting inference

dopeuser
Beginner
4,697 Views

I am trying to run the UNet model on Neural Compute Stick. I successfully converted the model to FP16 and ran on the CPU. However, as soon as I change the device to MYRIAD, it gets stuck on Starting inference.

[ INFO ] Creating Inference Engine
[ INFO ] Loading network
[ INFO ] Preparing input blobs
[ WARNING ] Image ..\cat.2000.jpg is resized from (499, 459) to (224, 224)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference

 

I am using the segmentation_demo code. My IR file is attached below.
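
For reference, one way to rule out the demo script itself is to load the IR directly with the Inference Engine Python API and run a single inference per device. The following is only a minimal sketch, assuming the 2020.4 Python API and the attached unet.xml/unet.bin file names:

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="unet.xml", weights="unet.bin")
input_name = next(iter(net.input_info))

# swap "MYRIAD" for "CPU" to compare where the hang occurs
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# dummy input matching the converted shape (1, 3, 224, 224)
dummy = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)
result = exec_net.infer({input_name: dummy})
print({name: out.shape for name, out in result.items()})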


11 Replies
RandallMan_B_Intel
Employee
4,686 Views

Hi dopeuser,


UNet is not supported by the Intel Neural Compute Stick (MYRIAD device). Check the MYRIAD Plugin documentation for the list of supported networks and additional information.
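
One way to check this layer by layer, as a rough sketch assuming the 2020.4 Python API and the attached IR files, is IECore.query_network; layers missing from its result are the ones the plugin cannot handle:

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="unet.xml", weights="unet.bin")

# returns a dict of layer name -> device for every layer the plugin accepts
supported = ie.query_network(network=net, device_name="MYRIAD")
unsupported = [name for name in net.layers.keys() if name not in supported]
print("Layers unsupported on MYRIAD:", unsupported or "none")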


Regards,

Randall B.


dopeuser
Beginner
4,678 Views
RandallMan_B_Intel
Employee
4,658 Views

Hi dopeuser,


Could you share more information so we can test from our end? For example, the commands you used to run your model and to convert it to FP16.


Regards,

Randall B.


dopeuser
Beginner
4,650 Views
I am using the following command to convert the exported PyTorch graph:
 
python "%openvino_dir%\deployment_tools\model_optimizer\mo.py" --input_model ".\temp\unet.onnx" --log_level=ERROR --input_shape "(1,3, 224,224)" --output_dir converted_unet --input=input.1 --output=Conv_338 --reverse_input_channels --data_type FP16
 
I have attached the exported graph (unet.xml).
 
For testing the downloaded model from your website and my converted model, I used OpenVINO's Python segmentation demo available in C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\deployment_tools\open_model_zoo\demos\python_demos\segmentation_demo\segmentation_demo.py
 
It fails only when I set device="MYRIAD" for my model. The downloaded model works well on "MYRIAD", and both models work well on CPU.
 
To freeze the PyTorch model I am using:
import os
import torch

# load the trained weights and export the model to ONNX
weights = torch.load(modelfname)
inference_model.load_state_dict(weights)
dummy_input = torch.randn(1, 3, 224, 224)
os.makedirs('./temp', exist_ok=True)
torch.onnx.export(inference_model, dummy_input, './temp/unet.onnx', opset_version=11)
 
 
I also tried using opset_version=10.
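 
A quick way to sanity-check the exported graph before running the Model Optimizer (just a sketch, assuming the onnx Python package is installed) is the ONNX checker:

import onnx

model = onnx.load('./temp/unet.onnx')
onnx.checker.check_model(model)  # raises an exception if the graph is malformed
print(onnx.helper.printable_graph(model.graph)[:500])  # peek at the first operators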
 
RandallMan_B_Intel
Employee
4,641 Views

Hi dopeuser,


Thanks for your reply. We are currently looking into your issue and will get back to you as soon as possible.


Regards,

Randall B.


dopeuser
Beginner
4,603 Views
RandallMan_B_Intel
Employee
4,592 Views

Hi dopeuser,


The engineering team is still checking the UNet model on the Intel NCS2.


Regards,

Randall.


RandallMan_B_Intel
Employee
4,568 Views

Hello dopeuser,


Thanks for your patience. Additionally, we need your .bin file; could you also provide the ONNX model so we can reproduce the issue and check whether all layers are supported on MYRIAD?


Regards,

Randall.


dopeuser
Beginner
4,558 Views

I have attached the files. Let me know if you need anything else.

RandallMan_B_Intel
Employee
4,430 Views

Hello dopeuser,


Thanks for your patience. The engineering team has checked your issue: U-Net is a heavy model and appears to exceed the memory capacity of the Intel NCS2. When checking memory consumption at runtime, your model used almost twice as much memory as the unet-camvid-onnx-0001 model. Unfortunately, neither of these models is supported on MYRIAD devices.
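
As a rough, file-level cross-check of that size difference (not the runtime measurement described above), you could compare the IR weight files of the two models; a minimal sketch, with placeholder paths for the two .bin files:

import os

# placeholder paths; point these at your converted IR and the downloaded
# unet-camvid-onnx-0001 FP16 IR. Runtime memory on the NCS2 will be larger
# than the raw weights, so this is only a proxy for relative model size.
for path in ('converted_unet/unet.bin',
             'intel/unet-camvid-onnx-0001/FP16/unet-camvid-onnx-0001.bin'):
    print(path, round(os.path.getsize(path) / 2**20, 1), 'MiB')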


Best regards,

Randall.


dopeuser
Beginner
4,424 Views
Reply