Beginner

Same output for all inputs: Neural Compute Stick 2

I have a custom model that I converted to a .onnx file, then used mo.py to convert it to .xml and .bin files.

# Deprecated Inference Engine Python API (OpenVINO <= 2019.x)
from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device=device)
net = IENetwork(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))
exec_net = plugin.load(network=net)
res = exec_net.infer(inputs={input_blob: images})

When device = "CPU", inference works fine and I get correct outputs in the form of two-dimensional vectors.

However, when I use my Neural Compute Stick 2 by setting device = "MYRIAD", I get the same output vector for every image I input:

array([[ 0.01548004, -0.02479553]], dtype=float32)
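A quick way to confirm the symptom is to collect the output vectors for several different inputs and check whether they are all numerically identical. This is a hedged, OpenVINO-independent sketch: the `outputs_are_constant` helper is hypothetical, and in practice the list would be filled with the `res` values from `exec_net.infer` above.

```python
import numpy as np

def outputs_are_constant(outputs, atol=1e-6):
    """Return True if every output vector in the list is numerically identical."""
    first = outputs[0]
    return all(np.allclose(first, out, atol=atol) for out in outputs[1:])

# Dummy data standing in for per-image inference results.
constant = [np.array([[0.01548004, -0.02479553]], dtype=np.float32)] * 3
varying = [np.random.rand(1, 2).astype(np.float32) for _ in range(3)]

print(outputs_are_constant(constant))  # True: the reported MYRIAD symptom
print(outputs_are_constant(varying))
```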

Case number reference for this thread: 04029608

The .onnx file is exported from PyTorch as an FP32 model and converted to FP16 by the Model Optimizer (mo.py) using --data_type=FP16.
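One hypothesis worth keeping in mind with MYRIAD (my speculation, not confirmed anywhere in this thread): FP16 has a much smaller range than FP32 (max finite value around 65504), so any intermediate activation that exceeds it saturates to inf, which can collapse downstream outputs to a constant. A minimal numpy sketch of the effect:

```python
import numpy as np

# FP32 values cast to FP16, as --data_type=FP16 does to the weights.
fp32_vals = np.array([1.0, 60000.0, 70000.0], dtype=np.float32)
fp16_vals = fp32_vals.astype(np.float16)

print(fp16_vals)           # the 70000.0 entry overflows to inf in FP16
print(np.isinf(fp16_vals))
```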

Innovator

@Lam, Carson I may not be much help because I do not know what your model is like, but when running mo.py, try one of the following options: --disable_fusing, --disable_gfusing, or --disable_fusing --disable_gfusing together.
Beginner

Hi Hyodo, Katsuya,

 

Thank you for the reply. I tried all 3 of your suggestions, but I get the same result. One difference: with both --disable_fusing --disable_gfusing the result is a different output than before, but it still gives the same output regardless of the input image. I used PyTorch to make a modified densenet121 from torchvision, replacing the initial convolution with 3 convolutions so it takes an 896-sized input instead of 224. Thanks
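For context on the stem change described above, here is a sketch of the spatial arithmetic. The kernel/stride/padding values for the 3-conv replacement are my assumption (the thread does not give them): an 896 input is 4x larger than 224, so a replacement stem plausibly adds extra downsampling to keep the later feature maps the same size as stock densenet121.

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size of a single conv layer."""
    return (size + 2 * padding - kernel) // stride + 1

# Stock densenet121 stem: one 7x7 conv, stride 2, pad 3, on a 224 input.
print(conv_out(224, 7, 2, 3))  # 112

# Hypothetical 3-conv stem on an 896 input: three stride-2 convs bring
# the map back to the same 112 (8x total downsampling instead of 2x).
s = 896
for k, st, p in [(7, 2, 3), (3, 2, 1), (3, 2, 1)]:
    s = conv_out(s, k, st, p)
print(s)  # 112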


@Lam, Carson

I think you can try the cross_check_tool to check the per-layer difference between CPU and NCS2. That might give us some insight into where the major difference comes from. Please check the doc <INSTALL_DIR>/deployment_tools/documents/_docs_IE_DG_Cross_Check_Tool.html for details on how to use cross_check_tool.
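The idea behind that kind of cross-check, sketched in plain numpy (a hypothetical helper, not the actual cross_check_tool): dump the per-layer blobs from each device, then report the maximum absolute difference per layer so you can see where the two devices first diverge.

```python
import numpy as np

def per_layer_diff(cpu_blobs, myriad_blobs):
    """Max absolute difference per layer between two devices' outputs."""
    return {name: float(np.max(np.abs(cpu_blobs[name] - myriad_blobs[name])))
            for name in cpu_blobs}

# Dummy example: one layer matches, one diverges badly.
cpu = {"conv1": np.ones((1, 4)), "fc": np.array([[0.2, -0.1]])}
ncs2 = {"conv1": np.ones((1, 4)), "fc": np.array([[0.015, -0.025]])}
print(per_layer_diff(cpu, ncs2))
```

A layer whose difference jumps by orders of magnitude relative to the layers before it is a good candidate for where the MYRIAD path goes wrong.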
