Gracious__Oniel
Beginner

CPU Extension Error with Store Traffic Monitor Python Sample


I am trying to implement the Store Traffic Monitor in Python on a Windows 10 Platform.

I have been following the sample implementation linked below and have completed all of the steps, including model optimization.

store-traffic-monitor-python: https://github.com/intel-iot-devkit/store-traffic-monitor-python

However, I am not sure what CPU extension needs to be provided.

If I use -d CPU, the run throws the following error:

python store-traffic-monitor.py -d CPU -m resources/mobilenet-ssd.xml -l resources/labels.txt 

(The sample shows a CPU extension (libcpu_extension.so) that is used on Linux; which CPU extension should I use on Windows?)

Initializing plugin for CPU device...
Reading IR...
Loading IR to the plugin...
Traceback (most recent call last):
  File "store-traffic-monitor.py", line 530, in <module>
    sys.exit(main() or 0)
  File "store-traffic-monitor.py", line 329, in main
    exec_net = plugin.load(network=net, num_requests=2)
  File "ie_api.pyx", line 305, in inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 318, in inference_engine.ie_api.IEPlugin.load
RuntimeError: Unsupported primitive of type: PriorBox name: conv11_mbox_priorbox

However, if I use -d GPU, I get the following error:

python store-traffic-monitor.py -d GPU -m resources/mobilenet-ssd.xml -l resources/labels.txt 

Initializing plugin for GPU device...
Reading IR...
Loading IR to the plugin...
Traceback (most recent call last):
  File "store-traffic-monitor.py", line 530, in <module>
    sys.exit(main() or 0)
  File "store-traffic-monitor.py", line 331, in main
    n, c, h, w = net.inputs[input_blob]
TypeError: 'inference_engine.ie_api.InputInfo' object is not iterable

Can you please advise?

Thanks

Oniel

 

 

8 Replies
Gracious__Oniel
Beginner

If I change the line from    

n, c, h, w = net.inputs[input_blob]

to 

n, c, h, w = net.inputs[input_blob].shape

I stop getting an error on the GPU; however, people are not detected, even in the sample videos provided. Am I doing something wrong?

 

Mikhail_T_Intel
Employee

Hello Oniel,

Regarding the "Unsupported primitive..." error on CPU: you have to specify a path to the CPU extensions library with the '-e' command-line option of the sample.

The second error is caused by changes to the Python API in the R5 release: the `inputs` property of `IENetwork` now returns a dictionary of `InputInfo` objects. Please refer to the documentation bundled with the R5 package (not the web version) for more details.
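To illustrate the shape of the new API, here is a small stand-in sketch (not the real `openvino` classes, just the same access pattern):

```python
# Stand-in for the R5 behaviour: `inputs` maps blob names to objects
# carrying a `.shape` attribute instead of bare shape tuples.
class InputInfo:
    def __init__(self, shape):
        self.shape = shape

net_inputs = {"data": InputInfo([1, 3, 300, 300])}
input_blob = "data"

# The pre-R5 style would fail here with "object is not iterable":
# n, c, h, w = net_inputs[input_blob]
n, c, h, w = net_inputs[input_blob].shape  # R5 style
print(n, c, h, w)  # 1 3 300 300
```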

About the issue with detections: I'm not really familiar with the sample, but if you are using ssd-mobilenet as recommended in the readme, I suspect one aspect of the conversion to an IR was missed. Caffe MobileNet-based models require some input preprocessing, which can either be embedded in the IR with the Model Optimizer or performed at runtime with OpenCV; I recommend the first option, since it is then done only once. The .prototxt file of the original MobileNet model contains a commented-out preprocessing block with mean-value subtraction and a scaling factor. The model was trained with that image preprocessing, which means you have to apply the same preprocessing when running inference with the deployed model. The mentioned SSD MobileNet uses the same MobileNet topology as its backbone, so the preprocessing also applies to the SSD model.

To embed the mean-value subtraction and scaling in the IR, add the following options to the Model Optimizer command line:

`--mean_values [103.94,116.78,123.68] --scale 58.8235`
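For example, a full Model Optimizer invocation might look like the following (file names here are illustrative; use your actual .caffemodel and .prototxt paths):

```shell
python mo.py --input_model mobilenet-ssd.caffemodel \
    --input_proto mobilenet-ssd.prototxt \
    --mean_values [103.94,116.78,123.68] --scale 58.8235
```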

Please note that the Model Optimizer treats the scale factor differently than Caffe: in Caffe the scale is a multiplication factor, while in the Model Optimizer it is a division factor, so you should specify the scale value as 1/0.017 = 58.8235.
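As a quick sanity check (plain Python, independent of the sample), you can verify that Caffe's multiplicative scale and the Model Optimizer's divisor produce the same preprocessing result:

```python
# Caffe's transform_param applies: y = (x - mean) * scale, with scale = 0.017.
# Model Optimizer's --scale is a divisor: y = (x - mean) / scale_mo.
caffe_scale = 0.017
mo_scale = 1 / caffe_scale  # value to pass via --scale

print(round(mo_scale, 4))  # 58.8235

# Both forms yield the same preprocessed pixel value:
pixel, mean = 200.0, 116.78
assert abs((pixel - mean) * caffe_scale - (pixel - mean) / mo_scale) < 1e-9
```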

I hope this helps with the missed detections.


Gracious__Oniel
Beginner

Hi Mikhail,

Thanks for your inputs.

The model now works on the GPU.

However, could you please help me with the extension library I need to specify on Windows with the -e command-line option? On Linux it is libcpu_extension_avx2.so (provided with the sample); is there a Windows equivalent?

Thanks

Oniel 

 

Mikhail_T_Intel
Employee

Yes, on Windows there should be a similar library called cpu_extension_avx2.dll, provided with the OpenVINO package.

Gracious__Oniel
Beginner

I tried running the program, passing cpu_extension_avx2.dll from

-e \computer_vision_sdk_2018.4.420\deployment_tools\inference_engine\bin\intel64\Debug\cpu_extension_avx2.dll

however, the application just exits at the line below:

exec_net = plugin.load(network=net, num_requests=2)

Initializing plugin for CPU device...
Reading IR...
Loading IR to the plugin...

The program then simply returns.

Mikhail_T_Intel
Employee

Are you really working with the Debug version of the Inference Engine? If not, passing the Debug version of the extensions library to the Release version of the plugin may lead to corruption.
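For example, based on the path from your message, the Release build would be picked up with something like:

```shell
python store-traffic-monitor.py -d CPU -m resources/mobilenet-ssd.xml -l resources/labels.txt ^
    -e \computer_vision_sdk_2018.4.420\deployment_tools\inference_engine\bin\intel64\Release\cpu_extension_avx2.dll
```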

Gracious__Oniel
Beginner

Thanks for pointing that out. I changed to the Release version, and the demo seems to work as expected.

Supra__Morne_
Beginner

Hi Mikhail

Thanks for the info provided in this issue. I am new to the Python language, so I do not quite understand what to do when you say:

"The second error is caused by changes to the Python API in the R5 release: the `inputs` property of `IENetwork` now returns a dictionary of `InputInfo` objects. Please refer to the documentation bundled with the R5 package (not the web version) for more details."

Can you maybe assist in showing me what to do in this particular example? I can see the inputs section in the code, but I do not know how to change it.


Regards

Morne

