Hi, I'm trying to convert my own trained Faster R-CNN model to an IR model. The conversion works, but the IR model doesn't work properly.
OpenVINO version: 2022.1
Here's my CLI input for the conversion.
mo --input_model "traffic_light.onnx" --input_shape "[1,3,512,512]" --mean_values="[0.485, 0.456, 0.406]"
And the outputs.
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: C:\Users\taejinki\Anaconda3\aiclass\openvino_notebooks\notebooks\AI_Car\traffic_light.onnx
- Path for generated IR: C:\Users\taejinki\Anaconda3\aiclass\openvino_notebooks\notebooks\AI_Car\.
- IR output name: traffic_light
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,3,512,512]
- Source layout: Not specified
- Target layout: Not specified
- Layout: Not specified
- Mean values: [0.485, 0.456, 0.406]
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- User transformations: Not specified
- Reverse input channels: False
- Enable IR generation for fixed input shape: False
- Use the transformations config file: None
Advanced parameters:
- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
OpenVINO runtime found in: C:\Users\taejinki\Anaconda3\aiclass\openvino_env\lib\site-packages\openvino
OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
[ WARNING ]
Detected not satisfied dependencies:
numpy: installed: 1.22.4, required: < 1.20
Please install required versions of components or run pip installation
pip install openvino-dev
C:\Users\taejinki\Anaconda3\aiclass\openvino_env\lib\site-packages\numpy\lib\function_base.py:935: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
return array(a, order=order, subok=subok, copy=True)
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: C:\Users\taejinki\Anaconda3\aiclass\openvino_notebooks\notebooks\AI_Car\traffic_light.xml
[ SUCCESS ] BIN file: C:\Users\taejinki\Anaconda3\aiclass\openvino_notebooks\notebooks\AI_Car\traffic_light.bin
[ SUCCESS ] Total execution time: 3.87 seconds.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2022_bu_IOTG_OpenVINO-2022-1&content=upg_all&medium=organic or on the GitHub*
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai
The conversion works, but when I try to run inference, no output comes out.
Here's my inference code.
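Roughly, the inference follows the sketch below (assuming the OpenVINO Runtime API 2.0 from 2022.1; the image path is a placeholder and the exact preprocessing in my notebook may differ slightly):
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("traffic_light.xml")
compiled_model = core.compile_model(model, "CPU")

# Load the test image and resize it to the 512x512 input shape used during conversion
image = cv2.cvtColor(cv2.imread("test_image.jpg"), cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (512, 512))

# HWC -> NCHW float32 batch of 1 (mean subtraction is already baked into the IR via --mean_values)
input_tensor = np.expand_dims(image.transpose(2, 0, 1), 0).astype(np.float32)

# Single synchronous inference; the result maps each model output to a numpy array
results = compiled_model([input_tensor])
print(results)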
And my output.
{<ConstOutput: names[boxes] shape{?,4} type: f32>: array([], shape=(0, 4), dtype=float32), <ConstOutput: names[labels] shape{?} type: i64>: array([], dtype=int64), <ConstOutput: names[scores] shape{?} type: f32>: array([], dtype=float32)}
Just in case anybody wants to know whether the ONNX model works or not, here is the code I use to check it.
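Roughly, the check looks like the sketch below (assuming onnxruntime and torchvision-style normalization; the image path is a placeholder):
import time
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("traffic_light.onnx")
input_name = session.get_inputs()[0].name

# Same test image, resized to 512x512 and normalized the way the model was trained
image = cv2.cvtColor(cv2.imread("test_image.jpg"), cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (512, 512)).astype(np.float32) / 255.0
image = (image - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
input_tensor = np.expand_dims(image.transpose(2, 0, 1), 0).astype(np.float32)

start = time.perf_counter()
outputs = session.run(None, {input_name: input_tensor})  # [boxes, labels, scores]
print("time:", time.perf_counter() - start)
print(outputs)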
And here's my output:
time: 2.242720099999133 [array([[255.5864 , 170.9059 , 262.63242 , 183.95753 ], [287.64737 , 176.37906 , 293.07993 , 187.33441 ], [160.74478 , 194.56476 , 166.92159 , 209.77167 ], [160.96953 , 204.03824 , 166.76172 , 217.38097 ], [331.62326 , 206.20482 , 336.98813 , 217.04013 ], [209.70558 , 157.65472 , 217.29362 , 172.55331 ], [331.95093 , 210.57849 , 337.20444 , 218.72993 ], [253.57298 , 164.53847 , 264.3959 , 186.01653 ], [331.36334 , 220.86516 , 337.39163 , 232.05266 ], [331.9028 , 212.67885 , 336.9614 , 222.62915 ], [158.352 , 190.83386 , 168.80211 , 213.45125 ], [ 15.901425 , 1.7063928, 28.52686 , 25.742188 ], [287.55887 , 178.88332 , 293.83124 , 191.73332 ], [348.28012 , 161.01564 , 358.7922 , 174.09166 ]], dtype=float32), array([1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 3, 1, 2], dtype=int64), array([0.98102945, 0.98010844, 0.938019 , 0.9213216 , 0.89558107, 0.28256923, 0.269735 , 0.15260203, 0.14682811, 0.08892567, 0.07378212, 0.06822211, 0.06646542, 0.05358411], dtype=float32)]
I used the same image, but got no output from the IR model.
I'm attaching my models (PyTorch, ONNX, IR) below.
Link: https://drive.google.com/drive/folders/1TRzo6fkKZj_VpmAv6Uui5CoEtVJIgvA_?usp=sharing
Regards,
Jason.
Hi Jason,
Thanks for reaching out.
I've tested your inference code with both the XML and ONNX files and I'm seeing the same result on our end. I also tested with a random image that has traffic lights and observed similar output.
In addition, I tested with the Object Detection Python Demo, but when I used architecture_type=ssd the model was unable to even load. When I ran with architecture_type=retinaface-pytorch, it was able to load but failed with an "IndexError: list index out of range" error.
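(For reference, the demo was invoked with a command roughly along these lines; the image path is a placeholder:)
python object_detection_demo.py -m traffic_light.xml -at ssd -i <test_image> -d CPU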
However, we are still looking into this matter and will update you soon.
Can you share the source repository of your model and any related files for us to investigate this further?
Regards,
Aznie
Hi Aznie,
Thank you for your help in checking this model. I've found that the ONNX model doesn't need to be converted to an IR model anymore, but I still want to know why my IR model gets no output from the same image I tried with the ONNX model. If it's no bother, please let me know how you tested with the XML file.
As far as I know, this model is based on Faster R-CNN, and I made the model based on this link: https://www.kaggle.com/code/meemr5/traffic-light-detection-pytorch-starter
I've just uploaded my Jupyter notebook files to Google Drive.
If you need any other help, please let me know.
Regards,
Jason.
Hi Jason,
Previously we tested the XML and ONNX files with the Object Detection Python Demo, but the model was unable to load. Apart from that, both the XML and ONNX models can be tested with benchmark_app for performance measurement. I tested your model with benchmark_app and there was no issue with either model file.
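(For reference, roughly the kind of commands used; the device here is an example:)
benchmark_app -m traffic_light.xml -d CPU
benchmark_app -m traffic_light.onnx -d CPU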
For further checking, can you share the Google drive link here or privately to my email:
noor.aznie.syaarriehaahx.binti.baharuddin@intel.com
Regards,
Aznie
Hi Aznie,
I just added you to share the Google Drive.
Regarding benchmark_app, can it check whether the model works correctly, or does it only check that both model files have identical shapes?
Regards,
Jason.
Hi Jason,
I need more time to investigate this issue. We're diving deep into the layers to check further.
Thanks,
Hari
Hi Hari,
Thank you for the update. I can wait for it!
Regards,
Jason.
Hi Jason,
Thank you for your patience. After some investigation of your model, it seems that you need to add the --use_new_frontend parameter when performing model optimization.
mo --input_model traffic_light.onnx --mean_values="[0.485, 0.456, 0.406]" --use_new_frontend
The IR file generated should be able to detect the traffic light.
Hope this information helps.
Regards,
Aznie
Hi Aznie,
Thank you for your help. I can see the outputs from my model now too.
Hari is helping me with it and found that there may be an issue occurring in the PyTorch-to-ONNX conversion.
I'll look into this issue myself.
Regards,
Jason.
Hi Jason,
I'm glad to hear that. Hope everything is going well for you. This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie
