Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Problem with Tensorflow Object Detection API Faster RCNN inference with OpenVINO

Di_Fey
Beginner
861 Views

Hello. I have converted my custom TensorFlow Object Detection API model (Faster R-CNN) into .xml and .bin files. I used the code from the sample here and tried to run the model, but it failed with the error "can't find output layer named bbox_pred". I searched this forum and learned that the object detection SSD sample should work instead. However, the inference result is an image with no bounding boxes: the probabilities are very small and the proposed bounding boxes are very wide. Can anyone help with that? I'm using the 2020.1 release of OpenVINO.
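For what it's worth, SSD-style demos typically read a DetectionOutput blob where each row is [image_id, label, conf, x_min, y_min, x_max, y_max] with normalized coordinates, and drop rows below a confidence threshold. The sketch below (the `parse_detections` helper and the sample rows are illustrative, not from the actual model) shows why very small probabilities produce an image with no boxes drawn:

```python
# Sketch of how SSD-style DetectionOutput results are commonly parsed.
# Threshold, helper name, and sample rows are illustrative assumptions.

CONF_THRESHOLD = 0.5

def parse_detections(output, image_w, image_h, threshold=CONF_THRESHOLD):
    """Filter [image_id, label, conf, xmin, ymin, xmax, ymax] rows
    and scale normalized coordinates to pixel values."""
    boxes = []
    for det in output:
        image_id, label, conf, xmin, ymin, xmax, ymax = det
        if conf < threshold:
            continue  # low-confidence proposals are dropped here
        boxes.append((int(label), conf,
                      int(xmin * image_w), int(ymin * image_h),
                      int(xmax * image_w), int(ymax * image_h)))
    return boxes

# Example: one confident detection, one weak full-frame proposal.
sample = [
    [0, 1, 0.92, 0.10, 0.20, 0.40, 0.60],
    [0, 1, 0.03, 0.00, 0.00, 1.00, 1.00],  # very wide, low confidence: filtered
]
print(parse_detections(sample, 640, 480))
# → [(1, 0.92, 64, 96, 256, 288)]
```

If every detection looks like the second row (tiny confidence, near full-frame box), the model's outputs are being misinterpreted rather than merely low quality.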

Any help would be much appreciated! Thanks

4 Replies
JesusE_Intel
Moderator

Hi Di Fey,

What base model did you use to train your custom model? By default the demo expects the output box-prediction layer to be named "bbox_pred"; if your model uses a different name for that layer, you can specify it with -bbox_name "<string>" when running the demo. See the demo documentation, or print the help menu with the -h parameter.
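As a rough sketch, an invocation with the override might look like this (the binary name, paths, and the layer name "detection_output" are placeholders to adapt to your own build and model):

```shell
# Hypothetical example: pass your model's actual output layer name
# via -bbox_name if it is not the default "bbox_pred".
./object_detection_demo -i input.jpg \
    -m faster_rcnn.xml \
    -bbox_name detection_output
```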

Could you share your model for me to test on my end? Let me know if you would like to share it privately and I can start a private message.

Regards,

Jesus

Di_Fey
Beginner

Hello Jesus, 

I am using ResNet-50 as the base model. I have successfully run inference with the object_detection_sample_ssd code by adding the --disable_resnet_optimization flag during model conversion. The new problem I am facing is that when I use the object_detection_demo_ssd_async code to run the model on a video input, the detections are accurate but playback is very laggy.
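For reference, a typical invocation of the async SSD demo on a video looks like the following (paths are placeholders; the -d flag selects the target device):

```shell
# Hypothetical paths; -d selects the inference device (CPU, GPU, MYRIAD, ...).
./object_detection_demo_ssd_async -i input_video.mp4 \
    -m frozen_inference_graph.xml \
    -d CPU
```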

JesusE_Intel
Moderator

Could you share the model and the steps to reproduce the issue? Please also share the Model Optimizer command you used to convert the model to IR format.

Regards,

Jesus

Di_Fey
Beginner

Hello Jesus, sorry for the inactivity due to the COVID situation.

Here is my model conversion command:

mo_tf.py --input_model=C:\Users\nicka\OneDrive\Desktop\tensorflow\new\frozen_inference_graph.pb ^
  --tensorflow_use_custom_operations_config extensions\front\tf\faster_rcnn_support.json ^
  --tensorflow_object_detection_api_pipeline_config C:\Users\nicka\OneDrive\Desktop\tensorflow\new\pipeline.config ^
  --output_dir C:\Users\nicka\OneDrive\Desktop\tensorflow ^
  --data_type FP16 --reverse_input_channels

I can share the model privately. Thanks.
