Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Questions with custom trained Faster-RCNN, unsupported layer

Sapiain__Roberto
Beginner

Hi.

I have a custom model with 3 classes, trained in TensorFlow (via TFOD API v1.12), using TF v1.12.3 (thus, I guess I should use the v1.10 JSON file). The virtualenv for conversion to Intel IR format uses TF 1.14; it worked fine for a custom-trained SSD-Inception-v2 model, using the same dataset and TFRecord files. And that SSD model runs fine with the OpenCV DNN module.

I have a set of questions that I'd love to have answered:

1.- The version in the name of the JSON file, does it mean:

-- the version of TF running in the virtualenv for mo_tf.py? or,

-- the version of TF used for training the model (and its respective TFOD API)?

2.- Should I specify another input shape (1024x600), or just use the default 600x600 in this case? (Images may be larger than that, but the TensorFlow-based code resizes first, and works as expected.)


3.- I'm getting this error:

$ python detector_v0.1/ocv-fotodetectorfrcnn.py -m intel-tf/frcnn_1024-600_50k.bin -p intel-tf/frcnn_1024-600_50k.xml -i img/t05.jpg
[INFO] loading model...
[INFO] iniciando inferencia
Traceback (most recent call last):
  File "detector_v0.1/ocv-fotodetector_frcnn.py", line 45, in <module>
    cvOut = cvNet.forward()
cv2.error: OpenCV(4.1.1-openvino) /home/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/dnn/src/dnn.cpp:2182: error: (-215:Assertion failed) inp.total() in function 'allocateLayers'


Do you have any ideas why?

Addenda: Complete mo_tf.py command, and the modifiers used for generating the model:

$ mo_tf.py --data_type FP16 --reverse_input_channels --output_dir intel-tf --input_model modelos-exportados/frcnn_1024-600_50k.pb --tensorflow_use_custom_operations_config intel-base-files/faster_rcnn_support_api_v1.10.json --tensorflow_object_detection_api_pipeline_config modelos-exportados/mining_mini_frcnn_1024-600.config 
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/rsapiain/WORK_DL/minero_mini/modelos-exportados/frcnn_1024-600_50k.pb
    - Path for generated IR:     /home/rsapiain/WORK_DL/minero_mini/intel-tf
    - IR output name:     frcnn_1024-600_50k
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP16
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     True
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     /home/rsapiain/WORK_DL/minero_mini/modelos-exportados/mining_mini_frcnn_1024-600.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     /home/rsapiain/WORK_DL/minero_mini/intel-base-files/faster_rcnn_support_api_v1.10.json
Model Optimizer version:     2019.2.0-436-gf5827d4
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/rsapiain/WORK_DL/minero_mini/intel-tf/frcnn_1024-600_50k.xml
[ SUCCESS ] BIN file: /home/rsapiain/WORK_DL/minero_mini/intel-tf/frcnn_1024-600_50k.bin
[ SUCCESS ] Total execution time: 57.98 seconds. 

Thank you in advance for your help.


Shubha_R_Intel
Employee

Dear Sapiain, Roberto,

For question 1), all the information you need is on the TensorFlow Object Detection API Model Optimizer page. All the *.json version numbers versus TF API versions are broken down there.

Example:

ssd_v2_support.json — for frozen SSD topologies from the models zoo

faster_rcnn_support.json — for frozen Faster R-CNN topologies from the models zoo

faster_rcnn_support_api_v1.7.json — for Faster R-CNN topologies trained manually using the TensorFlow* Object Detection API version 1.7.0 or higher

faster_rcnn_support_api_v1.10.json — for Faster R-CNN topologies trained manually using the TensorFlow* Object Detection API version 1.10.0 or higher

mask_rcnn_support.json — for frozen Mask R-CNN topologies from the models zoo

mask_rcnn_support_api_v1.7.json — for Mask R-CNN topologies trained manually using the TensorFlow* Object Detection API version 1.7.0 or higher up to 1.9.0 inclusively

mask_rcnn_support_api_v1.11.json — for Mask R-CNN topologies trained manually using the TensorFlow* Object Detection API version 1.10.0 or higher

rfcn_support.json — for the frozen RFCN topology from the models zoo frozen with TensorFlow* version 1.9.0 or lower.

rfcn_support_api_v1.10.json — for the frozen RFCN topology from the models zoo frozen with TensorFlow* version 1.10.0 or higher.


For 2), please read the Custom Input Shape Model Optimizer page. Take a look at your pipeline.config: is it a fixed_shape_resizer or a keep_aspect_ratio_resizer? If you don't supply an --input_shape, Model Optimizer takes the shape from your pipeline.config. You can certainly specify other sizes; the referenced document explains how to accommodate sizes different from pipeline.config. Depending on whether it's a fixed_shape_resizer or a keep_aspect_ratio_resizer, Model Optimizer handles them slightly differently.
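Since the model name suggests a keep_aspect_ratio_resizer with min_dimension=600 and max_dimension=1024 (an assumption on my part, based on "frcnn_1024-600"), the sizing rule that resizer applies can be sketched in plain Python:

```python
def keep_aspect_ratio_size(h, w, min_dim=600, max_dim=1024):
    # keep_aspect_ratio_resizer scales so the short side reaches min_dim,
    # unless that would push the long side past max_dim, in which case
    # the long side is capped at max_dim instead.
    scale = min(min_dim / min(h, w), max_dim / max(h, w))
    return int(round(h * scale)), int(round(w * scale))

# A square input stays at min_dim:
print(keep_aspect_ratio_size(600, 600))    # (600, 600)
# A wide 1090x1920 image is capped by max_dim:
print(keep_aspect_ratio_size(1090, 1920))  # (581, 1024)
```

This is only a sketch of the resizer's arithmetic, not Model Optimizer code; the IR itself ends up with one fixed input shape, which is why the warning above mentions that the preprocessing block is removed.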

For 3), it seems you've chosen to use OpenCV for inference rather than the Inference Engine API. For OpenCV questions, please ask in one of the following two forums:

https://answers.opencv.org/questions/

https://github.com/opencv/opencv

This forum is for OpenVINO questions, namely Model Optimizer and Inference Engine.

Hope it helps,

Thanks,

Shubha

Sapiain__Roberto
Beginner

Hello Shubha.

Thank you for the clarifications. They will help me, and I will check what you said for (2). It's a keep_aspect_ratio_resizer.


I did some testing on point (3), and it is as you say: I was getting the error with the pair (OPENCV, CPU) for (<backend>, <target>).

But when I later tried with (INFERENCE_ENGINE, CPU) and (INFERENCE_ENGINE, MYRIAD), I got another error, also OpenCV-related, mentioned in this post (with Faster-RCNN as well):

https://software.intel.com/en-us/forums/computer-vision/topic/812589

which traces to an open bug in OpenCV:  https://github.com/opencv/opencv/issues/14839

So I will have to try another model, R-FCN: for the case I'm thinking of, I need something more accurate and precise than SSD (which I know works well).


Again, thank you very much for your reply.

Shubha_R_Intel
Employee

Dearest Sapiain, Roberto,

I am happy to help. If you have further Model Optimizer or Inference Engine questions, please do not hesitate to ask!

Sincerely,

Shubha

Sapiain__Roberto
Beginner

Hi Shubha. 

Sorry to bother you again.

It seems that with R-FCN I'm getting the same issue:

$ python detector_v0.1/ocv-fotodetector_r-fcn.py -m intel-tf/rfcn_1024-600_50k.bin -p intel-tf/rfcn_1024-600_50k.xml -i img/t09.png
[INFO] loading model...
[INFO] iniciando inferencia, imagen de W*H: (1920, 1090)
[INFO]: resized img to: 1000 * 567
Traceback (most recent call last):
  File "detector_v0.1/ocv-fotodetector_r-fcn.py", line 61, in <module>
    cvOut = cvNet.forward()
cv2.error: OpenCV(4.1.1-openvino) /home/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/dnn/src/dnn.cpp:2182: error: (-215:Assertion failed) inp.total() in function 'allocateLayers'

SSD works fine.

This is the snippet; with this same pattern, SSD works (reversing the input channels is not required, since I'm loading the mo_tf.py output).

cvNet.setInput(cv2.dnn.blobFromImage(frame, size=(300, 300), crop=False))

cvOut = cvNet.forward()
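For reference, blobFromImage essentially resizes the image and reorders it into an NCHW float32 tensor. A rough pure-NumPy sketch of that layout step (nearest-neighbour resize only, with no mean subtraction or channel swap, purely as an illustration of the tensor shape the network receives):

```python
import numpy as np

def to_blob(img_hwc, size=(600, 600)):
    # Rough stand-in for cv2.dnn.blobFromImage(img, size=size, crop=False):
    # nearest-neighbour resize to `size`, then HWC uint8 -> NCHW float32.
    h, w = img_hwc.shape[:2]
    tw, th = size
    ys = np.arange(th) * h // th  # source row for each target row
    xs = np.arange(tw) * w // tw  # source column for each target column
    resized = img_hwc[ys][:, xs]
    return resized.astype(np.float32).transpose(2, 0, 1)[None, ...]

blob = to_blob(np.zeros((1090, 1920, 3), dtype=np.uint8), size=(600, 600))
print(blob.shape)  # (1, 3, 600, 600)
```

The sizes here (600x600) mirror the IR's fixed input shape rather than the SSD snippet's 300x300; in practice the real blobFromImage call is the one to use, this only shows why a shape mismatch at the input can trip the `inp.total()` assertion in allocateLayers.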


It's an OpenCV issue again; I hope there is a fix soon. The TensorFlow-based code and model work correctly.

Kind regards.


Shubha_R_Intel
Employee

Dearest Sapiain, Roberto,

If it's an OpenCV issue, please post it to:

https://github.com/opencv/opencv

https://answers.opencv.org/questions/

Thanks for your patience, and thank you for sharing your troubleshooting discoveries with the OpenVINO community!

Shubha
