* I trained an SSD MobileNet V1 FPN using TF object detection API in the TF 1.15 environment.
* It isn't getting converted in OpenVINO 2020.4 (the conversion has to happen in this version only). However, using the same command in the same environment, I was able to convert the pre-trained model (downloaded using the downloader.py script).
Thanks for reaching out to us.
Based on your first error, it seems that some node names in your model do not match those in the ssd_v2_support.json file.
Feel free to share your model with us for further investigation.
If you are not allowed to share the model, you can try this out on your end.
Check whether all the node names in the JSON file match the node names in the model. These node names are listed under "start_points" and "end_points" of the entry with "id": "ObjectDetectionAPISSDPostprocessorReplacement" in the JSON file.
You can check the node names through Netron or TensorBoard.
Below are the steps to check the node names through TensorBoard:
1. Dump the input graph of the model.
python mo_tf.py --input_model=<MODEL.PB> --tensorboard_logdir=<any_directory>
2. Start TensorBoard through the terminal.
tensorboard --logdir=<any_directory>
3. Visualize the input graph of the model in TensorBoard.
Copy the URL printed in the terminal and paste it into your browser.
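Once you have the model's node names (from TensorBoard or Netron), the comparison step can be sketched as below. This is a minimal illustration, not the real file: the embedded config excerpt and the `model_nodes` set are hypothetical examples, and the actual ssd_v2_support.json contains more entries and fields.

```python
import json

# Illustrative excerpt of an SSD support config; the real
# ssd_v2_support.json shipped with OpenVINO has more entries.
config_text = """
[
    {
        "id": "ObjectDetectionAPISSDPostprocessorReplacement",
        "match_kind": "points",
        "instances": {
            "start_points": ["Postprocessor/Reshape_1", "Postprocessor/ToFloat"],
            "end_points": ["detection_boxes"]
        }
    }
]
"""

# Node names dumped from the model (e.g. collected from TensorBoard
# or Netron); a hypothetical example set for this sketch.
model_nodes = {"Postprocessor/Reshape_1", "detection_boxes"}

def missing_nodes(config, nodes, replacement_id):
    """Return config node names that are absent from the model graph."""
    for entry in config:
        if entry.get("id") == replacement_id:
            points = entry["instances"]["start_points"] + entry["instances"]["end_points"]
            return sorted(set(points) - nodes)
    return []

missing = missing_nodes(json.loads(config_text),
                        model_nodes,
                        "ObjectDetectionAPISSDPostprocessorReplacement")
print(missing)  # names listed here must be removed or renamed in the JSON
```

Any name this reports is a mismatch between the JSON file and your trained graph.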
We noticed that transfer.txt does not contain “Postprocessor/ToFloat”, while pretrained.txt does. Please remove this node name from the JSON file.
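The removal can be done with any text editor, or scripted as in this sketch. The config excerpt here is a hypothetical fragment for illustration; to apply it for real, load your copy of ssd_v2_support.json, call the helper, and write the result back.

```python
import json

def drop_node(config, replacement_id, node_name):
    """Remove node_name from the start/end points of one replacement entry."""
    for entry in config:
        if entry.get("id") == replacement_id:
            inst = entry["instances"]
            for key in ("start_points", "end_points"):
                inst[key] = [p for p in inst[key] if p != node_name]
    return config

# Illustrative config excerpt (the real file has more entries and fields).
config = json.loads("""
[
    {
        "id": "ObjectDetectionAPISSDPostprocessorReplacement",
        "match_kind": "points",
        "instances": {
            "start_points": ["Postprocessor/Reshape_1", "Postprocessor/ToFloat"],
            "end_points": ["detection_boxes"]
        }
    }
]
""")

config = drop_node(config, "ObjectDetectionAPISSDPostprocessorReplacement",
                   "Postprocessor/ToFloat")
print(json.dumps(config, indent=4))
```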
The second error, “Data node "Preprocessor/mul" has 2 producers”, can be fixed by using the latest OpenVINO™ toolkit. It was fixed in GitHub Pull Request #3063.
We recommend upgrading to OpenVINO 2021.4.1, which includes the latest fixes and features.
On another note, please share your model with us for further investigation if your problem cannot be solved by using the methods above.
Glad that you’ve been able to fix the issue by replacing ObjectDetectionAPI.py.
This thread will no longer be monitored since this issue has been resolved.
If you need any additional information from Intel, please submit a new question.