Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

TensorFlow 2.0 YOLOv3

Lukas_S_1
Beginner

Hi,

I have a YOLOv3-tiny implementation in TensorFlow 2.0. To export the model to .pb format I used this function:

tf.keras.experimental.export_saved_model()
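
A minimal sketch of that export step (the tiny model below is just a stand-in for my YOLOv3-tiny network, and the output path is illustrative):

import tensorflow as tf

# Any tf.keras model exports the same way; this small net stands in for YOLOv3-tiny.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", input_shape=(416, 416, 3)),
    tf.keras.layers.LeakyReLU(alpha=0.1),
])

# TF 1.13-2.0 era API; later TF versions replace it with model.save(..., save_format="tf").
tf.keras.experimental.export_saved_model(model, "saved_model_dir/")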

Then I froze the graph with the freeze_graph.py script. I tried freezing the graph with various versions of TF. Freezing worked for TF >= 1.13.0, but with TF 1.12, which is the recommended version for OpenVINO, I got this error:

Op type not registered 'LeakyRelu' in binary running on...
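
For reference, the freeze step was roughly this invocation of the stock script (the output node names are placeholders, not the exact ones from my graph):

python3 -m tensorflow.python.tools.freeze_graph \
--input_saved_model_dir saved_model_dir/ \
--output_node_names <comma-separated output node names> \
--output_graph frozen_model.pb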

Once I had a frozen graph (frozen with TF 1.13, TF 1.14, and TF 2.0.0), I used the mo_tf.py script (OpenVINO 2019 R1) to convert it to IR as follows:

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
--input_model /home/src/yolov3-tiny/2/frozen_model.pb \
--tensorflow_use_custom_operations_config /home/tf2_yolo_v3_tiny.json \
--input yolov3-tiny-input \
--input_shape '[1, 416, 416, 3]' \
--reverse_input_channels \
--data_type FP32 \
--log_level DEBUG 2> /tmp/debug_logs.txt
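
(For context: a config like tf2_yolo_v3_tiny.json typically follows the yolo_v3_tiny.json template shipped with the Model Optimizer, roughly the following, with entry_points adjusted to the node names of your own frozen graph:)

[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "anchors": [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319],
      "coords": 4,
      "num": 6,
      "masks": [[3, 4, 5], [0, 1, 2]],
      "entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4"]
    }
  }
]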

mo_tf.py worked fine when I used TF 1.13 and TF 1.14 (I have Docker containers with the different TF versions), but when I tried it with TF 1.12 I got an error very similar to the one I got when freezing the model with TF 1.12:

[ ERROR ]  Cannot infer shapes or values for node "yolo_darknet/leaky_re_lu/LeakyRelu".
[ ERROR ]  Op type not registered 'LeakyRelu' in binary running on lukas-ThinkPad-P51. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f74b89c6c80>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "yolo_darknet/leaky_re_lu/LeakyRelu" node.

When loading the model (net = IENetwork(model=model_xml, weights=model_bin)) I got this error, regardless of the TF version used:

Traceback (most recent call last):
  File "inference_async.py", line 349, in <module>
    sys.exit(main() or 0)
  File "inference_async.py", line 175, in main
    net = IENetwork(model=model_xml, weights=model_bin)
  File "ie_api.pyx", line 271, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: Error reading network: in Layer yolo_darknet/max_pooling2d/MaxPool: trying to connect an edge to non existing output port: 2.1

I have read a lot of similar issues, and it seems that I have to wait for official support of (at least) TF 1.13. Is there any chance you could give me access to an alpha version of OpenVINO? Or do you have any advice on how to fix this?

Thanks,

Lukas

Shubha_R_Intel
Employee

Dearest Lukas, 

We just released OpenVINO 2019 R2 today. Can you install it and try it? Please report your findings here.

Thanks,

Shubha

Lukas_S_1
Beginner

Hi Shubha,

Sorry for the delay, I was busy with serving the model.

However, everything is working now! Thanks for the fast release of the new OpenVINO version. I used TF 1.14 for freezing, the Model Optimizer, and inference; the model itself is still implemented in TF 2.0.

Best regards,

Lukas

Shubha_R_Intel
Employee

Dear Lukas 

I'm so very happy to hear that OpenVINO 2019 R2 is working for you now! And thanks for reporting your success back to the OpenVINO community!

Shubha

SPaul19
Innovator

I am having a similar problem. I am currently on the stable release of TensorFlow 2.0. These are the steps I am using to convert my custom image classification model (built using tf.keras):

- After fine-tuning the model, I save it with tf.keras.models.save_model(fine_tuned_model, "saved_model_dir", save_format="tf"), as in the sketch below. You can refer here for further details: https://www.tensorflow.org/api_docs/python/tf/keras/experimental/export_saved_model.
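
A minimal sketch of that save step (the model below is only a stand-in for my fine-tuned classifier, and the output path is illustrative):

import tensorflow as tf

# Stand-in for the fine-tuned classifier; any tf.keras model is saved the same way.
fine_tuned_model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(224, 224, 3))

# save_format="tf" writes a SavedModel *directory* containing saved_model.pb
# plus a variables/ folder; saved_model.pb by itself is not a frozen graph.
tf.keras.models.save_model(fine_tuned_model, "saved_model_dir", save_format="tf")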

After I get the model, I run it through the Model Optimizer using:

mo_tf.py --input_shape [1,224,224,3] --input_model saved_model.pb --data_type FP16

But I am getting this:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/sayak/NCS_With_Custom_Models_In_TF_Keras/grocery_dataset_experiment/v2/saved_model.pb
	- Path for generated IR: 	/home/sayak/NCS_With_Custom_Models_In_TF_Keras/grocery_dataset_experiment/v2/.
	- IR output name: 	saved_model
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	[1,224,224,3]
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	2019.1.1-83-g28dfbfd
[ FRAMEWORK ERROR ]  Cannot load input model: TensorFlow cannot read the model file: "/home/sayak/NCS_With_Custom_Models_In_TF_Keras/grocery_dataset_experiment/v2/saved_model.pb" is incorrect TensorFlow model file. 
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #43.

Need help.

Shubha_R_Intel
Employee

Dear Sayak,

It looks like Lukas fixed it: he was able to get it working in OpenVINO 2019 R2.

According to Lukas:

 I used TF 1.14 for freezing, the Model Optimizer, and inference; the model itself is still implemented in TF 2.0.
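
One thing worth checking: the saved_model.pb produced by save_format="tf" is part of a SavedModel directory (its weights live in a separate variables/ folder), not a frozen graph, which is why the Model Optimizer rejects it when passed via --input_model. As a rough sketch of the TF 1.14 freezing route Lukas described (illustrative model and paths, not his exact code):

import tensorflow as tf
from tensorflow.python.framework import graph_util

tf.keras.backend.set_learning_phase(0)  # build the graph in inference mode
model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(224, 224, 3))

# Fold the variables into constants and serialize a single frozen .pb.
sess = tf.keras.backend.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [out.op.name for out in model.outputs])
with tf.io.gfile.GFile("frozen_model.pb", "wb") as f:
    f.write(frozen.SerializeToString())

A frozen_model.pb written this way is the kind of file mo_tf.py expects with --input_model.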

And by the way, we just released OpenVINO 2019 R3. You should always try the latest release, as bug fixes and improvements are made with each new version.

Let me know if this helps,

Thanks,

Shubha

Gupta__Shubham
New Contributor I

Hi Shubha,

I also have the same problem. Can you help me out? How do I use TF 1.14 for freezing?

Thanks

Shubham

Mandaliya__Parth
Beginner

Hello Everyone,

I'm also facing a similar kind of issue.

@Lukas, can you please provide a little more insight into how you froze the model using freeze_graph.py?

Thanks.
