Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Deploying AI with multiple models

SDwiv1
Beginner
1,142 Views

Hi,

I have a solution which uses 2 models.

The first model is CMU's human pose estimation model (originally Caffe), which has been ported to TensorFlow.

We feed the output of the human pose estimation model into another CNN that we built ourselves.

My question is: how do we get our solution to work on Movidius? Do we need to convert both models to IR format?

Is there any existing example which uses 2 models or any resource?

Also, I have found the birds example in the ncappzoo that uses Tiny YOLO and a GoogLeNet model for classification. Is that example still relevant for NCSDK 2 and the NCS 2 device? Do I require 2 NCS 2 devices to run a solution with 2 models that need to execute sequentially?

I have tried converting our existing TF solution using mo.py and mo_tf.py; however, I have been unsuccessful in my attempts to do so.


Thank you,
Sweta

0 Kudos
6 Replies
Severine_H_Intel
Employee

Dear Sweta, 

My question is, how do we get our solution to work on Movidius? Do we need to convert both the models to IR format?

Yes, you need to go through the Model Optimizer and then create inference code that targets Movidius.

Is there any existing example which uses 2 models or any resource?

Yes, we have numerous samples inside the software: https://docs.openvinotoolkit.org/R5/_docs_IE_DG_Samples_Overview.html , including one sample for Human Pose Estimation. We also have several samples that handle multiple models, such as the interactive_face_detection_demo.

Do I require 2 NCS 2 devices to run a solution with 2 models that need to execute sequentially?

You can run 2 models sequentially on one device.
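Whatever the device-loading API looks like in a given OpenVINO release, sequential execution of two models is just control flow: run inference with the first network, post-process its output, and feed that into the second. Below is a minimal sketch of that pipeline shape, with NumPy stubs standing in for the two compiled networks — pose_estimation_net and custom_cnn here are placeholders, not real OpenVINO calls:

```python
import numpy as np

# Hypothetical stand-ins for the two compiled networks. In a real
# application these would be networks loaded onto the MYRIAD device;
# the exact loading API differs between OpenVINO releases.
def pose_estimation_net(frame):
    # Pretend pose model: returns 18 keypoints as (x, y) pairs.
    rng = np.random.default_rng(0)
    return rng.random((18, 2), dtype=np.float32)

def custom_cnn(features):
    # Pretend second-stage CNN: consumes keypoint features, emits scores.
    return np.array([0.9, 0.1], dtype=np.float32)

def run_pipeline(frame):
    # Sequential execution on a single device: the second inference only
    # starts after the first one has finished, so one NCS 2 is enough.
    keypoints = pose_estimation_net(frame)
    features = keypoints.reshape(1, -1)  # flatten for the second model
    return custom_cnn(features)

frame = np.zeros((368, 368, 3), dtype=np.float32)
scores = run_pipeline(frame)
```

In a real application the two stub functions would be replaced by synchronous inference calls on two networks loaded onto the same MYRIAD device.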

I have tried converting our existing TF solution using mo.py and mo_tf.py, however, I have been unsuccessful in my attempts to do so.

Can you indicate what your issues are?

Best, 

Severine

SDwiv1
Beginner

Hi Severine,

Could you point me to a python tutorial that uses multiple models such as this one:

https://github.com/intel-iot-devkit/inference-tutorials-generic/blob/openvino_toolkit_r3_0/face_detection_tutorial/Readme.md

I would like to understand how to modify my existing TF program to use it on the NCS 2.

Thank you,

Sweta

Hyodo__Katsuya
Innovator

@Dwivedi, Sweta

I created a Python sample that loads two models into one NCS2.

FaceDetection + EmotionRecognition

Please change the program to specify "num_requests = 1"; this improves the stability of the operation.

I do not know if it will help you.

https://github.com/PINTO0309/OpenVINO-EmotionRecognition.git

 

self.num_requests = 1

 

SDwiv1
Beginner

Hi, 

Thank you for the tutorial. I will try following it.

However, converting an existing tensorflow model to IR format using mo_tf.py is resulting in the following error:

Model Optimizer version:   1.5.12.49d067a0
[ ERROR ]  Shape [-1 -1 -1  3] is not fully defined for output 0 of "image". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "image".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "image". 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x7f97aa2a7510>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "image" node.

This model is obtained from the following repository:

https://github.com/ildoonet/tf-pose-estimation

This is basically CMU's pose estimation ported to TF. I have used the model file "graph_opt.pb" under the tf-pose-estimation/models/graph/mobilenet_thin/


Hyodo__Katsuya
Innovator
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
> --input_model models/graph/cmu/graph_opt.pb \
> --output_dir irmodels \
> --input image \
> --output Openpose/concat_stage7 \
> --data_type FP16 \
> --input_shape [1,368,368,3]
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/b920405/git/tf-pose-estimation/models/graph/cmu/graph_opt.pb
	- Path for generated IR: 	/home/b920405/git/tf-pose-estimation/irmodels
	- IR output name: 	graph_opt
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	image
	- Output layers: 	Openpose/concat_stage7
	- Input shapes: 	[1,368,368,3]
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.5.12.49d067a0
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/b920405/git/tf-pose-estimation/irmodels/graph_opt.xml
[ SUCCESS ] BIN file: /home/b920405/git/tf-pose-estimation/irmodels/graph_opt.bin
[ SUCCESS ] Total execution time: 9.53 seconds. 
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
> --input_model models/graph/mobilenet_thin/graph_opt.pb \
> --output_dir irmodels \
> --input image \
> --output Openpose/concat_stage7 \
> --data_type FP16 \
> --input_shape [1,368,368,3]
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/b920405/git/tf-pose-estimation/models/graph/mobilenet_thin/graph_opt.pb
	- Path for generated IR: 	/home/b920405/git/tf-pose-estimation/irmodels
	- IR output name: 	graph_opt
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	image
	- Output layers: 	Openpose/concat_stage7
	- Input shapes: 	[1,368,368,3]
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.5.12.49d067a0
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/b920405/git/tf-pose-estimation/irmodels/graph_opt.xml
[ SUCCESS ] BIN file: /home/b920405/git/tf-pose-estimation/irmodels/graph_opt.bin
[ SUCCESS ] Total execution time: 5.86 seconds. 

SDwiv1
Beginner

Dear Sir, 

Using your commands, I could convert the graph_opt.pb model.

However, with my own model I'm not sure what I'm doing wrong, so let me explain a little bit.

We have 2 input nodes, input1 and input2, which are both just matrices of values (e.g. joint distances).
In this case, I do not understand what the value for --input_shape should be, since the input is not an image. We have specified the tensors as follows:

self.geo_inputs = tf.placeholder(dtype=tf.float32, shape=(None, n_frame, 182), name='input1')
self.traj_inputs = tf.placeholder(dtype=tf.float32, shape=(None, n_frame, 28), name='input2')
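For rank-3 placeholders like these, --input_shape should keep the declared rank and substitute a positive integer for the None batch dimension — e.g. [1,30,182],[1,30,28] if n_frame = 30 (an assumption; n_frame is set elsewhere in the model code). A small sketch of that mapping — mo_shape is a hypothetical helper for illustration, not part of Model Optimizer:

```python
def mo_shape(tf_shape, batch=1):
    """Turn a TF placeholder shape containing a None batch dimension
    into the bracketed string Model Optimizer expects for --input_shape."""
    dims = [batch if d is None else d for d in tf_shape]
    return "[" + ",".join(str(d) for d in dims) + "]"

# The two placeholders above, assuming n_frame = 30:
geo = mo_shape((None, 30, 182))   # "[1,30,182]"
traj = mo_shape((None, 30, 28))   # "[1,30,28]"
arg = ",".join([geo, traj])       # value to pass to --input_shape
```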

Currently, I have tried the following command:

sweta@sweta-VirtualBox:~/intel/computer_vision_sdk/deployment_tools/model_optimizer$ sudo python3 mo_tf.py --input_model ~/Desktop/12-3-19/fall_v1.1/frozen_model.pb --output_dir irmodels2 --input input1,input2 --output output --input_shape [1,30,128,1],[1,30,14,1]

Getting the error message:

Model Optimizer arguments:
Common parameters:
  - Path to the Input Model:   /home/sweta/Desktop/12-3-19/fall_v1.1/frozen_model.pb
  - Path for generated IR:   /home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/irmodels2
  - IR output name:   frozen_model
  - Log level:   ERROR
  - Batch:   Not specified, inherited from the model
  - Input layers:   input1,input2
  - Output layers:   output
  - Input shapes:   [1,30,128,1],[1,30,14,1]
  - Mean values:   Not specified
  - Scale values:   Not specified
  - Scale factor:   Not specified
  - Precision of IR:   FP32
  - Enable fusing:   True
  - Enable grouped convolutions fusing:   True
  - Move mean values to preprocess section:   False
  - Reverse input channels:   False
TensorFlow specific parameters:
  - Input model in text protobuf format:   False
  - Offload unsupported operations:   False
  - Path to model dump for TensorBoard:   None
  - List of shared libraries with TensorFlow custom layers implementation:   None
  - Update the configuration file with input/output node names:   None
  - Use configuration file used to generate the model with Object Detection API:   None
  - Operations to offload:   None
  - Patterns to offload:   None
  - Use the config file:   None
Model Optimizer version:   1.5.12.49d067a0
[ ERROR ]  Cannot infer shapes or values for node "spatial_temporal_network/motion/conblock_1/cov/conv1d/Conv2D".
[ ERROR ]  index 4 is out of bounds for axis 0 with size 4
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Convolution.infer at 0x7fb883773f28>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "spatial_temporal_network/motion/conblock_1/cov/conv1d/Conv2D" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
