Intel® Distribution of OpenVINO™ Toolkit

Does the NCS2 support custom models (like LR, SVM)?

lei__xu

I have some models (not deep learning models) that I want to run on the NCS2.

But when I use mo_tf.py to convert my TensorFlow models, I always get errors like the one in the log below.

So I think the NCS2 only supports deep learning models. Is that right?
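
For reference, this is roughly the kind of model I am trying to convert. It is only a simplified sketch (names, shapes, and the save path are illustrative, not my exact code), but it builds a TensorFlow 1.x meta graph with a placeholder whose batch dimension is left undefined, which matches the Placeholder_1 shape [-1 1] reported in the log below:

# Simplified sketch of a non-deep-learning model (logistic regression) in TensorFlow 1.x.
# The placeholder's batch dimension is left as None, which Model Optimizer reports as -1.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 1])   # shows up as "Placeholder"/"Placeholder_1" in the graph
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.sigmoid(tf.matmul(x, w) + b)               # logistic-regression output

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Writes model_name.meta next to the checkpoint files.
    saver.save(sess, "/home/xuleilx/work/OpenVINO/testModel/variable/model_name")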

---------------------------------------------------------------------------------------------------

root@ubuntu:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer# ./mo_tf.py --input_meta_graph /home/xuleilx/work/OpenVINO/testModel/variable/model_name.meta
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     None
    - Path for generated IR:     /opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/.
    - IR output name:     model_name
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     1.5.12.49d067a0
[ ERROR ]  Shape [-1  1] is not fully defined for output 0 of "Placeholder_1". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "Placeholder_1".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "Placeholder_1". 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x7fb4980141e0>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "Placeholder_1" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 
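
Following the hint in the first error, I assume the retry would look something like the command below. This is only a sketch: the input name and the [1,1] shape are guesses taken from the log, and I have not confirmed they match my model; --data_type FP16 is added because the NCS2 (MYRIAD) plugin expects FP16 IR.

./mo_tf.py \
    --input_meta_graph /home/xuleilx/work/OpenVINO/testModel/variable/model_name.meta \
    --input Placeholder_1 \
    --input_shape [1,1] \
    --data_type FP16

But even if that gets past the shape error, my underlying question remains: are classical models such as LR or SVM supported on the NCS2 at all?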
 
