Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

IR v6 not supported

goh__richard
Beginner

Hi

I have an existing custom model, model.xml.

When I run it with the latest security_barrier_camera_demo, I get the following error:

[ ERROR ] The support of IR v6 has been removed from the product. Please, convert the original model using the Model Optimizer which comes with this version of the OpenVINO to generate supported IR version.

 

How can I convert this model to an IR version supported by my OpenVINO version?

 

[ INFO ] OpenVINO Inference Engine
[ INFO ] version: 2022.1.0
[ INFO ] build: custom_master_ec3283ebe1dd4ee3b76f21fb9994d1da8b077154

 

Peh_Intel
Moderator

Hi Richard,


Thanks for reaching out to us.


There is no way to upgrade a previously converted Intermediate Representation (IR) to a newer IR version.


Please re-convert your original model into IR using the Model Optimizer that comes with your current version of OpenVINO™.
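
For reference, in OpenVINO™ 2022.1 the Model Optimizer is available as the mo command after installing the openvino-dev package. A minimal sketch, assuming a frozen TensorFlow graph (the path and input shape below are placeholders for your model's actual values):

pip install openvino-dev
mo --input_model /path/to/frozen_graph.pb --input_shape [1,128,128,3] --output_dir ir_out

The generated .xml/.bin pair will be in the current IR version, which the 2022.1 demos can load.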



Regards,

Peh


goh__richard
Beginner

Hi Peh,

Thanks.

1. How can I convert the model below (TensorFlow checkpoint files)?

ls vlp/model_sgall
checkpoint model.ckpt-153000.data-00000-of-00001 model.ckpt-196000.meta model.ckpt-51000.index
events.out.tfevents.1640656946.deep model.ckpt-153000.index model.ckpt-197000.data-00000-of-00001 model.ckpt-51000.meta
events.out.tfevents.1640911021.deep model.ckpt-153000.meta model.ckpt-197000.index model.ckpt-58000.data-00000-of-00001
events.out.tfevents.1640911147.deep model.ckpt-159000.data-00000-of-00001 model.ckpt-197000.meta model.ckpt-58000.index
graph.pbtxt model.ckpt-159000.index model.ckpt-198000.data-00000-of-00001 model.ckpt-58000.meta
model.ckpt-107000.data-00000-of-00001 model.ckpt-159000.meta model.ckpt-198000.index model.ckpt-65000.data-00000-of-00001
model.ckpt-107000.index model.ckpt-16000.data-00000-of-00001 model.ckpt-198000.meta model.ckpt-65000.index
model.ckpt-107000.meta model.ckpt-16000.index model.ckpt-199000.data-00000-of-00001 model.ckpt-65000.meta
model.ckpt-114000.data-00000-of-00001 model.ckpt-16000.meta model.ckpt-199000.index model.ckpt-71000.data-00000-of-00001
model.ckpt-114000.index model.ckpt-166000.data-00000-of-00001 model.ckpt-199000.meta model.ckpt-71000.index
model.ckpt-114000.meta model.ckpt-166000.index

 

2. Can this IR model be converted to another model format and then converted back to IR?

 

Thanks

Rgds

Richard

 

goh__richard
Beginner

Hi

I found a model, graph.pb, and tried to convert it, but the conversion failed:

 

root@strattonnew:/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer# python3 mo.py --input_shape=[1,128,128,3] --input_model /synnfs/graph.pb --model_name tf --output_dir out
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /synnfs/graph.pb
- Path for generated IR: /opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/out
- IR output name: tf
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,128,128,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
[ WARNING ] Could not find the Inference Engine Python API. At this moment, the Inference Engine dependency is not required, but will be required in future releases.
[ WARNING ] Consider building the Inference Engine Python API from sources or try to install OpenVINO (TM) Toolkit using "install_prerequisites.sh"
Model Optimizer version: 2021.4.2-3974-e2a469a3450-releases/2021/4
2022-01-27 10:59:38.100975: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2019.3.376/opencv/lib:/opt/intel/opencl:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/tbb/lib:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/lib/intel64:/opt/intel/openvino_2019.3.376/openvx/lib:.:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/lib/intel64/:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/tbb/lib/:/opt/intel/openvino_2019.3.376/opencv/lib/:/opt/pylon5/lib64:/opt/intel/openvino_2019.3.376/opencv/lib:/opt/intel/opencl:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/tbb/lib:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/lib/intel64:/opt/intel/openvino_2019.3.376/openvx/lib:.:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/lib/intel64/:/opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/external/tbb/lib/:/opt/intel/openvino_2019.3.376/opencv/lib/:/opt/pylon5/lib64
2022-01-27 10:59:38.101008: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-01-27 10:59:50.780847: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-01-27 10:59:50.781047: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2019.3.376/opencv/lib:/opt/intel/opencl:/opt/intel/openvino_2019.3.376/deployment_tools/in

2022-01-27 10:59:50.781072: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2022-01-27 10:59:50.781105: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (strattonnew): /proc/driver/nvidia/version does not exist
2022-01-27 10:59:50.781359: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-01-27 10:59:50.781925: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-01-27 10:59:50.783896: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2022-01-27 10:59:50.801712: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 1999965000 Hz

 


[ ERROR ] Cannot infer shapes or values for node "ssd_heads/head_5/feature_map_4_mbox_conf/biases".
[ ERROR ] Attempting to use uninitialized value ssd_heads/head_5/feature_map_4_mbox_conf/biases
[[{{node _retval_ssd_heads/head_5/feature_map_4_mbox_conf/biases_0_0}}]]
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7fac16bb2940>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "ssd_heads/head_5/feature_map_4_mbox_conf/biases" node.
For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)

 

 

Peh_Intel
Moderator

Hi Richard,

 

The error message shows that Model Optimizer cannot infer shapes or values for the specified node. Please ensure that you are specifying the correct input shape for your model.


If possible, please share your model with us so we can investigate further and better assist you with the IR conversion.
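
One more thing worth checking: the "Attempting to use uninitialized value" error usually indicates that the graph still contains unfrozen variables, so the checkpoint should be frozen into a single .pb before running Model Optimizer. A rough sketch using the TensorFlow 1.x freeze_graph tool (the checkpoint prefix and the output node names are placeholders; substitute your model's actual values):

python3 -m tensorflow.python.tools.freeze_graph \
  --input_graph vlp/model_sgall/graph.pbtxt \
  --input_checkpoint vlp/model_sgall/model.ckpt-199000 \
  --output_node_names <your_output_nodes> \
  --output_graph frozen_graph.pb

The resulting frozen_graph.pb can then be passed to Model Optimizer via --input_model.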



Regards,

Peh


goh__richard
Beginner

Thanks.

Please download the model here: http://anpr.optasia.com.sg/download.cgi?f=graph.pb

 

thanks for your help

 

Munesh_Intel
Moderator

Hi Richard,

We are also getting the same error as you.

We would like to obtain more info about your model, as follows:

  • The source repository of the model.
  • Did you retrain the model? If yes, which TF version did you use?  

 

 

Regards,

Peh

 

goh__richard
Beginner

Hi Peh

Thanks for your assistance

Is the following info helpful?

 

2019.3.376

ssd_detector

vlp

 

Munesh_Intel
Moderator

Hi Richard,

Are you using the training extension from the following repository?

https://github.com/malikaoudjif/training_toolbox_tensorflow/blob/master/models/ssd_detector/README.md

 

This training extension from 2018 is outdated and not compatible with the latest version of OpenVINO.

 

If yes, then we would suggest you retrain your model using our latest training extension.

 

The datasets (vlp_test, bitvehicle, etc.) are available here:

https://github.com/openvinotoolkit/training_extensions/tree/master/data

 

For your use case, you can use MMDetection, which is an Object Detection and Instance Segmentation toolbox.

The basic tutorials for MMDetection are available here:

Getting Started
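
As a quick sanity check after installing MMDetection, you can run inference with one of the bundled configs. A rough sketch, assuming an MMDetection 2.x checkout (the config path and the checkpoint, downloaded from the model zoo, are illustrative):

python demo/image_demo.py demo/demo.jpg configs/ssd/ssd300_coco.py checkpoints/ssd300_coco.pth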


 

Regards,

Munesh


goh__richard
Beginner

Thanks for your assistance.

>If yes, then we would suggest you retrain your model using our latest training extension.

 

May we know where to find the latest training extension?

Thanks

 

goh__richard
Beginner

Hi

Thanks for your help.

It seems MMDetection is for Python?

Is there something similar for C++?

Thanks

 

goh__richard
Beginner

Hi

We are using security_barrier_camera_demo with the above model.

For this demo, how can we train an equivalent model if we are not using ssd_detector/vlp?

thanks

 

Munesh_Intel
Moderator

Hi Richard,

We don’t have a similar training extension in C++.
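
Training happens in Python, but the trained model can still be used from the C++ demo: export it to ONNX, convert the ONNX file to IR, and load the IR in security_barrier_camera_demo as before. A rough sketch, assuming an MMDetection 2.x checkout (the export script location and flags can differ between versions, and the config/checkpoint paths are placeholders):

python tools/deployment/pytorch2onnx.py configs/my_ssd_config.py work_dirs/my_ssd/latest.pth --output-file model.onnx
mo --input_model model.onnx --output_dir ir_out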

 

Our training extensions have been updated and enhanced.

You can retrain your custom model by following the steps mentioned in the example here:

3: Train with customized models and standard datasets

 

The prerequisites and installation steps for MMDetection are available here:

Get Started
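
For reference, installation plus a training launch typically looks like this (check the Get Started page for the exact mmcv-full build matching your PyTorch/CUDA versions; the config path and work directory are illustrative):

pip install mmcv-full
pip install mmdet
python tools/train.py configs/ssd/ssd300_coco.py --work-dir work_dirs/my_ssd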

 

 

Regards,

Munesh

 

Munesh_Intel
Moderator

Hi Richard,

This thread will no longer be monitored since we have provided a suggestion and tutorials. If you need any additional information from Intel, please submit a new question.



Regards,

Munesh


