<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Issues converting a model trained with TensorFlow (Keras H5) format to IR in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1382410#M27229</link>
    <description>&lt;P&gt;Hello, I'm following these instructions in order to convert a &lt;STRONG&gt;Keras H5&lt;/STRONG&gt; model to &lt;STRONG&gt;IR&lt;/STRONG&gt; format:&lt;BR /&gt;&lt;A href="https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#keras-h5" target="_self"&gt;https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#keras-h5&lt;/A&gt;&lt;BR /&gt;The trained H5 model is working great, but I need to use it in an environment with &lt;STRONG&gt;OpenVINO 2021.4&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;1) I serialized the H5 model into the SavedModel format, and I get a structure like this:&lt;/P&gt;
&lt;PRE&gt;# ls model/&lt;BR /&gt;assets saved_model.pb variables&lt;/PRE&gt;
&lt;P&gt;2) I run the &lt;STRONG&gt;mo&lt;/STRONG&gt; script:&lt;/P&gt;
&lt;PRE&gt;# mo --saved_model_dir model/&lt;/PRE&gt;
&lt;P&gt;and I get this output:&lt;/P&gt;
&lt;PRE&gt;Model Optimizer arguments:&lt;BR /&gt;Common parameters:&lt;BR /&gt;- Path to the Input Model: None&lt;BR /&gt;- Path for generated IR: /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/.&lt;BR /&gt;- IR output name: saved_model&lt;BR /&gt;- Log level: ERROR&lt;BR /&gt;- Batch: Not specified, inherited from the model&lt;BR /&gt;- Input layers: Not specified, inherited from the model&lt;BR /&gt;- Output layers: Not specified, inherited from the model&lt;BR /&gt;- Input shapes: Not specified, inherited from the model&lt;BR /&gt;- Source layout: Not specified&lt;BR /&gt;- Target layout: Not specified&lt;BR /&gt;- Layout: Not specified&lt;BR /&gt;- Mean values: Not specified&lt;BR /&gt;- Scale values: Not specified&lt;BR /&gt;- Scale factor: Not specified&lt;BR /&gt;- Precision of IR: FP32&lt;BR /&gt;- Enable fusing: True&lt;BR /&gt;- User transformations: Not specified&lt;BR /&gt;- Reverse input channels: False&lt;BR /&gt;- Enable IR generation for fixed input shape: False&lt;BR /&gt;- Use the transformations config file: None&lt;BR /&gt;Advanced parameters:&lt;BR /&gt;- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False&lt;BR /&gt;- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False&lt;BR /&gt;TensorFlow specific parameters:&lt;BR /&gt;- Input model in text protobuf format: False&lt;BR /&gt;- Path to model dump for TensorBoard: None&lt;BR /&gt;- List of shared libraries with TensorFlow custom layers implementation: None&lt;BR /&gt;- Update the configuration file with input/output node names: None&lt;BR /&gt;- Use configuration file used to generate the model with Object Detection API: None&lt;BR /&gt;- Use the config file: None&lt;BR /&gt;OpenVINO runtime found in: /home/openvino/.local/lib/python3.6/site-packages/openvino&lt;BR /&gt;OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1&lt;BR /&gt;Model Optimizer version: 
2022.1.0-7019-cdb9bec7210-releases/2022/1&lt;BR /&gt;[ WARNING ] &lt;BR /&gt;Detected not satisfied dependencies:&lt;BR /&gt;networkx: installed: 2.5.1, required: ~= 2.6&lt;BR /&gt;fastjsonschema: not installed, required: ~= 2.15.1&lt;BR /&gt;&lt;BR /&gt;Please install required versions of components or run pip installation&lt;BR /&gt;pip install openvino-dev[tensorflow]&lt;BR /&gt;[ WARNING ] The model contains input(s) with partially defined shapes: name="conv2d_input" shape="[-1, 30, 30, 3]". Starting from the 2022.1 release the Model Optimizer can generate an IR with partially defined input shapes ("-1" dimension in the TensorFlow model or dimension with string value in the ONNX model). Some of the OpenVINO plugins require model input shapes to be static, so you should call "reshape" method in the Inference Engine and specify static input shapes. For optimal performance, it is still recommended to update input shapes with fixed ones using "--input" or "--input_shape" command-line parameters.&lt;BR /&gt;[ SUCCESS ] Generated IR version 11 model.&lt;BR /&gt;[ SUCCESS ] XML file: /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/saved_model.xml&lt;BR /&gt;[ SUCCESS ] BIN file: /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/saved_model.bin&lt;BR /&gt;[ SUCCESS ] Total execution time: 3.62 seconds. &lt;BR /&gt;[ SUCCESS ] Memory consumed: 372 MB. &lt;BR /&gt;It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&amp;amp;source=prod&amp;amp;campid=ww_2022_bu_IOTG_OpenVINO-2022-1&amp;amp;content=upg_all&amp;amp;medium=organic or on the GitHub*&lt;BR /&gt;[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. 
While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.&lt;BR /&gt;Find more information about API v2.0 and IR v11 at https://docs.openvino.ai&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;3) But, when I execute the converted IR model with OpenVINO I get this error:&lt;/P&gt;
&lt;PRE&gt;Unknown model format! Cannot find reader for model format: xml and read the model: ./model.xml. Please check that reader library exists in your PATH.&lt;BR /&gt;ia_video_ingestion_rsd | Traceback (most recent call last):&lt;BR /&gt;ia_video_ingestion_rsd | File "udf.pyx", line 376, in udf.load_udf&lt;BR /&gt;ia_video_ingestion_rsd | File "/app/app.py", line 53, in __init__&lt;BR /&gt;ia_video_ingestion_rsd | self.neural_net = self.ie_core.read_network(model=model_xml, weights=model_bin)&lt;BR /&gt;ia_video_ingestion_rsd | File "ie_api.pyx", line 326, in openvino.inference_engine.ie_api.IECore.read_network&lt;BR /&gt;ia_video_ingestion_rsd | File "ie_api.pyx", line 351, in openvino.inference_engine.ie_api.IECore.read_network&lt;BR /&gt;ia_video_ingestion_rsd | RuntimeError: Unknown model format! Cannot find reader for model format: xml and read the model: ./model.xml. Please check that reader library exists in your PATH.&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Do you know the correct way to convert from Keras H5 to IR format?&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; I also tried with OpenVINO version 2022, with no luck.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards,&lt;/P&gt;
&lt;P&gt;Miguel&lt;/P&gt;</description>
    <pubDate>Fri, 06 May 2022 20:35:13 GMT</pubDate>
    <dc:creator>mvasquez</dc:creator>
    <dc:date>2022-05-06T20:35:13Z</dc:date>
    <item>
      <title>Issues converting a model trained with TensorFlow (Keras H5) format to IR</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1382410#M27229</link>
      <description>&lt;P&gt;Hello, I'm following these instructions in order to convert a &lt;STRONG&gt;Keras H5&lt;/STRONG&gt; model to &lt;STRONG&gt;IR&lt;/STRONG&gt; format:&lt;BR /&gt;&lt;A href="https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#keras-h5" target="_self"&gt;https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#keras-h5&lt;/A&gt;&lt;BR /&gt;The trained H5 model is working great, but I need to use it in an environment with &lt;STRONG&gt;OpenVINO 2021.4&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;1) I serialized the H5 model into the SavedModel format, and I get a structure like this:&lt;/P&gt;
&lt;PRE&gt;# ls model/&lt;BR /&gt;assets saved_model.pb variables&lt;/PRE&gt;
&lt;P&gt;2) I run the &lt;STRONG&gt;mo&lt;/STRONG&gt; script:&lt;/P&gt;
&lt;PRE&gt;# mo --saved_model_dir model/&lt;/PRE&gt;
&lt;P&gt;and I get this output:&lt;/P&gt;
&lt;PRE&gt;Model Optimizer arguments:&lt;BR /&gt;Common parameters:&lt;BR /&gt;- Path to the Input Model: None&lt;BR /&gt;- Path for generated IR: /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/.&lt;BR /&gt;- IR output name: saved_model&lt;BR /&gt;- Log level: ERROR&lt;BR /&gt;- Batch: Not specified, inherited from the model&lt;BR /&gt;- Input layers: Not specified, inherited from the model&lt;BR /&gt;- Output layers: Not specified, inherited from the model&lt;BR /&gt;- Input shapes: Not specified, inherited from the model&lt;BR /&gt;- Source layout: Not specified&lt;BR /&gt;- Target layout: Not specified&lt;BR /&gt;- Layout: Not specified&lt;BR /&gt;- Mean values: Not specified&lt;BR /&gt;- Scale values: Not specified&lt;BR /&gt;- Scale factor: Not specified&lt;BR /&gt;- Precision of IR: FP32&lt;BR /&gt;- Enable fusing: True&lt;BR /&gt;- User transformations: Not specified&lt;BR /&gt;- Reverse input channels: False&lt;BR /&gt;- Enable IR generation for fixed input shape: False&lt;BR /&gt;- Use the transformations config file: None&lt;BR /&gt;Advanced parameters:&lt;BR /&gt;- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False&lt;BR /&gt;- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False&lt;BR /&gt;TensorFlow specific parameters:&lt;BR /&gt;- Input model in text protobuf format: False&lt;BR /&gt;- Path to model dump for TensorBoard: None&lt;BR /&gt;- List of shared libraries with TensorFlow custom layers implementation: None&lt;BR /&gt;- Update the configuration file with input/output node names: None&lt;BR /&gt;- Use configuration file used to generate the model with Object Detection API: None&lt;BR /&gt;- Use the config file: None&lt;BR /&gt;OpenVINO runtime found in: /home/openvino/.local/lib/python3.6/site-packages/openvino&lt;BR /&gt;OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1&lt;BR /&gt;Model Optimizer version: 
2022.1.0-7019-cdb9bec7210-releases/2022/1&lt;BR /&gt;[ WARNING ] &lt;BR /&gt;Detected not satisfied dependencies:&lt;BR /&gt;networkx: installed: 2.5.1, required: ~= 2.6&lt;BR /&gt;fastjsonschema: not installed, required: ~= 2.15.1&lt;BR /&gt;&lt;BR /&gt;Please install required versions of components or run pip installation&lt;BR /&gt;pip install openvino-dev[tensorflow]&lt;BR /&gt;[ WARNING ] The model contains input(s) with partially defined shapes: name="conv2d_input" shape="[-1, 30, 30, 3]". Starting from the 2022.1 release the Model Optimizer can generate an IR with partially defined input shapes ("-1" dimension in the TensorFlow model or dimension with string value in the ONNX model). Some of the OpenVINO plugins require model input shapes to be static, so you should call "reshape" method in the Inference Engine and specify static input shapes. For optimal performance, it is still recommended to update input shapes with fixed ones using "--input" or "--input_shape" command-line parameters.&lt;BR /&gt;[ SUCCESS ] Generated IR version 11 model.&lt;BR /&gt;[ SUCCESS ] XML file: /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/saved_model.xml&lt;BR /&gt;[ SUCCESS ] BIN file: /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/saved_model.bin&lt;BR /&gt;[ SUCCESS ] Total execution time: 3.62 seconds. &lt;BR /&gt;[ SUCCESS ] Memory consumed: 372 MB. &lt;BR /&gt;It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&amp;amp;source=prod&amp;amp;campid=ww_2022_bu_IOTG_OpenVINO-2022-1&amp;amp;content=upg_all&amp;amp;medium=organic or on the GitHub*&lt;BR /&gt;[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. 
While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.&lt;BR /&gt;Find more information about API v2.0 and IR v11 at https://docs.openvino.ai&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;3) But, when I execute the converted IR model with OpenVINO I get this error:&lt;/P&gt;
&lt;PRE&gt;Unknown model format! Cannot find reader for model format: xml and read the model: ./model.xml. Please check that reader library exists in your PATH.&lt;BR /&gt;ia_video_ingestion_rsd | Traceback (most recent call last):&lt;BR /&gt;ia_video_ingestion_rsd | File "udf.pyx", line 376, in udf.load_udf&lt;BR /&gt;ia_video_ingestion_rsd | File "/app/app.py", line 53, in __init__&lt;BR /&gt;ia_video_ingestion_rsd | self.neural_net = self.ie_core.read_network(model=model_xml, weights=model_bin)&lt;BR /&gt;ia_video_ingestion_rsd | File "ie_api.pyx", line 326, in openvino.inference_engine.ie_api.IECore.read_network&lt;BR /&gt;ia_video_ingestion_rsd | File "ie_api.pyx", line 351, in openvino.inference_engine.ie_api.IECore.read_network&lt;BR /&gt;ia_video_ingestion_rsd | RuntimeError: Unknown model format! Cannot find reader for model format: xml and read the model: ./model.xml. Please check that reader library exists in your PATH.&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Do you know the correct way to convert from Keras H5 to IR format?&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; I also tried with OpenVINO version 2022, with no luck.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards,&lt;/P&gt;
&lt;P&gt;Miguel&lt;/P&gt;</description>
      <pubDate>Fri, 06 May 2022 20:35:13 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1382410#M27229</guid>
      <dc:creator>mvasquez</dc:creator>
      <dc:date>2022-05-06T20:35:13Z</dc:date>
    </item>
    <item>
      <title>Re: Issues converting a model trained with TensorFlow (Keras H5) format to IR</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1382728#M27236</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi Miguel,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thank you for reaching out to us.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Based on your error log, the model was converted by the OpenVINO 2022.1 Model Optimizer instead of the OpenVINO 2021.4 Model Optimizer.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;For your information, the Inference Engine in OpenVINO™ Toolkit 2021.4 is unable to read and load IR v11 models.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Please ensure that you have initialized the environment for OpenVINO™ Toolkit 2021.4 by running the following script before converting the model:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="courier new,courier"&gt;&lt;SPAN&gt;source /opt/intel/openvino_2021/bin/setupvars.sh&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Hairul&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 09 May 2022 03:21:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1382728#M27236</guid>
      <dc:creator>Hairul_Intel</dc:creator>
      <dc:date>2022-05-09T03:21:21Z</dc:date>
    </item>
    <item>
      <title>Re: Issues converting a model trained with TensorFlow (Keras H5) format to IR</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1382881#M27243</link>
      <description>&lt;P&gt;Hi Hairul, thanks for the response. I've repeated the same procedure in OpenVino 2021.4, but&amp;nbsp; executing previously the script:&lt;/P&gt;
&lt;P&gt;/opt/intel/openvino_2021.4.689/bin/setup_vars.sh&lt;/P&gt;
&lt;P&gt;Now I'm getting this error, do you know how to perform correctly this convertion?&lt;/P&gt;
&lt;PRE&gt;/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer$ python3 mo.py --saved_model_dir ~/Downloads/model&lt;BR /&gt;[ WARNING ] Telemetry will not be sent as TID is not specified.&lt;BR /&gt;Model Optimizer arguments:&lt;BR /&gt;Common parameters:&lt;BR /&gt;- Path to the Input Model: None&lt;BR /&gt;- Path for generated IR: /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/.&lt;BR /&gt;- IR output name: saved_model&lt;BR /&gt;- Log level: ERROR&lt;BR /&gt;- Batch: Not specified, inherited from the model&lt;BR /&gt;- Input layers: Not specified, inherited from the model&lt;BR /&gt;- Output layers: Not specified, inherited from the model&lt;BR /&gt;- Input shapes: Not specified, inherited from the model&lt;BR /&gt;- Mean values: Not specified&lt;BR /&gt;- Scale values: Not specified&lt;BR /&gt;- Scale factor: Not specified&lt;BR /&gt;- Precision of IR: FP32&lt;BR /&gt;- Enable fusing: True&lt;BR /&gt;- Enable grouped convolutions fusing: True&lt;BR /&gt;- Move mean values to preprocess section: None&lt;BR /&gt;- Reverse input channels: False&lt;BR /&gt;TensorFlow specific parameters:&lt;BR /&gt;- Input model in text protobuf format: False&lt;BR /&gt;- Path to model dump for TensorBoard: None&lt;BR /&gt;- List of shared libraries with TensorFlow custom layers implementation: None&lt;BR /&gt;- Update the configuration file with input/output node names: None&lt;BR /&gt;- Use configuration file used to generate the model with Object Detection API: None&lt;BR /&gt;- Use the config file: None&lt;BR /&gt;- Inference Engine found in: /opt/intel/openvino_2021.4.689/python/python3.6/openvino&lt;BR /&gt;Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4&lt;BR /&gt;Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4&lt;BR /&gt;[ WARNING ] Telemetry will not be sent as TID is not specified.&lt;BR /&gt;2022-05-09 11:28:16.836810: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load 
dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/lib/intel64:/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/external/tbb/lib:/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo/utils/../../../ngraph/lib&lt;BR /&gt;2022-05-09 11:28:16.836832: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.&lt;BR /&gt;/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses&lt;BR /&gt;import imp&lt;BR /&gt;2022-05-09 11:28:18.007461: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set&lt;BR /&gt;2022-05-09 11:28:18.007617: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/lib/intel64:/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/external/tbb/lib:/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo/utils/../../../ngraph/lib&lt;BR /&gt;2022-05-09 11:28:18.007630: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)&lt;BR /&gt;2022-05-09 11:28:18.007642: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (openvino-ATC-8010): /proc/driver/nvidia/version does not exist&lt;BR /&gt;2022-05-09 11:28:18.007758: I 
tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA&lt;BR /&gt;To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.&lt;BR /&gt;2022-05-09 11:28:18.008575: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set&lt;BR /&gt;2022-05-09 11:28:18.265428: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count &amp;gt;= 8, compute capability &amp;gt;= 0.0): 0&lt;BR /&gt;2022-05-09 11:28:18.265514: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session&lt;BR /&gt;2022-05-09 11:28:18.265730: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set&lt;BR /&gt;2022-05-09 11:28:18.284247: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2400000000 Hz&lt;BR /&gt;2022-05-09 11:28:18.286792: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize&lt;BR /&gt;function_optimizer: Graph size after: 69 nodes (53), 92 edges (76), time = 1.251ms.&lt;BR /&gt;function_optimizer: function_optimizer did nothing. time = 0.025ms.&lt;BR /&gt;&lt;BR /&gt;[ ERROR ] Shape [-1 30 30 3] is not fully defined for output 0 of "conv2d_input". Use --input_shape with positive integers to override model input shapes.&lt;BR /&gt;[ ERROR ] Cannot infer shapes or values for node "conv2d_input".&lt;BR /&gt;[ ERROR ] Not all output shapes were inferred or fully defined for node "conv2d_input". &lt;BR /&gt;For more information please refer to Model Optimizer FAQ, question #40. 
(https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=40#question-40)&lt;BR /&gt;[ ERROR ] &lt;BR /&gt;[ ERROR ] It can happen due to bug in custom shape infer function &amp;lt;function Parameter.infer at 0x7f3911d1f2f0&amp;gt;.&lt;BR /&gt;[ ERROR ] Or because the node inputs have incorrect values/shapes.&lt;BR /&gt;[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).&lt;BR /&gt;[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.&lt;BR /&gt;[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (&amp;lt;class 'extensions.middle.PartialInfer.PartialInfer'&amp;gt;): Stopped shape/value propagation at "conv2d_input" node. &lt;BR /&gt;For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 09 May 2022 14:48:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1382881#M27243</guid>
      <dc:creator>mvasquez</dc:creator>
      <dc:date>2022-05-09T14:48:05Z</dc:date>
    </item>
    <item>
      <title>Re:Issues converting a model trained with TensorFlow (Keras H5) format to IR</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1383060#M27246</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi Miguel,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Please ensure you have installed the required TensorFlow 2 dependencies as described &lt;/SPAN&gt;&lt;A href="https://docs.openvino.ai/2021.4/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#convert-tensorflow-2-models" rel="noopener noreferrer" target="_blank" style="font-size: 16px;"&gt;here&lt;/A&gt;&lt;SPAN style="font-size: 16px;"&gt;.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;On another note, you can try running the following command when converting your model:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px; font-family: courier;"&gt;python3 mo_tf.py --saved_model_dir model --batch 1&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Additionally, please share more details regarding your model (whether it is custom or pre-trained, its topology, source repository, etc.) for further investigation.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hairul&lt;/SPAN&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 10 May 2022 04:01:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1383060#M27246</guid>
      <dc:creator>Hairul_Intel</dc:creator>
      <dc:date>2022-05-10T04:01:33Z</dc:date>
    </item>
    <item>
      <title>Re: Re:Issues converting a model trained with TensorFlow (Keras H5) format to IR</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1383179#M27254</link>
      <description>&lt;P&gt;Hi Hairul, in order to convert correctly the model I had to add the input_shape parameter:&lt;/P&gt;
&lt;PRE&gt;python3 mo --saved_model_dir ~/Downloads/model --input_shape [1,30,30,3]&lt;/PRE&gt;
&lt;P&gt;The model had been trained with 30x30 images, but I'm not sure what the numbers 1 and 3 mean.&lt;/P&gt;
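&lt;P&gt;For completeness, I assume the layout is batch/height/width/channels; as a quick numpy sketch of what that shape describes:&lt;/P&gt;

```python
# The --input_shape value [1,30,30,3] as a numpy array:
# 1 image per batch, 30x30 pixels, 3 channels (assuming NHWC layout).
import numpy as np

batch = np.zeros((1, 30, 30, 3), dtype=np.float32)
print(batch.shape)  # prints (1, 30, 30, 3)
```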
&lt;P&gt;Thanks for the support,&lt;/P&gt;
&lt;P&gt;Miguel&lt;/P&gt;</description>
      <pubDate>Tue, 10 May 2022 14:04:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1383179#M27254</guid>
      <dc:creator>mvasquez</dc:creator>
      <dc:date>2022-05-10T14:04:20Z</dc:date>
    </item>
    <item>
      <title>Re:Issues converting a model trained with TensorFlow (Keras H5) format to IR</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1383355#M27261</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi Miguel,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Glad to know that your issue is resolved.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;For sharing purposes, the input shape values are in the order of &lt;/SPAN&gt;&lt;B style="font-size: 16px;"&gt;[N,H,W,C]&lt;/B&gt;&lt;SPAN style="font-size: 16px;"&gt; for TensorFlow models. The meanings of the letters are as follows:&lt;/SPAN&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;B style="font-size: 16px;"&gt;N&lt;/B&gt;&lt;SPAN style="font-size: 16px;"&gt;: number of images in the batch&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;B style="font-size: 16px;"&gt;H&lt;/B&gt;&lt;SPAN style="font-size: 16px;"&gt;: height of the image&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;B style="font-size: 16px;"&gt;W&lt;/B&gt;&lt;SPAN style="font-size: 16px;"&gt;: width of the image&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;B style="font-size: 16px;"&gt;C&lt;/B&gt;&lt;SPAN style="font-size: 16px;"&gt;: number of channels of the image (ex: 3 for RGB, 1 for grayscale…)&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;This thread will no longer be monitored since this issue has been resolved.&amp;nbsp;If you need any additional information from Intel, please submit a new question.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hairul&lt;/SPAN&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 11 May 2022 03:18:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Issues-converting-a-model-trained-with-TensorFlow-Keras-H5/m-p/1383355#M27261</guid>
      <dc:creator>Hairul_Intel</dc:creator>
      <dc:date>2022-05-11T03:18:36Z</dc:date>
    </item>
  </channel>
</rss>

