<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Re:StatefulPartitionedCall issues in converting te... in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1197811#M20155</link>
    <description>&lt;P&gt;Hi Port,&lt;/P&gt;
&lt;P&gt;Just to be clear: are you converting and running your model on the Pi? Or did you convert the model on your Linux system and are now trying to deploy it on your Pi?&lt;/P&gt;
&lt;P&gt;Either way, can you please send me the Model Optimizer command you used?&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you are able to attach your model, please do so. I can also send you a PM and you can send it there if you prefer to not share it publicly.&lt;/P&gt;
&lt;P&gt;Best Regards,&lt;BR /&gt;Sahira&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Wed, 05 Aug 2020 23:53:33 GMT</pubDate>
    <dc:creator>Sahira_Intel</dc:creator>
    <dc:date>2020-08-05T23:53:33Z</dc:date>
    <item>
      <title>StatefulPartitionedCall issues in converting tensorflow model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1193718#M19921</link>
      <description>&lt;P&gt;I am currently converting a custom tensorflow model in OpenVINO 2020.4 using Tensorflow 2.2.0&lt;/P&gt;
&lt;P&gt;I am running this command (I know my input shape is correct):&amp;nbsp;&lt;/P&gt;
&lt;P&gt;"&lt;CODE&gt;sudo python3.6 mo_tf.py --saved_model_dir ~/Downloads/saved_model --output_dir ~/Downloads/it_worked --input_shape [1,120,20]&lt;/CODE&gt;".&lt;/P&gt;
&lt;P&gt;I'm running into issues with one of the operations: "StatefulPartitionedCall". I read that this particular node can have issues in OpenVINO here:&amp;nbsp;&lt;A href="https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#freeze-the-tensorflowmodel" rel="nofollow noopener noreferrer" target="_blank"&gt;https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Mode...&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;It says that "TensorFlow 2.x SavedModel format has a specific graph due to eager execution. In case of pruning, find custom input nodes in the &lt;CODE&gt;StatefulPartitionedCall/*&lt;/CODE&gt; subgraph of TensorFlow 2.x SavedModel format."&lt;/P&gt;
&lt;P&gt;Could I please get more detail on how exactly I should be 'pruning' these nodes' inputs?&lt;/P&gt;
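In Model Optimizer terms, pruning usually means passing --input with node names taken from inside the StatefulPartitionedCall/* subgraph. A minimal sketch, assuming a hypothetical node name (the real names must be read from the graph first, e.g. from --log_level=DEBUG output or a graph viewer):

```shell
# Sketch only: prune the graph at a node inside the SavedModel's
# StatefulPartitionedCall subgraph. The node name below is a
# hypothetical placeholder; substitute one found in your own graph.
python3 mo_tf.py \
  --saved_model_dir ~/Downloads/saved_model \
  --input StatefulPartitionedCall/some_input_node \
  --input_shape [1,120,20] \
  --output_dir ~/Downloads/pruned_out
```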
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks&amp;nbsp;&lt;/P&gt;
&lt;P&gt;--Port&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Oh, here is the error (and again, I know that my input shape is correct):&lt;/P&gt;
&lt;PRE&gt;Model Optimizer version:
Progress: [.......             ]  35.71% done
[ ERROR ]  Cannot infer shapes or values for node "StatefulPartitionedCall/sequential/lstm/StatefulPartitionedCall/TensorArrayUnstack/TensorListFromTensor".
[ ERROR ]  Tensorflow type 21 not convertible to numpy dtype.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function &amp;lt;function tf_native_tf_node_infer at 0x7f39ccd08400&amp;gt;.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (&amp;lt;class 'extensions.middle.PartialInfer.PartialInfer'&amp;gt;): Stopped shape/value propagation at "StatefulPartitionedCall/sequential/lstm/StatefulPartitionedCall/TensorArrayUnstack/TensorListFromTensor" node.&lt;/PRE&gt;
&lt;P&gt;For more information please refer to the Model Optimizer FAQ (&lt;A href="https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html" rel="nofollow noopener noreferrer" target="_blank"&gt;https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html&lt;/A&gt;), question #38.&lt;/P&gt;</description>
      <pubDate>Mon, 20 Jul 2020 19:39:03 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1193718#M19921</guid>
      <dc:creator>Portomania</dc:creator>
      <dc:date>2020-07-20T19:39:03Z</dc:date>
    </item>
    <item>
      <title>Re: StatefulPartitionedCall issues in converting tensorflow model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1193944#M19936</link>
      <description>&lt;P&gt;Here is my model.summary(), for reference:&lt;/P&gt;
&lt;PRE&gt;Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm (LSTM)                  (None, 120, 20)           3280
_________________________________________________________________
batch_normalization (BatchNo (None, 120, 20)           80
_________________________________________________________________
dropout (Dropout)            (None, 120, 20)           0
_________________________________________________________________
lstm_1 (LSTM)                (None, 120, 64)           21760
_________________________________________________________________
batch_normalization_1 (Batch (None, 120, 64)           256
_________________________________________________________________
dropout_1 (Dropout)          (None, 120, 64)           0
_________________________________________________________________
lstm_2 (LSTM)                (None, 64)                33024
_________________________________________________________________
batch_normalization_2 (Batch (None, 64)                256
_________________________________________________________________
dropout_2 (Dropout)          (None, 64)                0
_________________________________________________________________
dense (Dense)                (None, 32)                2080
_________________________________________________________________
dropout_3 (Dropout)          (None, 32)                0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 33
=================================================================
Total params: 60,769
Trainable params: 60,473
Non-trainable params: 296
_________________________________________________________________&lt;/PRE&gt;</description>
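As a quick sanity check on those figures, the per-layer parameter counts in the summary can be reproduced from the standard Keras formulas; this sketch is illustrative arithmetic only, not part of the model code:

```python
# Reproduce the model.summary() parameter counts from first principles.
def lstm_params(inputs, units):
    # kernel + recurrent kernel + bias, for each of the 4 LSTM gates
    return 4 * units * (inputs + units + 1)

def dense_params(inputs, units):
    # weight matrix + bias
    return (inputs + 1) * units

def batchnorm_params(features):
    # gamma, beta (trainable) + moving mean, moving variance (non-trainable)
    return 4 * features

counts = [
    lstm_params(20, 20),    # lstm                  -> 3280
    batchnorm_params(20),   # batch_normalization   -> 80
    lstm_params(20, 64),    # lstm_1                -> 21760
    batchnorm_params(64),   # batch_normalization_1 -> 256
    lstm_params(64, 64),    # lstm_2                -> 33024
    batchnorm_params(64),   # batch_normalization_2 -> 256
    dense_params(64, 32),   # dense                 -> 2080
    dense_params(32, 1),    # dense_1               -> 33
]
print(sum(counts))  # 60769, matching "Total params: 60,769"
```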
      <pubDate>Tue, 21 Jul 2020 19:39:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1193944#M19936</guid>
      <dc:creator>Portomania</dc:creator>
      <dc:date>2020-07-21T19:39:59Z</dc:date>
    </item>
    <item>
      <title>Re:StatefulPartitionedCall issues in converting te...</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194069#M19941</link>
<description>&lt;P&gt;Greetings,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;First and foremost, please check that the &lt;B&gt;custom model&lt;/B&gt; you are using is &lt;B&gt;supported&lt;/B&gt; by OpenVINO. You can refer here: &lt;A href="https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html" rel="noopener noreferrer" target="_blank"&gt;https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;As a reminder, your model needs to be frozen; if it is not, you can refer to "Freezing Custom Models" at the same link above.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Generally, there are a few ways to convert a custom TF model:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Checkpoint: you need an inference graph file and the mo.py script in OpenVINO's Model Optimizer folder. Run the script with the path to the checkpoint file to convert the model.&lt;/LI&gt;&lt;LI&gt;MetaGraph: in this case, a model consists of three or four files stored in the same directory: model_name.meta, model_name.index, model_name.data-00000-of-00001, and checkpoint&amp;nbsp;(optional). Run the&amp;nbsp;mo_tf.py&amp;nbsp;script with a path to the MetaGraph&amp;nbsp;.meta&amp;nbsp;file to convert the model.&lt;/LI&gt;&lt;LI&gt;SavedModel format of TensorFlow 1.x and 2.x versions: the same concept applies; point the mo_tf.py script at the correct directory.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;The TensorFlow 2.x SavedModel format strictly requires the 2.x version of TensorFlow.&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regarding your error:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;"Tensorflow type 21 not convertible to numpy dtype" -- this indicates that something in the graph cannot be converted, in this case to a NumPy dtype.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;There are several possible reasons for this to happen, and you can see them in your error log.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;I'm not sure which custom TensorFlow topology you are using, but I assume it is incompatible. It would be good to cross-check your model against OpenVINO's supported topologies: &lt;A href="https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html" rel="noopener noreferrer" target="_blank"&gt;https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;This is our official tutorial video, which might help you: &lt;A href="https://www.youtube.com/watch?v=QW6532LtiTc" rel="noopener noreferrer" target="_blank"&gt;https://www.youtube.com/watch?v=QW6532LtiTc&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Sincerely,&lt;/P&gt;&lt;P&gt;Iffa&lt;/P&gt;&lt;BR /&gt;</description>
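The three conversion routes listed in the reply above can be sketched as Model Optimizer invocations; every file and directory name here is a hypothetical placeholder, not taken from the thread:

```shell
# 1. Checkpoint: inference graph plus checkpoint file
python3 mo_tf.py --input_model inference_graph.pb \
  --input_checkpoint model.ckpt

# 2. MetaGraph: point at the .meta file (the .index and .data files
#    must sit in the same directory)
python3 mo_tf.py --input_meta_graph model_name.meta

# 3. SavedModel (TF 1.x / 2.x): point at the SavedModel directory
python3 mo_tf.py --saved_model_dir ./saved_model --output_dir ./ir_out
```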
      <pubDate>Wed, 22 Jul 2020 12:31:01 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194069#M19941</guid>
      <dc:creator>Iffa_Intel</dc:creator>
      <dc:date>2020-07-22T12:31:01Z</dc:date>
    </item>
    <item>
      <title>Re: Re:StatefulPartitionedCall issues in converting te...</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194129#M19947</link>
      <description>&lt;P&gt;Hello, thanks this cleared up a bunch of my issues.&lt;/P&gt;
&lt;P&gt;It turns out the LSTM layer in Keras wasn't compatible for some reason, so for now I've switched to the Keras TCN layer, which I know is compatible as it is listed as an accepted network topology. Once I changed the model, it fully converted, but now I'm having issues actually using it.&lt;/P&gt;
&lt;P&gt;When I try to import the model in Python, I am getting:&lt;/P&gt;
&lt;PRE&gt;Traceback (most recent call last):
  File "modeltest.py", line 9, in &amp;lt;module&amp;gt;
    exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=2)
  File "ie_api.pyx", line 178, in openvino.inference_engine.ie_api.IECore.load_network
  File "ie_api.pyx", line 187, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Failed to compile layer "StatefulPartitionedCall/sequential/tcn/residual_block_0/conv1D_0/Pad": AssertionFailed: layer-&amp;gt;pads_begin.size() == 4&lt;/PRE&gt;
&lt;P&gt;For reference, this is the only code leading up to it:&lt;/P&gt;
&lt;PRE&gt;from openvino.inference_engine import IENetwork, IEPlugin, IECore

ie = IECore()
net = ie.read_network(model="Model1/saved_model.xml", weights="Model1/saved_model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=2)&lt;/PRE&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;The issue could be that I'm trying to run this model on a Raspberry Pi with OpenVINO 2020.3, because that's the latest version I can download for the Pi. On the computer where I build the model, however, the latest version is 2020.4, which has the TensorFlow 2 support.&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;Does it look like the difference between converting the model with 2020.4 and loading it with 2020.3 is the problem, or is there another issue with the model?&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;Thanks again,&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;--Port&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Wed, 22 Jul 2020 18:41:11 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194129#M19947</guid>
      <dc:creator>Portomania</dc:creator>
      <dc:date>2020-07-22T18:41:11Z</dc:date>
    </item>
    <item>
      <title>Re:StatefulPartitionedCall issues in converting te...</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194208#M19952</link>
<description>&lt;P&gt;Building with 2020.3 and importing into 2020.4 should cause no problems if both are on the same OS platform and using the same version of TF.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;If you are building with 2020.3 (on Raspbian) and importing into 2020.4 on, let's say, Windows, I believe this would cause a conflict, since they are on different platforms and use different toolkit packages.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;In addition to that, if you are going to use TF2, you need to ensure the imported saved model was also trained using TF2.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Sincerely,&lt;/P&gt;&lt;P&gt;Iffa&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 23 Jul 2020 05:52:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194208#M19952</guid>
      <dc:creator>Iffa_Intel</dc:creator>
      <dc:date>2020-07-23T05:52:27Z</dc:date>
    </item>
    <item>
      <title>Re: Re:StatefulPartitionedCall issues in converting te...</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194266#M19954</link>
<description>&lt;P&gt;I'm building with 2020.4 Linux OpenVINO and converting in the same environment. I know that I'm using TF2 on this machine for both training and converting the model.&lt;/P&gt;
&lt;P&gt;The Pi is the machine with OpenVINO 2020.3. Since they're both technically Linux distributions of OpenVINO, I didn't think this would be a problem?&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Also, I'm including the TensorBoard logs for the model before I converted it, since I'm also confused as to why there even is a StatefulPartitionedCall operation in the final model.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jul 2020 13:22:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194266#M19954</guid>
      <dc:creator>Portomania</dc:creator>
      <dc:date>2020-07-23T13:22:43Z</dc:date>
    </item>
    <item>
      <title>Re:StatefulPartitionedCall issues in converting te...</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194385#M19964</link>
<description>&lt;P&gt;Although Raspbian and Ubuntu are both Linux distributions, you need to keep in mind that they serve different purposes and architectures.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Take their bootloader processes, for instance. On Debian (and other Linux OSes such as Ubuntu), you use GRUB to configure booting. On Raspbian, the configuration is entirely different, with many parameters set in&amp;nbsp;&lt;B&gt;/boot/config.txt&lt;/B&gt;.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;The RPi uses a different architecture than Intel-based PCs. This means that .deb (installable binary) packages must be built specifically for the RPi ARM architecture. If a package is available in the Raspbian repository, it should (usually) install just like on any other Debian-based system. If not, you may have to build it yourself from source, which can be a challenge.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;If you notice, OpenVINO has different toolkit packages for Raspbian and Linux; this indicates there are definite differences in the toolkit architecture. Hence, sharing a model between these two platforms has a high chance of provoking conflicts.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Linux OS: &lt;A href="https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html" rel="noopener noreferrer" target="_blank"&gt;https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Raspbian OS: &lt;A href="https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_raspbian.html" rel="noopener noreferrer" target="_blank"&gt;https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_raspbian.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Sincerely,&lt;/P&gt;&lt;P&gt;Iffa&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 24 Jul 2020 03:21:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194385#M19964</guid>
      <dc:creator>Iffa_Intel</dc:creator>
      <dc:date>2020-07-24T03:21:00Z</dc:date>
    </item>
    <item>
      <title>Re: Re:StatefulPartitionedCall issues in converting te...</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194987#M19999</link>
      <description>&lt;P&gt;Okay, I realized that the architecture difference could be the problem and started looking into it.&lt;/P&gt;
&lt;P&gt;I found this forum post&amp;nbsp;&lt;FONT style="background-color: #ffffff;"&gt;&lt;A href="https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/NCS2-Raspberry-4B-Openvino-toolkit-ie-load-network-network/td-p/1190963" target="_blank"&gt;https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/NCS2-Raspberry-4B-Openvino-toolkit-ie-load-network-network/td-p/1190963&lt;/A&gt;&lt;/FONT&gt; , which talks about there being a known issue with the Pi on this version:&lt;/P&gt;
&lt;P&gt;"&lt;SPAN style="display: inline !important; float: none; background-color: #ffffff; color: #555555; font-family: 'intel-clear','tahoma',Helvetica,'helvetica',Arial,sans-serif; font-size: 16px; font-style: normal; font-variant: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: left; text-decoration: none; text-indent: 0px; text-transform: none; -webkit-text-stroke-width: 0px; white-space: normal; word-spacing: 0px;"&gt;There is an incompatibility issue between the OpenVINO™&amp;nbsp;Toolkit 2020.3 version (for RaspbianOS) and IR version 10 files, so you should add the flag &lt;/SPAN&gt;&lt;STRONG style="box-sizing: border-box; color: #555555; font-family: &amp;amp;quot; intel-clear&amp;amp;quot;,&amp;amp;quot;tahoma&amp;amp;quot;,helvetica,&amp;amp;quot;helvetica&amp;amp;quot;,arial,sans-serif; font-size: 16px; font-style: normal; font-variant: normal; font-weight: bold; letter-spacing: normal; orphans: 2; text-align: left; text-decoration: none; text-indent: 0px; text-transform: none; -webkit-text-stroke-width: 0px; white-space: normal; word-spacing: 0px;"&gt;--generate_deprecated_IR_V7&lt;/STRONG&gt;&lt;SPAN style="display: inline !important; float: none; background-color: #ffffff; color: #555555; font-family: 'intel-clear','tahoma',Helvetica,'helvetica',Arial,sans-serif; font-size: 16px; font-style: normal; font-variant: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: left; text-decoration: none; text-indent: 0px; text-transform: none; -webkit-text-stroke-width: 0px; white-space: normal; word-spacing: 0px;"&gt; when converting the model to IR format."&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;So with the 2020.4 OpenVINO Linux distribution, I added this flag to convert the model to a (hopefully) compatible format, and I ran into the same sort of error I got previously when trying to run the already-converted model on the Pi:&lt;/P&gt;
&lt;P&gt;&lt;FONT style="background-color: #ffffff;"&gt;[ WARNING ]&amp;nbsp; Use of deprecated cli option --generate_deprecated_IR_V7 detected. Option use in the following releases will be fatal. &lt;BR /&gt;Progress: [...............&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ]&amp;nbsp; 76.00% done[ ERROR ]&amp;nbsp; -------------------------------------------------&lt;BR /&gt;[ ERROR ]&amp;nbsp; ----------------- INTERNAL ERROR ----------------&lt;BR /&gt;[ ERROR ]&amp;nbsp; Unexpected exception happened.&lt;BR /&gt;[ ERROR ]&amp;nbsp; Please contact Model Optimizer developers and forward the following information:&lt;BR /&gt;[ ERROR ]&amp;nbsp; Exception occurred during running replacer "REPLACEMENT_ID (&amp;lt;class 'extensions.back.PadToV7.PadToV7'&amp;gt;)": Fill value is not constants for node "StatefulPartitionedCall/sequential/tcn/residual_block_0/conv1D_0/Pad"&lt;BR /&gt;[ ERROR ]&amp;nbsp; Traceback (most recent call last):&lt;BR /&gt;&amp;nbsp; File "/opt/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 288, in apply_transform&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)&lt;BR /&gt;&amp;nbsp; File "/opt/intel/openvino_2020.4.287/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; func(graph)&lt;BR /&gt;&amp;nbsp; File "/opt/intel/openvino_2020.4.287/deployment_tools/model_optimizer/extensions/back/PadToV7.py", line 43, in find_and_replace_pattern&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; assert fill_value is not None, 'Fill value is not constants for node "{}"'.format(pad_name)&lt;BR /&gt;AssertionError: Fill value is not constants for node "StatefulPartitionedCall/sequential/tcn/residual_block_0/conv1D_0/Pad"&lt;BR /&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;This at least tells me there is some connection between the Pi's OpenVINO version and the StatefulPartitionedCall layer in my model.&amp;nbsp;Should I be trying a different workaround, or is there a larger issue with my current model or architecture?&lt;/P&gt;
</description>
      <pubDate>Mon, 27 Jul 2020 14:27:28 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1194987#M19999</guid>
      <dc:creator>Portomania</dc:creator>
      <dc:date>2020-07-27T14:27:28Z</dc:date>
    </item>
    <item>
      <title>Re: Re:StatefulPartitionedCall issues in converting te...</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1197811#M20155</link>
      <description>&lt;P&gt;Hi Port,&lt;/P&gt;
&lt;P&gt;Just to be clear: are you converting and running your model on the Pi, or did you convert the model on your Linux system and are now trying to deploy it on the Pi?&lt;/P&gt;
&lt;P&gt;Either way, can you please send me the Model Optimizer command you used?&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you are able to attach your model, please do so. I can also send you a PM and you can send it there if you prefer not to share it publicly.&lt;/P&gt;
&lt;P&gt;Best Regards,&lt;BR /&gt;Sahira&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 05 Aug 2020 23:53:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/StatefulPartitionedCall-issues-in-converting-tensorflow-model/m-p/1197811#M20155</guid>
      <dc:creator>Sahira_Intel</dc:creator>
      <dc:date>2020-08-05T23:53:33Z</dc:date>
    </item>
  </channel>
</rss>

