<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Running neural models on a raspberry pi in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126206#M7633</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I recently purchased an Intel Movidius Neural Compute Stick 2 and I've managed to install OpenVINO on my raspberry pi following the instructions provided on the forum (https://software.intel.com/en-us/articles/OpenVINO-Install-RaspberryPI). What I'm trying to do now is to convert my Keras model to a supported version in order to run it on the Movidius Stick. First of all, is it possible to run a neural model that doesn't take an image as an input?&lt;/P&gt;&lt;P&gt;Thank you in advance.&lt;/P&gt;</description>
    <pubDate>Thu, 03 Jan 2019 08:55:37 GMT</pubDate>
    <dc:creator>Drakopoulos__Fotis</dc:creator>
    <dc:date>2019-01-03T08:55:37Z</dc:date>
    <item>
      <title>Running neural models on a raspberry pi</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126206#M7633</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I recently purchased an Intel Movidius Neural Compute Stick 2 and I've managed to install OpenVINO on my raspberry pi following the instructions provided on the forum (https://software.intel.com/en-us/articles/OpenVINO-Install-RaspberryPI). What I'm trying to do now is to convert my Keras model to a supported version in order to run it on the Movidius Stick. First of all, is it possible to run a neural model that doesn't take an image as an input?&lt;/P&gt;&lt;P&gt;Thank you in advance.&lt;/P&gt;</description>
      <pubDate>Thu, 03 Jan 2019 08:55:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126206#M7633</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-03T08:55:37Z</dc:date>
    </item>
    <item>
      <title>Hello Fotis,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126207#M7634</link>
      <description>&lt;P&gt;Hello Fotis,&lt;/P&gt;&lt;P&gt;&amp;gt; First of all, is it possible to run a neural model that doesn't take an image as an input?&lt;/P&gt;&lt;P&gt;OpenVINO supports this. "Other-than-image input" worked fine in my products on both CPU and GPU devices, but I am not sure if I also tried it on NCS2. I will try tomorrow and update (I just have a build issue to resolve first before I can test on NCS2).&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Nikos&lt;/P&gt;</description>
      <pubDate>Fri, 04 Jan 2019 03:27:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126207#M7634</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-04T03:27:43Z</dc:date>
    </item>
    <item>
      <title>Hi Nikos,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126208#M7635</link>
      <description>&lt;P&gt;Hi Nikos,&lt;/P&gt;&lt;P&gt;First of all, thanks for the prompt reply.&lt;/P&gt;&lt;P&gt;The thing is that I have a Keras model for audio signal processing and I want to run it on my NCS2, connected to a raspberry pi. I have successfully installed OpenVINO on the raspberry pi (according to the instructions provided on the forum), so what I am trying to do now is to convert the Keras model in order to run it on the NCS2. From what I understand, the model conversion is not possible on the raspberry pi, but even on an Ubuntu machine I am still not sure how to convert an "other-than-image input" model.&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Fotis&lt;/P&gt;&lt;P&gt;UPDATE: I managed to convert the model with the mo_tf.py script, but now I am not sure how to run it on the NCS2. After converting the model to the format needed for the NCS2 (bin and xml files) I tried to load it on the raspberry pi, but when I type the command &lt;EM&gt;net = IENetwork(model="tf_model.bin",weights="tf_model.xml")&lt;/EM&gt; I get the following error:&lt;/P&gt;&lt;P&gt;RuntimeError: Error reading network: input must have dimensions&lt;/P&gt;</description>
      <pubDate>Fri, 04 Jan 2019 08:28:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126208#M7635</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-04T08:28:00Z</dc:date>
    </item>
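A side note on the `IENetwork` call quoted above: it passes the `.bin` file as `model` and the `.xml` file as `weights`, but in the Inference Engine Python API `model` is the IR topology (`.xml`) and `weights` is the binary weights file (`.bin`), so a swap like this can itself produce a network-reading error. A minimal sketch of a guard that catches the mix-up before constructing the network (the file names are the ones from the post; `check_ir_paths` is a hypothetical helper, not an OpenVINO API):

```python
import os

def check_ir_paths(model, weights):
    """Return (model, weights), un-swapping them if the extensions are reversed.

    The Inference Engine expects model=<topology>.xml and weights=<weights>.bin.
    """
    m_ext = os.path.splitext(model)[1].lower()
    w_ext = os.path.splitext(weights)[1].lower()
    if m_ext == ".bin" and w_ext == ".xml":
        # Arguments are reversed -- swap them rather than failing later
        # with an obscure "Error reading network" message.
        model, weights = weights, model
    if os.path.splitext(model)[1].lower() != ".xml":
        raise ValueError("model should be the IR .xml file, got %r" % model)
    return model, weights

# e.g. the call from the post, with its arguments accidentally swapped:
model, weights = check_ir_paths("tf_model.bin", "tf_model.xml")
# net = IENetwork(model=model, weights=weights)  # model is now the .xml
```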
    <item>
      <title>Have you used the --input</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126209#M7636</link>
      <description>&lt;P&gt;Have you used the --input_shape parameter of mo_tf.py?&lt;/P&gt;&lt;P&gt;BTW, computer_vision_sdk_2018.5.445/deployment_tools/documentation/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html has some examples of how to convert networks for speech.&lt;/P&gt;</description>
      <pubDate>Fri, 04 Jan 2019 17:32:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126209#M7636</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-04T17:32:12Z</dc:date>
    </item>
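For a non-image model the Model Optimizer usually cannot infer the input dimensions from the graph, which is why `--input_shape` has to be given explicitly. A sketch of how such an invocation might be assembled (the model path, placeholder name, and the `[1, 8000]` audio shape are invented placeholders, not values from this thread):

```python
def mo_tf_args(frozen_pb, input_node, input_shape, data_type="FP16"):
    """Build an argument list for a mo_tf.py run on a non-image (e.g. audio) model."""
    shape = "[" + ",".join(str(d) for d in input_shape) + "]"
    return [
        "python3", "mo_tf.py",
        "--input_model", frozen_pb,
        "--input", input_node,     # name of the placeholder in the frozen graph
        "--input_shape", shape,    # e.g. [batch, samples] for a 1-D audio input
        "--data_type", data_type,  # FP16 for NCS2 / MYRIAD
    ]

args = mo_tf_args("audio_model.pb", "input_1", [1, 8000])
print(" ".join(args))
```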
    <item>
      <title>Also please make sure you set</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126210#M7637</link>
      <description>&lt;P&gt;Also please make sure you set input/output precision correctly, for example:&lt;/P&gt;
&lt;PRE class="brush:cpp; class-name:dark;"&gt;    input_data-&amp;gt;setPrecision(Precision::U8);
    input_data-&amp;gt;setLayout(Layout::NCHW);  // ? &lt;/PRE&gt;

&lt;P&gt;There are a few options&lt;/P&gt;

&lt;PRE class="brush:cpp; class-name:dark;"&gt;**
 * @enum Layout
 * @brief Layouts that the inference engine supports
 */
enum Layout : uint8_t {
    ANY = 0,           // "any" layout

    // I/O data layouts
    NCHW = 1,
    NHWC = 2,
    NCDHW = 3,
    NDHWC = 4,

    // weight layouts
    OIHW = 64,

    // bias layouts
    C = 96,

    // Single image layout (for mean image)
    CHW = 128,

    // 2D
    HW = 192,
    NC = 193,
    CN = 194,

    BLOCKED = 200,
};&lt;/PRE&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 04 Jan 2019 17:36:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126210#M7637</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-04T17:36:30Z</dc:date>
    </item>
    <item>
      <title>First of all, I wasn't able</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126211#M7638</link>
      <description>&lt;P&gt;First of all, I wasn't able to convert the model without setting the input shape. Thanks, I'll check the examples for speech models. Regarding the precision parameters, where do I define these?&lt;/P&gt;</description>
      <pubDate>Fri, 04 Jan 2019 18:43:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126211#M7638</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-04T18:43:58Z</dc:date>
    </item>
    <item>
      <title>&gt; Regarding the precision</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126212#M7639</link>
      <description>&lt;P&gt;&amp;gt; Regarding the precision parameters, where do I define these?&lt;/P&gt;&lt;P&gt;In the inference application - for example in the case of C++ see line 619 of&lt;/P&gt;
&lt;PRE class="brush:cpp; class-name:dark;"&gt;computer_vision_sdk_2018.5.445/deployment_tools/inference_engine/samples/speech_sample/main.cpp

&lt;/PRE&gt;

&lt;P&gt;where the input precision and layout are set:&lt;/P&gt;

&lt;PRE class="brush:cpp; class-name:dark;"&gt;        /** configure input precision if model loaded from IR **/
        for (auto &amp;amp;item : inputInfo) {
            Precision inputPrecision = Precision::FP32;  // specify Precision::I16 to provide quantized inputs
            item.second-&amp;gt;setPrecision(inputPrecision);
            item.second-&amp;gt;getInputData()-&amp;gt;layout = NC;  // row major layout
        }
&lt;/PRE&gt;

&lt;P&gt;nikos&lt;/P&gt;</description>
      <pubDate>Fri, 04 Jan 2019 19:19:45 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126212#M7639</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-04T19:19:45Z</dc:date>
    </item>
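On the Python side the equivalent settings live on the network's input info (in the API of that era, something like `net.inputs[name].precision = "FP32"`). What matters for a non-image model is that the layout matches the tensor rank; a small sketch of the rank-to-layout mapping implied by the C++ `Layout` enum quoted earlier (the table is an assumption drawn from that enum, not an official API):

```python
# Layout names from the InferenceEngine Layout enum, keyed by tensor rank.
# For a 2-D [batch, features] speech/audio input, the row-major layout is NC.
RANK_TO_LAYOUT = {
    2: "NC",     # [batch, channels/features]
    3: "CHW",    # single image / [C, H, W]
    4: "NCHW",   # batched images
    5: "NCDHW",  # batched volumes (3-D convolutions)
}

def layout_for_shape(shape):
    """Pick a plausible I/O layout string for a given input shape."""
    try:
        return RANK_TO_LAYOUT[len(shape)]
    except KeyError:
        return "ANY"

print(layout_for_shape([1, 2048]))  # a [batch, features] speech input -> NC
```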
    <item>
      <title>Hi Niko,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126213#M7640</link>
      <description>&lt;P&gt;Hi Niko,&lt;/P&gt;&lt;P&gt;The thing is that I can't even load the model to specify these parameters or change anything. I'm also using Python, so I'm trying to figure out what's going on, because there are no examples for speech. I tried many different things in the model conversion but I'm still getting the "input must have dimensions" error when I try to load the model.&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;EDIT: I finally got it to work, after specifying certain parameters for the conversion. I just need to figure out how to run the model now; I'll post if I have any more issues.&lt;/P&gt;</description>
      <pubDate>Mon, 07 Jan 2019 16:09:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126213#M7640</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-07T16:09:00Z</dc:date>
    </item>
    <item>
      <title>Hello again,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126214#M7641</link>
      <description>&lt;P&gt;Hello again,&lt;/P&gt;&lt;P&gt;So, I was able to convert the model and run it on the NCS2 and on the raspberry pi, but so far I'm getting noisy outputs and I don't know the cause. First of all, I used data type FP16 to convert the model and run it on the NCS2 (it wasn't possible with FP32), but I've noticed that the output has 'float32' dtype whatever the input data type is. How can I check the input/output precision of the converted model in Python?&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jan 2019 16:29:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126214#M7641</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-16T16:29:36Z</dc:date>
    </item>
    <item>
      <title>Hello Foti,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126215#M7642</link>
      <description>&lt;P&gt;Hello Foti,&lt;/P&gt;&lt;P&gt;It may be better to validate FP32 on the CPU device first and then move to NCS2 FP16 (-d MYRIAD); there would be fewer deltas and it would be easier to track discrepancies.&lt;/P&gt;&lt;P&gt;Sorry, I am not sure about the Python API and input/output precision or validation options. My end-to-end workflow uses C++ and offers flexibility to adjust precision and also to validate and compare results against my reference implementation. I am sure the Python API allows all this, but I have never used it :-)&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;nikos&lt;/P&gt;</description>
      <pubDate>Wed, 16 Jan 2019 17:49:11 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126215#M7642</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-16T17:49:11Z</dc:date>
    </item>
    <item>
      <title>Thanks again Niko!</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126216#M7643</link>
      <description>&lt;P&gt;Thanks again Niko!&lt;/P&gt;&lt;P&gt;I'm not able to test the FP32 model on a raspberry pi because there is no plugin for CPU right now (only for MYRIAD). I've checked the input/output layers of the FP16 model and they are in FP16 precision; however, the output that I am getting from the NCS2 is in 'float32' format. I am still getting a kind of periodic noise in the output...&lt;/P&gt;&lt;P&gt;However, I found that there are a number of layers (Const) that are unsupported by the MYRIAD plugin. What should I do in this case?&lt;/P&gt;</description>
      <pubDate>Thu, 17 Jan 2019 13:28:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126216#M7643</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-17T13:28:00Z</dc:date>
    </item>
    <item>
      <title>Hi Foti,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126217#M7644</link>
      <description>&lt;P&gt;Hi Foti,&lt;/P&gt;&lt;P&gt;&amp;gt; not able to test the FP32 on a raspberry pi because there is no plugin for CPU right now (only for MYRIAD).&lt;/P&gt;&lt;P&gt;Perhaps you could try on an x86 Linux or even Windows platform. Validating FP32 on CPU is essential in your case before moving to the pi and NCS, for a number of reasons.&lt;/P&gt;&lt;P&gt;&amp;gt; they are FP16 precision, however the output that I am getting from the NCS2 is in 'float32'&lt;/P&gt;&lt;P&gt;That's not an issue. I am also getting FP32 out from FP16 inference. Again, this becomes irrelevant in the case of FP32 validation.&lt;/P&gt;&lt;P&gt;&amp;gt; However, I found that there is a number of unsupported layers (Const) by the plugin for MYRIAD. What should I do in this case?&lt;/P&gt;&lt;P&gt;I think this is the most important issue. CPU FP32 may have no such issues; test on CPU FP32 and see whether those layers are supported there.&lt;/P&gt;&lt;P&gt;FWIW, in my experience a smoother validation and dev workflow is:&lt;/P&gt;&lt;P&gt;Native -&amp;gt; CPU FP32 -&amp;gt; run validation app&lt;/P&gt;&lt;P&gt;CPU FP32 -&amp;gt; GPU FP16 -&amp;gt; validate FP16&lt;/P&gt;&lt;P&gt;GPU FP16 -&amp;gt; NCS FP16 -&amp;gt; validate on NCS&lt;/P&gt;&lt;P&gt;It is slower but makes it easier to track issues.&lt;/P&gt;&lt;P&gt;If it all fails, then compare results layer by layer, as done in another post (reference: &lt;A href="https://software.intel.com/en-us/forums/computer-vision/topic/801760" target="_blank"&gt;https://software.intel.com/en-us/forums/computer-vision/topic/801760&lt;/A&gt; by Nikolaev, Viktor).&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Nikos&lt;/P&gt;</description>
      <pubDate>Fri, 18 Jan 2019 19:37:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126217#M7644</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-18T19:37:00Z</dc:date>
    </item>
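The staged workflow Nikos describes comes down to feeding the same input to two runs (e.g. CPU FP32 reference vs. MYRIAD FP16) and comparing the outputs. A minimal, library-free sketch of such a comparison (the 1e-2 tolerance for FP16-vs-FP32 is a rough assumption, not an OpenVINO-recommended value):

```python
def max_abs_diff(ref, test):
    """Largest element-wise absolute difference between two flat output vectors."""
    if len(ref) != len(test):
        raise ValueError("output lengths differ: %d vs %d" % (len(ref), len(test)))
    return max(abs(r - t) for r, t in zip(ref, test))

def outputs_match(ref_fp32, test_fp16, tol=1e-2):
    """True if the FP16 run tracks the FP32 reference within tol."""
    return max_abs_diff(ref_fp32, test_fp16) <= tol

# e.g. FP32 CPU reference vs FP16 MYRIAD result for the same audio frame:
print(outputs_match([0.10, 0.85, 0.05], [0.101, 0.846, 0.053]))  # True
```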
    <item>
      <title>Hi Niko,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126218#M7645</link>
      <description>&lt;P&gt;Hi Niko,&lt;/P&gt;&lt;P&gt;Yesterday I tried running the FP32 model on an ubuntu machine using CPU but I got a "buffer overrun" error (when I tried to load the network to the plugin). I looked a bit for a solution but I didn't find anything. I guess I'll try this layer by layer comparison to see what happens. Thanks!&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Fotis&lt;/P&gt;</description>
      <pubDate>Sat, 19 Jan 2019 11:39:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126218#M7645</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-19T11:39:42Z</dc:date>
    </item>
    <item>
      <title>Sorry, hard to see what the</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126219#M7646</link>
      <description>&lt;P&gt;Sorry, it is hard to see what the issue is without more information on the model optimizer parameters or the workflow in general. Are you converting frozen or non-frozen TensorFlow models, or using Caffe or another framework?&lt;/P&gt;&lt;P&gt;If Caffe, supported layers are listed in https://software.intel.com/en-us/articles/OpenVINO-Using-Caffe#caffe-supported-layers&lt;/P&gt;&lt;P&gt;TensorFlow supported layers are in &lt;A href="https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#tensorflow-supported-layers" target="_blank"&gt;https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#tensorflow-supported-layers&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;gt; Keras model&lt;/P&gt;&lt;P&gt;I can see Keras, so I assume you are on the TensorFlow backend and freeze to a .pb.&lt;/P&gt;&lt;P&gt;For TF custom layers, if needed, there is good documentation on how to offload, but I am not sure it would make sense in terms of performance in the case of pi+NCS. Some info is in https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer#Tensorflow-models-with-custom-layers. Some ideas from DeepSpeech may also help in case more mo_tf.py parameters are needed. Sometimes it is not straightforward to convert TF to IR, and it could be the case here that you just need one more parameter and the problem will be solved.&lt;/P&gt;&lt;P&gt;&lt;A href="https://software.intel.com/en-us/articles/OpenVINO-Using-tensorflow" target="_blank"&gt;https://software.intel.com/en-us/articles/OpenVINO-Using-tensorflow&lt;/A&gt; (also see the section: Supported Layers and the Mapping to Intermediate Representation Layers)&lt;/P&gt;&lt;P&gt;To generate the DeepSpeech Intermediate Representation (IR), provide the TensorFlow DeepSpeech model to the Model Optimizer with parameters:&lt;/P&gt;
&lt;PRE class="brush:cpp; class-name:dark;"&gt;python3 ./mo_tf.py
  --input_model path_to_model/output_graph.pb                         \
  --freeze_placeholder_with_value input_lengths-&amp;gt;[16]                \
  --input input_node,previous_state_h/read,previous_state_c/read  \
  --input_shape [1,16,19,26],[1,2048],[1,2048]                              \
  --output raw_logits,lstm_fused_cell/Gather,lstm_fused_cell/Gather_1&lt;/PRE&gt;

&lt;P&gt;nikos&lt;/P&gt;</description>
      <pubDate>Sat, 19 Jan 2019 19:27:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126219#M7646</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-19T19:27:00Z</dc:date>
    </item>
    <item>
      <title>Do you think that the "Buffer</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126220#M7647</link>
      <description>&lt;P&gt;Do you think that the "Buffer overrun" error that I got for the FP32 model could be caused because of an incorrect conversion?&lt;/P&gt;&lt;P&gt;To get more into detail, I'm converting a keras model to an IR representation and I tried doing both with a frozen and a non-frozen model. I am specifying the input layer name and size and the output layer name to the conversion command (as shown in the example) but I will experiment a bit with the parameters tomorrow to see if this will make a difference.&lt;/P&gt;&lt;P&gt;Fotis&lt;/P&gt;</description>
      <pubDate>Sun, 20 Jan 2019 13:42:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126220#M7647</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-20T13:42:36Z</dc:date>
    </item>
    <item>
      <title>&gt;  could be caused because of</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126221#M7648</link>
      <description>&lt;P&gt;&amp;gt; could be caused because of an incorrect conversion?&lt;/P&gt;&lt;P&gt;Yes, assuming you have no unsupported layers, I think it is possible that a conversion parameter issue is causing the inference engine buffer error when loading weights. Coincidentally, I also got the same error two weeks ago and fixed it, but I do not remember the exact issue (poor short-term memory :-)). I think it was related to input shape or NCHW vs. NHWC, but that was with 2D images, not the 1D case.&lt;/P&gt;&lt;P&gt;nikos&lt;/P&gt;</description>
      <pubDate>Sun, 20 Jan 2019 18:48:45 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126221#M7648</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-20T18:48:45Z</dc:date>
    </item>
    <item>
      <title>Hello again Niko,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126222#M7649</link>
      <description>&lt;P&gt;Hello again Niko,&lt;/P&gt;&lt;P&gt;I tried changing all the different parameters during the conversion but I still get a buffer overrun error when I try to run the FP32 model using the CPU.&lt;/P&gt;&lt;P&gt;Additionally, I changed the 'ReLu' on my keras model and now most of the unsupported layers on the FP16 model for the MYRIAD are gone, but I still get the Input layer as an unsupported layer and the same noisy output. I was wondering what is the correct representation of the Input layer for a model on MYRIAD, because it's weird that the input layer is unsupported.&lt;/P&gt;&lt;P&gt;I also tried to convert and try out the deepspeech model mentioned above, but when I do the conversion I get the following error:&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;[ ERROR ]&amp;nbsp; -------------------------------------------------&lt;BR /&gt;[ ERROR ]&amp;nbsp; ----------------- INTERNAL ERROR ----------------&lt;BR /&gt;[ ERROR ]&amp;nbsp; Unexpected exception happened.&lt;BR /&gt;[ ERROR ]&amp;nbsp; Please contact Model Optimizer developers and forward the following information:&lt;BR /&gt;[ ERROR ]&amp;nbsp; Exception occurred during running replacer "None (&amp;lt;class 'extensions.front.tf.BlockLSTM.BlockLSTM'&amp;gt;)": 7&lt;BR /&gt;[ ERROR ]&amp;nbsp; Traceback (most recent call last):&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 114, in apply_replacements&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; replacer.find_and_replace_pattern(graph)&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/common/replacement.py", line 125, in find_and_replace_pattern&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; apply_pattern(graph, action=self.replace_sub_graph, **self.pattern())&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/pattern_match.py", 
line 95, in apply_pattern&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; action(graph, match)&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/common/replacement.py", line 189, in replace_sub_graph&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; self.replace_output_edges(graph, self.gen_output_edges_match(node, self.replace_op(graph, node)))&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/extensions/front/tf/BlockLSTM.py", line 84, in replace_op&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; [graph.remove_edge(node.in_node(p).id, node.id) for p, input_data in node.in_nodes().items() if p in [5, 6, 7]]&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/extensions/front/tf/BlockLSTM.py", line 84, in &amp;lt;listcomp&amp;gt;&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; [graph.remove_edge(node.in_node(p).id, node.id) for p, input_data in node.in_nodes().items() if p in [5, 6, 7]]&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/graph/graph.py", line 329, in in_node&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; return self.in_nodes(control_flow=control_flow)[key]&lt;BR /&gt;KeyError: 7&lt;/P&gt;&lt;P&gt;The above exception was the direct cause of the following exception:&lt;/P&gt;&lt;P&gt;Traceback (most recent call last):&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 325, in main&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; return driver(argv)&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 267, in driver&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; mean_scale_values=mean_scale)&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 248, in tf2nx&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)&lt;BR /&gt;&amp;nbsp; File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 127, in apply_replacements&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; )) from err&lt;BR /&gt;Exception: Exception occurred during running replacer "None (&amp;lt;class 'extensions.front.tf.BlockLSTM.BlockLSTM'&amp;gt;)": 7&lt;/P&gt;&lt;P&gt;[ ERROR ]&amp;nbsp; ---------------- END OF BUG REPORT --------------&lt;BR /&gt;[ ERROR ]&amp;nbsp; -------------------------------------------------&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;EDIT: I finally managed to (maybe) get a correct output from the converted model using the MYRIAD plugin. I used the "--disable_nhwc_to_nchw" parameter in the conversion and now I don't see this noisy output. However, I now get a new list of unsupported layers and the most important part is that the IR model suddenly got really slow (it takes around 190 ms for 1 iteration). What could be the cause? Also, if I compare the two xml files (before and after the&amp;nbsp;"--disable_nhwc_to_nchw" addition) I see different dimensions for each layer.&lt;/P&gt;</description>
      <pubDate>Tue, 22 Jan 2019 14:22:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126222#M7649</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-22T14:22:00Z</dc:date>
    </item>
    <item>
      <title>Hello Foti,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126223#M7650</link>
      <description>&lt;P&gt;Hello Foti,&lt;/P&gt;&lt;P&gt;Good find with the --disable_nhwc_to_nchw parameter. Just for the record, were you now able to run on CPU FP32 and get valid results?&lt;/P&gt;&lt;P&gt;&amp;gt; However, I now get a new list of unsupported layers&lt;/P&gt;&lt;P&gt;Are you using LSTM? Not sure if it is supported or validated yet for MYRIAD. I will ask this question in my old post ( https://software.intel.com/en-us/forums/computer-vision/topic/755432 ).&lt;/P&gt;&lt;P&gt;Based on the 2018 R5 release notes:&lt;/P&gt;&lt;P&gt;&lt;EM&gt;New Features in the 2018 R5 include:&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Extends neural network support to include LSTM (long short-term memory) from ONNX*, TensorFlow* &amp;amp; MXNet* frameworks, &amp;amp; 3D convolutional-based networks in preview mode (CPU-only) to support additional, new use cases beyond computer vision.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;gt; and the most important part is that the IR model suddenly got really slow (it takes around 190 ms for 1 iteration).&lt;/P&gt;&lt;P&gt;For that you may want to use the profiler, which reports ms per layer, to get a better idea of what slows down the execution. Of course, functionality comes first; that is a much higher priority.&lt;/P&gt;&lt;P&gt;nikos&lt;/P&gt;</description>
      <pubDate>Tue, 22 Jan 2019 19:16:07 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126223#M7650</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-22T19:16:07Z</dc:date>
    </item>
    <item>
      <title>Hi Niko,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126224#M7651</link>
      <description>&lt;P&gt;Hi Niko,&lt;/P&gt;&lt;P&gt;No, even with the --disable_nhwc_to_nchw parameter the model doesn't work on the CPU. I tried every possible parameter but I am still getting the "cannot create internal buffer. buffer can be overrun" error, so I don't know how to proceed with this.&lt;/P&gt;&lt;P&gt;The unsupported layers are again of type "Const" but originate from conv layers of the original model. What I did before to remove the unsupported layers was to train the model with a LeakyReLU instead, but now I don't really know how to substitute the convolution layers.&lt;/P&gt;&lt;P&gt;The thing is that the model now works on the MYRIAD (I'll verify the output tomorrow, but at first glance I think it produces a correct output) but it is really slow. How could I at least find the cause of this?&lt;/P&gt;</description>
      <pubDate>Tue, 22 Jan 2019 19:56:02 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126224#M7651</guid>
      <dc:creator>Drakopoulos__Fotis</dc:creator>
      <dc:date>2019-01-22T19:56:02Z</dc:date>
    </item>
    <item>
      <title>try to get performance counts</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126225#M7652</link>
      <description>&lt;P&gt;Try to get performance counts (us per layer) using get_perf_counts():&lt;/P&gt;
&lt;PRE class="brush:cpp; class-name:dark;"&gt;        perf_counts = infer_request_handle.get_perf_counts()
        log.info("Performance counters:")
        print("{:&amp;lt;70} {:&amp;lt;15} {:&amp;lt;15} {:&amp;lt;15} {:&amp;lt;10}".format('name', 'layer_type', 'exet_type', 'status', 'real_time, us'))
        for layer, stats in perf_counts.items():
            print("{:&amp;lt;70} {:&amp;lt;15} {:&amp;lt;15} {:&amp;lt;15} {:&amp;lt;10}".format(layer, stats['layer_type'], stats['exec_type'],
                                                              stats['status'], stats['real_time']))
&lt;/PRE&gt;

&lt;P&gt;Some examples in&lt;/P&gt;

&lt;PRE class="brush:cpp; class-name:dark;"&gt;&amp;nbsp;grep &amp;nbsp;perf ./computer_vision_sdk/deployment_tools/inference_engine/samples/python_samples/*
&lt;/PRE&gt;

&lt;P&gt;or check the Python API docs if more information is needed on performance counters.&lt;/P&gt;
&lt;P&gt;cheers&lt;/P&gt;
&lt;P&gt;nikos&lt;/P&gt;</description>
      <pubDate>Tue, 22 Jan 2019 21:00:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-neural-models-on-a-raspberry-pi/m-p/1126225#M7652</guid>
      <dc:creator>nikos1</dc:creator>
      <dc:date>2019-01-22T21:00:24Z</dc:date>
    </item>
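To spot the slow layers quickly, the dict returned by `get_perf_counts()` can be sorted by `real_time` before printing. A small sketch over the same per-layer stats shape used in the snippet above (the layer names and timings in the fake dict are invented for illustration):

```python
def slowest_layers(perf_counts, top_n=3):
    """Return (layer_name, microseconds) pairs, slowest first."""
    timed = ((name, stats["real_time"]) for name, stats in perf_counts.items())
    return sorted(timed, key=lambda item: item[1], reverse=True)[:top_n]

# Minimal fake of what infer_request.get_perf_counts() returns:
perf_counts = {
    "conv1": {"layer_type": "Convolution",    "exec_type": "Conv", "status": "EXECUTED", "real_time": 1200},
    "fc1":   {"layer_type": "FullyConnected", "exec_type": "FC",   "status": "EXECUTED", "real_time": 90},
    "relu1": {"layer_type": "ReLU",           "exec_type": "ReLU", "status": "EXECUTED", "real_time": 15},
}
for name, us in slowest_layers(perf_counts):
    print("%-8s %6d us" % (name, us))
```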
  </channel>
</rss>

