Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Different output values: NCSDK2 vs. OpenVINO 2018 R5 and OpenVINO 2019 R1

Deblauwe__Tom
Beginner

Hello,

I'm using a TensorFlow-trained model, which is just an FCN without a dense layer at the end. It's trained patch-based, but inference runs at 320x640. It worked correctly with NCSDK2: I could convert it without any problems and compile it with mvNCCompile. Now it runs sort of okay on OpenVINO 2018 R5 with the MYRIAD plugin, but I see differences in the output values.

For example, with NCSDK2 I see softmax values like 0.99245, but with OpenVINO they are more like 0.000425, so a big difference. The predicted class is not entirely wrong overall, so the segmentation is almost correct, but less so than with NCSDK2.

For training, the input of my network is the RGB values divided by 255. With NCSDK I could simply create CV_32FC3 images, divide each value by 255, and pass them to the NCSDK API, which accepts FP32 float images, so that worked fine.

Now with OpenVINO, I use this mo_tf.py command line:

mo_tf.py --input_model tensorflow_nets/NN_inference_only.pb --input input --output output/Reshape -b 1 --data_type FP16 --scale 255

However, the OpenVINO examples all copy the RGB image into the input buffer as U8 byte values, so I assume the --scale parameter is supposed to apply the division by 255 for me?

My question is, is my conversion correct or am I missing something?
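To make the preprocessing part concrete, this is roughly what I mean (just a sketch; the network loading is omitted, "my_frame.png" and the blob name "input" are my own placeholders, and Option A is only my assumption of how --scale is applied):

    import cv2
    import numpy as np

    image = cv2.imread("my_frame.png")                      # uint8 BGR, HxWx3
    image = cv2.resize(image, (640, 320))                   # my inference size is 320x640
    blob_u8 = image.transpose((2, 0, 1))[np.newaxis, ...]   # NCHW layout, still uint8

    # Option A: IR converted with --scale 255 -> feed raw U8 pixels, as the samples do
    # res = exec_net.infer(inputs={"input": blob_u8})

    # Option B: IR converted without --scale -> feed FP32 values already divided by 255
    blob_f32 = blob_u8.astype(np.float32) / 255.0
    # res = exec_net.infer(inputs={"input": blob_f32})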

The network I use for inference in Keras (TensorFlow) is:
 

        from tensorflow.keras import layers  # import assumed by this snippet (not shown in the original excerpt)

        inputs = layers.Input(shape=self.image_shape, name="input")

        x = layers.Conv2D(filters=32, kernel_size=(3, 3))(inputs)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.MaxPooling2D(pool_size=(2, 2))(x)

        x = layers.Conv2D(filters=16, kernel_size=(3, 3))(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.Conv2D(filters=16, kernel_size=(3, 3))(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.MaxPooling2D(pool_size=(2, 2))(x)

        x = layers.Conv2D(filters=16, kernel_size=(5, 5))(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.MaxPooling2D(pool_size=(2, 2))(x)

        x = layers.Conv2D(filters=16, kernel_size=(5, 5))(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)

        x = layers.Conv2D(filters=self.num_classes, kernel_size=(1, 1))(x)
        x = layers.Flatten()(x)
        x = layers.Softmax()(x)
        x = layers.Reshape((32,72,3),name="output")(x)

 

Shubha_R_Intel
Employee

Dearest Tom:

There will be no more NCSDK2 releases, so please migrate your apps to OpenVINO!

You seem to be doing everything right. The only thing is that the OpenVINO samples load images with OpenCV, so they are in BGR order instead of RGB. You may need to add the --reverse_input_channels option to your Model Optimizer command.
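For example, starting from the command you posted earlier, it would look like this (same flags, with the channel reversal added):

mo_tf.py --input_model tensorflow_nets/NN_inference_only.pb --input input --output output/Reshape -b 1 --data_type FP16 --scale 255 --reverse_input_channels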

You can also try to do it the same way you are used to with NCSDK: pass FP32 input to the Inference Engine.

Hope it helps and thanks for using OpenVino !

Shubha

Deblauwe__Tom
Beginner

Hello,

Thanks for the answer!

Can you give me some pointers or a sample starting point for how to give FP32 input instead of U8? Is that done when running inference in C++, or during the mo_tf.py compilation step? (I'm using the MYRIAD plugin, so I can't use FP32 during compilation; it always says it needs to be FP16.)

Or does it mean I should not use the CNNNetReader wrapper class, or something like that?

Really hoping to make it work...

Deblauwe__Tom
Beginner

After some more tests, I found out how to give FP32 as input.

But the real problem is: if I compile for FP32 and run on the eCPU device, everything is fine, but if I compile for FP16 and run on eMYRIAD, I get totally different results. Of course I can't compare any values directly, because FP16 won't run on the eCPU device. Any ideas how I can pinpoint the problem?

 

 

Shubha_R_Intel
Employee

Dear Deblauwe, Tom, 

You can run FP16 on an Intel GPU device. Would that work for you?

Thanks,

Shubha

Deblauwe__Tom
Beginner

Unfortunately I don't have that setup. But for a TensorFlow network to work with OpenVINO and the Movidius Compute Stick 1, does it have to be trained in float16 in the first place, or does that not matter? Isn't that the problem I'm facing now? The network and its weights are trained in float32 by default. This works on CPU, and it worked on the VPU (Myriad) in the past, but with NCSDK2. Do I now have to retrain it in float16? I didn't find anything about this in the "upgrade from NCSDK to OpenVINO" blog post.

Best regards,

Tom,

Shubha_R_Intel
Employee

Dearest Deblauwe, Tom,

Model Optimizer (and OpenVINO in general) does not care how your model was trained, whether in float32, float16, or anything else. The --data_type switch of Model Optimizer produces IR in the requested format, and for the NCS2 it should be FP16. I see in the title that you are using an old version of OpenVINO; I strongly encourage you to upgrade to 2019 R1, as there have been lots of fixes for Myriad in the new release!

Thanks,

Shubha 

Deblauwe__Tom
Beginner

Hello,

I tried 2019 R1 today, and it is still the same. The results are normal for FP32 + CPU, but with FP16 + the MYRIAD plugin they differ from NCSDK2. It's really strange that this works with your older toolkit and not with OpenVINO... unfortunately it blocks me from moving forward with OpenVINO.

Best regards,

Tom,

Deblauwe__Tom
Beginner

I just tried this on the compute stick 2, and I get the same results...

Would it help if I used a mean value for the image input in my training? Right now I only divide by 255. I saw that some supported TensorFlow-Slim networks use 127.5 as the mean value...
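For example, this is roughly the Model Optimizer command I have in mind for slim-style preprocessing (just a guess, mapping pixels to roughly [-1, 1]; I haven't tried these exact values):

mo_tf.py --input_model tensorflow_nets/NN_inference_only.pb --input input --output output/Reshape -b 1 --data_type FP16 --mean_values [127.5,127.5,127.5] --scale 127.5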

Shubha_R_Intel
Employee

Dear Tom,

At the beginning of the thread, you say:

For example, with NCSDK2 I see softmax values like 0.99245, but with OpenVINO they are more like 0.000425, so a big difference. The predicted class is not entirely wrong overall, so the segmentation is almost correct, but less so than with NCSDK2.

And even in a recent comment you mentioned NCSDK, which I hope you're not using, as it's not supported anymore. What happens when you use OpenVINO 2019.1.0.1? Is inference correct? Since you're concerned about softmax values, your task is a classification problem, correct? Why not try OpenVINO's classification_sample on your NCS2 stick with your chosen model? If IR can be generated for your classification model, then OpenVINO's classification_sample (either Python or C++) should work fine. I looked at your model code above; there's nothing unusual going on there. Make sure you add --reverse_input_channels to your MO command if you're passing in RGB input.

Hope it helps and thanks for using OpenVino !

Shubha

 

Deblauwe__Tom
Beginner

Hi,

2019.1.0.1 is not available for running inference on the Raspberry Pi; the latest there is "l_openvino_toolkit_raspbi_p_2019.1.094.tgz", which I use. The version of the MO I used to create the IR files is "2019.1.0-341-gc9b66a2".

I already tried all the variations you suggested, like using FP32 input values, etc.

I'm doing an image segmentation task and the output has 3 classes; that is why a softmax is used.

 

Shubha_R_Intel
Employee

Dear Deblauwe, Tom,

To be honest, I still don't understand what your exact error is, even after reading all your posts above.

Can you try running \inference_engine\samples\python_samples\segmentation_demo\segmentation_demo.py (for semantic segmentation)? Or the C++ version? Please use the Model Downloader to download all models (python downloader.py --all) and try the models found under semantic_segmentation with segmentation_demo.py.

Here is info about the Python version:

https://docs.openvinotoolkit.org/latest/_inference_engine_ie_bridges_python_sample_segmentation_demo_README.html

And the C++ version:

https://docs.openvinotoolkit.org/latest/_inference_engine_samples_segmentation_demo_README.html

Please try these segmentation samples and models, which are shipped with OpenVINO, on your Raspberry Pi with the NCS2. Do they work properly?
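Roughly like this (the model path and image are placeholders; pick one of the semantic_segmentation models you downloaded):

python3 downloader.py --all
python3 segmentation_demo.py -m <path_to_semantic_segmentation_model>.xml -i <path_to_input_image> -d MYRIAD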

Thanks,

Shubha

 

Deblauwe__Tom
Beginner

Hi,

I tried the segmentation demo with the ADAS road segmentation model, and it gives very comparable results on CPU and on MYRIAD.

So I'm now looking at using U-Net, but is that actually supported on the Movidius sticks? I tried the one from the repo linked on the website (https://github.com/kkweon/UNet-in-Tensorflow/blob/master/train.py), and when running it I always get the output below, so it fails every time. I tried it with stick 1 in my VirtualBox and on the Pi with stick 2; both give the same message, and these error messages are not helpful at all. Would it be possible to give me an XML file with a U-Net architecture that definitely works on the Movidius stick? Intel should provide that for all the "supported" architectures so people can at least compare their own models with a working version.

E: [xLink] [    956431] dispatcherEventReceive:368	dispatcherEventReceive() Read failed -1 | event 0x7f4ca76bbee0 

E: [xLink] [    956431] eventReader:230	eventReader stopped
E: [xLink] [    956432] XLinkReadDataWithTimeOut:1377	Event data is invalid
E: [ncAPI] [    956432] ncFifoReadElem:3313	Packet reading is failed.
All closed now
Traceback (most recent call last):
  File "desktop_movidius_stick_openvino_surface_unet.py", line 106, in <module>
    main()
  File "desktop_movidius_stick_openvino_surface_unet.py", line 90, in main
    res = exec_net.infer(inputs={input_blob: image})
  File "ie_api.pyx", line 146, in openvino.inference_engine.ie_api.ExecutableNetwork.infer
  File "ie_api.pyx", line 179, in openvino.inference_engine.ie_api.InferRequest.infer
  File "ie_api.pyx", line 183, in openvino.inference_engine.ie_api.InferRequest.infer
RuntimeError: Failed to read output from FIFO: NC_ERROR
E: [ncAPI] [    956459] ncFifoDestroy:3136	Failed to write to fifo before deleting it!

 

Deblauwe__Tom
Beginner

Hi,

Regarding your not understanding my initial problem, here is some clarification. The difference is this: with NCSDK2, my softmax values actually add up to roughly 1.0, which is what I expect. With OpenVINO I get very small values. First the log of the working version, then the OpenVINO one:

[2019-05-17 13:57:10.859] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1.00026 xyz: 0.634277 0.359863 0.00612259
[2019-05-17 13:57:10.859] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1.00044 xyz: 0.626465 0.357666 0.0163116
[2019-05-17 13:57:10.860] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1 xyz: 0.857422 0.126587 0.0159912
[2019-05-17 13:57:10.860] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 0.999924 xyz: 0.952637 0.0272827 0.0200043
[2019-05-17 13:57:10.860] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1.0004 xyz: 0.933594 0.0337219 0.0330811
[2019-05-17 13:57:10.861] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1.00012 xyz: 0.654785 0.220093 0.125244
[2019-05-17 13:57:10.861] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1 xyz: 0.821777 0.0529785 0.125244
[2019-05-17 13:57:10.861] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1.00021 xyz: 0.891113 0.0249329 0.0841675
[2019-05-17 13:57:10.861] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1.00043 xyz: 0.759277 0.0376587 0.203491
[2019-05-17 13:57:10.861] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 0.999817 xyz: 0.572266 0.0928345 0.334717
[2019-05-17 13:57:10.861] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 0.999756 xyz: 0.535645 0.203857 0.260254
[2019-05-17 13:57:10.861] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 0.999847 xyz: 0.287354 0.663574 0.0489197
[2019-05-17 13:57:10.861] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1.00006 xyz: 0.17627 0.791992 0.0317993
[2019-05-17 13:57:10.862] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs: 1.00012 xyz: 0.692383 0.223267 0.0844727

And here are the openvino results:

[2019-05-17 13:24:50.082] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.082] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.082] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.083] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.083] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.083] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.083] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.083] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000157118 xyz: 0 0 0.000157118
[2019-05-17 13:24:50.083] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.084] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.084] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000129342 xyz: 0 0 0.000129342
[2019-05-17 13:24:50.084] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.084] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.084] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000158906 xyz: 0 0 0.000158906
[2019-05-17 13:24:50.084] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.085] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.085] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000281334 xyz: 0 0 0.000281334
[2019-05-17 13:24:50.085] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.085] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.085] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000361085 xyz: 6.73532e-05 0 0.000293732
[2019-05-17 13:24:50.085] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.085] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.086] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000294447 xyz: 0 0 0.000294447
[2019-05-17 13:24:50.086] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.086] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.086] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000162363 xyz: 0 0 0.000162363
[2019-05-17 13:24:50.086] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.087] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.087] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.00018847 xyz: 0 0 0.00018847
[2019-05-17 13:24:50.087] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.087] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.087] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000173211 xyz: 0 0 0.000173211
[2019-05-17 13:24:50.087] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.087] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.088] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000154614 xyz: 0 0 0.000154614
[2019-05-17 13:24:50.088] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0 xyz: 0 0 0
[2019-05-17 13:24:50.088] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 7.4625e-05 xyz: 0 7.4625e-05 0
[2019-05-17 13:24:50.088] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000564098 xyz: 0 7.82013e-05 0.000485897
[2019-05-17 13:24:50.089] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.0027895 xyz: 0.000977516 0.00181198 0
[2019-05-17 13:24:50.089] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000154614 xyz: 0 0.000154614 0
[2019-05-17 13:24:50.089] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.00136948 xyz: 0.00136948 0 0
[2019-05-17 13:24:50.089] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.00214434 xyz: 0.000564098 0.00158024 0
[2019-05-17 13:24:50.089] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000112057 xyz: 0 0.000112057 0
[2019-05-17 13:24:50.089] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.00157356 xyz: 0.00157356 0 0
[2019-05-17 13:24:50.090] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.0022049 xyz: 0.0011301 0.00107479 0
[2019-05-17 13:24:50.090] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000341892 xyz: 0 0.000341892 0
[2019-05-17 13:24:50.090] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.00127602 xyz: 0.00127602 0 0
[2019-05-17 13:24:50.090] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000471711 xyz: 0.000260115 0.000211596 0
[2019-05-17 13:24:50.090] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000738621 xyz: 0 0.000738621 0
[2019-05-17 13:24:50.090] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000573158 xyz: 0.000573158 0 0
[2019-05-17 13:24:50.091] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000386 xyz: 0.000157118 0.000228882 0
[2019-05-17 13:24:50.091] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000529766 xyz: 0 0.000529766 0
[2019-05-17 13:24:50.091] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000115991 xyz: 0.000115991 0 0
[2019-05-17 13:24:50.091] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.00113845 xyz: 0.000324011 0.000814438 0
[2019-05-17 13:24:50.091] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.00014925 xyz: 0 0.00014925 0
[2019-05-17 13:24:50.091] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000267267 xyz: 0.000267267 0 0
[2019-05-17 13:24:50.092] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000769854 xyz: 0.00054884 0.000221014 0
[2019-05-17 13:24:50.092] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000444055 xyz: 0.000299215 0.000144839 0
[2019-05-17 13:24:50.092] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.0018425 xyz: 0.0018425 0 0
[2019-05-17 13:24:50.092] [SurfaceNeuralNetworkSegmentation] [warning] SUM of probs  : 0.000386 xyz: 0.000386 0 0

I think it is pretty clear, right?

Thanks for your continued interest! 

Best regards,

Tom,

Shubha_R_Intel
Employee

Dear Deblauwe, Tom,

First, please stop using the NCS1; Intel is not supporting it anymore. Now, have you tried the NCS2 outside of VirtualBox? The errors you posted above are indeed bizarre, but rather than adding layers upon layers of complexity, please just try the NCS2 on a native OS that supports it, i.e. Linux or Windows. For now (during troubleshooting), please avoid using a VM or a container.

Next, looking at those softmax values: yes, the OpenVINO ones look different from the NCSDK2 ones, and I understand your concern. The OpenVINO probabilities do not add up to 1, which doesn't seem correct.

To answer your question, there is no reason a U-Net architecture won't work on an NCS2 stick, unless it contains specific layers that are not supported on Myriad. Not all layers are supported on Myriad, even though they may be supported on the CPU and/or GPU; the documentation lists which layers are supported per device. But please, I urge you to start with the basics: run the segmentation samples I pointed you to above with OpenVINO-supported models, without a virtual machine or Docker. Does that step work for you?

As for the softmax issue, if you can build me a short and sweet sample (with a simple model) which demonstrates the issue, I will file a bug on your behalf.

Thanks for using OpenVino !

Shubha

Deblauwe__Tom
Beginner

Hi!

Just to clarify: this whole thread exists so that the people I work for can transition from the old NCSDK2 to OpenVINO, so we will definitely stop using NCSDK2 once we can run our models on OpenVINO with the same output.

I already ran the segmentation examples, as I stated in my earlier post:

I tried the segmentation demo with the ADAS road segmentation model, and it gives very comparable results on CPU and on MYRIAD.

So therefore I think my setup is OK and working.

Trying it in a native Linux or Windows environment will take some time for me. It is not clear to me what the advantage would be, because I'm already running inference on the Raspberry Pi, which is a supported environment.

I will try to make a shorter example than the one in my first post, where I already showed the network I'm using and with which you could reproduce the problem. Do you want the compiled network as IR files, or just the network as a TensorFlow PB file plus the conversion command I'm using?

Best regards,

Tom

Deblauwe__Tom
Beginner

Well, it's actually the softmax layer at the end that does not give the correct results. If I do the softmax myself, starting from the results I get when I change this:

        x = layers.Conv2D(filters=self.num_classes, kernel_size=(1, 1))(x)
        x = layers.Flatten()(x)
        x = layers.Softmax()(x)
        x = layers.Reshape((32,72,3),name="output")(x)

...to the single line below, and then do the softmax operation myself on the host CPU, I get correct results:

        x = layers.Conv2D(filters=self.num_classes, kernel_size=(1, 1), name="output")(x)
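For completeness, this is roughly the host-side softmax I do now (my own NumPy sketch; with the Inference Engine's NCHW output the class axis is the channel axis):

    import numpy as np

    def softmax(logits, axis):
        # numerically stable softmax over the given axis
        e = np.exp(logits - logits.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    # 'logits' is the raw output of the final 1x1 Conv2D layer; for an NCHW blob
    # of shape (1, 3, 32, 72) the class axis is 1:
    # probs = softmax(logits, axis=1)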

So... yeah, that is most likely a bug in the Myriad plugin...

Also curious: when I was experimenting and removed the Flatten layer, I got this error when running inference, which I have seen reported on this forum by other people as well:

RuntimeError: Error reading network: Unsupported Activation layer type: exp

Regards,

Tom,

Shubha_R_Intel
Employee

Dear Deblauwe, Tom,

If you can attach a zip file containing 1) a simple model which demonstrates the problem and 2) a simple inference script which demonstrates it, I can file a bug on your behalf. The problem we are referring to is the Myriad plugin's improper handling of softmax. I will PM you so that you can send the *.zip to me privately.

Thanks,

Shubha

Shubha_R_Intel
Employee

Dear Deblauwe, Tom,

I wanted to let you know that another forum poster has observed that softmax values don't add up to 1 after inference and I have filed a bug on this issue. As for the "exp" exception you are getting, the usual method to fix that is "model cutting". 

Please see my response here. Unfortunately my trick didn't work for that customer and I ended up filing a bug, but you can try a similar approach. Look at the poster's *.xml file to see how I came up with --input <blah blah>.

The "RuntimeError: Error reading network: Unsupported Activation layer type: exp" issue was supposed to have been fixed in 2019 R1.1.
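If you want to try the cutting approach in the meantime, for your model it would look something like the command below; the --output value is a placeholder for whatever your final Conv2D node is called in the frozen graph (in your Keras workaround you named it "output"):

mo_tf.py --input_model tensorflow_nets/NN_inference_only.pb --input input --output <name_of_final_conv_node> -b 1 --data_type FP16 --scale 255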

Thanks,

Shubha

Deblauwe__Tom
Beginner

Hi, 

Thanks for the message!

We are really eager to get this all working, because we are waiting on this solution to know whether or not we can use the Compute Stick 2 as our AI platform.

Best regards

Tom

Shubha_R_Intel
Employee

Dear Tom,

Of course I understand your eagerness, and I'm really sorry that we've had so much back and forth on this issue. I promise you that I have filed a bug on the "softmax probabilities not adding up to 1 after inference" issue. I will keep you posted.

Thanks for your patience !

Shubha
