Hello,
I'm using a TensorFlow-trained model, which is just an FCN without a dense layer at the end. It is trained patch-based, but inference runs at 320x640. It worked correctly with NCSDK2: I could convert it without any problems and compile it using mvNCCompile. However, now it runs more or less okay on OpenVINO R5 with the Myriad plugin, but I see some differences in the output values.
For example, on NCSDK2 I see softmax values like "0.99245", but on OpenVINO they are more like "0.000425", so a big difference. However, the predicted class overall is not really wrong, so the segmentation is almost correct, just less accurate than with NCSDK2.
For training, the input of my network is the RGB values divided by 255. So with NCSDK2 I could simply create CV_32FC3 images, divide each value by 255, and pass them to the NCSDK API, which accepts FP32 float images, and that worked fine.
Now with OpenVINO, I use this command line with the mo_tf.py converter:
mo_tf.py --input_model tensorflow_nets/NN_inference_only.pb --input input --output output/Reshape -b 1 --data_type FP16 --scale 255
However, the OpenVINO examples all copy the RGB image into the input buffer as U8 byte values, so I assume the --scale parameter should take care of the same division?
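If it helps to sanity-check the idea, here is a minimal numpy sketch of the assumption being made: that an IR compiled with --scale 255 divides the raw U8 input by 255 internally, so feeding U8 pixel values should be equivalent to the NCSDK2-style FP32 preprocessing. (The array contents are made-up example pixels, not from the actual model.)

```python
import numpy as np

# Simulated 2x2 RGB patch as raw U8 pixel values (made-up example data).
u8_img = np.array([[[10, 20, 30], [40, 50, 60]],
                   [[70, 80, 90], [100, 110, 120]]], dtype=np.uint8)

# Path 1: NCSDK2-style preprocessing -- convert to CV_32FC3-like FP32
# and divide by 255 before handing the buffer to the API.
fp32_scaled = u8_img.astype(np.float32) / 255.0

# Path 2: OpenVINO-style -- copy raw U8 values into the input buffer and
# rely on the divisor that "--scale 255" is assumed to bake into the IR.
ir_scaled = u8_img.astype(np.float32) / 255.0

# Under that assumption, both paths produce the same network input.
assert np.allclose(fp32_scaled, ir_scaled)
print("preprocessing paths match")
```

So if the outputs still differ, the preprocessing itself is probably not the culprit.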
My question is, is my conversion correct or am I missing something?
The network I use for inference in Keras (TensorFlow) is:
from tensorflow.keras import layers

inputs = layers.Input(shape=self.image_shape, name="input")
x = layers.Conv2D(filters=32, kernel_size=(3, 3))(inputs)
x = layers.BatchNormalization()(x)
x = layers.Activation('relu')(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Conv2D(filters=16, kernel_size=(3, 3))(x)
x = layers.BatchNormalization()(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters=16, kernel_size=(3, 3))(x)
x = layers.BatchNormalization()(x)
x = layers.Activation('relu')(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Conv2D(filters=16, kernel_size=(5, 5))(x)
x = layers.BatchNormalization()(x)
x = layers.Activation('relu')(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Conv2D(filters=16, kernel_size=(5, 5))(x)
x = layers.BatchNormalization()(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters=self.num_classes, kernel_size=(1, 1))(x)
x = layers.Flatten()(x)
x = layers.Softmax()(x)
x = layers.Reshape((32, 72, 3), name="output")(x)
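Given the discrepancy in values, one quick diagnostic (a hypothetical check, not part of the original post) is to verify whether the per-pixel class probabilities in the (32, 72, 3) output actually sum to 1, since that is what the filed "SoftMax probabilities not adding to 1" bug is about:

```python
import numpy as np

def check_softmax_sums(output, atol=1e-3):
    """Return True if probabilities at each spatial location sum to ~1.

    `output` is assumed to be shaped (32, 72, 3), matching the final
    Reshape layer of the network above.
    """
    sums = output.sum(axis=-1)  # sum over the class axis at each pixel
    return np.allclose(sums, 1.0, atol=atol)

# Well-formed softmax output: uniform 1/3 probability per class.
good = np.full((32, 72, 3), 1.0 / 3.0, dtype=np.float32)
print(check_softmax_sums(good))   # True

# Output mimicking the suspiciously small values seen on OpenVINO.
bad = np.full((32, 72, 3), 0.000425, dtype=np.float32)
print(check_softmax_sums(bad))    # False
```

Running this on the actual Myriad output would show whether the values are a broken softmax or just a rescaling.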
Dear Tom,
Of course I understand your eagerness, and I'm really sorry that we've had so much back and forth on this issue. I promise you that I filed a bug on the "SoftMax probabilities not adding to 1 after inference" issue. I will keep you posted.
Thanks for your patience !
Shubha
Hello,
It has been a week now. I was wondering whether you have a solution, or are working on one. If you already have something like a patch to the open-sourced OpenVINO R1.1, it's no problem for me to compile the Myriad plugin from the dldt repo and try it out.
Best regards,
Tom,
Dearest Deblauwe, Tom,
A bug on the SoftMax issue has definitely been filed, and as I mentioned before, you are not the only OpenVINO community member to find this bug. I will certainly keep you informed on the progress. Typically we do not give out patches, but I can certainly check on this.
Thanks,
Shubha
Hi Shubha,
How is the progress? Can you give me a timeframe for a solution?
Best regards,
Tom,
Dear Deblauwe, Tom,
Though I can't commit to dates, OpenVINO R2 should be arriving "any day now".
Thanks,
Shubha