I am trying to run inference on a model in OpenVINO, and all of the model's layers are supported. After converting the model to the Intermediate Representation, I run inference with the Inference Engine (Python API), but the results from OpenVINO differ from those obtained with TensorFlow in almost 50% of the cases. Note that I am operating on point clouds and using a PointNet architecture. I have shared all the files necessary to reproduce the error.
To reproduce the error, simply run the attached Python file with python3.
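A minimal way to quantify such a mismatch is to compare the two frameworks' output tensors element-wise. This is only a sketch, assuming both outputs are available as NumPy arrays; the names `tf_out` and `ov_out` and the tolerance are placeholders, not part of the original files:

```python
import numpy as np

def compare_outputs(tf_out, ov_out, atol=1e-5):
    """Return the maximum absolute difference between two output
    tensors and whether it falls within the given tolerance."""
    diff = float(np.max(np.abs(np.asarray(tf_out) - np.asarray(ov_out))))
    return diff, diff <= atol

# Synthetic data standing in for real TensorFlow / OpenVINO outputs
a = np.random.rand(1, 1024, 2).astype(np.float32)
diff, ok = compare_outputs(a, a)
print(diff, ok)  # identical inputs: difference is exactly 0.0
```

Counting how many elements differ (e.g. `np.sum(~np.isclose(tf_out, ov_out))`) helps distinguish a systematic mismatch from a few borderline values flipping.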
I saw your message on LinkedIn and posted a status update on what I did today on your issue. Tomorrow I will run the test and report what I find out about the outputs.
After some further analysis and testing with OpenVINO R5, I have gotten closer to the issue and now have your program running with no differences in output. I did this by disabling the fusing optimization that the Model Optimizer applies when converting your model:
python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py --input_model frozen_classifier.pb --disable_fusing
Try this and rerun your program: the difference between mask2 and mask should be 0, and inference time is about the same as when fusing was enabled and the results differed. I will look further into why fusing caused this issue, but for now this should get you on your way to completing your solution.
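For intuition on why fusing can change results at all: fusing folds linear operations (e.g. a scale or batch-norm) into the preceding layer's weights, which reassociates floating-point arithmetic. In float32 this can shift outputs by a few ULPs, and a downstream hard threshold (such as the comparison producing your mask) can then flip individual elements. A small NumPy sketch of the effect, with made-up values, not taken from your model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000).astype(np.float32)   # activations
w = rng.standard_normal(1000).astype(np.float32)   # weights
gamma = np.float32(1.7)                            # BN-style scale
mean = np.float32(0.3)
var = np.float32(2.1)
eps = np.float32(1e-5)
scale = gamma / np.sqrt(var + eps)

# Unfused: linear op first, then normalization applied to its output
unfused = (x * w - mean) * scale
# Fused: scale folded into the weights ahead of time
fused = x * (w * scale) - mean * scale

# The two orderings are mathematically identical but may differ
# by rounding error in float32
print(np.max(np.abs(unfused - fused)))
```

The difference here is tiny, but if the network ends in something like `mask = output > threshold`, values sitting right at the threshold can land on opposite sides in the fused and unfused graphs.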