I am trying to run DeepLab v3+, exported from MATLAB via the ONNX exporter, on OpenVINO 2019 R3.1.
The Model Optimizer runs without errors, and the model works fine on CPUs with AVX2, on GPU, and on Myriad.
However, it produces wrong results on CPUs with only SSE2, and it causes a segmentation fault on AVX-512.
Another U-Net-like network runs fine on both SSE2 and AVX-512, so I suspect some layer implementation may be wrong for this model.
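To make "wrong result" concrete, here is a minimal sketch of how the outputs from two devices could be compared numerically; the helper name and the tolerance are my own choices, and the arrays stand in for output blobs dumped from the two runs:

```python
import numpy as np

def compare_outputs(ref, test, atol=1e-4):
    """Compare two output tensors; return the max absolute
    difference and whether they match within tolerance."""
    ref = np.asarray(ref, dtype=np.float32)
    test = np.asarray(test, dtype=np.float32)
    diff = float(np.max(np.abs(ref - test)))
    return diff, bool(np.allclose(ref, test, atol=atol))

# Stand-in segmentation outputs (batch, classes, H, W):
# identical maps should match, a perturbed map should not.
ref = np.zeros((1, 21, 8, 8), dtype=np.float32)
bad = ref.copy()
bad[0, 0, 0, 0] = 1.0

print(compare_outputs(ref, ref))  # matches
print(compare_outputs(ref, bad))  # diverges
```

In my case, comparing the AVX2 output (which looks correct) against the SSE2 output this way shows large per-element differences rather than small numerical noise.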
I have attached the XML and BIN files.