I am using the UpSampling2D layer in Keras, which maps to the ResizeNearestNeighbor op in TensorFlow. The Model Optimizer is able to convert it, but when I execute with OpenVINO in C++ it gives me the following error:
`Unsupported primitive of type: Resample name: up_sampling2d_4/ResizeNearestNeighbor`
I find this strange, as this layer is listed as supported on this page (see entry 49): https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#tensorflow-supported-layers
I am using Ubuntu Linux on an Intel i7-6700HQ with R5 2018 (2018.5.445) of the OpenVINO SDK.
Any assistance on this matter would be appreciated.
Can you upload the following:
* Command for model optimizer conversion:
`python3 mo_tf.py --input_model /xxx/transfer_learning_model_april_2019.pb --batch 1 --output_dir /xxx/models/`
* Files are attached
* Application I'm running is FAST https://github.com/smistad/fast, the OpenVINO specific code is here: https://github.com/smistad/FAST/blob/ca95829b949d72d5a5cc34bd4e58deefc10097b6/source/FAST/Algorithms...
* The error only occurs when the device is CPU; with GPU it works.
Please modify one of the OpenVINO samples to resemble your code.
Or maybe one of our existing OpenVINO samples will work out of the box (but I doubt it).
If you can supply a main.cpp containing the code logic that reproduces the issue (and also works within the OpenVINO sample infrastructure), that would expedite debugging.
Dear Erik, what om77 says also often does the trick for the 'Unsupported primitive' error. Just add `-l <PATH_TO cpu_extension.dll>` (or `cpu_extension.so`).
Hope it helps,
Hey, I solved this problem using Shubha R. (Intel)'s method.
Find the extension DLL file in the installation directory.
I installed to the default path:
`cpu_dll_path = r"C:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\inference_engine\bin\intel64\Release\cpu_extension_avx2.dll"`
Then call the class method to load the extension, and `plugin.load` will run without error.
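The steps above can be sketched in Python. This is a minimal sketch, not official API usage: the file names come from the default install paths mentioned in this thread, and the commented-out loading call assumes the `IEPlugin` API from the 2019 releases.

```python
import os
import platform

def cpu_extension_path(ie_bin_dir):
    """Build the path to the CPU extension library for the current OS.

    `ie_bin_dir` is the inference_engine binary/library directory of
    your OpenVINO install (the exact layout varies by release).
    """
    if platform.system() == "Windows":
        # Windows releases ship e.g. cpu_extension_avx2.dll
        name = "cpu_extension_avx2.dll"
    else:
        # Linux releases ship e.g. libcpu_extension_avx2.so
        name = "libcpu_extension_avx2.so"
    return os.path.join(ie_bin_dir, name)

# With OpenVINO installed, you would then load the extension, e.g.:
#   plugin = IEPlugin('CPU')
#   plugin.add_cpu_extension(cpu_extension_path(ie_bin_dir))
```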
I was able to fix it with Shubha's suggestion as well:
```cpp
#ifdef WIN32
auto extension_ptr = make_so_pointer<::InferenceEngine::IExtension>("libcpu_extension.dll");
#else
auto extension_ptr = make_so_pointer<::InferenceEngine::IExtension>("libcpu_extension.so");
#endif
m_inferencePlugin->AddExtension(extension_ptr);
```
Now I have another issue, though: on the GPU this network model gives the wrong answer, while on the CPU it now works.
Any thoughts on how to fix this in Ubuntu Linux (either 16 or 18)?
I have `OpenVINO_2019.3.334 (R3)` installed at `/opt/intel/openvino_fpga_2019.3.334/deployment_tools/inference_engine/lib/intel64`, where I have `libcpu_extension_avx2.so`, `libcpu_extension_avx512.so`, and `libcpu_extension_sse4.so` installed. I've tried adding this path to my `$LD_LIBRARY_PATH`, but to no avail. I've also tried re-calling the plugin again to reload the plugin path, as follows:
```python
self.plugin = IEPlugin('CPU')
cpu_ext_path = r"/opt/intel/openvino_fpga_2019.3.334/deployment_tools/inference_engine/lib/intel64/libcpu_extension_avx2.so"
self.plugin.add_cpu_extension(cpu_ext_path)
```
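Since the Linux install ships three ISA variants of the extension, a small helper can pick the widest one the host CPU actually supports. A sketch under assumptions: you pass in the CPU flag set yourself (e.g. parsed from `/proc/cpuinfo`), and the library names are the three listed above.

```python
import os

def pick_cpu_extension(lib_dir, cpu_flags):
    """Choose the most specific CPU extension variant the host supports.

    `cpu_flags` is a set of lowercase CPU feature flags, e.g. parsed
    from the "flags" line of /proc/cpuinfo on Linux.
    """
    if "avx512f" in cpu_flags:
        name = "libcpu_extension_avx512.so"
    elif "avx2" in cpu_flags:
        name = "libcpu_extension_avx2.so"
    else:
        # The SSE4 build is the safe fallback on older 64-bit Intel CPUs
        name = "libcpu_extension_sse4.so"
    return os.path.join(lib_dir, name)
```

The returned path can then be passed to `self.plugin.add_cpu_extension(...)` as in the snippet above.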
Unfortunately, I still get this error:
`RuntimeError: Unsupported primitive of type: Resample name: ssh_c3_up`
The original layer is of type "Upsampling", which gets renamed to "Resample" per the remapping semantics for MXNet described in: