I am looking for a tutorial or code snippets for creating a custom extension and custom layer for the Inference Engine. I want to run a custom layer on a device other than the CPU. I tried following the hello_shape_infer_ssd sample code, but couldn't get it working. Is there any other example implementation (C++ or Python)?
Please study the following files, which together show how the ArgMax layer is supported end to end:
- deployment_tools\model_optimizer\extensions\front\caffe\argmax_ext.py — the Model Optimizer front extractor that reads the layer's parameters from the Caffe model
- deployment_tools\model_optimizer\extensions\ops\argmax.py — the op definition, including shape inference
- deployment_tools\inference_engine\src\extension\ext_argmax.cpp — the Inference Engine CPU extension implementing the kernel
Also see the Caffe ArgMax layer documentation: http://caffe.berkeleyvision.org/tutorial/layers/argmax.html
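To make the layer semantics concrete before wiring up an extension, here is a minimal NumPy sketch of what Caffe's ArgMax layer computes, following the `top_k`, `axis`, and `out_max_val` parameters described in the Caffe documentation. This is an illustrative reference implementation, not the Model Optimizer or Inference Engine API; the function name and exact tie-breaking behavior are assumptions.

```python
import numpy as np

def caffe_argmax(x, top_k=1, axis=None, out_max_val=False):
    """Reference sketch of Caffe ArgMax semantics (illustrative only)."""
    if axis is not None:
        # Sort descending along the given axis, keep the top_k indices.
        order = np.argsort(-x, axis=axis, kind='stable')
        sl = [slice(None)] * x.ndim
        sl[axis] = slice(0, top_k)
        idx = order[tuple(sl)]
        if out_max_val:
            # With axis set, out_max_val outputs the max values themselves.
            return np.take_along_axis(x, idx, axis=axis)
        return idx
    # axis unset: operate on the flattened per-batch data.
    flat = x.reshape(x.shape[0], -1)
    idx = np.argsort(-flat, axis=1, kind='stable')[:, :top_k]
    if out_max_val:
        # Output shape (N, 2, top_k): indices in channel 0, values in channel 1.
        vals = np.take_along_axis(flat, idx, axis=1)
        return np.stack([idx.astype(x.dtype), vals], axis=1)
    return idx
```

Whatever device you target, your custom kernel and the op's shape-inference function must agree with these semantics (output shape depends on `top_k`, `axis`, and `out_max_val`).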