How to implement the FusedBatchNormV3 function as a custom layer for a custom DenseNet?
First, I built my custom model, a 13-layer DenseNet, and tried to convert it to IR files, which failed with the following error:
[ ERROR ] List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]     FusedBatchNormV3 (3)
[ ERROR ]         batch_normalization/FusedBatchNormV3
[ ERROR ]         batch_normalization_1/FusedBatchNormV3
[ ERROR ]         batch_normalization_2/FusedBatchNormV3
[ ERROR ] Part of the nodes was not converted to IR. Stopped.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #24.
After registering FusedBatchNormV3 as a custom layer with the Model Optimizer, it works! My model can now be converted to .bin and .xml files.
Now I need an Inference Engine custom layer implementation for the Intel® CPU. The URL gives me a sample that defines the cosh function in ext_cosh.cpp, but I don't know how to implement the FusedBatchNormV3 function in a .cpp file. Does anyone have a solution for this?
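For reference, at inference time FusedBatchNormV3 computes, per channel, y = gamma * (x - mean) / sqrt(variance + epsilon) + beta, using the moving statistics saved during training. Below is a minimal, self-contained sketch of just that math in plain C++ (NCHW layout assumed; all function and parameter names are my own, not the Inference Engine extension API, so it would still need to be wrapped in an extension class like the ext_cosh.cpp sample):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the per-channel math FusedBatchNormV3 performs at inference time.
// Assumes NCHW layout. Names here are illustrative, not the extension API.
void fused_batch_norm_inference(const std::vector<float>& input,
                                std::vector<float>& output,
                                const std::vector<float>& gamma,     // scale
                                const std::vector<float>& beta,      // offset
                                const std::vector<float>& mean,      // moving mean
                                const std::vector<float>& variance,  // moving variance
                                std::size_t channels,
                                std::size_t spatial_size,            // H * W
                                float epsilon = 1e-3f) {
    output.resize(input.size());
    const std::size_t batch = input.size() / (channels * spatial_size);
    for (std::size_t n = 0; n < batch; ++n) {
        for (std::size_t c = 0; c < channels; ++c) {
            // Fold normalization into one scale and one shift per channel:
            // y = x * scale + shift, where scale = gamma / sqrt(var + eps).
            const float scale = gamma[c] / std::sqrt(variance[c] + epsilon);
            const float shift = beta[c] - scale * mean[c];
            const std::size_t base = (n * channels + c) * spatial_size;
            for (std::size_t i = 0; i < spatial_size; ++i)
                output[base + i] = input[base + i] * scale + shift;
        }
    }
}
```

The fold into a single scale and shift per channel is the usual way this op is implemented for inference, since mean and variance are constants at that point.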
I use Anaconda and created two environments, TF 1.15 and TF 2.0: my custom model was built and trained on TF 2.0, and I converted it on TF 1.15.