Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

BiasAdd operation has unsupported `data_format`=NCHW

Becktor__Jonathan

Hi,

I'm trying to convert a TensorFlow 1.14 frozen graph.

The network has a couple of convolutions followed by batch norm.

The TF format is channels-first, i.e. (N, C, H, W).
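Roughly, the layers are built like this (a simplified sketch rather than our actual code; layer names, filter counts, and the freezing step are just illustrative):

import tensorflow as tf  # TF 1.14

# NCHW placeholder, matching --input_shape "[1, 2, 400, 400]"
x = tf.placeholder(tf.float32, [1, 2, 400, 400], name="FacetDots")
net = x
for i in range(6):
    scope = "Conv2D" if i == 0 else "Conv2D_%d" % i
    net = tf.layers.conv2d(net, filters=32, kernel_size=3, padding="same",
                           data_format="channels_first",   # channels-first conv
                           use_bias=True, name=scope)      # bias -> BiasAdd node
    net = tf.layers.batch_normalization(net, axis=1)       # axis=1 because NCHW
# the variables are then frozen into frozen_model.pb with
# tf.graph_util.convert_variables_to_constants(...)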


The error message I get:

[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      BiasAdd (6)
[ ERROR ]          Conv2D/Conv2D/BiasAdd
[ ERROR ]          Conv2D_1/Conv2D/BiasAdd
[ ERROR ]          Conv2D_2/Conv2D/BiasAdd
[ ERROR ]          Conv2D_3/Conv2D/BiasAdd
[ ERROR ]          Conv2D_4/Conv2D/BiasAdd
[ ERROR ]          Conv2D_5/Conv2D/BiasAdd


The converter is run with:

python mo_tf.py --input_model frozen_model.pb --input FacetDots 
--input_shape "[1, 2, 400, 400]" --disable_nhwc_to_nchw --data_type FP32 --log_level DEBUG

Earlier, on OpenVINO 2019.2, I also got the following error:

BiasAdd operation has unsupported `data_format`=NCHW

 

Becktor__Jonathan

Running without --log_level DEBUG, the error changes to:
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      BiasAdd (6)
[ ERROR ]          Conv2D/Conv2D/BiasAdd
[ ERROR ]          Conv2D_1/Conv2D/BiasAdd
[ ERROR ]          Conv2D_2/Conv2D/BiasAdd
[ ERROR ]          Conv2D_3/Conv2D/BiasAdd
[ ERROR ]          Conv2D_4/Conv2D/BiasAdd
[ ERROR ]          Conv2D_5/Conv2D/BiasAdd
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.
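
For reference, here is a quick way to check which data_format the offending nodes actually carry in the frozen .pb (a small TF 1.x sketch; the file name matches the command above):

import tensorflow as tf  # TF 1.14

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# print the layout attribute of every Conv2D / BiasAdd node
for node in graph_def.node:
    if node.op in ("Conv2D", "BiasAdd"):
        fmt = node.attr["data_format"].s.decode() if "data_format" in node.attr else "(not set)"
        print(node.name, node.op, fmt)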

Shubha_R_Intel
Employee

Dear Becktor, Jonathan,

python mo_tf.py --input_model frozen_model.pb --input FacetDots --input_shape "[1, 2, 400, 400]" --disable_nhwc_to_nchw --data_type FP32 --log_level DEBUG

Earlier I also got the following error:
BiasAdd operation has unsupported `data_format`=NCHW

 

Why did you use --disable_nhwc_to_nchw? TensorFlow is actually NHWC, though the Inference Engine converts everything to NCHW. If you are passing a frozen TensorFlow .pb into Model Optimizer, you should leave --input_shape in [N, H, W, C] order; --input_shape "[1, 2, 400, 400]" is incorrect.
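
For a plain NHWC frozen graph the command would look more like the following (same shape as above with the channel dimension moved last; adjust to your actual model):

python mo_tf.py --input_model frozen_model.pb --input FacetDots --input_shape "[1, 400, 400, 2]" --data_type FP32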

Please post your thoughts here. Glad to help!

Shubha

Becktor__Jonathan

Hey, thanks for the reply.

I don't think I made it clear.

We run our TensorFlow layers with the channels-first flag, so our model is channels-first (N, C, H, W), which is supported on GPU and with a TensorFlow build that uses the MKL binaries. That, in turn, is why I use the --disable_nhwc_to_nchw flag with Model Optimizer.

It seems to work for most layers except the BiasAdd operation.

Also, what is the timeline for converting the new TensorFlow 2.0 SavedModel format?

 

Jonathan

Shubha_R_Intel
Employee

Dear Becktor, Jonathan,

OK, now I understand the issue. Thanks for explaining. Well, R3 was just released today. Can you try it again on 2019 R3? If the problem persists, let me know and I will file a bug, but please attach your model as a *.zip to this ticket and give me your full Model Optimizer command. As for TensorFlow 2.0, we hope that full support will be in R4, scheduled for release toward the end of the year or early next year.

Hope it helps,

Thanks,

Shubha

 

Becktor__Jonathan

Dear Shubha,

2019 R3 seems to have fixed it!

Thanks!

Jonathan

Shubha_R_Intel
Employee

Dearest Becktor, Jonathan,

Bravo! Thanks for sharing your great news with the OpenVINO community.

Thanks,

Shubha

 

Bhattacharjea__Rajib

I had the same issue on Linux, and updating to the 2019.3 release fixed it for me. Thanks to Shubha for the tip.
