Intel® Distribution of OpenVINO™ Toolkit

[ ERROR ] Graph is not supported on FPGA plugin due to existance of layer (Name:conv1, Type: Convolution)

CJian1
Beginner

Hi, I am running into a very strange error when trying to run inference on an FPGA with a very simple CNN model. The model contains only one convolutional layer and one fully connected layer. Inference runs successfully in "CPU" and "HETERO:FPGA,CPU" mode, but I get the following error in "FPGA" mode:

[ INFO ] InferenceEngine:
        API version ............ 1.2
        Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
        API version ............ 1.2
        Build .................. dliaPlugin
        Description ....... dliaPlugin
[ INFO ] Loading network files:
        /home/chao/cnn.xml
        /home/chao/cnn.bin
[ INFO ] Preparing input blobs
[ INFO ] input dimensions: 1x48x255x255
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Output dimensions are: 1x558
[ INFO ] Loading model to the plugin
[ ERROR ] Graph is not supported on FPGA plugin due to existance of layer (Name:conv1, Type: Convolution)
 in topology. Most likely you need to use heterogeneous plugin instead of FPGA plugin directly.
 
The OpenVINO version I am using is R3 (2018.3.343), and I am running FPGA inference on a PAC. Previously I was able to run AlexNet (without the softmax layer) entirely on the FPGA, so I have no idea why this simple model cannot. The network description XML file is below, followed by a sketch of how I load it; please let me know if you need more info (e.g., the bin file, the inference engine app, input files, etc.).
 
<?xml version="1.0" ?>
<net batch="1" name="model" version="2">
	<layers>
		<layer id="0" name="data" precision="FP32" type="Input">
			<output>
				<port id="0">
					<dim>1</dim>
					<dim>48</dim>
					<dim>255</dim>
					<dim>255</dim>
				</port>
			</output>
		</layer>
		<layer id="1" name="conv1" precision="FP32" type="Convolution">
			<data dilation-x="1" dilation-y="1" group="1" kernel-x="6" kernel-y="6" output="32" pad-x="0" pad-y="0" stride="1,1,3,3" stride-x="3" stride-y="3"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>48</dim>
					<dim>255</dim>
					<dim>255</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>32</dim>
					<dim>84</dim>
					<dim>84</dim>
				</port>
			</output>
			<blobs>
				<weights offset="0" size="221184"/>
				<biases offset="221184" size="128"/>
			</blobs>
		</layer>
		<layer id="2" name="relu1" precision="FP32" type="ReLU">
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>84</dim>
					<dim>84</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>32</dim>
					<dim>84</dim>
					<dim>84</dim>
				</port>
			</output>
		</layer>
		<layer id="3" name="fc6" precision="FP32" type="FullyConnected">
			<data out-size="558"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>32</dim>
					<dim>84</dim>
					<dim>84</dim>
				</port>
			</input>
			<output>
				<port id="3">
					<dim>1</dim>
					<dim>558</dim>
				</port>
			</output>
			<blobs>
				<weights offset="221312" size="503967744"/>
				<biases offset="504189056" size="2232"/>
			</blobs>
		</layer>
		<layer id="4" name="relu6" precision="FP32" type="ReLU">
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>558</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>558</dim>
				</port>
			</output>
		</layer>
		<layer id="5" name="prob" precision="FP32" type="SoftMax">
			<data axis="1"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>558</dim>
				</port>
			</input>
			<output>
				<port id="1">
					<dim>1</dim>
					<dim>558</dim>
				</port>
			</output>
		</layer>
	</layers>
	<edges>
		<edge from-layer="0" from-port="0" to-layer="1" to-port="0"/>
		<edge from-layer="1" from-port="3" to-layer="2" to-port="0"/>
		<edge from-layer="2" from-port="1" to-layer="3" to-port="0"/>
		<edge from-layer="3" from-port="3" to-layer="4" to-port="0"/>
		<edge from-layer="4" from-port="1" to-layer="5" to-port="0"/>
	</edges>
</net>
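
For completeness, here is roughly how I load the model and select the device (a simplified sketch of the API 1.2 C++ flow, not my exact app; the device string is the only thing I change between runs):

// Simplified sketch (OpenVINO 2018 R3, Inference Engine API 1.2).
#include <inference_engine.hpp>

int main() {
    using namespace InferenceEngine;

    // Read the IR generated by the Model Optimizer.
    CNNNetReader reader;
    reader.ReadNetwork("/home/chao/cnn.xml");
    reader.ReadWeights("/home/chao/cnn.bin");
    CNNNetwork network = reader.getNetwork();
    network.setBatchSize(1);

    // "CPU" and "HETERO:FPGA,CPU" both work here; "FPGA" alone does not.
    InferencePlugin plugin = PluginDispatcher({""}).getPluginByDevice("FPGA");

    // This is the call that throws "Graph is not supported on FPGA plugin ...".
    ExecutableNetwork executable = plugin.LoadNetwork(network, {});

    InferRequest request = executable.CreateInferRequest();
    // ... fill the input blob, call request.Infer(), and read the output blob ...
    return 0;
}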

 

 
Thanks,
Chao
 
Mark_L_Intel1
Moderator

Hi Chao,

There are several issues here.

Mark

CJian1
Beginner

Hi Mark,

Thanks for your reply. I checked the page you referred to, but I did not find any information indicating that the convolutional layer in my CNN model is not supported on the FPGA.

The CNN model is really just an extremely simplified LeNet-style model with an ordinary convolutional layer and a fully connected layer. The only uncommon part is probably the input layer, because the input has 48 channels instead of the 3 channels a normal image input would have. I am not sure, but could the FPGA plugin be limiting the input shape to 3 or fewer channels? Could you let me know if this is the case?

I appreciate any help you could provide!

Regards,

Chao

CJian1
Beginner

Hi Mark,

OK, I just made it work on the FPGA by changing the "stride" of the convolutional layer from (3,3) to (4,4)... I don't know why that worked. It seems there may be some limitations in the FPGA plugin that are not very well documented... or maybe I was just not able to find them on the website ;p
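
(For reference: with the 255x255 input, 6x6 kernel, and no padding, the conv1 output per side is floor((255-6)/3)+1 = 84 with the old stride, which matches the 84x84 in the IR above, and floor((255-6)/4)+1 = 63 with the new stride.)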

Anyway, thank you again for all the help.

Chao

Mark_L_Intel1
Moderator

Hi Chao,

Thanks so much for the debugging info. Regarding your workaround, did you confirm the result is as expected? If execution passes but the result is wrong, we do not have a valid fix.

From your original post, we can tell there must be some operation that is not supported by the FPGA. Normally the FPGA is configured with different bitstreams, and the AlexNet bitstream in your case might not support your model. Did you try programming the board with the other bitstreams under /opt/intel/computer_vision_sdk_fpga_2018.3.343/a10_devkit_bitstreams/?
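
If it helps, the programming command is along the lines of "aocl program acl0 /opt/intel/computer_vision_sdk_fpga_2018.3.343/a10_devkit_bitstreams/<bitstream>.aocx", where <bitstream> is a placeholder for the .aocx file of the topology you want to try.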

I am not an expert on the models, so this question might not be proper, but which topology should support your model?

Mark

CJian1
Beginner

Hi Mark,

Yes, the error I encountered was probably caused by my convolutional layer configuration (with a stride of (3, 3)) not being compatible with the convolutional layers in the predefined topologies of the FPGA bitstreams. I tried programming the FPGA with the AlexNet, GoogLeNet, VGG, and ResNet bitstreams, but none of them worked. However, after I changed the stride to (4,4) and re-trained the model, everything works perfectly with the AlexNet bitstream on the FPGA.

Thanks,

Chao

Mark_L_Intel1
Moderator

Hi Chao,

Thanks so much for the information; this is a big help to us. I will submit a bug report for this.

Do you have the sample's name? I searched through the posts and could not find it.

Also, I think you did reproduce it with a bitstream from our FPGA release; did you use your own model or the model in our release?

Mark
