Intel® Distribution of OpenVINO™ Toolkit

The output dimension is incorrect after loading my network

rzy6461
Beginner

Hello, I have a network with input dimension [1,1,322] and output dimension [1,1,161], in .onnx format. I used the Model Optimizer to convert it to .xml and .bin. In the .xml file, the input and output dimensions are correct. However, when I start inference, the output dimension becomes [1,1,150]. The code is as follows:

#include <inference_engine.hpp>
#include <iostream>
#include <string>

using namespace InferenceEngine;

int main(int argc, char *argv[])
{
	const std::string NETWORK(argv[1]);      // path to .xml
	const std::string device_name{argv[2]};  // CPU, GPU, etc.
// --------------------------- 1. Load inference engine instance -------------------------------------
	Core ie;
// --------------------------- 2. Read a model in IR (.xml and .bin) or ONNX (.onnx) format ----------
	CNNNetwork network = ie.ReadNetwork(NETWORK);
	network.setBatchSize(1);
// --------------------------- 3. Configure input & output -------------------------------------------
// --------------------------- Prepare input blobs ---------------------------------------------------
	InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
	std::string input_name = network.getInputsInfo().begin()->first;
	input_info->setLayout(Layout::CHW);
	input_info->setPrecision(Precision::FP32);
	size_t num_channels = input_info->getTensorDesc().getDims()[0];
	size_t height = input_info->getTensorDesc().getDims()[1];
	size_t width = input_info->getTensorDesc().getDims()[2];
	std::cout << "num_channels=" << num_channels << std::endl;
	std::cout << "height=" << height << std::endl;
	std::cout << "width=" << width << std::endl;
// --------------------------- Prepare output blobs --------------------------------------------------
	DataPtr output_info = network.getOutputsInfo().begin()->second;
	std::string output_name = network.getOutputsInfo().begin()->first;
	output_info->setPrecision(Precision::FP32);
// --------------------------- 4. Load the model to the device ---------------------------------------
	ExecutableNetwork executable_network = ie.LoadNetwork(network, device_name);
// --------------------------- 5. Create infer request -----------------------------------------------
	InferRequest infer_request = executable_network.CreateInferRequest();
// --------------------------- 6. Prepare input data -------------------------------------------------
	std::cout << "Filling input buffer" << std::endl;
	Blob::Ptr input = infer_request.GetBlob(input_name);
	auto input_data = input->buffer().as<PrecisionTrait<Precision::FP32>::value_type *>();
	for (size_t i = 0; i < num_channels * height * width; i++)
		input_data[i] = 0.256f;
// --------------------------- 7. Do inference -------------------------------------------------------
	infer_request.Infer();
// --------------------------- 8. Process output -----------------------------------------------------
	Blob::Ptr output = infer_request.GetBlob(output_name);
	auto output_data = output->buffer().as<PrecisionTrait<Precision::FP32>::value_type *>();
	std::cout << "Neural Network output" << std::endl;
	size_t out_0 = output->getTensorDesc().getDims()[0];
	size_t out_1 = output->getTensorDesc().getDims()[1];
	size_t out_2 = output->getTensorDesc().getDims()[2];
	std::cout << "Output dims: " << out_0 << "x" << out_1 << "x" << out_2 << std::endl;
	return 0;
}

The program prints:

num_channels=1, height=1, width=322

out_0=1, out_1=1, out_2=150

In addition, when I use a four-dimensional (NCHW) input, the output dimension in the .xml file is again correct, but after running the code above the output dimension is also wrong.

Is there any problem with the inference code above?
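
For reference, a minimal way to compare the output dims the network reports with the dims of the runtime blob (a sketch reusing network, infer_request, and output_name from the code above) would be:

// Sketch: compare the output shape the loaded network declares with the
// shape of the blob the device actually produced after LoadNetwork.
SizeVector net_dims  = network.getOutputsInfo().begin()->second->getTensorDesc().getDims();
SizeVector blob_dims = infer_request.GetBlob(output_name)->getTensorDesc().getDims();

auto print_dims = [](const char *label, const SizeVector &dims) {
	std::cout << label << ":";
	for (size_t d : dims)
		std::cout << " " << d;
	std::cout << std::endl;
};
print_dims("network output dims", net_dims);  // expected: 1 1 161
print_dims("blob output dims", blob_dims);    // observed: 1 1 150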

Best regards,

rzy

Iffa_Intel
Moderator

Greetings,


Could you provide the following details:

  1. Which model/topology did you use for this?
  2. Which code did you use for inference?
  3. The command that you used to convert the model (and anything else relevant to the conversion)
  4. Did you set up the OpenVINO toolkit for your OS as described here: https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_windows.html
  5. Are you able to run the example application as in the guide above?
  6. Which OS did you use?


Sincerely,

Iffa


rzy6461
Beginner

Hi Iffa,

1. I put my test.pth and test.onnx models in the attachment. I used torch.onnx.export to convert test.pth to test.onnx. Besides this, LSTM_1x322.zip has the same problem.

2. The code I use for inference is given above. Is there any problem with it?

3. The command I use to convert is "python3 mo.py --input_model=test.onnx".

4. Yes.

5. There is no problem with the demo.

6. Linux.

Thank you so much!

rzy

Iffa_Intel
Moderator

Actually, for the conversion, you need to:

  1. Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory.
  2. Use the mo.py script with the path to the input .onnx model: python3 mo.py --input_model <INPUT_MODEL>.onnx
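
If your ONNX model was exported with a dynamic sequence length, you could also try fixing the input shape explicitly during conversion, e.g. python3 mo.py --input_model test.onnx --input_shape [1,1,322] (the shape here is just the one from your first post).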


You may refer here: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX.html


and here: https://www.youtube.com/watch?v=PA4I31_ixew


I'm not quite sure about your code since I don't know your whole flow. Instead, try one of the OpenVINO sample applications for inference and see whether you get a correct result.


You can run the classification sample, classification_sample.py, located in <openvino path>/deployment_tools/inference_engine/samples/python/classification_sample.


Run it with the command: python3 classification_sample.py -m <your_model>.xml -i <your_input_file>.jpg

Run python3 classification_sample.py -h to see which input file types are supported, e.g. jpg, mp4, etc.


Sincerely,

Iffa


Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Sincerely,

Iffa


