Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Multiple Inputs with dla_benchmark (FPGA AI Suite 2025.1)

TNFPGA
Beginner

I would like some help preparing a dataset for a dla_benchmark test.

I have a PyTorch model that takes three separate inputs: model(input1, input2, input3). Each input has a single channel with shape (x, y, 1). I don’t think the FPGA supports extracting individual channels from a 3-channel image (e.g., ch0 = image[:, :, 0], ch1 = image[:, :, 1], ch2 = image[:, :, 2]), so I need to feed each channel separately.
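To illustrate what I mean, here is a rough NumPy sketch of the per-channel split and one possible file layout (the sample file name, the dataset folder structure, and the raw float32 .bin format are just placeholders I picked for illustration):

import os
import numpy as np
from PIL import Image

# Load a 3-channel image; "sample.png" and the folder layout below are placeholders.
image = np.asarray(Image.open("sample.png"), dtype=np.float32)   # shape (H, W, 3)

# Split into three single-channel planes, one per model input.
channels = {
    "input1": image[:, :, 0:1],   # (H, W, 1)
    "input2": image[:, :, 1:2],
    "input3": image[:, :, 2:3],
}

# Write each channel to its own file so every model input has its own data file.
for name, plane in channels.items():
    os.makedirs(os.path.join("dataset", name), exist_ok=True)
    plane.tofile(os.path.join("dataset", name, "sample.bin"))    # raw float32, no header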

After converting the model to .xml and .bin, how should I prepare the dataset to use with the -i option in the dla_benchmark command?

Thanks!

Wan_Intel
Moderator

Hi TNFPGA,

Thank you for reaching out to the OpenVINO™ community.


To prepare the dataset to use with the -i option in the dla_benchmark command, please refer to the documentation section at https://www.intel.com/content/www/us/en/docs/programmable/768977/2024-3/performing-accelerated-inference-with.html



Regards,

Wan


TNFPGA
Beginner

@Wan_Intel

Thank you so much for the link. I went through the materials, but I couldn’t quite find a solution for my case.

 

From what I understand, the -i option in dla_benchmark expects a folder (or several folders when multiple models are benchmarked) containing image data files, where each file represents a single input to the model, as in model_infer(input). In my case the model takes two inputs, as in model_infer(input1, input2); this is how I currently run it with the Python API.
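For reference, here is roughly how I run the two-input inference with the OpenVINO Python API today (the model path, input names, and shapes below are placeholders, not the real ones):

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")               # the matching .bin is picked up automatically
compiled = core.compile_model(model, "CPU")

# Dummy single-channel inputs; the real shapes come from the converted model.
input1 = np.random.rand(1, 224, 224, 1).astype(np.float32)
input2 = np.random.rand(1, 224, 224, 1).astype(np.float32)

# Both inputs are passed by name in one inference call.
results = compiled({"input1": input1, "input2": input2})
output = results[compiled.output(0)]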

 

How should I structure my data folder to use the -i option in this scenario?

Thanks

Wan_Intel
Moderator

Hi TNFPGA,

Thanks for the information.

 

Could you please share the following information with us so that we can further investigate the issue?

  • Model files in XML and BIN format
  • Input samples that you used to run inference

Regards,

Wan