I would like some help preparing a dataset for a dla_benchmark test.
I have a PyTorch model that takes three separate inputs: model(input1, input2, input3). Each input has a single channel with shape (x, y, 1). I don’t think the FPGA supports extracting individual channels from a 3-channel image (e.g., ch0 = image[:, :, 0], ch1 = image[:, :, 1], ch2 = image[:, :, 2]), so I need to feed each channel separately.
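For reference, this is roughly how I extract and save the three channel planes on the host today (a quick sketch; sample.png, the input1/input2/input3 file names, and the raw .bin dump format are just placeholders):

```python
import numpy as np
from PIL import Image

# Load one RGB image; shape is (x, y, 3).
image = np.asarray(Image.open("sample.png"), dtype=np.float32)

# Slice out each channel, keeping the trailing axis so every
# plane matches the model's (x, y, 1) input shape.
planes = {
    "input1": image[:, :, 0:1],
    "input2": image[:, :, 1:2],
    "input3": image[:, :, 2:3],
}

# Dump each plane as a raw binary file (placeholder format).
for name, plane in planes.items():
    plane.tofile(f"{name}.bin")
```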
After converting the model to .xml and .bin, how should I prepare the dataset to use with the -i option in the dla_benchmark command?
Thanks!
Hi TNFPGA,
Thank you for reaching out to the OpenVINO™ community.
To prepare the dataset for the -i option of the dla_benchmark command, please refer to the Performing Accelerated Inference section of the FPGA AI Suite documentation: https://www.intel.com/content/www/us/en/docs/programmable/768977/2024-3/performing-accelerated-inference-with.html
Regards,
Wan
Thank you so much for the link. I went through the materials, but I couldn’t quite find a solution for my case.
From what I understand, the -i option in dla_benchmark expects a folder (or several folders, one per model when benchmarking multiple models) containing image data files, where each file represents a single input to the model, like model_infer(input). In my case, my model takes two inputs, like model_infer(input1, input2) (currently I run this through the Python API).
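For clarity, my current Python-API flow looks roughly like this (a sketch; the model path, device name, and input shapes are placeholders):

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # placeholder path
compiled = core.compile_model(model, "CPU")  # placeholder device

# Two single-channel inputs matching the model's declared shapes
# (the (1, 128, 128, 1) shape here is only an example).
input1 = np.random.rand(1, 128, 128, 1).astype(np.float32)
input2 = np.random.rand(1, 128, 128, 1).astype(np.float32)

# A positional list maps to the model's inputs in declaration order.
results = compiled([input1, input2])
```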
How should I structure my data folder to use the -i option in this scenario?
Thanks
Hi TNFPGA,
Thanks for the information.
Could you please share the following information with us so that we can further investigate the issue?
- Model files in XML and BIN format
- Input samples that you used to run inference
Regards,
Wan