Hi, I'm trying to run the C++ object_detection_sample_ssd project with my custom ONNX model from the Microsoft Azure Custom Vision portal.
It is an object detection model with 27 labels that takes an image as input.
When I run the Object Detection C++ Sample SSD project using the optimized model with this command:
object_detection_sample_ssd -i C:\...\x.jpg -m C:\...\model.xml -d CPU
I get this error:
[ INFO ] InferenceEngine:
         API version ............ 2.1
         Build .................. 42025
         Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     C:\...\x.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
         CPU
         MKLDNNPlugin version ......... 2.1
         Build ........... 42025
[ INFO ] Loading network files:
         C:\...\model.xml
         C:\...\model.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Can't find a DetectionOutput layer in the topology
In Netron, an open-source viewer for neural networks, I can see this:
MODEL PROPERTIES
  format: ONNX v3
  domain: onnxml
  imports: ai.onnx v7

INPUTS
  data
    name: data
    type: float32[None,3,416,416]
    denotation: Image(Bgr8)
    Image(s) in BGR format. It is a [N, C, H, W]-tensor. The 1st/2nd/3rd slices
    along the C-axis are blue, green, and red channels, respectively.

OUTPUTS
  model_outputs0
    name: model_outputs0
    type: float32[None,None2,13,13]
Is there something I am doing wrong?
Is it the right sample to test my model?
All clues are welcome.
Thanks in advance.
Since you have posted the same question in another thread, I will proceed to close this thread.
Please post any further communication/queries in Thread 856667.