Hey, I'm new to ML, so the answer to my question may be trivial, but unfortunately it isn't for me at the moment.
I have a Caffe model that detects faces in images. When I run it on my PC it works well, and the output shape is something like (1, 1, 123, 7). However, when I compile the model to a graph file with mvNCCompile, the output shape on the NCS is something like (1407,) for the same input image. As a result, I can't use the same post-processing I used to get the bounding boxes and the number of detected faces on the PC, so my questions are:
Is the output supposed to be different? If so, how can I interpret the output format on the NCS?
Also, if you happen to know any good tutorials on training Caffe models, I would appreciate it :)
Hi @tw1xy
Could you provide a link to your model so we can take a closer look? Also, which version of the NCSDK are you using?
Regards,
Jesus
Hi @Jesus_at_Intel
Yes, sure. The model is here: https://drive.google.com/open?id=15aFSfAQd7KdDbXPMfc8flBu3xKWzSbtz
I'm using NCSDK version 2.10.01.01 on an NCS1.
Hi @tw1xy
I compiled your model into graph format using mvNCCompile with the following command:
mvNCCompile 1.prototxt -w 1.caffemodel -s 12 -in data_bn -on detection_out
It would save me a lot of time if you could share the code you use to run this model on the NCS.
Could you also check the model with mvNCCheck? I see that the result and expected shapes are both (1, 1, 112, 7); do you see the same?
Regards,
Jesus
Hello @Jesus_at_Intel
I just compiled the model into a graph as you did:
mvNCCompile 1.prototxt -w 1.caffemodel -s 12 -in data_bn -on detection_out
and tested it with my code; it gave me the same result as earlier:
(1407,)
I checked the model with mvNCCheck and got the following results, and they don't seem consistent for some reason.
Anyway, here is the code I'm testing with.
Also, thank you for the help :smile:
Hi @tw1xy
I took a closer look at your code and model. The result is the expected value; what you are printing is the size of the output array.
The first value of the output data is the number of detections and the next 6 values are not used. The rest of the output array is the detection data organized in groups of 7 per detection:
- Image id (always 0)
- Class id
- Score
- Box left location
- Box top location
- Box right location
- Box bottom location
In your prototxt file, you specify that you want to keep the top 200 object detection results (keep_top_k: 200).
First 7 values + (7 parameters per detection * 200 detections) = 1407-element array.
I recommend looking at the video_objects example included in the ncappzoo to see how to post-process the output data.
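To illustrate, here is a minimal sketch of parsing a flat output array with the layout described above. The helper name, score threshold, and the synthetic example array are my own for illustration, not from the actual model or code in this thread:

```python
import numpy as np

def parse_detections(output, score_threshold=0.5):
    """Parse a flat SSD-style detection output array.

    Layout (per the description above):
      output[0]   -> number of valid detections
      output[1:7] -> unused
      output[7:]  -> groups of 7 values per detection:
                     [image_id, class_id, score, left, top, right, bottom]
    Box coordinates are normalized to [0, 1].
    """
    num_detections = int(output[0])
    detections = []
    for i in range(num_detections):
        base = 7 + i * 7
        image_id, class_id, score, left, top, right, bottom = output[base:base + 7]
        if score >= score_threshold:
            detections.append({
                'class_id': int(class_id),
                'score': float(score),
                'box': (float(left), float(top), float(right), float(bottom)),
            })
    return detections

# Synthetic example: a 7 + 7*200 = 1407-element array with two detections.
example = np.zeros(7 + 7 * 200, dtype=np.float32)
example[0] = 2                                     # two valid detections
example[7:14] = [0, 1, 0.9, 0.1, 0.2, 0.4, 0.5]    # detection 1 (kept)
example[14:21] = [0, 1, 0.3, 0.5, 0.5, 0.7, 0.8]   # detection 2 (below threshold)

print(parse_detections(example))
```

To scale the normalized box coordinates to pixel values, multiply left/right by the image width and top/bottom by the image height.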
https://github.com/movidius/ncappzoo/tree/ncsdk2/apps/video_objects
Hope this makes sense!
Regards,
Jesus