Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Question about MTCNN example implementation and NCS Constraint

idata
Employee

Hi,

 

I have several questions about MTCNN after trying the TensorFlow MTCNN example.

I've noticed that the implementation of MTCNN in the ncappzoo is quite different from the paper: it's missing the image pyramid and the middle network, RNet.

IMO, the lack of RNet is relatively fine because it's pretty similar to ONet, but the missing image pyramid is a different thing.

The image pyramid is a trick that scales the input image to a range of sizes and feeds all of them to the network in order to get better detection performance.
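
For reference, here is a minimal sketch of how the pyramid scales are usually generated for MTCNN. The 12×12 window is PNet's input size from the paper; the 0.709 factor and the 20 px minimum face size are common defaults from reference implementations, and the helper name is just for illustration:

```python
import cv2

def build_image_pyramid(img, min_face_size=20, factor=0.709):
    """Return resized copies of img, one per pyramid scale, for PNet."""
    h, w = img.shape[:2]
    # Scale so the smallest face we care about maps onto PNet's 12x12 window.
    m = 12.0 / min_face_size
    min_side = min(h, w) * m

    scales = []
    while min_side >= 12:
        scales.append(m)
        m *= factor
        min_side *= factor

    # cv2.resize takes (width, height)
    return [cv2.resize(img, (int(w * s), int(h * s))) for s in scales]
```

Every one of those resized images would normally go through PNet, which is exactly what a fixed-input-shape NCS graph makes awkward.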

 

However, AFAIK a converted NCS graph can only take a single input tensor shape. That means if I really want to implement the image pyramid, I have to prepare multiple sticks for the different input shapes and make sure every incoming image is scaled down to those same fixed sizes.

Besides that, the NCS graph doesn't support batch inference, so this would all run sequentially.

 

My question is: is there any better way to implement this? The image pyramid really does have a tremendous effect on detection performance.

 

I know NCSDK v2 has been released, and I also know that MTCNN currently doesn't work on v2. But would the new feature of running multiple graphs on a single stick be a good way to implement the image pyramid? The batch-inference issue would still be unresolved, though.
2 Replies
idata
Employee

I have the same question!

idata
Employee

@hejianfeng @amacs.mist The ncappzoo is a community repository, so you can contact the author if you want more information. As for the image pyramid, I don't know if it will be easy to implement. Once a graph file is created, the input size for that graph file is set and cannot be changed. A possible workaround is to build models with various input sizes, create NCS graph files from those models, and then queue them up for inference. That might achieve the same effect as the image pyramid, but I don't think this workaround is very efficient.
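
To make that workaround concrete, here is a rough sketch with the NCSDK v1 Python API (the version the ncappzoo MTCNN example currently runs on). The graph file names are made up, and since v1 keeps only one graph allocated per device at a time, every scale pays an allocate/deallocate round trip, which is a big part of why this is inefficient:

```python
from mvnc import mvncapi as mvnc
import numpy as np

# Hypothetical graph files, one per pyramid scale / input size.
GRAPH_FILES = ['pnet_216x384.graph', 'pnet_153x272.graph', 'pnet_109x193.graph']

devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError('No NCS device found')
device = mvnc.Device(devices[0])
device.OpenDevice()

def detect_multiscale(pyramid_images):
    """Run each pre-resized image through the graph compiled for its size."""
    outputs = []
    for path, img in zip(GRAPH_FILES, pyramid_images):
        # NCSDK v1 holds one graph per device, so each scale needs its own
        # allocate/deallocate cycle.
        with open(path, 'rb') as f:
            graph = device.AllocateGraph(f.read())
        graph.LoadTensor(img.astype(np.float16), None)
        output, _ = graph.GetResult()
        outputs.append(output)
        graph.DeallocateGraph()
    return outputs
```

Call device.CloseDevice() when you are done. The scales still run one at a time on the stick, so there is no batching, only sequential calls.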

 

For now, we have no plans to support batch processing, although you can use FIFO queues to queue up multiple inferences with NCSDK v2.x. You can take a look at the FIFO queues for the Python API at https://movidius.github.io/ncsdk/ncapi/python_api_migration.html and for the C API at https://movidius.github.io/ncsdk/ncapi/c_api_migration.html.
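
As a sketch of what that looks like with the NCSDK v2.x Python API from the migration guide above: v2 lets you allocate several graphs, each with its own input/output FIFO pair, on the same stick, so one graph per pyramid scale could stay resident. Graph file names here are again made up, and inference on the stick is still sequential rather than batched:

```python
from mvnc import mvncapi as mvnc
import numpy as np

GRAPH_FILES = ['pnet_216x384.graph', 'pnet_153x272.graph', 'pnet_109x193.graph']

device = mvnc.Device(mvnc.enumerate_devices()[0])
device.open()

# Allocate every graph on the one stick, each with its own FIFO pair.
graphs, fifo_pairs = [], []
for path in GRAPH_FILES:
    with open(path, 'rb') as f:
        graph = mvnc.Graph(path)
        fifo_in, fifo_out = graph.allocate_with_fifos(device, f.read())
    graphs.append(graph)
    fifo_pairs.append((fifo_in, fifo_out))

def detect_multiscale(pyramid_images):
    # Queue one inference per scale first...
    for graph, (fifo_in, fifo_out), img in zip(graphs, fifo_pairs, pyramid_images):
        graph.queue_inference_with_fifo_elem(fifo_in, fifo_out,
                                             img.astype(np.float32), None)
    # ...then read each result back from its output FIFO.
    return [fifo_out.read_elem()[0] for _, fifo_out in fifo_pairs]

# Cleanup when finished:
for graph, (fifo_in, fifo_out) in zip(graphs, fifo_pairs):
    fifo_in.destroy()
    fifo_out.destroy()
    graph.destroy()
device.close()
```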
