Hi~
Does the CV SDK have any plans to support the TensorFlow version of the MTCNN model (only the Caffe version is supported now)?
Also, is there any sample code for running an IR MTCNN model (converted from the Caffe MTCNN) on the CV SDK?
Thanks.
2 Replies
Currently, OpenVINO supports the Caffe versions of MTCNN: MTCNN-PNet, MTCNN-RNet, and MTCNN-ONet.
The Inference Engine supports the MKLDNN (CPU) and clDNN (GEN graphics) plugins for MTCNN.
Thanks,
Jeff
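
For running an IR converted from the Caffe MTCNN, a minimal loading sketch is shown below. It assumes a newer OpenVINO release that provides the InferenceEngine::Core API (older CV SDK releases used CNNNetReader instead), and the file names det1.xml/det1.bin are only placeholders for one of the converted networks (P-Net here).

    // A minimal sketch of loading a Caffe-converted MTCNN IR (P-Net as an example) on the CPU plugin.
    // Assumes the InferenceEngine::Core API; file names are placeholders.
    #include <inference_engine.hpp>
    #include <iostream>

    int main() {
        using namespace InferenceEngine;

        Core ie;                                                      // Inference Engine entry point
        CNNNetwork network = ie.ReadNetwork("det1.xml", "det1.bin");  // IR produced by the Model Optimizer
        ExecutableNetwork exec = ie.LoadNetwork(network, "CPU");      // MKLDNN plugin; "GPU" selects clDNN
        InferRequest request = exec.CreateInferRequest();

        // Fill the input blob (e.g. with a resized BGR image) and run inference:
        // request.SetBlob(...); request.Infer();
        std::cout << "Network inputs: " << network.getInputsInfo().size() << std::endl;
        return 0;
    }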
In most implementations of MTCNN, the image is passed to P-Net at multiple scales (an image pyramid).
However, code along the lines of the following resizes the input image to the fixed size of the network's input layer:

    request = net.CreateInferRequestPtr();
    Blob::Ptr blob = request->GetBlob(input);
    SizeVector blobSize = blob->getTensorDesc().getDims();
    const size_t width    = blobSize[3];
    const size_t height   = blobSize[2];
    const size_t channels = blobSize[1];
    T* blob_data = blob->buffer().as<T*>();   // T is the blob's element type, e.g. float
    cv::Mat resized_image(orig_image);
    if (width != orig_image.size().width || height != orig_image.size().height) {
        cv::resize(orig_image, resized_image, cv::Size(width, height));
    }

How can images of different sizes be passed to P-Net? Since P-Net is fully convolutional, does the input size really have to be fixed?
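
One possible approach, sketched below, is to reshape P-Net to each pyramid scale and reload it before inference. This assumes your OpenVINO release provides CNNNetwork::reshape (newer releases do; it may not exist in older CV SDK versions). The model name det1.xml and the scale list are illustrative only.

    // Sketch: feeding different pyramid scales to the fully convolutional P-Net by
    // reshaping the network per scale. Assumes CNNNetwork::reshape is available.
    #include <inference_engine.hpp>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        using namespace InferenceEngine;

        Core ie;
        CNNNetwork pnet = ie.ReadNetwork("det1.xml");                 // placeholder P-Net IR name
        const std::string inputName = pnet.getInputsInfo().begin()->first;

        // Illustrative pyramid sizes (height, width); in practice derive them from the source image.
        std::vector<std::pair<size_t, size_t>> scales = {{216, 384}, {153, 272}, {108, 192}};

        for (const auto& hw : scales) {
            // New NCHW input shape for this pyramid level.
            std::map<std::string, SizeVector> shapes;
            shapes[inputName] = {1, 3, hw.first, hw.second};
            pnet.reshape(shapes);                                     // re-infers all layer shapes

            // Each new shape needs a fresh executable network, so LoadNetwork is called per scale.
            ExecutableNetwork exec = ie.LoadNetwork(pnet, "CPU");
            InferRequest request = exec.CreateInferRequest();
            // Fill request.GetBlob(inputName) with the resized image and call request.Infer().
        }
        return 0;
    }

Reshaping works here precisely because P-Net is fully convolutional; the trade-off is that every new input shape requires another LoadNetwork call, so the pyramid scales are usually fixed up front and the resulting executable networks cached.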