Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

YoloV2 NCS - C++

idata
Employee

I need to create software in C++ that uses the Movidius NCS to detect objects with Tiny YOLOv2.

 

I saw this example with Python and C++, https://github.com/duangenquan/YoloV2NCS, and I used its Region class in my project.

 

I don't know why, but I get many false detections and 18,000 results from mvncGetResult().

 

Using dog.jpg, for example, I get a lot of output like this:

 

 

 

14 = person , prob = 0.309372
0 = aeroplane , prob = 0.32841
0 = aeroplane , prob = 0.346906
5 = bus , prob = 0.334437
17 = sofa , prob = 0.374106
10 = diningtable , prob = 0.461306
15 = pottedplant , prob = 0.489467
18 = train , prob = 0.287603
17 = sofa , prob = 0.375134
8 = chair , prob = 0.28799
9 = cow , prob = 0.290264
3 = boat , prob = 0.432494
2 = bird , prob = 0.564795
19 = tvmonitor , prob = 0.266686
5 = bus , prob = 0.760655
10 = diningtable , prob = 0.521631
1 = bicycle , prob = 0.411501

 

 

 

I think it's something in the preprocessing or in the float16 conversion (I used the fp16 code taken from the GitHub mvnc examples), but I don't really know how to solve these problems.
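
To rule out the conversion helpers, a minimal round-trip check is what I have in mind; this is only a sketch, assuming the floattofp16/fp16tofloat helpers from the NCSDK C++ examples' fp16.h with the same signatures I call below:

#include <cstdio>
#include "fp16.h"   // floattofp16 / fp16tofloat from the NCSDK C++ examples

int main()
{
    // A few representative pixel values in [0, 1]
    float in[4] = {0.0f, 0.25f, 0.5f, 1.0f};
    unsigned char half_buf[4 * 2];   // 2 bytes per fp16 value
    float out[4];

    // FP32 -> FP16 -> FP32 round trip; the values printed should match
    // the inputs up to fp16 precision
    floattofp16(half_buf, in, 4);
    fp16tofloat(out, half_buf, 4);

    for (int i = 0; i < 4; ++i)
        printf("%f -> %f\n", in[i], out[i]);
    return 0;
}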

 

I hard-coded some values for Region::GetDetections, taking them from the Python code (ObjectWrapper.py) of YoloV2NCS.

 

Here is the inference part of the code (I used Qt in my project and width = height = 416):

 

void NCSNet::preprocess_image(const cv::Mat &src_image_mat, cv::Mat &preprocessed_image_mat)
{
    // Work on a local copy so the const input is never modified in place
    cv::Mat bgr = src_image_mat;
    if (bgr.channels() == 4)
        cv::cvtColor(bgr, bgr, CV_BGRA2BGR);

    // Resize to the network input size (416 x 416)
    cv::Mat resized(width, height, CV_8UC3);
    cv::resize(bgr, resized, cv::Size(width, height));

    // BGR -> RGB, then scale to [0, 1] floats
    cv::cvtColor(resized, resized, CV_BGR2RGB);
    resized.convertTo(preprocessed_image_mat, CV_32FC3, 1.0 / 255.0);
}

void NCSNet::DoInferenceOnImage(void *graphHandle, cv::Mat &inputMat)
{
    mvncStatus retCode;

    cv::Mat preprocessed_image_mat(width, height, CV_32FC3);
    preprocess_image(inputMat, preprocessed_image_mat);

    if (preprocessed_image_mat.rows > width || preprocessed_image_mat.cols > height) {
        qCritical() << "Error - preprocessed image is unexpected size!";
        return;
    }

    // Convert the FP32 image to FP16 for the NCS
    half tensor16[width * height * 3];
    floattofp16((unsigned char *)tensor16, (float *)preprocessed_image_mat.data, width * height * 3);
    unsigned int lenBufFp16 = 3 * width * height * sizeof(half);

    retCode = mvncLoadTensor(graphHandle, tensor16, lenBufFp16, NULL);
    if (retCode != MVNC_OK) {
        qCritical() << "Error - Could not load tensor";
        qCritical() << " mvncStatus from mvncLoadTensor is: " << retCode;
        return;
    }

    void *resultData16;
    void *userParam;
    unsigned int lenResultData;
    retCode = mvncGetResult(graphHandle, &resultData16, &lenResultData, &userParam);
    if (retCode != MVNC_OK) {
        qCritical() << "Error - Could not get result for image";
        qCritical() << " mvncStatus from mvncGetResult is: " << retCode;
        return;
    }

    // Convert half-precision floats back to full floats
    int numResults = lenResultData / sizeof(half);
    qDebug() << "numResults: " << numResults;
    float *resultData32 = (float *)malloc(numResults * sizeof(*resultData32));
    fp16tofloat(resultData32, (unsigned char *)resultData16, numResults);

    // Region-layer post-processing (values taken from ObjectWrapper.py)
    std::vector<DetectedObject> obj;
    Region region;
    region.GetDetections(resultData32, 125, 12, 12, 20, width, height, 0.25, 0.4, 13, obj);
    for (DetectedObject o : obj) {
        qDebug() << o.objType << " = " << o.name.c_str() << ", prob = " << o.confidence;
    }

    mvncDeallocateGraph(graphHandle);
    mvncCloseDevice(deviceHandle);
}

 

It is just a test, so it is written horribly; I'm sorry for that.

 

Can you help me?

idata
Employee

@Doch It's tough to tell from looking at this code alone. Can you send me a link to your code and your model so that I may debug the issue? Thanks.

idata
Employee

Here's a link to the code and the graph: https://dropbox.com/s/mqgcgublnsnsoun/YoloNCS.tar.gz?dl=0.

 

In the meantime I changed the code a bit because I tried a few things without success; I also removed the Qt dependency.

 

I wrote and modified only main.cpp, ncsnet.h and ncsnet.cpp; the other files are copied as they are from the examples.

 

Edit: I tried the same graph file with duangenquan's YoloV2NCS and it works fine.

idata
Employee

Any ideas?

idata
Employee

Hi Doch,

 

I think you're missing the following line of code:

 

out = self.Reshape(out, self.dim)

 

referring to the original Python source.

 

Basically it does a Mat transpose, which seems to be mandatory to get good results.
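
In C++ terms, a minimal sketch of that reorder could look like the following. This is my guess at the equivalent of ObjectWrapper.Reshape, assuming a 13 × 13 × 125 tiny YOLOv2 output laid out as row / column / channel coming off the NCS; the names reshape_chw, gridH, gridW and nChannels are just illustrative:

#include <vector>

// Hypothetical helper: reorder the flat NCS output from (row, col, channel)
// to (channel, row, col) before handing it to Region::GetDetections.
std::vector<float> reshape_chw(const float *src, int gridH, int gridW, int nChannels)
{
    std::vector<float> dst(gridH * gridW * nChannels);
    for (int y = 0; y < gridH; ++y)
        for (int x = 0; x < gridW; ++x)
            for (int c = 0; c < nChannels; ++c)
                dst[(c * gridH + y) * gridW + x] = src[(y * gridW + x) * nChannels + c];
    return dst;
}

// e.g. std::vector<float> chw = reshape_chw(resultData32, 13, 13, 125);
//      then pass chw.data() to Region::GetDetections instead of resultData32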

 

Try commenting out that line in the original code and you'll get results (more or less) similar to yours.

 

Then I advise you to migrate to NCSDK 2.

 

You could try starting from the SqueezeNet Caffe example.
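
Roughly, the NCSDK 2 flow looks like this; a minimal sketch only, assuming the compiled .graph file has already been read into graphBuffer/graphLen and the image has been preprocessed into an FP32 buffer. With the default FIFO settings, ncFifoReadElem already hands you FP32 data, so no manual fp16 conversion is needed:

#include <mvnc.h>
#include <cstdio>

// Minimal NCSDK 2 inference flow (most error checks trimmed for brevity).
// All lengths are in bytes.
bool run_inference(const void *graphBuffer, unsigned int graphLen,
                   float *inputTensor, unsigned int inputLenBytes,
                   float *output, unsigned int outputLenBytes)
{
    struct ncDeviceHandle_t *device = NULL;
    struct ncGraphHandle_t *graph = NULL;
    struct ncFifoHandle_t *fifoIn = NULL, *fifoOut = NULL;

    if (ncDeviceCreate(0, &device) != NC_OK) return false;
    if (ncDeviceOpen(device) != NC_OK) return false;

    ncGraphCreate("tiny-yolo-v2", &graph);
    // Default FIFOs use FP32, so the API converts to/from FP16 internally.
    if (ncGraphAllocateWithFifos(device, graph, graphBuffer, graphLen,
                                 &fifoIn, &fifoOut) != NC_OK)
        return false;

    // Queue the input and read the result back as plain floats.
    ncGraphQueueInferenceWithFifoElem(graph, fifoIn, fifoOut,
                                      inputTensor, &inputLenBytes, NULL);
    void *userParam = NULL;
    ncFifoReadElem(fifoOut, output, &outputLenBytes, &userParam);

    // Clean up.
    ncFifoDestroy(&fifoIn);
    ncFifoDestroy(&fifoOut);
    ncGraphDestroy(&graph);
    ncDeviceClose(device);
    ncDeviceDestroy(&device);
    return true;
}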

 

I think you'll have to port the original ObjectWrapper.PrepareImage method so that you feed the NCS the same data as the Python example does.

 

Regards

 

F__
idata
Employee

Thanks!

 

As soon as I can, I'll try to implement that line and I'll let you know if it works.
idata
Employee

Hello,

 

I've tried this process too, with bad results.

 

Have you got any news about this code?

 

The getResult call gives me strange results.

 

"out = self.Reshape(out, sefldim)" where I must to put this line code ?

 

Thanks
idata
Employee

Right.

 

NCSDK 2 works very well.

 

The result now gives a float* pointer directly, so my bad float16 conversion subroutine is gone.

 

I can finally work with my C++ code.