Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision-related on Intel® platforms.

Object detection ssd c++ sample is not working properly

AR92
New Contributor I

Hi,

I am working on a Windows machine (platform toolset v141). I converted a TensorFlow model to IR; the Python API works, but the C++ sample object_detection_ssd does not.

object_detection_ssd_async also works with the camera, but not with an .mp4 file; for a video file the async sample reports the error "The number of channels for net input and image must match".

I ran object_detection_ssd with the command below:

object_detection_sample_ssd.exe -m D:/openvino/2/frozen_inference_graph.xml -i C:\Users\LaserTrac\Desktop\la.jpg

Below is the output:

[ INFO ] InferenceEngine:
API version ............ 2.1
Build .................. 2020.3.0-3467-15f2c61a-releases/2020/3
Description ....... API
my custom argc: 5
my custom argc: 00000130D90F55F0
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] C:\Users\LaserTrac\Desktop\la.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2020.3.0-3467-15f2c61a-releases/2020/3
[ INFO ] Loading network files:
D:/openvino/2/frozen_inference_graph.xml
D:/openvino/2/frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs

[ INFO ] Preparing output blobs -4Preparing output blobs -4 0

The program exits without any error; I think ngraphFunction->get_ops() returned nothing at that point.

slog::info << "Preparing output blobs" << slog::endl;

OutputsDataMap outputsInfo(network.getOutputsInfo());

std::string outputName;
DataPtr outputInfo;
if (auto ngraphFunction = network.getFunction()) {
  slog::info << "Preparing output blobs -4";
  int c = 0;
  int d = 0;
  for (const auto& out : outputsInfo) {
    slog::info << "Preparing output blobs -4 " << c;
    c++;
    for (const auto& op : ngraphFunction->get_ops()) {
      slog::info << "Preparing output blobs -4 " << c << d;
      // here my program exits without any error
      d++;
      if (op->get_type_info() == ngraph::op::DetectionOutput::type_info &&
          op->get_friendly_name() == out.second->getName()) {
        outputName = out.first;
        outputInfo = out.second;
        break;
      }
    }
  }
} else {
  slog::info << "Preparing output blobs -4 2";
  outputInfo = outputsInfo.begin()->second;
  outputName = outputInfo->getName();
}
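A side note on the loop above: break only leaves the inner for, and if get_ops() returns an empty list, or no op ever matches, the code falls through with outputName still empty and nothing is logged. A minimal stand-alone sketch of making the "not found" case explicit (plain std::map/std::vector stand in for OutputsDataMap and the ngraph op list; findDetectionOutput is a hypothetical helper, not part of the sample):

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for the search above: `outputs` maps blob name ->
// friendly name (like OutputsDataMap), `opNames` stands in for the names of
// ngraphFunction->get_ops(). Returns "" when no match exists, instead of
// silently leaving the output name empty.
std::string findDetectionOutput(const std::map<std::string, std::string>& outputs,
                                const std::vector<std::string>& opNames) {
    for (const auto& out : outputs) {
        for (const auto& op : opNames) {
            if (op == out.second)
                return out.first;  // matched: return the blob name
        }
    }
    return "";  // explicit "not found" result
}
```

Checking the returned name for emptiness right after the search turns a silent fall-through into a diagnosable error.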


Please have a look.

4 Replies
samontab
Valued Contributor II

You say that it works with the camera, but not with an mp4 video file.

A few ideas to try:

Make sure the video is correctly read by the program; you may be missing a codec, or there could be some other I/O problem such as the file path or permissions.

The other thing it complains about is the number of channels. Check whether the image is converted correctly to whatever the network expects; maybe it expects a greyscale image but the video supplies a colour image.

Good luck!

AR92
New Contributor I

Hi @samontab, thanks for your reply. There are two projects;

I will try object_detection_ssd_sync.

I was building in Debug mode, so the C++ sample object_detection_ssd was not working; I switched to Release mode and it is working now.

One more thing: I generated my IR with the command below, with --data_type FP16:

python mo_tf.py --input_model E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config E:\tensorflow_models\ssd_54_ob_139553\output_inference_graph_v1.pb\pipeline.config --tensorflow_use_custom_operations_config ssd_support_api_v1.15.json --data_type FP16

I am not a C++ developer, so I didn't create a new project; adding the dependencies to a new C++ project looks a little messy to me. I followed this video -> https://www.youtube.com/watch?v=_z_YYanhE2I and now the .exe is working. In the code I saw FP32 in some places, and I created my IR with FP16, so do I need to change it? I actually tried changing it, but then the model didn't detect anything in the image. For example, this code

float *p = minput2Holder.as<PrecisionTrait<Precision::FP32>::value_type *>();

I changed it to

INT16 *p = minput2Holder.as<PrecisionTrait<Precision::FP16>::value_type *>();

and changed all occurrences of FP32 to FP16.

Thanks & Regards

Amit 

FP16 in my main.cpp 
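A caution about that change: as far as I know, reinterpreting the blob pointer as INT16 gives raw IEEE 754 half-precision bit patterns, not usable numbers, which would explain the missing detections. On the CPU plugin an FP16 IR is executed in FP32 internally, and the I/O blobs can be left at Precision::FP32 (via setPrecision) so the host code keeps using float* (worth verifying against your OpenVINO version). A self-contained decoder showing what those 16-bit patterns actually encode:

```cpp
#include <cmath>
#include <cstdint>

// Decode an IEEE 754 binary16 (half) bit pattern into a float.
// The 16 bits are sign/exponent/mantissa fields, not an integer value.
float halfToFloat(uint16_t h) {
    const uint32_t sign = (h >> 15) & 0x1;
    const uint32_t exp  = (h >> 10) & 0x1F;
    const uint32_t mant = h & 0x3FF;
    float value;
    if (exp == 0) {              // zero or subnormal: mant * 2^-24
        value = std::ldexp(static_cast<float>(mant), -24);
    } else if (exp == 31) {      // infinity or NaN
        value = mant ? NAN : INFINITY;
    } else {                     // normalized: (1 + mant/1024) * 2^(exp-15)
        value = std::ldexp(1.0f + mant / 1024.0f, static_cast<int>(exp) - 15);
    }
    return sign ? -value : value;
}
```

For example, the half bit pattern 0x3C00 encodes 1.0, but read through an int16 pointer it is 15360, so detection scores accessed that way are garbage.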

Iffa_Intel
Moderator

Greetings,

Please help to test with one of our MP4 video samples from here and see whether it works:

https://github.com/intel-iot-devkit/sample-videos

Sincerely,

Iffa

