Intel® Distribution of OpenVINO™ Toolkit

Running custom network on Intel Neural Compute Stick 2

MChmu
Beginner

I have the following problem. I trained a network in Keras and subsequently converted it to a TensorFlow model. Then, using the Model Optimizer script provided by Intel, I converted the model to Intermediate Representation with the following command:

python mo.py --input_model frozen_net_0000.pb --input inputs/input_tof,inputs/input_radar --input_shape [1,20,38304,1],[1,23,4096,1] --disable_nhwc_to_nchw --data_type FP16 --progress

After conversion to Intermediate Representation, I adapted my C++ code to load the network onto the compute stick and perform inference. This is where the problem begins: I call the StartAsync() method and, for debugging purposes, have added two couts around it. The first cout is printed, but the second one is not when running inference on the Neural Compute Stick. When running inference on either the CPU or the GPU, both couts are printed and inference completes in both cases.

...
cout << "CHECK_0" << endl;
infer_request.StartAsync();
infer_request.Wait(IInferRequest::WaitMode::RESULT_READY);
cout << "CHECK_1" << endl;
...
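
To give some context, here is a simplified sketch of roughly what the surrounding code does, assuming a recent Inference Engine C++ API (OpenVINO 2020.x, where InferenceEngine::Core::ReadNetwork is available). The file names and the input-filling step are placeholders rather than my exact code:

#include <inference_engine.hpp>
#include <iostream>

using namespace InferenceEngine;

int main() {
    Core ie;

    // Read the IR produced by the Model Optimizer (file names are placeholders).
    CNNNetwork network = ie.ReadNetwork("frozen_net_0000.xml", "frozen_net_0000.bin");

    // Load the network onto the Neural Compute Stick 2 via the MYRIAD plugin.
    ExecutableNetwork exec_net = ie.LoadNetwork(network, "MYRIAD");
    InferRequest infer_request = exec_net.CreateInferRequest();

    // ... fill the two input blobs (inputs/input_tof, inputs/input_radar) here ...

    std::cout << "CHECK_0" << std::endl;
    infer_request.StartAsync();

    // Wait() returns a StatusCode; printing it may help narrow down where execution stops.
    StatusCode status = infer_request.Wait(IInferRequest::WaitMode::RESULT_READY);
    std::cout << "Wait() returned " << status << std::endl;
    std::cout << "CHECK_1" << std::endl;

    return 0;
}

The only thing added here beyond my original two couts is printing the StatusCode returned by Wait(), which should at least show whether the call returns with an error code or never returns at all.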

For that reason, I'd like to ask: what could be the possible reason that inference on the NCS2 is crashing?

If you need more details, please let me know and I will provide them.

JAVIERJOSE_A_Intel

Hi MChmu,

Thanks for reaching out.

Could you please answer the following so we can test it from our end:

  • Which version of OpenVINO™ toolkit are you using?
  • Which version of TensorFlow did you use to convert your Keras model?
  • Also, could you share your model files (Keras, .pb, and IR files)? If you don't want to share them publicly, you can send us a private message with the files.

Regards,

Javier A.
