Intel® DevCloud
Help for those needing help starting or connecting to the Intel® DevCloud

Kernel dies while trying to load model into plugin with ie.load_network

Raza__Ghulam_Jilani

I have a vanilla YOLOv3 model that I downloaded and converted to IR on my PC, where it ran fine. I've now uploaded the same model to the DevCloud, and every time I try to load it into the plugin with ie.load_network(...), the kernel dies. Basically, I am copying the code from the object_detection_demo_yolov3_async.py demo, which runs fine on my PC but crashes the kernel on the DevCloud.

This happens when the device is CPU.

The CPU extension is libcpu_extension_sse4.so.
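
For reference, the loading code follows the same pattern as the demo; a minimal sketch (the IR paths are placeholders for my converted files):

from openvino.inference_engine import IECore, IENetwork

model_xml = "frozen_darknet_yolov3_model.xml"  # placeholder path to converted IR
model_bin = "frozen_darknet_yolov3_model.bin"  # placeholder path to converted IR

ie = IECore()
ie.add_extension("libcpu_extension_sse4.so", "CPU")  # CPU extension, as in the demo
net = IENetwork(model=model_xml, weights=model_bin)
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=2)  # kernel dies here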

The link to the model is:

https://drive.google.com/open?id=1_sRkq8y-Ijdfb9D_kzw8ozsMIM48v17T

Any help?

Yogesh_P_Intel
Employee

Hi Raza,

We have requested access to the IR files in your Drive so we can look more closely into the issue. Kindly grant that access.

Meanwhile, can you try with libcpu_extension_avx2.so, located at /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_avx2.so?
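
That is, something along these lines (a sketch, assuming the device is CPU):

ie.add_extension("/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_avx2.so", "CPU")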

Regards

Yogesh
Raza__Ghulam_Jilani

Pandey, Yogesh (Intel) wrote:

Hi Raza,

We have requested access to the IR files in your Drive so we can look more closely into the issue. Kindly grant that access.

Meanwhile, can you try with libcpu_extension_avx2.so, located at /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_avx2.so?

Regards

Yogesh

Hi! I've provided the access; you can see it now.

Raza__Ghulam_Jilani

[An update]

I've also uploaded the frozen model to the cloud and tried to convert it there. On my PC I use this command:

python mo_tf.py --input_model E:\tensorflow-yolo-v3-master\frozen_darknet_yolov3_model.pb --output_dir C:\openvino_demo_build\Intel\OpenVINO\openvino_models\ir --data_type FP16 -b 1 --tensorflow_use_custom_operations_config ./extensions/front/tf/yolo_v3.json

and it converts and runs fine. But when I ran the same command here, specifying --tensorflow_use_custom_operations_config ./extensions/front/tf/yolo_v3.json, it converted but also gave the following error:

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/u33951/yolov3_ir/FP32/frozen_darknet_yolov3_model.xml
[ SUCCESS ] BIN file: /home/u33951/yolov3_ir/FP32/frozen_darknet_yolov3_model.bin
[ SUCCESS ] Total execution time: 24.51 seconds.

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-7-a2f63926b7f3> in <module>
      1 get_ipython().system('/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model /home/u33951/yolo_raw/frozen_darknet_yolov3_model.pb --data_type FP32 --output_dir /home/u33951/yolov3_ir/FP32 --batch 1')
----> 2 --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json

NameError: name 'tensorflow_use_custom_operations_config' is not defined
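
(Looking at the traceback, it seems the flag ended up on a second notebook line and was executed as Python instead of being passed to mo_tf.py, since the ! system call only covers the first line. Keeping the whole command in one shell cell should avoid that; a sketch with my paths:)

%%bash
/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --input_model /home/u33951/yolo_raw/frozen_darknet_yolov3_model.pb \
    --data_type FP32 \
    --output_dir /home/u33951/yolov3_ir/FP32 \
    --batch 1 \
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json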

However, when I remove the --tensorflow_use_custom_operations_config ./extensions/front/tf/yolo_v3.json specifier on the DevCloud, the model converts without errors. In both cases the model passes verification, but crashes at ie.load_network.

And when I try to run the converted model that I uploaded from my PC, it gives the error "ERROR: This sample supports only single output topologies" during verification. I can't get my head around this behaviour; the code is given in the attached notebook.

P.S. Using the AVX2 extension didn't help.

P.S. I'm using the model frozen from the https://github.com/mystic123/tensorflow-yolo-v3 repo; the freezing command is sketched below.
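
For reference, the freezing step from that repo is along these lines (flags as in the repo's README; weights file assumed to be the standard yolov3.weights):

!python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights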

I've attached both the conversion and testing notebooks for your reference.
Yogesh_P_Intel
Employee

Hi Raza,

We have a working sample that uses the YOLOv3 model here:

https://github.com/yogeshmpandey/iot-devcloud/tree/R3_updates/python/object-detection-using-yolov3

In this lab we download and freeze the model, then use the Model Optimizer to convert it to IR format.

Can you test it out and see if your queries are resolved?

Regards

Yogesh

Raza__Ghulam_Jilani

Hi,

Thanks for the repo. I've run it and the code is working fine; it also runs OK on the VPU and the Xeon. But when I try to run on the FPGA it gives an error; apparently it requires a plugin. Can you please share which plugin it is from the bitstream directory?

The code cell is as follows:

job_id_fpga = !qsub object_detection_job_for_fpga.sh -l nodes=1:idc003a10:iei-mustang-f100-a10 -F "-r results/fpga -d HETERO:FPGA,CPU -f FP16 -i $VIDEO -n 2" -N obj_det_fpga
print(job_id_fpga[0])
# Progress indicator
if job_id_fpga:
    progressIndicator('results/fpga', 'post_progress.txt', "Inferencing", 0, 100)

Here is the output of the .o file:

Traceback (most recent call last):
  File "object_detection_demo_yolov3_async.py", line 364, in <module>
    sys.exit(main() or 0)
  File "object_detection_demo_yolov3_async.py", line 175, in main
    ie.add_extension(args.cpu_extension,args.device)
  File "ie_api.pyx", line 118, in openvino.inference_engine.ie_api.IECore.add_extension
RuntimeError: HETERO device does not support extensions. Please, set extensions directly to fallback devices
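
(The message itself hints at the fix: register the CPU extension against the CPU fallback device rather than the HETERO compound string. A sketch of how the demo code could be changed:)

# Sketch: add the extension to the fallback device, not to "HETERO:FPGA,CPU"
if args.cpu_extension and "CPU" in args.device:
    ie.add_extension(args.cpu_extension, "CPU")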

 

Yogesh_P_Intel
Employee

Hi Raza,

The current lab's job script targets YoloV3 deployment on the DevCloud nodes, currently CPU, GPU, and MYRIAD.

For FPGA you need to make minor changes in the script (object_detection_job.sh); take a reference from here.

You need to flash the FPGA using the aocl command for the bitstreams to work (see the sketch below). The bitstreams available with OpenVINO R3 for YoloV3 are documented here.

You might need to change your new script accordingly.
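
The flashing step generally looks like the line below (the .aocx name is illustrative; pick the YoloV3 bitstream documented for your release):

# Illustrative bitstream path; substitute the YoloV3 bitstream for your OpenVINO version
!aocl program acl0 /opt/intel/openvino/bitstreams/a10_vision_design_bitstreams/<yolov3_bitstream>.aocx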

 

Regards

Yogesh