When I attempt to run the object_detection_sample_ssd script on an UP Squared AI Edge board with HDDL specified as the device, the program fails with the following error:
CHECK failed: (index) < (current_size_):
Has anyone else encountered this problem? I am running Ubuntu 16.04.6 LTS on an UP Squared AI Edge board with OpenVINO 2020.3. I tested both the SSD MobileNet v2 and YOLOv3 models, converted using the instructions in the documentation; both produce the same error. Both models run correctly and produce the expected results when CPU is specified as the device instead.
This is the full output of the sample run:
[ INFO ] InferenceEngine:
        API version ............ 2.1
        Build .................. 2020.3.0-3467-15f2c61a-releases/2020/3
        Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     images/apples.bmp
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        HDDL
        HDDLPlugin version ......... 2.1
        Build ........... 2020.3.0-3467-15f2c61a-releases/2020/3
[ INFO ] Loading network files:
        graph2.xml
        graph2.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[libprotobuf FATAL /home/jenkins/agent/workspace/IE-Packages/BuildAndPush/hddl-service/thirdparty/protobuf/src/google/protobuf/repeated_field.h:1167] CHECK failed: (index) < (current_size_):
[ ERROR ] CHECK failed: (index) < (current_size_):
Hello Mason,
I apologize for the delay. We are looking into your issue.
Regards,
Hello Mason,
We were able to run object_detection_sample_ssd successfully on an UP Squared board with HDDL and OpenVINO 2020.3.
This is the command we used:
~/inference_engine_samples_build/intel64/Release/object_detection_sample_ssd -m ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.xml -i ~/image.bmp -d HDDL
A few things to check (a combined command sketch follows this list):
- Run the HDDL dependencies setup script, as described under Configuration Steps on this page.
- Start the hddldaemon in a separate terminal:
<openvino_install_dir>/deployment_tools/inference_engine/external/hddl/bin/hddldaemon
- Run the SqueezeNet demo using -d HDDL:
<openvino_install_dir>/deployment_tools/demo/demo_squeezenet_download_convert_run.sh -d HDDL
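Putting those steps together, the whole sequence on a default installation might look roughly like the sketch below. The /opt/intel/openvino path and the install_IVAD_VPU_dependencies.sh script name are assumptions based on a standard 2020.x layout, so adjust them to your setup:

# Set up the OpenVINO environment (assumes a default /opt/intel/openvino install)
source /opt/intel/openvino/bin/setupvars.sh

# Install the HDDL/VPU dependencies (script name as shipped in 2020.x), then reboot
cd /opt/intel/openvino/deployment_tools/inference_engine/external/hddl
sudo ./install_IVAD_VPU_dependencies.sh

# In a separate terminal: start the HDDL daemon and leave it running
/opt/intel/openvino/deployment_tools/inference_engine/external/hddl/bin/hddldaemon

# Then run the SqueezeNet demo against the HDDL device
/opt/intel/openvino/deployment_tools/demo/demo_squeezenet_download_convert_run.sh -d HDDL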
Regards,
Hello Jesus,
Thank you for the assistance. Reinstalling the HDDL dependencies and relaunching the daemon partially resolved the issue: the SqueezeNet demo now runs successfully, but the SSD and YOLOv3 models now fail with a different error when run on the HDDL device (they continue to work fine on the CPU):
[ INFO ] Loading Inference Engine
[ INFO ] Loading network files:
        ../yolo3.xml
        ../yolo3.bin
[ INFO ] Device info:
        HDDL
        MKLDNNPlugin version ......... 2.1
        Build ........... 2020.3.0-3467-15f2c61a-releases/2020/3
inputs number: 1
input shape: [1, 3, 416, 416]
input key: inputs
[ INFO ] File was added:
[ INFO ]     ../images/apples.bmp
[ WARNING ] Image ../images/apples.bmp is resized from (416, 416) to (416, 416)
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Can't find a DetectionOutput layer in the topology
[ ERROR ] Output item should have 7 as a last dimension
[ INFO ] Loading model to the device
Traceback (most recent call last):
  File "/home/mason/intel/openvino/inference_engine/samples/python/object_detection_sample_ssd/object_detection_sample_ssd.py", line 213, in <module>
    sys.exit(main() or 0)
  File "/home/mason/intel/openvino/inference_engine/samples/python/object_detection_sample_ssd/object_detection_sample_ssd.py", line 166, in main
    exec_net = ie.load_network(network=net, device_name=args.device)
  File "ie_api.pyx", line 178, in openvino.inference_engine.ie_api.IECore.load_network
  File "ie_api.pyx", line 187, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Unexpected CNNNetwork format: it was converted to deprecated format prior plugin's call
I have attached both sets of model files to this post. In addition, here are the exact steps I used to produce the models (a minimal load-check snippet follows the SSD steps):
SSD Mobilenet v2:
1. Downloaded ssd_mobilenet_v2_coco_2018_03_29 directly from the Intel page
2. Converted the model using the following command:
/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ssd_mobilenet_v2_coco_2018_03_29/pipeline.config --input_shape [1,300,300,3] --reverse_input_channels --output_dir ./ --output="detection_boxes,detection_classes,detection_scores,num_detections"
3. Executed the model on the UP board using the following command:
python3 ~/intel/openvino/inference_engine/samples/python/object_detection_sample_ssd/object_detection_sample_ssd.py -m frozen_inference_graph.xml -i ../images/apples.bmp -d HDDL
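As a side note, a minimal load check along these lines can separate a plugin load failure from the sample's pre/post-processing (a sketch assuming the IR file names from step 2 and the 2020.x Python API):

# Minimal sketch: load the converted IR on HDDL and nothing else.
# File names are assumptions matching the conversion step above.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_inference_graph.xml",
                      weights="frozen_inference_graph.bin")
# If this call raises, the problem is in the HDDL plugin load itself,
# not in the sample's input preparation or output parsing.
exec_net = ie.load_network(network=net, device_name="HDDL")
print("Network loaded successfully on HDDL")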
YOLO v3 Model:
Converted the model using the instructions here; a typical form of that command is sketched below.
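The conversion command from those instructions has roughly the following shape (the frozen graph file name and paths are assumptions from the standard TF YOLOv3 flow, not my exact invocation):

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
  --input_model frozen_darknet_yolov3_model.pb \
  --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json \
  --batch 1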
Hello Mason,
I was able to reproduce your error using the Python version of object_detection_sample_ssd. However, the C++ version in ~/inference_engine_samples_build/intel64/Release/object_detection_sample_ssd works correctly.
This is actually a known bug and there is no ETA for a fix yet.
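As a workaround, you can invoke the C++ sample with the same arguments as your Python run, along these lines (model and image paths assumed to match your earlier command):

~/inference_engine_samples_build/intel64/Release/object_detection_sample_ssd -m frozen_inference_graph.xml -i ../images/apples.bmp -d HDDL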
Regards,
I thought it worth posting that this bug is not present in release 2020.1, so downgrading could provide a temporary fix.
