bineeshpc
Beginner

Object detection SSD inference sample in Python stopped working after we generated the IR with OpenVINO 2021.1

We had an IR that we generated with OpenVINO 2020.4. That IR worked properly with the default Python inference sample.

We then generated a new IR with OpenVINO 2021.1.

It gives the following error:

ValueError: get_shape was called on a descriptor::Tensor with dynamic shape

Sample file

/opt/intel/openvino_2021/inference_engine/samples/python/object_detection_sample_ssd/object_detection_sample_ssd.py

Logs with error

-----------------

[ INFO ] Loading Inference Engine
[ INFO ] Loading network files:
latest_invoice_ir/frozen_inference_graph.xml
latest_invoice_ir/frozen_inference_graph.bin
[ INFO ] Device info:
CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2021.1.0-1237-bece22ac675-releases/2021/1
inputs number: 2
input shape: [1, 3]
input key: image_info
input shape: [1, 3, 600, 1024]
input key: image_tensor
[ INFO ] File was added:
[ INFO ] /home/ubuntu/Desktop/Test_3/Invoice_107.jpg
[ WARNING ] Image /home/ubuntu/Desktop/Test_3/Invoice_107.jpg is resized from (835, 700) to (600, 1024)
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import fnmatch, glob, traceback, errno, sys, atexit, locale, imp
Traceback (most recent call last):
File "/opt/intel/openvino_2021/inference_engine/samples/python/object_detection_sample_ssd/object_detection_sample_ssd.py", line 207, in <module>
sys.exit(main() or 0)
File "/opt/intel/openvino_2021/inference_engine/samples/python/object_detection_sample_ssd/object_detection_sample_ssd.py", line 147, in main
output_dims = output_info.shape
ValueError: get_shape was called on a descriptor::Tensor with dynamic shape
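
The traceback shows the sample reading output_info.shape directly at line 147, which raises once any output dimension is dynamic. The sketch below illustrates the failure pattern and a defensive workaround; it uses a mock object in place of the real InferenceEngine output descriptor (an assumption for illustration, since the real class lives inside the openvino package).

```python
# Mock of an output descriptor whose .shape property fails on dynamic dims,
# mimicking the "get_shape was called on a descriptor::Tensor with dynamic
# shape" error from the log above. The real object is part of the
# InferenceEngine Python API; this stand-in is only for illustration.
class MockOutputInfo:
    def __init__(self, dims):
        self._dims = dims  # -1 marks a dynamic dimension

    @property
    def shape(self):
        if any(d < 0 for d in self._dims):
            raise ValueError(
                "get_shape was called on a descriptor::Tensor with dynamic shape")
        return list(self._dims)


def safe_output_dims(output_info, fallback=None):
    """Return the static output shape, or a fallback when it is dynamic."""
    try:
        return output_info.shape
    except ValueError:
        return fallback


print(safe_output_dims(MockOutputInfo([1, 1, 100, 7])))            # [1, 1, 100, 7]
print(safe_output_dims(MockOutputInfo([1, 1, -1, 7]), "dynamic"))  # dynamic
```

A guard like safe_output_dims would keep the sample from crashing, but the underlying cause is the IR itself carrying a dynamic output shape after reconversion with 2021.1.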

Output for the earlier version of the IR was as below (working properly).
----------------------------------------------------------------
[ INFO ] Loading Inference Engine
[ INFO ] Loading network files:
/home/ubuntu/Desktop/invoice_enhancement_team/model1_ir/frozen_inference_graph.xml
/home/ubuntu/Desktop/invoice_enhancement_team/model1_ir/frozen_inference_graph.bin
[ INFO ] Device info:
CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2021.1.0-1237-bece22ac675-releases/2021/1
inputs number: 2
input shape: [1, 3]
input key: image_info
input shape: [1, 3, 600, 1024]
input key: image_tensor
[ INFO ] File was added:
[ INFO ] /home/ubuntu/Desktop/Test_3/Invoice_107.jpg
[ WARNING ] Image /home/ubuntu/Desktop/Test_3/Invoice_107.jpg is resized from (835, 700) to (600, 1024)
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Creating infer request and starting inference
[ INFO ] Processing output blobs
[0,1] element, prob = 0.994617 (319,113)-(424,165) batch id : 0 WILL BE PRINTED!
[1,2] element, prob = 0.0116822 (311,112)-(433,169) batch id : 0
[2,3] element, prob = 0.014132 (40,627)-(232,660) batch id : 0
[3,4] element, prob = 0.960672 (483,513)-(655,549) batch id : 0 WILL BE PRINTED!
[ INFO ] Image out.bmp created!
[ INFO ] Execution successful

Iffa_Intel
Moderator

Greetings,


Have you tried running it with the demo at openvino_2021/deployment_tools/inference_engine/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py?


Does it produce a successful result, or does it also report errors?



Sincerely,

Iffa


bineeshpc
Beginner

Thanks for the reply.

I tried the Python demo file you suggested. That program also failed, with a different error.

Program output

-------------------

[ INFO ] Initializing Inference Engine...
[ INFO ] Loading network...
[ INFO ] Using USER_SPECIFIED mode
[ INFO ] Reading network from IR...
[ INFO ] Loading network to plugin...
[ INFO ] Use SingleOutputParser
[ INFO ] Reading network from IR...
[ INFO ] Loading network to plugin...
[ INFO ] Use SingleOutputParser
[ INFO ] Starting inference...
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
To switch between min_latency/user_specified modes, press TAB key in the output window
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import fnmatch, glob, traceback, errno, sys, atexit, locale, imp
Traceback (most recent call last):
File "/opt/intel/openvino_2021/deployment_tools/inference_engine/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py", line 582, in <module>
sys.exit(main() or 0)
File "/opt/intel/openvino_2021/deployment_tools/inference_engine/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py", line 556, in main
detectors[mode](frame, next_frame_id, {'frame': frame, 'start_time': start_time})
File "/opt/intel/openvino_2021/deployment_tools/inference_engine/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py", line 167, in __call__
inputs, preprocessing_meta = self.preprocess(inputs)
File "/opt/intel/openvino_2021/deployment_tools/inference_engine/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py", line 254, in preprocess
img = self._resize_image(inputs[self.image_blob_name], (self.w, self.h), self.keep_aspect_ratio_resize)
KeyError: 'image_tensor'
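
The KeyError suggests the demo looks up the image input under a name it chose itself, while this IR has two inputs (image_info with shape [1, 3] and image_tensor with shape [1, 3, 600, 1024], per the earlier log). A common way such demos identify the image input is by rank: the single 4D (N, C, H, W) entry. The sketch below shows that selection logic under that assumption; find_image_blob_name is a hypothetical helper, not part of the demo.

```python
def find_image_blob_name(input_shapes):
    """Pick the image input by rank: the single 4D (N, C, H, W) entry.

    input_shapes: dict mapping input name -> shape list, as reported by
    the network. Assumes exactly one input is 4-dimensional.
    """
    candidates = [name for name, shape in input_shapes.items()
                  if len(shape) == 4]
    if len(candidates) != 1:
        raise RuntimeError("expected one 4D image input, found %r" % candidates)
    return candidates[0]


# Input shapes as printed in the log above
inputs = {"image_info": [1, 3], "image_tensor": [1, 3, 600, 1024]}
print(find_image_blob_name(inputs))  # image_tensor
```

If the demo instead assumes a single-input network, a two-input Faster R-CNN-style IR like this one would explain the KeyError regardless of shapes.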

Iffa_Intel
Moderator

Greetings,

 

We have validated both samples on our end and they work fine.

object_detection_demo_ssd_async.py works with images, video files, and webcam feeds.

Meanwhile, object_detection_sample_ssd.py requires an image as the input file (this is the one you are currently using).

Attached are the validated results.

Please refer to these photos to see the command used to run it.

 

Sincerely,

Iffa

 

bineeshpc
Beginner

From the directory name in the screenshot (in your reply), it looks like the model was developed with TensorFlow 2.

However, our model was developed with TensorFlow 1, which worked properly with OpenVINO 2020.4.

After we upgraded to the October release of OpenVINO, it stopped working. Does this mean that only TensorFlow 2 models will be supported from now on?
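
For reference, converting a TensorFlow 1 Object Detection API frozen graph with Model Optimizer is typically done with a command along these lines. All paths below are placeholders, and the transformations .json must match the model family and TF version (this IR has an image_info input, which points at a Faster R-CNN-style model); this is a command sketch, not a verified invocation for this exact model.

```shell
# Placeholder paths; adjust <openvino_dir> and the model/pipeline files.
python3 <openvino_dir>/deployment_tools/model_optimizer/mo_tf.py \
    --input_model frozen_inference_graph.pb \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --transformations_config <openvino_dir>/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
    --reverse_input_channels \
    --output_dir latest_invoice_ir
```

If the wrong support .json is used (or one from a different OpenVINO release), the resulting IR can differ in input/output shapes, which may be related to the dynamic-shape error above.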

Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Iffa

