Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO crashing during inference

davius
Beginner

Hello,

I'm trying to run a converted Keras model in OpenVINO.

The conversion process went well, but OpenVINO crashes when my program reaches inference.

No error message, no log; Python simply exits silently.

 

I'm running OpenVINO from a Python script on a Windows 10 machine.

Versions: Python 3.8.5, OpenVINO 2021.3.0, TensorFlow 2.3.1.

 

Here is the doc I used as a reference: https://medium.com/analytics-vidhya/tutorial-on-how-to-run-inference-with-openvino-in-2021-a96e5e7c99f8

 

I'm also able to reproduce the issue on Raspberry Pi OS with an Intel VPU. I get a "segmentation fault" error, but no more information.

 

I have been able to run some demos (object detection in Python), so my OpenVINO installation seems to work.

 

More than a solution, I would like a methodology to troubleshoot this. Is there any parameter to get more verbose output, or a log file to look in?

 

I can send the model or code by PM if necessary.

 

Thanks in advance 
David.

12 Replies
Zulkifli_Intel
Moderator

Hi David,


Thank you for reaching out to us.


Please share your model with us, so we can investigate this issue further.


Regards,

Zulkifli


davius
Beginner

Hello,

 

Please find the model attached (original TensorFlow + OpenVINO).

 

David

Zulkifli_Intel
Moderator

Hello David,

 

Thank you for sharing your model with us.

 

Can you share more information on the following:

 

  • Which Model Optimizer parameters are you using? For example: "python mo_tf.py --input_model model-original.pb --input_shape [1,224,224,3] --data_type=FP16"
  • Which Python demo are you running?

 

Regards,

Zulkifli

 

davius
Beginner

Hello,

 

Here is the Model Optimizer command line I use:

python "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\mo_tf.py" --input_model $tfmodel --output_dir $modelsPath --model_name $ovmodel --data_type FP32 --batch 1

On Windows (used to run the optimizer and to test the model) I'm using Miniconda; on the Raspberry Pi, I use a standard Python 3 installation.

David.

 

Zulkifli_Intel
Moderator

Hello David,


I successfully converted your model to IR format and validated it using benchmark_app; the model is working fine.

The model seems to be workable (I've confirmed this with the Benchmark app), but it is just not intended for Object Detection sample usage, as I received an error when running it with that sample.


 

Can you share more information on your model topology, is it a detection model or a classification model? And which demo are you using?

 

For the segmentation fault, which Raspberry Pi are you using? Are you using an NCS or NCS2 when you run the demo? The segmentation fault suggests a memory issue.


Regards,

Zulkifli


davius
Beginner

Hello Zulkifli,

Thank you for this analysis.

I did not use this model as an input for Intel's demos. The model is used in a self-driving car project based on the DonkeyCar framework (https://github.com/autorope/donkeycar/blob/dev/donkeycar/parts/keras.py).

 

The model takes images as input and returns target steering and throttle values.

 

I'm using NCS2.

 

David.

Maksim_S_Intel
Employee

> More than a solution, I would like a methodology to troubleshoot this. Is there any parameter to get more verbose output, or a log file to look in?

You can try running the application in a debugger (e.g. gdb) and capturing a stack trace after the crash:

$ gdb --args <application and arguments>
<... some text here ...>
(gdb) run
<... crash ...>
(gdb) backtrace

With this stack trace you'll be able to identify the last function called from your application and the library in which the crash happened.
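
For a crash inside a Python process, the standard-library faulthandler module can also dump the Python-level traceback on a fatal signal such as SIGSEGV, without a debugger. A minimal sketch (enable it before loading the network):

import faulthandler
faulthandler.enable()  # dump the Python traceback on SIGSEGV, SIGFPE, SIGABRT, ...

# ... load the network and run inference as usual ...

The same can be enabled without any code changes by running the script as "python -X faulthandler your_script.py".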

davius
Beginner

Hello,


Thank you for this information.

David.

Zulkifli_Intel
Moderator

Hello David,


It seems like the model is working fine. You should write your own inference application code to make it work, or try a different inference sample or demo that matches your model topology.
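
For illustration, here is a minimal synchronous sketch of the kind of standalone application meant above (a sketch only; the IR file names and the zero-filled input are placeholders, not your actual model or data):

from openvino.inference_engine import IECore
import numpy as np

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR files
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Build a dummy input matching the network's expected shape (NCHW for most IRs)
shape = net.input_info[input_name].input_data.shape
data = np.zeros(shape, dtype=np.float32)

result = exec_net.infer(inputs={input_name: data})
print(result[output_name])

The synchronous exec_net.infer() call manages the request and blobs internally, which removes most of the manual TensorDesc/Blob handling as a source of crashes.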


Regards,

Zulkifli


davius
Beginner

Hello,

Many thanks @Zulkifli_Intel! If the model works on your side, I'm probably close...

Here is the code I use for inference.

This code is executed as part of the DonkeyCar framework. I think I'll try to make it standalone to rule out any conflict...

import os
from typing import Tuple, Union

import numpy as np
from openvino.inference_engine import IECore, TensorDesc, Blob

ONE_BYTE_SCALE = 1.0 / 255.0

class Openvino:
    def __init__(self, device_type, model_path):
        openvino_xml = os.path.splitext(model_path)[0] + ".xml"
        openvino_bin = os.path.splitext(model_path)[0] + ".bin"

        print("Loading OpenVINO model using " + device_type + " device type...")
        ie_core_handler = IECore()
        print("Available OpenVINO devices: " + str(ie_core_handler.available_devices))

        network = ie_core_handler.read_network(model=openvino_xml, weights=openvino_bin)
        exec_net = ie_core_handler.load_network(network, device_name=device_type, num_requests=1)
        self.inference_request = exec_net.requests[0]

        # Take the first input and output blob names from the request
        input_blobs = self.inference_request.input_blobs
        output_blobs = self.inference_request.output_blobs
        self.input_blob_name = next(iter(input_blobs))
        self.output_blob_name = next(iter(output_blobs))

        print("Model loaded.")

    def run(self, image, other_arr: np.ndarray = None) \
            -> Tuple[Union[float, np.ndarray], ...]:
        # Scale the 8-bit image to [0, 1] and add a batch dimension
        img_arr = np.asarray(image)
        img_arr = img_arr.astype(np.float32) * ONE_BYTE_SCALE
        img_arr = img_arr.reshape((1,) + img_arr.shape)

        imageHeight, imageWidth, imageColorDepth = image.shape

        # Describe the input tensor and wrap the array in a Blob
        tensor_description = TensorDesc(precision="FP32", dims=(1, 3, imageHeight, imageWidth), layout='NCHW')
        blob = Blob(tensor_description, img_arr)

        self.inference_request.set_blob(blob_name=self.input_blob_name, blob=blob)
        self.inference_request.infer()

        # output = self.inference_request.output_blobs[self.output_blob_name].buffer
        steering = 0  # Debug
        throttle = 0  # Debug
        return steering, throttle

The output handling is not implemented yet, since the Python script crashes just before reaching it.

Any idea where this might come from?

David.

davius
Beginner

I finally made it work, in a simpler version.
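
For anyone hitting the same crash, here is a minimal sketch of one simpler approach that avoids building the Blob by hand (hypothetical code, not the exact version from this project; it assumes the IR expects NCHW input while the camera image arrives as HWC, which is one plausible mismatch in the code posted above, where an NHWC array was paired with an NCHW TensorDesc):

img = np.asarray(image).astype(np.float32) * ONE_BYTE_SCALE  # HWC, scaled to [0, 1]
img = img.transpose((2, 0, 1))[np.newaxis, ...]              # HWC -> NCHW, add batch dim
result = exec_net.infer(inputs={input_name: img})            # simple synchronous call
output = result[output_name]                                 # interpretation depends on the model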

Thanks a lot!

David.

Zulkifli_Intel
Moderator

Hello David,

 

I'm glad to hear that. Since your issue has been solved, this thread will no longer be monitored by Intel. If you need any additional information from Intel, please submit a new question.

 

Regards,

Zulkifli

 
