Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Transitioning from testing a Keras Image Classifier (.h5) to OpenVINO (.xml & .bin)

Stamm__Taylor
Beginner

Hello, I am somewhat new to machine learning and am currently developing an image classifier to detect drones flying in the air (the end goal is a 30-50 foot dome of airspace). I trained a model with Keras that classified between "drone" and "not drone" accurately enough for my liking, and to achieve smooth real-time drone detection on a Raspberry Pi, I bought a Neural Compute Stick 2 to run the model. I understood that to use my model with the Neural Compute Stick 2, I needed the Model Optimizer to convert it for the OpenVINO environment. I successfully used the Model Optimizer to convert my .h5 Keras model to IR (.xml and .bin files).

To run the model from the .h5 file, it was as simple as loading it with load_model from keras.models and calling model.predict to obtain the image predictions. Then I labelled the current frame with its classification and prediction confidence.
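That loop looked roughly like the following sketch (this is illustrative, not the attached code; the model file name, 224x224 input size, and two-class label order are assumptions):

```python
import numpy as np

LABELS = ("drone", "not drone")  # assumed class ordering

def caption(pred_row, labels=LABELS):
    """Turn one row of model.predict output into an overlay caption."""
    idx = int(np.argmax(pred_row))
    return "%s: %.1f%%" % (labels[idx], 100.0 * float(pred_row[idx]))

if __name__ == "__main__":
    # Keras/OpenCV imports are kept here so `caption` above stays
    # usable without them installed.
    import cv2
    from keras.models import load_model

    model = load_model("drone_classifier.h5")  # assumed file name
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Resize, scale to [0, 1], and add a batch dimension.
        batch = np.expand_dims(cv2.resize(frame, (224, 224)) / 255.0, axis=0)
        preds = model.predict(batch)
        cv2.putText(frame, caption(preds[0]), (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("drone detector", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
```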

My issue is that I am unsure how to use these .xml and .bin files, optimized for OpenVINO, to do the same thing I was doing with the .h5 file. I searched online but could not find any good examples that directly related to my model.

I have attached pictures of the testing code I used to implement the .H5 model file and the kind of output frame I got with the model.

Thank you for your help!

6 Replies
Shubha_R_Intel
Employee

Dearest Taylor, what a cool project you are working on! Sounds like fun! To convert a Keras model to IR (*.xml and *.bin) you must first convert the Keras model to a TensorFlow frozen protobuf; you cannot feed a Keras model into the Model Optimizer directly. I have included a code snippet showing how to do that below. Once you have successfully created a TensorFlow frozen protobuf, you can feed it into mo_tf.py to generate your IR.

import tensorflow as tf
from tensorflow.python.framework import graph_util, graph_io
from keras import backend as K
from keras.models import load_model


def export_keras_to_tf(input_model, output_model, num_output):
    """Freeze a Keras .h5 model into a TensorFlow protobuf for mo_tf.py."""
    print('Loading Keras model:', input_model)
    keras_model = load_model(input_model)
    print(keras_model.summary())

    # Give each output tensor a stable, predictable name.
    predictions = [None] * num_output
    prediction_node_names = [None] * num_output
    for i in range(num_output):
        prediction_node_names[i] = 'output_node' + str(i)
        predictions[i] = tf.identity(keras_model.outputs[i],
                                     name=prediction_node_names[i])

    sess = K.get_session()

    # Replace variables with constants and strip training-only nodes.
    constant_graph = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), prediction_node_names)
    infer_graph = graph_util.remove_training_nodes(constant_graph)

    graph_io.write_graph(infer_graph, '.', output_model, as_text=False)
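The Model Optimizer step might then look like the following (the install path, frozen-graph name, and 224x224 input shape are assumptions; FP16 is the data type the Myriad plugin expects):

```shell
# Hypothetical invocation; adjust paths and input shape to your model.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --input_model frozen_model.pb \
    --input_shape "[1,224,224,3]" \
    --data_type FP16 \
    --output_dir ir/
```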

 

Stamm__Taylor
Beginner

Shubha R. (Intel) wrote:

Dearest Taylor, what a cool project you are working on! Sounds like fun! To convert a Keras model to IR (*.xml and *.bin) you must first convert the Keras model to a TensorFlow frozen protobuf; you cannot feed a Keras model into the Model Optimizer directly. Once you have successfully created a frozen protobuf, you can feed it into mo_tf.py to generate your IR.

Thank you, but I had already converted the model to IR before posting. My question is rather how I load the .xml and .bin files and pull predictions from them to classify an image, the way I previously did with model.predict on the Keras model.

om77
New Contributor I

Hi Taylor,

Aren't any of the Python samples, such as object_detection_demo_ssd_async.py or object_detection_demo_yolov3.py, suitable as a reference here?

Shubha_R_Intel
Employee

Hi Taylor, om77 is correct. Please take a look at the samples under deployment_tools\inference_engine\samples, in particular the ones om77 mentioned.
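The core of those samples, reduced to a classifier, is only a few Inference Engine calls. A minimal sketch follows; the file names and two-class label order are assumptions, and the exact Python API names vary between OpenVINO releases (e.g. input_info vs. inputs), so check against your installed version:

```python
import numpy as np

def preprocess(frame, shape):
    """Layout conversion: HWC (OpenCV) -> NCHW (IR input).
    Assumes `frame` is already resized to the network's H x W."""
    n, c, h, w = shape
    blob = frame.transpose((2, 0, 1))  # HWC -> CHW
    return blob.reshape((n, c, h, w)).astype(np.float32)

def decode(probs, labels=("drone", "not drone")):
    """Map the network's output vector to a (label, confidence) pair."""
    probs = np.squeeze(probs)
    idx = int(np.argmax(probs))
    return labels[idx], float(probs[idx])

if __name__ == "__main__":
    # OpenVINO/OpenCV imports are kept here so the helpers above stay
    # usable without them installed.
    import cv2
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="drone_classifier.xml",
                          weights="drone_classifier.bin")  # assumed names
    input_blob = next(iter(net.input_info))
    output_blob = next(iter(net.outputs))
    exec_net = ie.load_network(network=net, device_name="MYRIAD")  # NCS2

    n, c, h, w = net.input_info[input_blob].input_data.shape
    frame = cv2.imread("test.jpg")
    blob = preprocess(cv2.resize(frame, (w, h)), (n, c, h, w))
    result = exec_net.infer(inputs={input_blob: blob})
    label, conf = decode(result[output_blob])
    print("%s: %.2f" % (label, conf))
```

The result dictionary returned by exec_net.infer plays the role of model.predict, so the labelling and overlay code from the Keras version can stay the same.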

Hope this helps, and thanks for using OpenVINO!

Shubha

CBell1
New Contributor II

Hi Taylor,

Please, how did you convert your Keras .h5 model to a TensorFlow .pb file?

Do you have an example?

Thank you

Shubha_R_Intel
Employee

Dear Cosma,

Please revisit your other post, where I attached the full Python script and gave an example.

Thanks!

Shubha
