Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

TensorFlow 2.0 Support

Peeren__Christian

Hello,

I was wondering if there is any update on OpenVINO support for TF 2.0. Can we expect support in the near future? What would also help a lot is a detailed tutorial for the workaround via TF 1.x's freeze_graph.py (https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit/topic/827537). Apparently, some people managed to get a TF 2.0 model running by using TF 1.x's freeze_graph function, but many crucial details of this workaround are undocumented and a properly detailed example is missing. I'm currently working on a project where a custom TF 2.0 model is supposed to be deployed on an edge device. My preferred solution is still the NCS2, but if we can't get it to work, I'm afraid there is no option left but to use a different device.

Thank you,

Christian

David_C_Intel
Employee

Hi Christian, 

Thank you for reaching out.

Currently, the Model Optimizer does not support TensorFlow (TF) 2.0. However, if you use TF 1.14's freeze_graph.py to freeze a TF 2.0 model, the resulting frozen .pb file should be accepted by the Model Optimizer.

As for future support, we cannot comment on upcoming releases, but you can check the Release Notes to stay aware of updates in the latest releases.
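
A rough sketch of that flow, for reference (not an official recipe; whether a given TF 2.0 SavedModel loads cleanly under TF 1.14 depends on the ops it contains, and the names below are placeholders):

# Sketch only -- run this under TF 1.14, not TF 2.0.
# Assumes the TF 2.0 model was exported beforehand with model.save("./saved_model").
# "./saved_model" and "Softmax/Softmax" are placeholder names; inspect your
# model's real output node, e.g. with the saved_model_cli tool.
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph="",                       # unused when a SavedModel dir is given
    input_saver="",
    input_binary=True,
    input_checkpoint="",
    output_node_names="Softmax/Softmax",  # placeholder: your model's output node
    restore_op_name="",
    filename_tensor_name="",
    output_graph="./frozen_graph.pb",
    clear_devices=True,
    initializer_nodes="",
    input_saved_model_dir="./saved_model",
)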

 

Best regards,

David

vkana3
Beginner

Hi @David_C_Intel 

I have created a model in TF 2.2 and saved the model, but couldn't convert it to IR format.

Could you please give sample code for using TF 1.14's freeze_graph.py to freeze a TF 2.0 model?

David_C_Intel
Employee

Hi vkana3,

Thanks for reaching out. Please refer to this OpenVINO™ Toolkit documentation about Converting a TensorFlow* Model. As stated in that documentation, you can also load non-frozen models into the Model Optimizer if your model is not already frozen.
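
For reference, one way that non-frozen path can look (a sketch only; the --saved_model_dir flag's TF 2.x coverage depends on your OpenVINO™ toolkit version, and the paths and names below are placeholders):

mo_tf.py --saved_model_dir ./saved_model --output_dir ./ir --model_name my_model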

For further assistance, please open a new thread as older discussions are no longer being monitored.

Best regards,

David C.

Peeren__Christian

Hi David,

Right, exactly, that is supposed to be the way. Even though I know this is rather a TF 2.0 topic, I'd be super grateful for a small example of how to do that. It could be a super simple model, just for the sake of demonstration. I've already wasted several days trying to go exactly this way, and I don't know what I'm missing. E.g., how am I passing my TF 2.0 model to TF 1.14's freeze_graph? Do I use SavedModel, or do I reconstruct the model in TF 1.14 and just transfer the weights from my TF 2.0 one?

If anyone could shed some light on this, it would be super helpful. I don't believe I'm the only one with these issues here.

Thanks,

Christian

 

Peeren__Christian

Update: Let's make it even easier. Attached below as code is a super simple model. Which steps do I need to take to prepare it for TF 1.14?

 

import tensorflow as tf

print(tf.__version__) # 2.0.0

# Initialize model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28), name="FlattenLayer"),
    tf.keras.layers.Dense(32, activation='relu', name="ReluLayer"),
    tf.keras.layers.Dense(10, activation='softmax', name="Softmax")
])

# Since we are only interested in the workflow of 
# exporting the model we skip the training

# TBD: Save model in such a way that TF1.14 model freeze can read it
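
(One possible way to fill in that TBD, sketched on the assumption that freeze_graph.py consumes the SavedModel format via --input_saved_model_dir:)

# Sketch: export the model in the TF SavedModel format so that
# TF 1.14's freeze_graph.py can read it via --input_saved_model_dir.
# "./saved_model" is a placeholder directory name.
model.save("./saved_model", save_format="tf")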


 

Peeren__Christian

Update: Maybe this could be an alternative workaround:

Step 1.) For my simple model above, I managed to freeze the graph in TF 2.0 with the code below:

 

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2


print(tf.__version__) # 2.0.0

# Initialize model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28), name="FlattenLayer"),
    tf.keras.layers.Dense(32, activation='relu', name="ReluLayer"),
    tf.keras.layers.Dense(10, activation='softmax', name="Softmax")
])

# Since we are only interested in the workflow of 
# exporting the model we skip the training


# ====================================================
# Freeze the model directly in TF 2.0:
# writes the graph to a .pb file with variables converted to constants
def freeze(model, outputdir='./'):

    # Convert Keras model to ConcreteFunction
    full_model = tf.function(lambda x: model(x))
    full_model = full_model.get_concrete_function(
        tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)
    )

    frozen_func = convert_variables_to_constants_v2(full_model)
    # Materialize the frozen GraphDef (variables folded into constants)
    graph_def = frozen_func.graph.as_graph_def()

    # Save the frozen graph from the frozen ConcreteFunction to disk
    tf.io.write_graph(graph_or_graph_def=graph_def,
                      logdir=outputdir,
                      name="frozen_graph.pb",
                      as_text=False)


    layers = [op.name for op in frozen_func.graph.get_operations()]

    print("Frozen model layers: ")
    for layerName in layers:
        print(f"layer: {layerName}")

    print("-" * 50)
    print("Frozen model inputs: ")
    print(frozen_func.inputs)
    print("Frozen model outputs: ")
    print(frozen_func.outputs)

# Run the TF2.0 freezer
freeze(model, './freeze')

 

This gave me the frozen_graph.pb, which I have attached with its logfile (see attached zip file).

Step 2.) I took this frozen graph and passed it through the Model Optimizer with this command:

mo_tf.py --input_model /tmp/*.pb --input_shape [1,28,28] --data_type FP16 --output_dir /tmp/ --model_name "examplemodel" --log_level INFO > /tmp/model_optimizer.log

After that, I got the examplemodel.bin/.xml/.mapping files and the model_optimizer.log, which is included.

Step 3.) (here it fails): I deployed examplemodel.bin and .xml on my edge device. Unfortunately, inference crashes when loading the model (see the openvino_fd_myriad files) with this error message:

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Check 'axis < static_cast<size_t>(input_rank)' failed at /teamcity/work/scoring_engine_build/releases_2020_1/ngraph/src/ngraph/op/gather.cpp:140:
While validating node 'Gather[Gather_96](patternLabel_92: float{10,20,30}, patternLabel_93: int64_t{5}, patternLabel_95: int64_t{1}) -> (??)':
The axis must => 0 and <= input_rank (axis: 4294967295).
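
For context, the loading step that crashes boils down to something like this (a minimal sketch of the 2020.x Inference Engine Python API, using the file names from step 2; exact API names may differ slightly by release):

# Sketch: load the IR and compile it for the NCS2 (MYRIAD plugin).
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model="examplemodel.xml", weights="examplemodel.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # crashes here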

Any idea what's going on? Could this be a viable alternative to TF 1.14's freeze_graph?

Thanks,

Christian

 

Anonymous
Not applicable

I have been trying for two days to freeze a custom TF2 model for OpenVINO. Your code is the closest I have got; however, when trying to convert to IR, it gives:


Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/up2b/COVID19DN/Model/Frozen/frozen_graph.pb
- Path for generated IR: /home/up2b/COVID19DN/Model
- IR output name: frozen_graph
- Log level: ERROR
- Batch: 1
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version: 2020.2.0-60-g0bc66e26ff
Illegal instruction

Can Intel please sort out their documentation? I waste more time using Intel tech than any other tech. Most Intel documentation no longer works, or never worked; dead links everywhere. Multiple solutions posted by users no longer work.

What is the official way to convert a custom TF model? In my case, it is based on DenseNet.

David_C_Intel
Employee

Hello Christian, 

Thank you for your reply.

Could you please try testing your model on a full version of OpenVINO™ toolkit on Windows, Linux or Mac OS?

This may be the same as an issue we are investigating on the Raspberry Pi. Please let us know if it works.

Regards,

David

Peeren__Christian

Hi David,

Yep, I tried that. Interestingly, it can load the model. It fails at inference time with the error below, but I guess this is rather specific to my model, unless you have another idea.

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Check 'input_elements % output_elements == 0' failed at /teamcity/work/scoring_engine_build/releases_2020_1/ngraph/src/ngraph/op/reshape.cpp:260:
While validating node 'Reshape[Reshape_2](Parameter_1127: float16{1,28}, Constant_1: int64_t{2}) -> (float16{1,784})':
Non-'-1' output dimensions do not evenly divide the input dimensions

Aborted (core dumped)

But so far, the bottom-line message: I can freeze models without TF 1.14's freeze_graph function, and I can load them in the full (latest) version of OpenVINO™ (Linux).

Best regards,

Christian

 

David_C_Intel
Employee

Hello Christian, 

You are right, this issue seems to be specific to your custom model.

I am glad you managed to freeze the models you needed successfully!

If you need further assistance, please feel free to reach out again.

Best regards,

David

Peeren__Christian

David, hold on. What is the bottom line here?

It's working on my full distribution of OpenVINO, but not on the edge device. Does that mean I have to wait until you have investigated the bug and released a new version for Raspbian?

Thanks,

Christian

David_C_Intel
Employee

Hi Christian, 

Thank you for your reply.

Regarding this error produced in Raspbian:

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Check 'axis < static_cast<size_t>(input_rank)' failed at /teamcity/work/scoring_engine_build/releases_2020_1/ngraph/src/ngraph/op/gather.cpp:140:
While validating node 'Gather[Gather_96](patternLabel_92: float{10,20,30}, patternLabel_93: int64_t{5}, patternLabel_95: int64_t{1}) -> (??)':
The axis must => 0 and <= input_rank (axis: 4294967295).

 

It is an issue with the latest OpenVINO™ toolkit version (2020.1). The "axis" error is caused by the new IR v10 format. 

As a workaround, you can generate the model with the following parameter:

--generate_deprecated_IR_V7

If that does not work, we recommend installing the previous version of the OpenVINO™ toolkit (2019 R3.1) on a Windows, Linux, or macOS system and converting the model there; this will generate the IR v7 format, which should work successfully with the Intel® Neural Compute Stick 2 on the Raspberry Pi.
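
For example, applied to the conversion command from your step 2 (same paths; the flag is the only addition):

mo_tf.py --input_model /tmp/*.pb --input_shape [1,28,28] --data_type FP16 --output_dir /tmp/ --model_name "examplemodel" --generate_deprecated_IR_V7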

 

Regards,

David

Peeren__Christian

Hi David,

Excellent, I'll give it a shot. Much appreciated!

Update: The example above works when deploying on the NCS2 with Raspbian :)

Thanks,

Christian
