Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Siemens TM NPU with Movidius Myriad X

lf96player
Beginner

Hi,

 

I need help. I bought a Siemens TM NPU device that has a Movidius Myriad X VPU.

I need instructions on how to convert a TensorFlow neural network to a .blob file.

Which version of the Intel OpenVINO toolkit should I use? Should I install it with pip?

I am using Visual Studio Code with Python 3.9.13. I tried installing v2023.3.0, but when I ran the script to convert the TensorFlow model to IR format, I got an error that OpenVINO is missing.

 

Also, after converting the model to IR format (.xml and .bin), how do I convert it to .blob?

 

6 Replies
Aznie_Intel
Moderator

Hi lf96player,

 

Thanks for reaching out. Support for the Myriad plugin is only available in OpenVINO 2022.3.1 LTS. You may Install OpenVINO Runtime using an Archive File based on your operating system.

 

Meanwhile, for blob creation, you may refer to this blob creation and memory utility documentation. If you want to create a blob file from your existing model, you can use the OpenVINO Compile Tool with this specific command:

./compile_tool -m <path_to_model>/model_name.xml -d MYRIAD -o model_xml_file.blob
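Putting the two steps together, the SavedModel → IR → blob flow can be sketched as follows. This is only an illustration of the command structure (the directory names are placeholders, not from this thread):

```python
import shlex

# Step 1: convert the TensorFlow SavedModel to OpenVINO IR (.xml + .bin)
# with Model Optimizer.
mo_cmd = [
    "mo",
    "--saved_model_dir", "saved_model",   # placeholder path
    "--output_dir", "ir_out",             # placeholder path
    "--compress_to_fp16",
]

# Step 2: compile the IR into a device-specific .blob with compile_tool.
compile_cmd = [
    "compile_tool",
    "-m", "ir_out/saved_model.xml",
    "-d", "MYRIAD",
    "-o", "ir_out/saved_model.blob",
]

print(shlex.join(mo_cmd))
print(shlex.join(compile_cmd))
```

Both commands would normally be run from a shell where the OpenVINO environment has been set up.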

 

Hope this helps.

 


Regards,

Aznie


lf96player
Beginner

Hi,

Thank you, that worked. But when I uploaded the .blob file to the SD card for the TM NPU and called the neural network, I got this error: ValueError: Input 0 is incompatible with layer of network: expected shape=(0, 2, 1, 0) does not match the found shape.

The neural network is called inside the TM NPU with this line of code:

net.run(input_values)   
I tried many combinations of input values, but it doesn't work. I don't know why it has these zeros before the 2 and after the 1...
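One possible cause, assuming the converter padded the flat 1-D input up to a 4-D layout (which the 4-element expected shape in the error hints at, though the error message itself does not confirm it): the runtime would then expect a 4-D tensor, so the two input values must be reshaped rather than passed as a flat list. A minimal numpy sketch:

```python
import numpy as np

# The Keras model takes a flat 2-element input.
flat_input = np.array([2.0, 1.0], dtype=np.float32)

# If the converted model expects a 4-D layout such as (1, 2, 1, 1)
# (batch, channels, height, width -- an assumption, not confirmed by
# the error message), the same two values must be reshaped first.
nchw_input = flat_input.reshape(1, 2, 1, 1)

print(nchw_input.shape)  # (1, 2, 1, 1)
```

Whether `net.run` accepts a numpy-style multidimensional buffer on the TM NPU would need to be checked against the Siemens documentation.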

 

The neural network generated from Python is very simple:

 

import numpy as np
import tensorflow as tf

Kp = 6
Ki = 3
# Define the neural network using TensorFlow
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(2,)),
    tf.keras.layers.Dense(1, use_bias=False)  # One output node, no bias
])

# Set the weights of the model
model.layers[0].set_weights([np.array([[Ki], [Kp]])])
print(tf.version.VERSION)
# Save the model in SavedModel format
model.save('saved_model')  # directory name is illustrative
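For reference, a Dense(1, use_bias=False) layer is just a matrix multiply, so the network's expected output can be checked with plain numpy (same weights as in the code above, input values chosen arbitrarily):

```python
import numpy as np

Kp = 6
Ki = 3
weights = np.array([[Ki], [Kp]], dtype=np.float32)  # same as set_weights above

x = np.array([[1.0, 2.0]], dtype=np.float32)  # arbitrary 2-element input
y = x @ weights  # Dense(1, use_bias=False) is a plain matrix multiply

print(y)  # [[15.]]  ->  1*Ki + 2*Kp = 3 + 12
```

A correct blob should reproduce these values on the device for the same inputs.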

 

In the attachment is the SD card content that goes into TM NPU.

 
Aznie_Intel
Moderator

Hi lf96player,

 

What is the name of the model you are using? Please note that only models with static shapes are supported on the NPU. You may check the information and features on this NPU Device page.

 

On the other hand, I have tested your blob file and observed an error indicating that your model needs to be recompiled with the required precision. For the NPU plugin, the supported data types are FP32 and FP16. It is also important to make sure your XML + BIN and blob files can be inferred and are compatible with OpenVINO. If the model itself is incompatible with OpenVINO, issues are expected no matter what networks/layers are used. You may run your Intermediate Representation (IR) XML + BIN files with Benchmark_App to validate your model before converting it into a blob file.

 

 

Regards,

Aznie


lf96player
Beginner

Hi @Aznie_Intel 

 

Thank you for your answer. I decided to simplify my neural network even more. It should act like a multiplication operation, e.g. if the input is 2, the output is 6. Here is the Python code for that:

import numpy as np
import tensorflow as tf

# Define the neural network using TensorFlow
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(1,)),  # One input node
    tf.keras.layers.Dense(1, use_bias=False)  # One output node, no bias
])

# Set the weight of the model using NumPy
model.layers[0].set_weights([np.array([[3]])])  # Weight to multiply the input by 3

# Choose some input number
input_number = 2

# Predict the output using the neural network
output = model.predict(np.array([[input_number]]))[0][0]

print(f"Input: {input_number}, Output: {output}")
#save model in .pb format
#model.save('some directory')
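Since this model is a single-weight Dense layer with no bias, the expected output is easy to verify without TensorFlow, using the same weight and input as the code above:

```python
import numpy as np

weight = np.array([[3.0]], dtype=np.float32)        # same weight as set_weights above
input_number = np.array([[2.0]], dtype=np.float32)  # same test input as above

# Dense(1, use_bias=False) reduces to a single multiplication.
output = float(input_number @ weight)

print(output)  # 6.0
```

Any output other than 6.0 from the converted blob for input 2 would point at a conversion or input-shape problem rather than the network itself.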

As a result of this code, I got two folders (assets and variables) and two .pb files (keras_metadata.pb and saved_model.pb). These files are included in the attached zip file.

After that, I used the Model Optimizer tool with this command:

mo --saved_model_dir "input directory" --output_dir "output directory" --input_shape [1,1,1,1] --compress_to_fp16

When executing this command, I am unsure whether it uses keras_metadata.pb or saved_model.pb, since I don't know how to explicitly specify which file to use. I hope it is saved_model.pb.

The .bin and .xml files generated this way are also included in the attached zip file. I tried benchmark_app and it worked, but it showed that it is working with f32. I read that the Myriad X works with the FP16 format.
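benchmark_app reporting f32 may refer only to the input/output precision; the weights inside the IR can still be FP16 after --compress_to_fp16. One way to check, assuming the usual IR layout, is to look for FP16 precision attributes in the generated .xml. A stdlib sketch on an illustrative fragment (a real file would be loaded with ET.parse("saved_model.xml")):

```python
import xml.etree.ElementTree as ET

# Illustrative fragment of an IR .xml; the layer names and structure
# here are assumptions for demonstration, not taken from the real file.
ir_snippet = """
<net name="saved_model" version="11">
  <layers>
    <layer id="1" name="weights" type="Const">
      <output>
        <port id="0" precision="FP16"/>
      </output>
    </layer>
  </layers>
</net>
"""

root = ET.fromstring(ir_snippet)
precisions = {p.get("precision") for p in root.iter("port")}
print(precisions)
```

If no FP16 ports appear in the real .xml, the compression step likely did not take effect.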

The --input_shape parameter is [1,1,1,1]. Should I write it differently? I am not doing any image processing.
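Since the Keras model was defined with input_shape=(1,), a 2-D shape of [1,1] (batch, features) may match the model's original layout better than the 4-D [1,1,1,1]; this is an assumption worth checking against the shape listed in the generated .xml's input layer. In numpy terms:

```python
import numpy as np

# Model defined with InputLayer(input_shape=(1,)): one feature per sample.
# With batch size 1, the natural tensor shape is (1, 1), not (1, 1, 1, 1).
value = np.array([[2.0]], dtype=np.float32)

print(value.shape)  # (1, 1)
```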

 

As the last step, I used this command to generate the .blob file:

compile_tool -m "directory\saved_model.xml" -d MYRIAD -o "directory\saved_model.blob" -c myriad.conf

This didn't work until I added -c myriad.conf. I created a myriad.conf file inside the folder where compile_tool is located; that file contains this:

MYRIAD_ENABLE_MX_BOOT NO

The .blob file is also included in the attached folder.

 

What do you mean by "model name"? And how can I check whether the .blob file can be inferred and is compatible with OpenVINO?

 

Now when I call the neural network on the NPU using net.run(value), I get this error:

 [62.985873] [LRT] Assertion failed in
file: D:\Git\et200.em_mp.tm_npu\em_tmai\_tool\intelMDK\mdk\projects\FathomRun\leon\mvnciResourceManager.cpp, line 375, function: hwConfig
condition: 0 <= idx && idx < (int)count(all_.hwConfigMask_)
message:
Exited with error code 1

I defined value as a list or as an array, e.g. value = [2] or value = uarray.array('i', [2]).

 

 

 

Aznie_Intel
Moderator

Hi lf96player,

 

For your information, NPU support in OpenVINO is validated only on the NPU 3720 platform and is currently available only with the Archive distribution of OpenVINO™. The Siemens TM NPU is not a validated platform in OpenVINO. NPU support in OpenVINO is still under active development and may offer a limited set of supported OpenVINO features.

 

Also, offline compilation and blob import are supported, but only for development purposes. Pre-compiled models (blobs) are not recommended for use in production. Blob compatibility across different OpenVINO versions/NPU driver versions is not guaranteed.

 

Hope this answers your query.

 

 

Regards,

Aznie


Aznie_Intel
Moderator

Hi lf96player,


This thread will no longer be monitored since we have provided information. If you need any additional information from Intel, please submit a new question.



Regards,

Aznie

