Hi,
I need help. I bought a Siemens TM NPU device that has a Movidius Myriad X VPU.
I need instructions on how to convert a TensorFlow neural network to a .blob file.
Which version of the Intel OpenVINO Toolkit should I use, and should I install it with pip?
I am using Visual Studio Code with Python 3.9.13. I tried to install v2023.3.0, but when I ran a script to convert the TensorFlow model to IR format I got an error that OpenVINO is missing.
Also, after converting the model to IR format (.xml and .bin), how do I convert it to .blob?
Hi Lt96player,
Thanks for reaching out. Support for the MYRIAD plugin is only available in OpenVINO 2022.3.1 LTS. You may install the OpenVINO Runtime using an archive file for your operating system.
Meanwhile, for blob creation, you may refer to the blob creation and memory utility documentation. If you want to create a blob file from your existing model, you can use the OpenVINO Compile Tool with this command:
./compile_tool -m <path_to_model>/model_name.xml -d MYRIAD -o model_xml_file.blob
Hope this helps.
Regards,
Aznie
Hi,
Thank you, this worked. But when I uploaded the .blob file to the SD card for the TM NPU and called the neural network, I got this error: ValueError: Input 0 is incompatible with layer of network: expected shape=(0, 2, 1, 0) does not match the found shape.
The neural network is called inside the TM NPU with this line of code:
The neural network generated from Python is very simple:
import numpy as np
import tensorflow as tf

Kp = 6
Ki = 3

# Define the neural network using TensorFlow
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(2,)),
    tf.keras.layers.Dense(1, use_bias=False)  # One output node, no bias
])

# Set the weights of the model
model.layers[0].set_weights([np.array([[Ki], [Kp]])])

print(tf.version.VERSION)

# Save the model in SavedModel format
model.save('saved_model')  # placeholder output directory
In the attachment is the SD card content that goes into TM NPU.
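For reference, the two-input network above is just a weighted sum, so its expected behaviour can be pinned down with plain NumPy, independent of TensorFlow or the blob (a sketch; Kp and Ki as in the snippet above, the input values are arbitrary examples):

```python
import numpy as np

Kp = 6
Ki = 3

# Kernel laid out exactly as in set_weights: shape (2, 1), Ki first, then Kp
W = np.array([[Ki], [Kp]], dtype=np.float32)

# One batch of two inputs, shape (1, 2) -- the shape an
# InputLayer(input_shape=(2,)) model expects at inference time
x = np.array([[4.0, 5.0]], dtype=np.float32)

y = x @ W  # Dense(1, use_bias=False) is a plain matrix multiply
print(y)   # Ki*4 + Kp*5 = 3*4 + 6*5 = 42
```

If the runtime input fed to the blob does not have that (1, 2) layout, a shape-mismatch error like the one above is the likely result.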
Hi Lt96player,
What is the model name you are using? Please note that only models with static shapes are supported on the NPU. You may check the information and features on the NPU Device page.
On the other hand, I have tested your blob file and observed an error indicating that the model needs to be recompiled with the required precision. For the NPU plugin, the supported data types are FP32 and FP16. It is also important to make sure your XML + BIN and blob files can be inferred and are compatible with OpenVINO. If the model itself is incompatible with OpenVINO, issues are expected no matter what networks/layers are used. You may run your Intermediate Representation (IR) XML + BIN files with Benchmark_App to validate your model before converting it into a blob file.
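To illustrate the FP16 point with plain NumPy (a sketch independent of OpenVINO; it does not replace recompiling the model with the required precision, it only shows why the data type matters):

```python
import numpy as np

w32 = np.array([3.0, 0.1], dtype=np.float32)
w16 = w32.astype(np.float16)

# 3.0 is exactly representable in FP16; 0.1 picks up rounding error in the cast
err = np.abs(w32 - w16.astype(np.float32))
print(err)

# FP16 also overflows above ~65504, so very large values become inf
print(np.float16(1e5))
```

Small integer-valued weights such as the ones in this thread survive the FP32-to-FP16 conversion exactly; weights needing more precision or range are rounded or overflow.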
Regards,
Aznie
Hi @Aznie_Intel
Thank you for your answer. I decided to simplify my neural network even more. It should act like a multiplication operation, e.g. input is 2, output is 6. Here is the Python code for that:
import numpy as np
import tensorflow as tf

# Define the neural network using TensorFlow
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(1,)),  # One input node
    tf.keras.layers.Dense(1, use_bias=False)  # One output node, no bias
])

# Set the weight of the model using NumPy
model.layers[0].set_weights([np.array([[3]])])  # Weight to multiply the input by 3

# Choose some input number
input_number = 2

# Predict the output using the neural network
output = model.predict(np.array([[input_number]]))[0][0]
print(f"Input: {input_number}, Output: {output}")

# Save model in .pb format
#model.save('some directory')

As a result of this code, I got two folders (assets and variables) and two .pb files (keras_metadata.pb and saved_model.pb). These files are included in the attached zip file.
After that, I used the model optimizer tool with this command:
mo --saved_model_dir "input directory" --output_dir "output directory" --input_shape [1,1,1,1] --compress_to_fp16
When executing this command, I am unsure whether it uses keras_metadata.pb or saved_model.pb, since I don't know how to point it at a specific file explicitly. I hope it is saved_model.pb.
The .bin and .xml files generated this way are also included in the attached zip file. I tried benchmark_app and it worked, but it showed that it is working with f32. I read that the Myriad X works with the FP16 format.
The parameter --input_shape is [1,1,1,1]. Should I write it differently? I am not doing any image processing.
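For comparison, a Keras model built with input_shape=(1,) operates on a 2-D tensor of shape (batch, features), i.e. (1, 1) for a single sample, not a 4-D image-style shape. A NumPy sketch of the same computation under that assumption:

```python
import numpy as np

W = np.array([[3.0]], dtype=np.float32)  # Dense(1) kernel, shape (1, 1)
x = np.array([[2.0]], dtype=np.float32)  # one sample, one feature: shape (1, 1)

y = x @ W  # the whole model is a single scalar multiplication
print(y)   # [[6.]] -- input 2, output 6, as in the Keras script
```

If that layout is right, [1,1] would describe the shape the SavedModel was actually built for, rather than [1,1,1,1].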
As the last step, I used this command to generate the .blob file:
compile_tool -m "directory\saved_model.xml" -d MYRIAD -o "directory\saved_model.blob" -c myriad.conf
This wasn't working until I added -c myriad.conf. I made the myriad.conf file inside the folder where compile_tool is, and that file contains this:
MYRIAD_ENABLE_MX_BOOT NO
The .blob file is also included in the attached folder.
What do you mean by "model name"? And how do I check whether the .blob file can be inferred and is compatible with OpenVINO?
Now when I call the neural network in the NPU using net.run(value), I get this error:
[62.985873] [LRT] Assertion failed in
file: D:\Git\et200.em_mp.tm_npu\em_tmai\_tool\intelMDK\mdk\projects\FathomRun\leon\mvnciResourceManager.cpp, line 375, function: hwConfig
condition: 0 <= idx && idx < (int)count(all_.hwConfigMask_)
message:
Exited with error code 1

I defined value as a list or as an array, e.g. value = [2] or value = uarray.array('i', [2]).
Hi Lt96player,
For your information, NPU support is validated on the NPU 3720 platform and is currently available only with the archive distribution of OpenVINO™. The Siemens TM NPU is not a validated platform in OpenVINO. NPU support in OpenVINO is still under active development and may offer a limited set of supported OpenVINO features.
Also, offline compilation and blob import are supported, but only for development purposes. Pre-compiled models (blobs) are not recommended for use in production. Blob compatibility across different OpenVINO versions / NPU driver versions is not guaranteed.
Hope this answers your query.
Regards,
Aznie
Hi Lt96player,
This thread will no longer be monitored since we have provided the requested information. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie