Hi!
I'm new to OpenVINO, and it's very important to me.
I use OpenVINO 2021.2.185 on an UP Squared board with an Intel Pentium N4200 (4 cores) and Ubuntu 18.04.5 LTS.
First steps.
I watched the video on the official OpenVINO channel and tried to optimize the Caffe model squeezenet1.1.caffemodel with mo_caffe.py. I installed the prerequisites for Caffe models. Everything ran fine and I got my .xml and .bin files, which run correctly with hello_classification.py.
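In case it helps, the Caffe conversion command I ran looked roughly like this (the paths here are illustrative, not my exact ones):
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
python3 mo_caffe.py --input_model ~/models/squeezenet1.1.caffemodel --input_proto ~/models/squeezenet1.1.prototxt --output_dir ~/ir --model_name squeezenet1.1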
Problem
My goal is to optimize a TensorFlow model (from the Model Downloader) and then run it with some demos. But first I wanted to try the example model from the official channel as well: googlenet-v3.
So I ran the TensorFlow prerequisites script in the install_prerequisites folder:
sudo ./install_prerequisites_tf.sh
It ran with a warning that I should install testresources. The following step fixed that:
sudo apt install python3-testresources
Then I ran the Model Optimizer command:
sudo python3 mo_tf.py --input_model ~/.../inception_v3_2016_08_28_frozen.pb --output_dir ~/... --model_name first_tf_model
But I received this message: Illegal instruction
And when I ran Model Optimizer on the previous model (squeezenet1.1, Caffe), I got that error too. Installing the Caffe prerequisites one more time also ended with the same error (but it worked earlier!).
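My guess (I have not confirmed this) is that install_prerequisites_tf.sh installs the standard TensorFlow pip wheel, which is built with AVX instructions, and the Pentium N4200 has no AVX support, so anything that imports TensorFlow dies this way. A quick check on the board:
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u   # prints nothing if the CPU has no AVX
python3 -c "import tensorflow"               # should already fail with "Illegal instruction" if that is the cause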
How I tried to fix it
I don't know how to fix this correctly. Rebooting the device and reinstalling OpenVINO don't help; I always get that error. The only thing that helped was reinstalling Ubuntu (but that's not very convenient, as you can imagine).
Maybe I'm doing something wrong? Or maybe I keep making a mistake that I don't understand?
Of course I could do this on Colab, but I need to do the optimization on my device.
Thank you for your help.
Best regards
Maxim Lyuzin
Error:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/maxim/downloaded_models/public/googlenet-v3/inception_v3_2016_08_28_frozen.pb
- Path for generated IR: /home/maxim/optimized_models/
- IR output name: first_tf_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version: 2021.2.0-1877-176bdf51370-releases/2021/2
Недопустимая инструкция (Russian for "Illegal instruction")
Greetings,
Try to run the Model Optimizer command without sudo:
python3 mo_tf.py --input_model ~/.../inception_v3_2016_08_28_frozen.pb --output_dir ~/... --model_name first_tf_model
and ensure that setupvars.sh is sourced each time a new terminal is opened.
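For example, assuming the default installation location:
source /opt/intel/openvino_2021/bin/setupvars.sh
You can also add that line to your ~/.bashrc so it runs automatically in every new terminal.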
Sincerely,
Iffa
May I know which version of OpenVINO you are using?
It is recommended to use the latest version, which is 2021.2.
The OpenVINO version and the model version need to be compatible; if you are using downloader.py, it downloads the latest model version.
Check your installation steps here: https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html
If everything is set up correctly, you should be able to run this example:
cd /opt/intel/openvino_2021/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh -d CPU
and this example: ./demo_security_barrier_camera.sh -d CPU
You must be able to run these examples; if not, something is wrong with your OpenVINO setup.
Only then can you proceed to model conversion:
python3 mo_tf.py --input_model <INPUT_MODEL>.pb (without sudo)
OR python3 mo_tf.py --input_model <INPUT_MODEL>.pb -o <output dir>
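For example, plugging in the paths from your log (adjust them if your layout differs):
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
python3 mo_tf.py --input_model /home/maxim/downloaded_models/public/googlenet-v3/inception_v3_2016_08_28_frozen.pb -o /home/maxim/optimized_models --model_name first_tf_model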
You may also refer to the latest official documentation.
Sincerely,
Iffa
Hi!
I'm using OpenVINO 2021.2.185 (as I wrote in my first post); it's the newest version.
I had already run the two demos from the installation guide before, and they worked fine.
But after I set up the TensorFlow configuration, demo_squeezenet_download_convert_run.sh stopped working.
I reinstalled Ubuntu again (18.04.5 LTS).
If I run mo_tf.py without sudo, I get: [ERROR] The "/.../inception...frozen.pb" is not readable
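Maybe that second error is just a file-permission issue (I downloaded the model with sudo, so the files may be owned by root). I haven't verified this, but something like the following should show it and, if needed, fix it (the path is the one from my log):
ls -l ~/downloaded_models/public/googlenet-v3/
sudo chown -R $USER:$USER ~/downloaded_models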
So right now I can do inference, but no optimization.
From my experience, I now have to reinstall Ubuntu again (just reinstalling OpenVINO doesn't work) and never run the TensorFlow configuration for Model Optimizer, because it breaks all further optimizations.
I followed the steps in the documentation (2021.2), but still no luck. The OpenVINO setup itself works, as you can see.
Maybe you can help me avoid that error?
Or is it a software (OpenVINO) bug, and should I just wait for a new release?
Best regards
Lyuzin Maxim
Hi everyone,
I have not received an answer.
So, to be able to work with the OpenVINO Model Optimizer somehow, I created an .ipynb file for Google Colab based on this question. With it you can install OpenVINO 2021.2 on Google's virtual machine, run a demo to check the installation, and correctly optimize yolo-v3-tf and yolo-v3-tiny-tf from the Open Model Zoo.
Hope my experience can be helpful in your case.
https://colab.research.google.com/drive/1PIbLUK6qJ0dnSjuPJrve10H7iZJkA5kW?usp=sharing
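Roughly, the conversion in that notebook relies on the Open Model Zoo downloader and converter scripts shipped with 2021.2; something like this (output directories are just examples):
cd /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader
python3 downloader.py --name yolo-v3-tf -o ~/models
python3 converter.py --name yolo-v3-tf -d ~/models -o ~/ir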
Best regards
Lyuzin Maxim
Greetings,
Glad to know that you had found a solution.
Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Sincerely,
Iffa