Hi,
I'm trying to run the facenet-openvino demo in Kuberlab. I get the following error while running the model converter:
2019-06-11 12:28:02.396675: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
INFO:root:Load ONET graph
INFO:root:Create ONET output layer
INFO:root:Freeze ONET graph
INFO:tensorflow:Froze 21 variables.
INFO:tensorflow:Froze 21 variables.
INFO:tensorflow:Converted 21 variables to const ops.
INFO:tensorflow:Converted 21 variables to const ops.
INFO:root:Compile: mo_tf.py --input_model /notebooks/training/model/onet.pb --output_dir /notebooks/training/model --data_type FP32 --batch 2
[ ERROR ]
Detected not satisfied dependencies:
test-generator: not installed, required: 0.1.1
defusedxml: not installed, required: 0.5.0
Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh
Note that install_prerequisites scripts may install additional components.
Traceback (most recent call last):
File "openvino_converter.py", line 356, in <module>
main()
File "openvino_converter.py", line 324, in main
convert_onet(args.training_dir, args.align_model_dir, data_type=data_type)
File "openvino_converter.py", line 110, in convert_onet
result = subprocess.check_output(cmd, shell=True).decode()
File "/opt/conda/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/opt/conda/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'mo_tf.py --input_model /notebooks/training/model/onet.pb --output_dir /notebooks/training/model --data_type FP32 --batch 2' returned non-zero exit status 1.
From where is this script accessing those paths, and how do I resolve this error?
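For reference, here is a minimal sketch (not the actual openvino_converter.py, just an illustration) of how a converter like the one in the traceback shells out to mo_tf.py, and how the two missing packages named by the Model Optimizer could be checked or installed first. The helper names are hypothetical; the paths and package versions are the ones reported in the log above, and mo_tf.py is assumed to be on PATH after sourcing the OpenVINO setupvars.sh.

import subprocess
import pkg_resources

# Versions reported as missing by the Model Optimizer dependency check.
REQUIRED = {"test-generator": "0.1.1", "defusedxml": "0.5.0"}

def check_mo_dependencies():
    """Return the Model Optimizer Python dependencies that are not installed."""
    missing = []
    for name, version in REQUIRED.items():
        try:
            pkg_resources.require(f"{name}=={version}")
        except Exception:
            missing.append(f"{name}=={version}")
    return missing

def convert(pb_path, output_dir, data_type="FP32", batch=2):
    """Invoke mo_tf.py the same way the converter in the traceback does.

    mo_tf.py is expected to be resolvable from the toolkit install
    (e.g. after sourcing setupvars.sh from /opt/intel/openvino_2019.1.094).
    """
    cmd = (f"mo_tf.py --input_model {pb_path} --output_dir {output_dir} "
           f"--data_type {data_type} --batch {batch}")
    return subprocess.check_output(cmd, shell=True).decode()

if __name__ == "__main__":
    missing = check_mo_dependencies()
    if missing:
        # Install these with pip, or run the script suggested in the error:
        # /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/
        #     install_prerequisites/install_prerequisites_tf.sh
        print("Missing Model Optimizer dependencies:", ", ".join(missing))
    else:
        print(convert("/notebooks/training/model/onet.pb",
                      "/notebooks/training/model"))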
Hello KDeep,
Thank you for posting on the Intel® communities.
I would like to begin by asking for the exact model of the Intel® Compute Stick you have. I would also like to know whether you have tried posting in our developer zone community. Refer to the link below:
https://software.intel.com/en-us/forum
Regards,
David V
Intel Customer Support Technician
Under Contract to Intel Corporation
Hello DavidV_intel,
I am using a Neural Compute Stick 2, and I installed the latest toolkit, openvino_2019.1.144. I need clarification on the points below:
1. As mentioned before, I am trying to run the facenet-openvino demo using Kuberlab and got stuck at the model converter.
2. Does this Kuberlab refer to the installed toolkit?
Hello KDeep,
Thank you for your response.
Since you have a Neural Compute Stick 2, I will forward the information to the appropriate support team; you will receive assistance as soon as possible.
Regards,
David V
Intel Customer Support Technician
Under Contract to Intel Corporation
Hi Deepika,
I apologize for the delay in my response.
What version of OpenVINO did you install, and how did you install it? Can you give me more information about the system you are using? Is there a particular reason you are using Kuberlab?
Best Regards,
Sahira