Hi,
I'm currently learning OpenVINO, so please forgive me if I have done anything stupid here. I will try to include enough information to describe my issue.
I took faster_rcnn_inception_v2_coco_2018_01_28 from the approved list of OpenVINO base models. I used the guide here to change this into a custom classifier that identifies just two classes. (CliffsNotes: I labelled data, generated TFRecords, changed the label map, and did some training.)
python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_coco_2018_01_28.config
Tested this object detection model and it works perfectly.
Now, I wanted to deploy this custom model onto my NCS (1) stick, so I used mo_tf.py like so:
python mo_tf.py --data_type=FP16 --tensorflow_object_detection_api_pipeline_config "pipeline.config" --tensorflow_use_custom_operations_config "C:/Program Files (x86)/IntelSWTools/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json" --input_model "frozen_inference_graph.pb"
This works and generates my .bin and .xml correctly, with the following warning (which I believe is OK, but I will include it just in case it is relevant):
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.
[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\.\frozen_inference_graph.xml
[ SUCCESS ] BIN file: C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\.\frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 51.79 seconds.
Now, I wish to plug the IR model into a simple app such as the one here. (I have previously tested this repo with a standard Faster R-CNN/SSD MobileNet model and it works.)
So I cloned this repository locally and ran:
python main.py -m "C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\frozen_inference_graph.xml" -l "C:/Program Files (x86)/IntelSWTools/openvino_2019.1.148/deployment_tools/inference_engine/bin/intel64/Release/myriadPlugin.dll" -d MYRIAD
Unfortunately, this results in:
1. AssertionError: Sample supports only single input topologies, raised in detect.py at line 51 by assert len(net.inputs.keys()) == 1.
len(net.inputs.keys()) appears to equal 2, but I am not sure what this means or how to change it. What is net.inputs.keys()?
2. Just removing the assertions from detect.py and saying some prayers results in:
File "main.py", line 86, in <module>
sys.exit(main() or 0)
File "main.py", line 49, in main
resultant_initialisation_object=object_detection.initialise_inference()
File "C:\Users\user4\Desktop\object_detection-master\detect.py", line 62, in initialise_inference
n, c, h, w = net.inputs[input_blob].shape
ValueError: not enough values to unpack (expected 4, got 2)
What is net.inputs? I see it relates to the IENetwork class, but the documentation surrounding it is a little confusing. What is this class?
"This class contains the information about the network model read from IR and allows you to manipulate with some model parameters such as layers affinity and output layers."
What are affinity and output layers?
I'm not sure how to progress with either of these two issues, or what is happening in this code. Can anyone offer any help or guidance, please?
Thank you very much
Dear De Boer, Ronald,
It appears that you did everything correctly. However, rather than using a random detection sample from the internet, it would be better to use OpenVINO's own Object Detection Sample, which is written in C++, not Python.
Now to answer your questions.
What is net.inputs? I see it relates to the IENetwork class, but the documentation surrounding this is a little confusing. What is this class?
Please read the class definition doc, Inference Engine CNN Network. inputs are the actual input(s) to the model. The error you are getting:
n, c, h, w = net.inputs[input_blob].shape
ValueError: not enough values to unpack (expected 4, got 2)
means exactly what it says: n = batch size, c = number of channels, h = height, w = width. Your IR's input apparently has only 2 of those, most likely just h and w, but I'm not really sure.
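For illustration (the input names and shapes below are assumptions modelled on a typical Faster R-CNN IR, not read from your actual .xml): such an IR usually has two inputs, a 4-D NCHW image tensor and a small 2-D "image info" input. That is exactly why the sample's single-input assert fails, and why unpacking the first input's shape can give "expected 4, got 2". A sketch of selecting the right blob:

```python
# Hypothetical shapes, mimicking what net.inputs reports for a two-input
# Faster R-CNN IR; the real names and sizes come from your .xml file.
inputs = {
    "image_info": [1, 3],              # 2-D auxiliary input
    "image_tensor": [1, 3, 600, 600],  # 4-D NCHW image input
}

# Pick the 4-D entry as the image blob; treat everything else as auxiliary.
image_blob = next(name for name, shape in inputs.items() if len(shape) == 4)
aux_blobs = [name for name in inputs if name != image_blob]

n, c, h, w = inputs[image_blob]        # now the unpacking succeeds
print(image_blob, (n, c, h, w))        # image_tensor (1, 3, 600, 600)
```

With the real IENetwork object the same idea applies: iterate over net.inputs, unpack n, c, h, w only from the 4-D input, and feed the auxiliary input separately at inference time.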
What are affinity and output layers?
Affinity has to do with assigning layers to specific devices, i.e. CPU, GPU, etc. More info about that is here in the Python API docs. Normally you shouldn't have to worry about affinity; the plugin takes care of it automatically for you.
Output layers are the actual outputs of the model. Is it an object detection model (which Faster R-CNN definitely is)? Then the output would be bounding-box-related data, like "detection_boxes, detection_scores, num_detections". These are actually laid out in the faster_rcnn_support.json file found under deployment_tools\model_optimizer\extensions\front\tf.
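As a sketch of what that single "Detection Output" layer produces: to the best of my knowledge it is a [1, 1, N, 7] blob where each row is [image_id, class_id, confidence, x_min, y_min, x_max, y_max], with the box coordinates normalised to [0, 1]. The detection values below are fabricated purely to show the parsing:

```python
# Fabricated rows in the DetectionOutput layout
# [image_id, class_id, confidence, x_min, y_min, x_max, y_max];
# an image_id of -1 conventionally marks the end of valid detections.
detections = [
    [0, 1, 0.92, 0.25, 0.25, 0.50, 0.75],
    [0, 2, 0.30, 0.05, 0.05, 0.25, 0.40],
    [-1, 0, 0.0, 0.0, 0.0, 0.0, 0.0],
]

frame_w, frame_h = 640, 480
for image_id, class_id, conf, x1, y1, x2, y2 in detections:
    if image_id < 0:          # end-of-detections marker
        break
    if conf < 0.5:            # confidence threshold
        continue
    # Scale the normalised box to pixel coordinates.
    box = (int(x1 * frame_w), int(y1 * frame_h),
           int(x2 * frame_w), int(y2 * frame_h))
    print(int(class_id), conf, box)    # 1 0.92 (160, 120, 320, 360)
```

In a real app, detections would come from the inference result blob rather than a hard-coded list, but the row layout is the same.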
It seems like the detection code is failing because it expects a 4-dimensional input shape and your IR is providing just 2. Now the next question is: why did your IR get created incorrectly, given, as you pointed out, that the mo_tf.py command seemed to succeed?
Can I ask what version of TensorFlow you are using? Can you try the attached v1.13.json (see the attached *.zip) on your custom-trained model (regenerate the IR) and see what happens? If it bombs, try upgrading to TensorFlow 1.13 and see what happens.
Please report your findings here,
Thanks!
Shubha
Dear Shubha,
I have the same problem with "ValueError: not enough values to unpack (expected 4, got 2)".
Has there been any update since?
I am using TensorFlow 1.12, and tried v1.7.json and v1.10 with the Model Optimizer.
Thanks