Hi,
I have a question. When using the Faster R-CNN ResNet-101 model, the result is incorrect; every row of the detection output looks like this:
[-1, 0, 0, 0, 0, 0, 0]
[-1, 0, 0, 0, 0, 0, 0]
[-1, 0, 0, 0, 0, 0, 0] ...
I don't know what else I can try. If you have any suggestions, please share them with me.
Thank you.
Here are some records of what I have tried:
- Working model:
  - The SSD_inception_v2 model works (using the Python sample 'object_detection_demo_ssd_async' as a reference).
- Preprocessing already tried (a simplified sketch of the code follows this list):
  - img.astype(numpy.float32) # the input node is FP32
  - img.astype(numpy.float32) / 255.0 # normalize
  - both HWC and CHW data layouts
- Inference command:
  - exec_net.start_async(request_id=0, inputs={"image_tensor": in_frame})
- Model information:
  - Input nodes:
    - 'image_tensor' # 1 3 600 600, NCHW, FP32
    - 'image_info' # 1 3, NC, FP32
  - Output node:
    - 'detection_output' # 1 1 300 7, NCHW, FP32
- MO command:
  python mo_tf.py \
  --input_model MO/fasterRCNNres101/frozen_inference_graph.pb \
  --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support.json \
  --tensorflow_object_detection_api_pipeline_config MO/fasterRCNNres101/pipeline.config \
  --reverse_input_channels
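For reference, a simplified sketch of the preprocessing and inference code I am running (modeled on the SSD async sample; the IR paths, device, and image name below are placeholders):

import cv2
import numpy as np
from openvino.inference_engine import IECore, IENetwork

MODEL_XML = "MO/fasterRCNNres101/frozen_inference_graph.xml"  # placeholder IR paths
MODEL_BIN = "MO/fasterRCNNres101/frozen_inference_graph.bin"

ie = IECore()
net = IENetwork(model=MODEL_XML, weights=MODEL_BIN)   # newer releases: ie.read_network(...)
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=1)

# 'image_tensor' is 1 3 600 600, NCHW, FP32 (see model information above)
n, c, h, w = net.inputs["image_tensor"].shape

img = cv2.imread("myimage.bmp")                       # OpenCV reads BGR, HWC
in_frame = cv2.resize(img, (w, h))
in_frame = in_frame.transpose((2, 0, 1))              # HWC -> CHW
in_frame = in_frame.reshape((n, c, h, w)).astype(np.float32)
# also tried: in_frame = in_frame / 255.0             # normalization attempt

# Note: only 'image_tensor' is fed here; the second input 'image_info' (1 3) is not filled.
exec_net.start_async(request_id=0, inputs={"image_tensor": in_frame})
if exec_net.requests[0].wait(-1) == 0:
    res = exec_net.requests[0].outputs["detection_output"]   # 1 1 300 7
    print(res[0][0][:3])                                     # rows come back as [-1, 0, 0, 0, 0, 0, 0]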
Thanks,
Ellen
Dear tsai, ellen,
Did you use --reverse_input_channels in the Model Optimizer command when you ran object_detection_demo_ssd_async? Can you try re-running after creating the IR without --reverse_input_channels? Models are often trained with RGB, and some of the samples use OpenCV at the backend, which reads images in BGR; that is why you need --reverse_input_channels.
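For example, if you create the IR without --reverse_input_channels, the channel swap has to happen in your own preprocessing instead. A rough sketch (the image path is a placeholder):

import cv2

frame = cv2.imread("your_image.bmp")          # OpenCV reads images as BGR
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # manual BGR -> RGB swap, only needed when the IR
                                              # was created WITHOUT --reverse_input_channels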
Let me know how it works by posting back on this forum.
Thanks !
Shubha
Dear Shubha:
Thank you for your reply.
I already tried your suggestion, but the result is the same: [-1, 0, 0, 0, 0, 0, 0].
I think that is reasonable: if the channel order (RGB/BGR) were wrong, the values would be off, but they would not all be zero.
Dear tsai, ellen,
Here is something you can try. Try running an OpenVino sample against your model:
http://docs.openvinotoolkit.org/latest/_inference_engine_samples_object_detection_demo_README.html
What happens?
Thanks,
Shubha
Dear Shubha:
Thank you for your reply. I tried your suggestion.
- The sample model works, but our model does not.
- Inference command:
  object_detection_demo.exe -i myimage.bmp -m <path>/VGG16_faster_rcnn_final.xml -d GPU
The sample runs successfully with the VGG16 Faster R-CNN model converted from Caffe*, but when I run it against our model, this error message appears:
" Expression: vector subscript out of range "
- Debugged and solved the 'vector subscript out of range' error:
  - The cause is that the sample code uses the first input node, but our model's first input node is not 4-dimensional.
  - xxxxxxx.getDims()[3]; // our first input node has only 2 dimensions.
- bbox_pred_reshaeInPort:
  - The sample code looks up a 'proposal' node in the network, but our 'faster r-cnn resnet-101' model does not contain this node.
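Conceptually, the fix for the subscript error itself would look something like this (a rough Python sketch of the idea only; the equivalent change would go into the C++ sample where it calls getDims(), and the IR paths are placeholders):

from openvino.inference_engine import IENetwork

net = IENetwork(model="frozen_inference_graph.xml", weights="frozen_inference_graph.bin")

# Pick the inputs by rank instead of assuming the image input comes first.
image_input = None   # the 4-D 'image_tensor' (1 3 600 600)
info_input = None    # the 2-D 'image_info'   (1 3)
for name, blob in net.inputs.items():
    if len(blob.shape) == 4:
        image_input = name
    elif len(blob.shape) == 2:
        info_input = name

The missing 'proposal' node, however, I don't see a way around, which leads to my question: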
Does it mean I can't use this sample?
Dear tsai, ellen,
Yes, most likely it means that you can't use the OpenVino sample as-is for your model. However, you can certainly modify the object_detection_demo source code to adapt it to your model. Why not try that? The reason I am asking you to start from one of our samples is that OpenVino samples are a known baseline which does work. Starting from a sample, you can slowly modify your code and pinpoint or narrow down your problem.
Here is another thing you can try:
- Build a debug version of OpenVino using dldt. Make sure you also regenerate your IR using the Model Optimizer from dldt.
- Put your code in the samples folder.
- Make sure your environment variables InferenceEngine_DIR and OpenCV_DIR point to the right directories in your Inference Engine and OpenCV builds.
- Rebuild the samples folder (as DEBUG), using the open_model_zoo README as a guide.
It's very difficult for me to tell, from this alone, why you're getting wrong results.
My advice is to start from a known working sample and modify the code step by step to adapt it to your model, stepping through the debugger when you notice issues. If you are convinced that OpenVino has a bug, I'd be happy to file one on your behalf, but first I must be convinced that OpenVino actually has a bug.
I hope it helps,
Thanks,
Shubha