I converted the tiny YOLOv3 model from DarkNet to TensorFlow, and the .pb file works normally. Then I converted the .pb file to IR with the following command:
sudo python3 mo_tf.py \
--input_model frozen_tiny_yolo_v3.pb \
--output_dir save_IR \
--data_type FP16 \
--batch 1 \
--tensorflow_use_custom_operations_config yolo_v3_tiny.json
The frozen_tiny_yolo_v3.pb is converted to IR successfully.
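(As a quick sanity check before running any demo, the IR can be loaded on MYRIAD and its input/output shapes printed. This is a minimal sketch, assuming the 2019 R1 Python API with IECore; on 2018 R5 releases the IEPlugin class is used instead. Paths follow the --output_dir above.)
# Minimal sketch: load the generated IR on MYRIAD and print its I/O shapes.
# Assumes the OpenVINO 2019 R1 Python API (IECore); 2018 R5 uses IEPlugin instead.
from openvino.inference_engine import IENetwork, IECore

net = IENetwork(model="save_IR/frozen_tiny_yolo_v3.xml",
                weights="save_IR/frozen_tiny_yolo_v3.bin")
ie = IECore()
exec_net = ie.load_network(network=net, device_name="MYRIAD")

for name, info in net.inputs.items():
    print("input ", name, info.shape)    # expect [1, 3, 416, 416] for a 416x416 tiny YOLOv3
for name, info in net.outputs.items():
    print("output", name, info.shape)    # the two YoloRegion outputs should appear here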
Then I test the IR with the
"~/inference_engine_vpu_arm/deployment_tools/inference_engine/samples/python_samples/object_detection_demo_tiny-yolov3.py" file
(the file was originally written for YOLOv3 and I modified it for tiny YOLOv3).
The command is:
python3 object_detection_demo_tiny-yolov3.py -m frozen_tiny_yolo_v3.xml -d MYRIAD -i cam
It works abnormally. It prints the following messages:
Detected boxes for batch 1:
[ INFO ] Class ID | Confidence | XMIN | YMIN | XMAX | YMAX | COLOR
[ INFO ] 0 | 0.513803 | 190 | 4 | 197 | 5 | (0, 0, 0)
[ INFO ] 0 | 0.587425 | 12 | 13 | 20 | 13 | (0, 0, 0)
[ INFO ] 1 | 0.700414 | 239 | 6 | 239 | 23 | (12, 7, 5)
[ INFO ] 1 | 0.526423 | 9 | 4 | 14 | 36 | (12, 7, 5)
[ INFO ] 0 | 0.522735 | 57 | 16 | 75 | 30 | (0, 0, 0)
[ INFO ] 1 | 0.524174 | 96 | 8 | 100 | 31 | (12, 7, 5)
[ INFO ] 1 | 0.749252 | 178 | 16 | 178 | 28 | (12, 7, 5)
[ INFO ] 0 | 0.513668 | 164 | 7 | 203 | 36 | (0, 0, 0)
[ INFO ] 1 | 0.588534 | 116 | 22 | 116 | 40 | (12, 7, 5)
..................
[ INFO ] Layer detector/yolo-v3-tiny/Conv_12/BiasAdd/YoloRegion parameters:
[ INFO ] num : 3
[ INFO ] coords : 4
[ INFO ] anchors : [10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0]
[ INFO ] classes : 2
[ INFO ] Layer detector/yolo-v3-tiny/Conv_9/BiasAdd/YoloRegion parameters:
[ INFO ] num : 3
[ INFO ] coords : 4
[ INFO ] anchors : [10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0]
[ INFO ] classes : 2
The number of detected boxes is about 200, but it should be 1. Also, XMIN equals XMAX for many boxes. In addition, the class ID should only be 1.
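(For reference, a minimal sketch of how a YoloRegion output is typically decoded, not the demo's exact code: the stride of each box record is 5 + classes, so the post-processing and the IR must agree on the class count and anchors. It assumes a blob layout of num*(5+classes) channels per grid cell and applies sigmoid/exp explicitly, which the RegionYolo layer may already have done; mask-based anchor selection is omitted.)
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Minimal sketch of decoding one YoloRegion output blob of shape [1, num*(5+classes), H, W].
def decode_region(blob, anchors, classes, num=3, img_size=416, threshold=0.5):
    _, _, grid_h, grid_w = blob.shape
    box_len = 5 + classes                     # x, y, w, h, objectness, class scores
    blob = blob.reshape(num, box_len, grid_h, grid_w)
    boxes = []
    for n in range(num):
        anchor_w, anchor_h = anchors[2 * n], anchors[2 * n + 1]
        for row in range(grid_h):
            for col in range(grid_w):
                x, y, w, h, obj = blob[n, :5, row, col]
                scores = sigmoid(blob[n, 5:, row, col])
                conf = sigmoid(obj) * scores.max()
                if conf < threshold:
                    continue
                cx = (col + sigmoid(x)) / grid_w * img_size
                cy = (row + sigmoid(y)) / grid_h * img_size
                bw = np.exp(w) * anchor_w
                bh = np.exp(h) * anchor_h
                boxes.append((int(scores.argmax()), float(conf),
                              cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2))
    return boxes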
When I use the file to test a normal IR, it shows the correct messages:
Detected boxes for batch 1:
[ INFO ] Class ID | Confidence | XMIN | YMIN | XMAX | YMAX | COLOR
[ INFO ] 58 | 0.826624 | 74 | 78 | 129 | 118 | (255, 255, 255)
[ INFO ] 58 | 0.500350 | 163 | 77 | 218 | 112 | (255, 255, 255)
[ INFO ] Layer detector/yolo-v3-tiny/Conv_9/BiasAdd/YoloRegion parameters:
[ INFO ] classes : 80
[ INFO ] anchors : [10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0]
[ INFO ] coords : 4
[ INFO ] num : 3
[ INFO ] Layer detector/yolo-v3-tiny/Conv_12/BiasAdd/YoloRegion parameters:
[ INFO ] classes : 80
[ INFO ] anchors : [10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0]
[ INFO ] coords : 4
[ INFO ] num : 3
[ INFO ] Layer detector/yolo-v3-tiny/Conv_9/BiasAdd/YoloRegion parameters:
[ INFO ] classes : 80
[ INFO ] anchors : [10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0]
[ INFO ] coords : 4
[ INFO ] num : 3
..................................
It is normal: the boxes correspond only to the detected objects and the class IDs are correct.
I would appreciate it if anyone could suggest a way to solve this problem.
Thank you,
Gao.
Dear Jiansheng:
Instead of -i cam, can you try -i with the following video:
person-bicycle-car-detection.mp4, found in
https://github.com/intel-iot-devkit/sample-videos
Please report the results here. I'm wondering if it's a problem with -i cam only.
Thanks for using OpenVINO!
Shubha
Hi Shubha,
I tried -i worker-zone-detection.mp4 instead of -i cam to test the IR. The command is:
python3 object_detection_demo_tiny-yolov3.py -m frozen_tiny_yolo_v3.xml -d MYRIAD -i worker-zone-detection.mp4
And the result is:
https://i.loli.net/2019/04/01/5ca1bca3f026d.png
When I use -i cam, the result is:
https://i.loli.net/2019/04/01/5ca1bd1c1239b.png
And when I test with the .pb file, before it is converted to IR, the result is normal:
https://i.loli.net/2019/04/01/5ca1bd96698f1.jpg
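(For reference, a minimal sketch of how the .pb can be tested directly with TensorFlow 1.x; the tensor names "inputs:0" and "output_boxes:0" below are hypothetical and depend on how frozen_tiny_yolo_v3.pb was actually frozen.)
# Minimal sketch of running the frozen .pb directly with TensorFlow 1.x.
# The tensor names "inputs:0" and "output_boxes:0" are hypothetical.
import cv2
import numpy as np
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_tiny_yolo_v3.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

image = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (416, 416)).astype(np.float32) / 255.0

with tf.Session(graph=graph) as sess:
    raw_output = sess.run("output_boxes:0", feed_dict={"inputs:0": image[np.newaxis]})
print(raw_output.shape)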
The JSON file used for the conversion to IR is:
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 2,
      "coords": 4,
      "num": 6,
      "mask": [0, 1, 2],
      "jitter": 0.3,
      "ignore_thresh": 0.7,
      "truth_thresh": 1,
      "random": 1,
      "anchors": [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319],
      "entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4"]
    }
  }
]
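(To confirm these attributes actually ended up in the generated IR, the RegionYolo layers in the .xml can be inspected directly; a minimal sketch, and the exact attribute names may vary between IR versions.)
# Minimal sketch: print the RegionYolo parameters stored in the IR .xml.
import xml.etree.ElementTree as ET

root = ET.parse("save_IR/frozen_tiny_yolo_v3.xml").getroot()
for layer in root.iter("layer"):
    if layer.get("type") == "RegionYolo":
        data = layer.find("data")
        print(layer.get("name"), dict(data.attrib) if data is not None else {})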
I wonder if it is a logic issue.
I would appreciate any suggestions.
Thanks.
@Shubha
Dearest Gao, Jiansheng,
You have definitely performed a valid test, so thank you for that. And I agree that the OpenVINO IR result looks bad. If the same 12 anchor values are being used in both the OpenVINO and non-OpenVINO runs, the results shouldn't differ.
When you say "when I test with the .pb file, before it is converted to IR, the result is normal", I assume you are using some other method for inference. Which hardware and which method are you using for inference in that case?
Gao, Jiansheng, can you try this test on the just-released 2019 R1? Also, please try both the C++ version and the Python version of object_detection_demo_yolov3_async.
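For the Python version, the invocation should look roughly like this (a sketch; the exact demo path and arguments in 2019 R1 may differ):
python3 object_detection_demo_yolov3_async.py -m frozen_tiny_yolo_v3.xml -d MYRIAD -i worker-zone-detection.mp4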
Thanks,
Shubha
