Has anyone managed to get tiny YOLOv3 running on NCS2 in R5?
I am getting
[ ERROR ] [VPU] Internal error: Output in detector/yolo-v3-tiny/pool2_5/MaxPool has incorrect width dimension. Expected: 9 or 9 Actual: 10
[ JFTR: the non-tiny YOLOv3 now runs fine on NCS2 in the new R5 SDK, and the same tiny FP16 IR runs fine on GPU ]
Thanks,
Nikos
Sorry, my mistake. Tiny YOLOv3 works fine in the R5 SDK on NCS2 with an FP16 IR (416x416 input).
Speed is about 20 fps - impressive!
performance counts:
LeakyReLU_ OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
LeakyReLU_837 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
LeakyReLU_838 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
LeakyReLU_838@soc=2/2@accum EXECUTED layerType: Convolution realTime: 277 cpu: 277 execType: Sum
LeakyReLU_839 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
LeakyReLU_840 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
LeakyReLU_841 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
LeakyReLU_842 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
LeakyReLU_842 -> LeakyReLU... EXECUTED layerType: Resample realTime: 217 cpu: 217 execType: Permute
LeakyReLU_843 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
LeakyReLU_844 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
LeakyReLU_844@soc=2/2@accum EXECUTED layerType: Convolution realTime: 261 cpu: 261 execType: Sum
LeakyReLU_845 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
Receive-Tensor EXECUTED layerType: Receive-Tensor realTime: 0 cpu: 0 execType: Receive-Tensor
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1774 cpu: 1774 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1766 cpu: 1766 execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1770 cpu: 1770 execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1769 cpu: 1769 execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 425 cpu: 425 execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 22 cpu: 22 execType: Copy
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1408 cpu: 1408 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1417 cpu: 1417 execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 186 cpu: 186 execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 18 cpu: 18 execType: Copy
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 51 cpu: 51 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 2034 cpu: 2034 execType: MyriadXHwConvolution + injected[Permute]
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 2092 cpu: 2092 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 203 cpu: 203 execType: LeakyRelu
detector/yolo-v3-tiny/Conv... EXECUTED layerType: RegionYolo realTime: 13435 cpu: 13435 execType: RegionYolo
detector/yolo-v3-tiny/Conv... EXECUTED layerType: <Extra> realTime: 266 cpu: 266 execType: Convert_f16f32
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 325 cpu: 325 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: RegionYolo realTime: 1278 cpu: 1278 execType: Permute
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1439 cpu: 1439 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1290 cpu: 1290 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1472 cpu: 1472 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1743 cpu: 1743 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 3417 cpu: 3417 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 3352 cpu: 3352 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 205 cpu: 205 execType: LeakyRelu
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 336 cpu: 336 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Resi... EXECUTED layerType: Resample realTime: 121 cpu: 121 execType: Resample
detector/yolo-v3-tiny/conc... OPTIMIZED_OUT layerType: Concat realTime: 0 cpu: 0 execType: Concat
detector/yolo-v3-tiny/conc... EXECUTED layerType: Convolution realTime: 977 cpu: 977 execType: Permute
detector/yolo-v3-tiny/pool... OPTIMIZED_OUT layerType: Pooling realTime: 0 cpu: 0 execType: Pooling
detector/yolo-v3-tiny/pool... OPTIMIZED_OUT layerType: Pooling realTime: 0 cpu: 0 execType: Pooling
detector/yolo-v3-tiny/pool... OPTIMIZED_OUT layerType: Pooling realTime: 0 cpu: 0 execType: Pooling
detector/yolo-v3-tiny/pool... OPTIMIZED_OUT layerType: Pooling realTime: 0 cpu: 0 execType: Pooling
detector/yolo-v3-tiny/pool... EXECUTED layerType: Pooling realTime: 457 cpu: 457 execType: MyriadXHwPooling + injected[Permute]
detector/yolo-v3-tiny/pool... EXECUTED layerType: Pooling realTime: 140 cpu: 140 execType: MyriadXHwPooling
detector/yolo-v3-tiny/pool... EXECUTED layerType: Pooling realTime: 1242 cpu: 1242 execType: CopyMakeBorder
inputs@FP16 EXECUTED layerType: <Extra> realTime: 474 cpu: 474 execType: Convert_u8f16
Total time: 47659 microseconds
[ WARN:0] terminating async callback
[ INFO ] Execution successful
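In case it helps anyone reading the dump: RegionYolo alone accounts for 13435 of the 47659 microseconds. A few lines of Python can aggregate such a dump per execType (a sketch over a hypothetical excerpt of the output above; the regex only captures the first token of execType, so "+ injected[...]" suffixes are ignored):

```python
import re
from collections import defaultdict

# Hypothetical excerpt of a perf-counts dump in the format shown above.
dump = """
LeakyReLU_838@soc=2/2@accum EXECUTED layerType: Convolution realTime: 277 cpu: 277 execType: Sum
detector/yolo-v3-tiny/Conv... EXECUTED layerType: Convolution realTime: 1774 cpu: 1774 execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED layerType: RegionYolo realTime: 13435 cpu: 13435 execType: RegionYolo
LeakyReLU_837 OPTIMIZED_OUT layerType: ReLU realTime: 0 cpu: 0 execType: ReLU
"""

# Match the realTime value (microseconds) and the execType token of each entry.
pattern = re.compile(r"realTime: (\d+) cpu: \d+ execType: (\S+)")

totals = defaultdict(int)
for line in dump.strip().splitlines():
    m = pattern.search(line)
    if m:
        totals[m.group(2)] += int(m.group(1))

# Print execTypes sorted by total time, most expensive first.
for exec_type, us in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{exec_type}: {us} us")
```

On the full dump this makes it obvious at a glance where the 47659 us budget goes.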
Hello, @nikos
I am also trying to convert tiny-YoloV3, and I know it cannot be converted directly.
I generated tiny-YoloV3's .pb file by referring to the following repository, but I am stuck because an error occurs when converting it to an IR model.
Would you give me advice on what customization you made?
https://github.com/mystic123/tensorflow-yolo-v3.git
b920405@ubuntu:~/git/OpenVINO-YoloV3$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
> --input_model pbmodels/frozen_tiny_yolo_v3.pb \
> --output_dir lrmodels/tiny-YoloV3/FP16 \
> --input inputs \
> --output detector/yolo-v3-tiny/detections \
> --data_type FP16 \
> --batch 1
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/b920405/git/OpenVINO-YoloV3/pbmodels/frozen_tiny_yolo_v3.pb
- Path for generated IR: /home/b920405/git/OpenVINO-YoloV3/lrmodels/tiny-YoloV3/FP16
- IR output name: frozen_tiny_yolo_v3
- Log level: ERROR
- Batch: 1
- Input layers: inputs
- Output layers: detector/yolo-v3-tiny/detections
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 1.5.12.49d067a0
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/fusing/decomposition.py:65: RuntimeWarning: invalid value encountered in sqrt
scale = 1. / np.sqrt(variance.value + eps)
[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] Exp (2)
[ ERROR ] detector/yolo-v3-tiny/Exp
[ ERROR ] detector/yolo-v3-tiny/Exp_1
[ ERROR ] Part of the nodes was not translated to IE. Stopped.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.
My Environment
- Ubuntu 16.04
- OpenVINO R5 2018.5.445
- OpenCV 4.0.1-openvino
Hello Katsuya-san,
Your environment looks good. How did you create frozen_tiny_yolo_v3.pb? I used the latest master of tensorflow-yolo-v3 and convert_weights_pb.py. For the tiny model, please also pass --tiny, and you may need to specify the input size ( --size 416 ).
Also, in the Model Optimizer command, please specify the custom operations config ( --tensorflow_use_custom_operations_config ).
An example is
python3 mo_tf.py --input_model frozen_darknet_yolov3-tiny_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3-tiny.json --input_shape=[1,416,416,3] --data_type=FP16
You would have to create yolo_v3-tiny.json in a similar way to the existing yolo_v3.json, with minor modifications for your network.
Cheers,
Nikos
Hello Katsuya-san,
Thank you for your kind words.
> Perhaps, I anticipate that this part will be changed.
> "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
Exactly! I used TensorFlow tools and other SDK tools to find the names. It seems they are called "detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4", etc.
I am not on the same system right now but I remember I used something like this
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "coords": 4,
      "num": 9,
      "mask": [0, 1, 2],
      "entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4"]
    }
  }
]
You would also have to change anchors in the C++ sample or in your application.
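For reference, the darknet yolov3-tiny.cfg anchor set differs from full YOLOv3: 10,14, 23,27, 37,58, 81,82, 135,169, 344,319, with mask 3,4,5 on the 13x13 output and 0,1,2 on the 26x26 output. A minimal sketch of how those anchors enter the box decode, in case you adapt the sample (a hypothetical helper, not the demo's actual code):

```python
import math

# tiny YOLOv3 anchors from darknet's yolov3-tiny.cfg, as (w, h) pairs
TINY_ANCHORS = [(10, 14), (23, 27), (37, 58), (81, 82), (135, 169), (344, 319)]
# anchor masks per output grid: the coarse 13x13 scale uses the large anchors
MASKS = {13: (3, 4, 5), 26: (0, 1, 2)}

def decode_box(tx, ty, tw, th, col, row, anchor_idx, grid, input_size=416):
    """Decode one raw YOLO prediction (tx, ty, tw, th) at grid cell (col, row)
    into (cx, cy, w, h) in pixels on the network input."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    aw, ah = TINY_ANCHORS[anchor_idx]
    cx = (col + sigmoid(tx)) * input_size / grid   # cell offset -> pixels
    cy = (row + sigmoid(ty)) * input_size / grid
    w = math.exp(tw) * aw                          # anchor-relative size
    h = math.exp(th) * ah
    return cx, cy, w, h

# zero raw offsets in the center cell of the 13x13 grid, anchor 4 (135x169)
print(decode_box(0.0, 0.0, 0.0, 0.0, 6, 6, 4, 13))  # → (208.0, 208.0, 135.0, 169.0)
```

The only tiny-specific parts are the anchor table and the two masks; the decode math is the same as full YOLOv3.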
After you specify a modified json similar to the above, the command below should succeed:
python3 mo_tf.py --input_model frozen_darknet_yolov3-tiny_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3-tiny.json --input_shape=[1,416,416,3] --data_type=FP16
Cheers,
Nikos
Impressive!! Thank you!
Hi guys,
Did any of you manage to run the tiny-yolo-v3 network through the opencv python API?
I followed the same steps you guys followed, but am getting the following errors:
E: [xLink] [ 529794] dispatcherEventReceive:308 dispatcherEventReceive() Read failed -4 | event 0x7f07137fdee0 USB_READ_REL_REQ
E: [xLink] [ 529794] eventReader:256 eventReader stopped
E: [watchdog] [ 530480] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 531480] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
(... the same X_LINK_ERROR line repeats once per second up to timestamp 550482 ...)
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
what(): Failed to read output from FIFO: NC_ERROR
E: [xLink] [ 550595] dispatcherWaitEventComplete:720 waiting is timeout, sending reset remote event
E: [ncAPI] [ 550595] ncFifoReadElem:3069 Packet reading is failed.
Did you get this at some point?
Thanks!
@Caspi, Itai
> Did any of you manage to run the tiny-yolo-v3 network through the opencv python API?
Sorry, I have not tried Python - just C++. How can we reproduce your issue? What command did you run?
@Hyodo, Katsuya-san
> succeeded in operating tiny-YoloV3.
> Core i7 alone is 60 FPS.
Very nice! Is accuracy similar to the reference darknet code and tensorflow-yolo-v3?
It would be nice to see how accuracy is affected as we move from
darknet -> tensorflow -> openvino fp32 CPU -> fp16 GPU / NCS(2)
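One crude way to quantify that drift, assuming you can dump the final boxes from each stage, is to check what fraction of reference detections are matched by IoU (a sketch with made-up box coordinates, not a proper mAP computation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def matched_fraction(ref, test, thresh=0.5):
    """Fraction of reference detections that some test detection overlaps
    with IoU >= thresh (a crude agreement score between two backends)."""
    return sum(any(iou(r, t) >= thresh for t in test) for r in ref) / len(ref)

# toy example: darknet vs. openvino boxes on one image (made-up numbers)
darknet = [[100, 100, 200, 200], [300, 50, 400, 150]]
openvino = [[102, 98, 198, 205], [320, 80, 430, 180]]
print(matched_fraction(darknet, openvino))  # → 0.5
```

Running this per image over a validation set, at each stage of the pipeline, would show where the accuracy actually degrades.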
Cheers,
Nikos
A somewhat related question: I couldn't find benchmarks of the Intel GPU and NCS2 on the well-known object detection networks such as YOLO, tiny YOLO, SSD, and Faster R-CNN.
@nikos, do you perhaps have such a benchmark?
@Meller, Daniel
> @nikos maybe do you have such benchmark?
Assuming you are referring to inference speed (in fps), then I have a good - unofficial - idea of performance on the various inference devices I have tried in my lab. It varies a lot depending on the inference device (CPU/GPU/NCS), the number of cores/EUs (dual-core vs. quad-core CPU, GT2 GPU vs. GT4 GPU with more EUs), clock frequencies (mobile chips vs. desktop CPUs, for example), YOLO tiny or not, input size (320, 416 or 608), and FP32 vs. FP16 on GPU. It would be a huge unofficial matrix, and there is no point publishing it here. Typically CPU FP32 fps is similar to GPU FP16, and NCS is slower, with NCS2 faster. Katsuya-san published a few fps numbers too.
My main concern at this point is accuracy, as the pipeline that gets us from Darknet to TensorFlow is not very accurate and gets even worse for the tiny version. There seem to be other ways of getting Darknet -> ? -> OpenVINO IR that are more accurate. The issues logged in https://github.com/mystic123/tensorflow-yolo-v3/issues concern me a bit, so fps is not the main issue right now.
Cheers,
nikos
@nikos it would be greatly appreciated to receive this matrix, even though it is not official.
As a company we are required to do the benchmarking ourselves, instead of Intel just providing one to the public, to know if the product fits our needs!
Sure, yes please DM :-)
Caspi, Itai wrote:
Hi guys,
Did any of you manage to run the tiny-yolo-v3 network through the opencv python API?
I followed the same steps you guys followed, but am getting the following errors:
E: [xLink] [ 529794] dispatcherEventReceive:308 dispatcherEventReceive() Read failed -4 | event 0x7f07137fdee0 USB_READ_REL_REQ
(... same xLink / watchdog / NC_ERROR log as quoted above ...)
Did you get this at some point?
Thanks!
I get similar stuff after a while of running on AI Core X (MyriadX):
E: [xLink] [ 115374] dispatcherEventReceive:308 dispatcherEventReceive() Read failed -4 | event 0x7fceb4b97de0 USB_READ_REL_REQ
E: [xLink] [ 115374] eventReader:256 eventReader stopped
E: [watchdog] [ 116374] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 117375] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 118375] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 119375] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 120375] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
I saw such sendPingMessage errors when I used an NCS2 on an RPi with a USB extension cable...
I have the same issue for one particular model (the NCS works OK for another model). How can I debug this? Is there any way to set the log level and enable debug logs coming from the NCS? Or just to run the network layer by layer?
nikos wrote:
Hello Katsuya-san,
Your environment looks good. How did you create frozen_tiny_yolo_v3.pb ?
(... same advice as above: use convert_weights_pb.py with --tiny, pass --tensorflow_use_custom_operations_config, and create yolo_v3-tiny.json ...)
Cheers,
Nikos
I'm using OpenVINO R5 and I'm trying to run "object_detection_demo_yolov3_async" with the tiny-YoloV3 downloaded from https://github.com/PINTO0309/OpenVINO-YoloV3/tree/master/lrmodels/tiny-YoloV3/FP32
The sample returns this error: [ ERROR ] This demo only accepts networks with three layers
So I tried to download the .pb model of tiny-YoloV3 and convert it with the Model Optimizer.
How can I create the yolo_v3-tiny.json ?
Thanks
Stefano