nikos1
Valued Contributor I

Tiny YOLOv3 on NCS2 (FP16) in R5 SDK

Anyone managed to get tiny YOLOv3 running on NCS2 in R5?

I am getting 

[ ERROR ] [VPU] Internal error: Output in 
detector/yolo-v3-tiny/pool2_5/MaxPool has incorrect width dimension. 
Expected: 9 or 9 Actual: 10

[ For the record: non-tiny YOLOv3 now runs fine on NCS2 in the new R5 SDK, and the same tiny FP16 IR runs fine on GPU. ]

Thanks,

Nikos

nikos1
Valued Contributor I

Sorry, my mistake. Tiny YOLOv3 works fine in the R5 SDK on NCS2 with an FP16 IR (size 416x416).

Speed is about 20 fps - impressive!

 

performance counts:

LeakyReLU_                    OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
LeakyReLU_837                 OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
LeakyReLU_838                 OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
LeakyReLU_838@soc=2/2@accum   EXECUTED       layerType: Convolution        realTime: 277        cpu: 277            execType: Sum
LeakyReLU_839                 OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
LeakyReLU_840                 OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
LeakyReLU_841                 OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
LeakyReLU_842                 OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
LeakyReLU_842 -> LeakyReLU... EXECUTED       layerType: Resample           realTime: 217        cpu: 217            execType: Permute
LeakyReLU_843                 OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
LeakyReLU_844                 OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
LeakyReLU_844@soc=2/2@accum   EXECUTED       layerType: Convolution        realTime: 261        cpu: 261            execType: Sum
LeakyReLU_845                 OPTIMIZED_OUT  layerType: ReLU               realTime: 0          cpu: 0              execType: ReLU
Receive-Tensor                EXECUTED       layerType: Receive-Tensor     realTime: 0          cpu: 0              execType: Receive-Tensor
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1774       cpu: 1774           execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1766       cpu: 1766           execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1770       cpu: 1770           execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1769       cpu: 1769           execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 425        cpu: 425            execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 22         cpu: 22             execType: Copy
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1408       cpu: 1408           execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1417       cpu: 1417           execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 186        cpu: 186            execType: MyriadXHwConvolution + injected[Copy]
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 18         cpu: 18             execType: Copy
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 51         cpu: 51             execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 2034       cpu: 2034           execType: MyriadXHwConvolution + injected[Permute]
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 2092       cpu: 2092           execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 203        cpu: 203            execType: LeakyRelu
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: RegionYolo         realTime: 13435      cpu: 13435          execType: RegionYolo
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: <Extra>            realTime: 266        cpu: 266            execType: Convert_f16f32
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 325        cpu: 325            execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: RegionYolo         realTime: 1278       cpu: 1278           execType: Permute
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1439       cpu: 1439           execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1290       cpu: 1290           execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1472       cpu: 1472           execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 1743       cpu: 1743           execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 3417       cpu: 3417           execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 3352       cpu: 3352           execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 205        cpu: 205            execType: LeakyRelu
detector/yolo-v3-tiny/Conv... EXECUTED       layerType: Convolution        realTime: 336        cpu: 336            execType: MyriadXHwConvolution
detector/yolo-v3-tiny/Resi... EXECUTED       layerType: Resample           realTime: 121        cpu: 121            execType: Resample
detector/yolo-v3-tiny/conc... OPTIMIZED_OUT  layerType: Concat             realTime: 0          cpu: 0              execType: Concat
detector/yolo-v3-tiny/conc... EXECUTED       layerType: Convolution        realTime: 977        cpu: 977            execType: Permute
detector/yolo-v3-tiny/pool... OPTIMIZED_OUT  layerType: Pooling            realTime: 0          cpu: 0              execType: Pooling
detector/yolo-v3-tiny/pool... OPTIMIZED_OUT  layerType: Pooling            realTime: 0          cpu: 0              execType: Pooling
detector/yolo-v3-tiny/pool... OPTIMIZED_OUT  layerType: Pooling            realTime: 0          cpu: 0              execType: Pooling
detector/yolo-v3-tiny/pool... OPTIMIZED_OUT  layerType: Pooling            realTime: 0          cpu: 0              execType: Pooling
detector/yolo-v3-tiny/pool... EXECUTED       layerType: Pooling            realTime: 457        cpu: 457            execType: MyriadXHwPooling + injected[Permute]
detector/yolo-v3-tiny/pool... EXECUTED       layerType: Pooling            realTime: 140        cpu: 140            execType: MyriadXHwPooling
detector/yolo-v3-tiny/pool... EXECUTED       layerType: Pooling            realTime: 1242       cpu: 1242           execType: CopyMakeBorder
inputs@FP16                   EXECUTED       layerType: <Extra>            realTime: 474        cpu: 474            execType: Convert_u8f16
Total time: 47659    microseconds
[ WARN:0] terminating async callback
[ INFO ] Execution successful
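As a sanity check on the quoted frame rate, the throughput follows directly from the "Total time" in the profile above (assuming the pipeline keeps the device fully busy, i.e. fps ≈ 1 / latency):

```python
# The profile above reports total inference time in microseconds; the
# "about 20 fps" figure follows directly from it, assuming back-to-back
# inferences with no pipelining overlap.
total_us = 47659
fps = 1e6 / total_us
print(round(fps, 1))  # ~21 fps, consistent with the figure quoted above
```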

 

Hyodo__Katsuya
Innovator

Hello, @nikos

I am also trying to convert tiny-YoloV3.

I know that it cannot be converted directly.

I generated tiny-YoloV3's .pb file by referring to the following repository, but I am stuck because an error occurs when converting it to an IR model.

Could you advise me on what customizations you made?

 https://github.com/mystic123/tensorflow-yolo-v3.git 

b920405@ubuntu:~/git/OpenVINO-YoloV3$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
> --input_model pbmodels/frozen_tiny_yolo_v3.pb \
> --output_dir lrmodels/tiny-YoloV3/FP16 \
> --input inputs \
> --output detector/yolo-v3-tiny/detections \
> --data_type FP16 \
> --batch 1
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/b920405/git/OpenVINO-YoloV3/pbmodels/frozen_tiny_yolo_v3.pb
	- Path for generated IR: 	/home/b920405/git/OpenVINO-YoloV3/lrmodels/tiny-YoloV3/FP16
	- IR output name: 	frozen_tiny_yolo_v3
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	inputs
	- Output layers: 	detector/yolo-v3-tiny/detections
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.5.12.49d067a0
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/fusing/decomposition.py:65: RuntimeWarning: invalid value encountered in sqrt
  scale = 1. / np.sqrt(variance.value + eps)
[ ERROR ]  List of operations that cannot be converted to IE IR:
[ ERROR ]      Exp (2)
[ ERROR ]          detector/yolo-v3-tiny/Exp
[ ERROR ]          detector/yolo-v3-tiny/Exp_1
[ ERROR ]  Part of the nodes was not translated to IE. Stopped. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #24.

 

My Environment

- Ubuntu 16.04

- OpenVINO R5 2018.5.445

- OpenCV 4.0.1-openvino

 

nikos1
Valued Contributor I

Hello Katsuya-san, 

Your environment looks good. How did you create frozen_tiny_yolo_v3.pb? I used the latest master of tensorflow-yolo-v3 with convert_weights_pb.py. For the tiny model, please also pass --tiny, and you may need to specify the size ( --size 416 ).

Also, in the Model Optimizer command, please specify the config ( --tensorflow_use_custom_operations_config ).

An example is 

python3 mo_tf.py --input_model frozen_darknet_yolov3-tiny_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3-tiny.json  --input_shape=[1,416,416,3] --data_type=FP16
You would have to create yolo_v3-tiny.json in a similar way to the existing yolo_v3.json, with minor modifications for your network.
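A minimal script to generate such a json could look like the sketch below. Note the entry_points names and the tiny anchor values are assumptions on my part (the Reshape names have to be read out of your frozen graph, and the anchors are the darknet tiny-YOLOv3 defaults), so adjust them to your network:

```python
import json

# Sketch of a yolo_v3-tiny.json for --tensorflow_use_custom_operations_config.
# ASSUMPTIONS: the entry_points names come from inspecting the frozen graph,
# and the anchors are the darknet tiny-YOLOv3 defaults - verify both against
# your own model before use.
tiny_config = [{
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
        "classes": 80,
        "coords": 4,
        "num": 6,                      # tiny YOLOv3 uses 6 anchors, not 9
        "mask": [0, 1, 2],
        "anchors": [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319],
        "entry_points": ["detector/yolo-v3-tiny/Reshape",
                         "detector/yolo-v3-tiny/Reshape_4"],
    },
}]

# Write the file that mo_tf.py will consume.
with open("yolo_v3-tiny.json", "w") as f:
    json.dump(tiny_config, f, indent=3)
```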

Cheers,

Nikos

Hyodo__Katsuya
Innovator

@nikos Thank you for the quick reply. I always see your tremendous contributions on this forum. I respect you.

> I used the latest master of tensorflow-yolo-v3 and convert_weights_pb.py. For tiny please also --tiny and may need to specify size ( --size 416 ).

Yes, I am doing exactly the same procedure as you. However, "--tensorflow_use_custom_operations_config" was not specified:

python3 mo_tf.py --input_model frozen_tiny_yolo_v3.pb --input_shape=[1,416,416,3] --data_type=FP16

> Also in the model optimizer command please specify config

I think you are right. I had assumed that using "--tensorflow_use_custom_operations_config" would be unavoidable. I anticipate that this part will need to be changed:

"entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]

How did you find out which layers should be the "entry_points"?

[
   {
      "id": "TFYOLOV3",
      "match_kind": "general",
      "custom_attributes": {
         "classes": 80,
         "coords": 4,
         "num": 9,
         "mask": [3,4,5],
         "jitter": 0.3,
         "ignore_thresh": 0.7,
         "truth_thresh": 1,
         "random": 1,
         "anchors": [10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326],
         "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
      }
   }
]
nikos1
Valued Contributor I

Hello Katsuya-san,

Thank you for your kind words.

> Perhaps, I anticipate that this part will be changed.
> "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]

Exactly! I used TensorFlow tools and other SDK tools to find the names. It seems they are called "detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4", etc.

I am not on the same system right now, but I remember I used something like this:

[
   {
      "id": "TFYOLOV3",
      "match_kind": "general",
      "custom_attributes": {
      "classes": 80,
      "coords": 4,
      "num": 9,
      "mask": [0, 1, 2],
      "entry_points": ["detector/yolo-v3-tiny/Reshape","detector/yolo-v3-tiny/Reshape_4"]
      }
   }
]

You would also have to change anchors in the C++ sample or in your application. 
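To make the anchor change concrete, here is a small sketch of how YOLOv3 decodes box sizes from anchors, which is why using the full-model anchors with the tiny model distorts every box. The tiny anchor values below are the darknet defaults, an assumption on my part; verify them against your .cfg:

```python
import math

# YOLOv3 decodes each predicted box size from its anchor:
#   bw = anchor_w * exp(tw),  bh = anchor_h * exp(th)
# ASSUMPTION: these anchor lists are the darknet defaults for the full and
# tiny models respectively - check them against your own .cfg.
YOLOV3_ANCHORS      = [10,13, 16,30, 33,23, 30,61, 62,45, 59,119,
                       116,90, 156,198, 373,326]
TINY_YOLOV3_ANCHORS = [10,14, 23,27, 37,58, 81,82, 135,169, 344,319]

def decode_box_size(anchors, anchor_idx, tw, th):
    """Turn raw network outputs (tw, th) into a box width/height in pixels."""
    aw, ah = anchors[2 * anchor_idx], anchors[2 * anchor_idx + 1]
    return aw * math.exp(tw), ah * math.exp(th)

# The same raw output decodes to different boxes under each anchor set,
# which is why the sample's anchors must match the model being run.
print(decode_box_size(YOLOV3_ANCHORS, 4, 0.5, 0.5))
print(decode_box_size(TINY_YOLOV3_ANCHORS, 4, 0.5, 0.5))
```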

After you specify a modified json similar to the above, the command below should succeed:

python3 mo_tf.py --input_model frozen_darknet_yolov3-tiny_model.pb --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3-tiny.json --input_shape=[1,416,416,3] --data_type=FP16

Cheers,

Nikos

 

Hyodo__Katsuya
Innovator

@Nikos Great!! I am very grateful to you!! I will try it soon.

By the way: "Github - PINTO0309" = "Youtube - PINTO0309" = "This forum - Hyodo, Katsuya" = "Twitter - PINTO03091" = "NCS forum - PINTO" = "Japanese Article Qiita - PINTO".

Thank you.
nikos1
Valued Contributor I

Impressive!! Thank you!

Caspi__Itai
Beginner

Hi guys,

Did any of you manage to run the tiny-YoloV3 network through the OpenCV Python API?

I followed the same steps you guys followed, but am getting the following errors:

E: [xLink] [    529794] dispatcherEventReceive:308	dispatcherEventReceive() Read failed -4 | event 0x7f07137fdee0 USB_READ_REL_REQ

E: [xLink] [    529794] eventReader:256	eventReader stopped
E: [watchdog] [    530480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    531480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    532480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    533480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    534480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    535480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    536480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    537480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    538480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    539480] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    540481] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    541481] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    542481] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    543481] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    544481] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    545481] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    546481] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    547481] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    548481] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    549482] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
E: [watchdog] [    550482] sendPingMessage:164	Failed send ping message: X_LINK_ERROR
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Failed to read output from FIFO: NC_ERROR
E: [xLink] [    550595] dispatcherWaitEventComplete:720	waiting is timeout, sending reset remote event
E: [ncAPI] [    550595] ncFifoReadElem:3069	Packet reading is failed.

 

Did you get this at some point? 

Thanks!

Hyodo__Katsuya
Innovator

@nikos, Thank you very much. Thanks to you I succeeded in running tiny-YoloV3. On a Core i7 alone it is 60 FPS.

tiny-YoloV3: https://youtu.be/md4udC4baZA
C++ implementation: https://github.com/PINTO0309/OpenVINO-YoloV3.git
nikos1
Valued Contributor I

@ Caspi, Itai

> Did any of you manage to run the tiny-yolo-v3 network through the opencv python API?

Sorry, I have not tried Python - just C++. How can we reproduce your issue? What command did you run?

@ Hyodo, Katsuya-san

> succeeded in operating tiny-YoloV3.
> Core i7 alone is 60 FPS.

Very nice! Is the accuracy similar to the reference darknet code and tensorflow-yolo-v3?

It would be nice to see how accuracy is affected as we move from

darknet -> tensorflow -> openvino fp32 CPU -> fp16 GPU / NCS(2)
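One way to put numbers on that comparison is to run the same images through each backend and match the resulting detections by IoU. The sketch below is generic and not part of any SDK sample; `matched_fraction` is a hypothetical helper name, and boxes are assumed to be (x1, y1, x2, y2) tuples:

```python
# Generic sketch for comparing detections across backends (darknet vs. the
# OpenVINO CPU/GPU/NCS runs). Boxes are (x1, y1, x2, y2); names here are
# illustrative, not from any SDK.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def matched_fraction(ref_boxes, test_boxes, thresh=0.5):
    """Fraction of reference detections reproduced by the other backend."""
    hits = sum(1 for r in ref_boxes
               if any(iou(r, t) >= thresh for t in test_boxes))
    return hits / len(ref_boxes) if ref_boxes else 1.0
```

Running this per image with the darknet output as `ref_boxes` would show where along the darknet -> tensorflow -> IR pipeline the drift appears.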

Cheers,

Nikos

Hyodo__Katsuya
Innovator

@nikos

> Is accuracy similar to the reference darknet code and tensorflow-yolo-v3 ?

I have not compared them exactly yet, but subjectively I do not notice a difference. As I could predict before trying, tiny-YoloV3 is fast, but its accuracy is quite poor.

> Would be nice to see how accuracy is affected as we move from
> darknet -> tensorflow -> openvino fp32 CPU -> fp16 GPU / NCS(2)

I will give it a try if I can secure time later. At a later date I will also reimplement it with the Python API.
Meller__Daniel
Beginner

A somewhat related question: I couldn't find benchmarks of the Intel GPU and NCS2 for the well-known object detection networks such as YOLO, tiny-YOLO, SSD, and Faster R-CNN.
@nikos, do you perhaps have such a benchmark?

 

nikos1
Valued Contributor I

@Meller, Daniel

> @nikos maybe do you have such benchmark? 

Assuming you are referring to inference speed (in fps), then I have a good - unofficial - idea of performance on the various inference devices I have tried in my lab. It varies a lot depending on the inference device (CPU/GPU/NCS), the number of cores/EUs (dual-core vs. quad-core CPU; GT2 GPU vs. GT4 GPU, which has more EUs), clock frequencies (mobile chips vs. desktop CPUs, for example), tiny vs. full YOLO, input size (320, 416, or 608), and FP32 vs. FP16 on GPU. It would be a huge unofficial matrix and there is no point publishing it here. Typically CPU FP32 fps are similar to GPU FP16, the NCS is slower, and the NCS2 is faster. Katsuya-san published a few fps figures too.

My main concern at this point is accuracy, as the pipeline that gets us from Darknet to TensorFlow is not very accurate and gets even worse for the tiny version. There seem to be other ways of getting Darknet -> ? -> OpenVINO IR that are more accurate. The issues logged in https://github.com/mystic123/tensorflow-yolo-v3/issues concern me a bit, so fps is not the main issue right now.

Cheers,

nikos

 

Meller__Daniel
Beginner

@nikos it would be greatly appreciated to receive this matrix, even though it is unofficial.
As a company we are required to do the benchmarking ourselves, instead of Intel just providing one to the public, to know whether the product fits our needs!

 

nikos1
Valued Contributor I

Sure, yes please DM :-)

Peniak__Martin
Beginner

Caspi, Itai wrote:

Hi guys,

Did any of you manage to run the tiny-yolo-v3 network through the opencv python API?

I followed the same steps you guys followed, but am getting the following errors:

E: [xLink] [    529794] dispatcherEventReceive:308	dispatcherEventReceive() Read failed -4 | event 0x7f07137fdee0 USB_READ_REL_REQ
[... the same watchdog X_LINK_ERROR / NC_ERROR log as in the original post above ...]

Did you get this at some point? 

Thanks!

 

I get similar errors after a while of running on an AI Core X (MyriadX):

E: [xLink] [    115374] dispatcherEventReceive:308 dispatcherEventReceive() Read failed -4 | event 0x7fceb4b97de0 USB_READ_REL_REQ
E: [xLink] [    115374] eventReader:256 eventReader stopped
E: [watchdog] [    116374] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [    117375] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [    118375] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [    119375] sendPingMessage:164 Failed send ping message: X_LINK_ERROR
E: [watchdog] [    120375] sendPingMessage:164 Failed send ping message: X_LINK_ERROR

Leini__Mikk
Beginner

I saw such sendPingMessage errors when I used the NCS2 on a Raspberry Pi with a USB extension cable...

om77
New Contributor I

I have the same issue with one particular model (the NCS works fine with another model). How can this be debugged? Is there a way to set the log level and enable debug logs from the NCS, or to unroll the network layer by layer?

Stefano_M_
Beginner

nikos wrote:

[... quote of nikos's earlier reply about convert_weights_pb.py and --tensorflow_use_custom_operations_config ...]

 

I'm using OpenVINO R5 and I'm trying to run "object_detection_demo_yolov3_async" with the tiny-YoloV3 downloaded from https://github.com/PINTO0309/OpenVINO-YoloV3/tree/master/lrmodels/tiny-YoloV3/FP32

The sample returns this error: [ ERROR ] This demo only accepts networks with three layers

So I tried to download the .pb model of tiny-YoloV3 and convert it with the Model Optimizer.

How can I create the yolo_v3-tiny.json?

Thanks

Stefano

 

 

 
