Hello dear community,
First of all, congratulations on the great job done with this VPU stick; it is really amazing!
I am working on a project that started out using TensorFlow on the CPU, but since I needed faster inference, I decided to move to the Movidius stick.
I have been able to run the typical object detector, using the "video_objects" example provided in the NC App Zoo. So far so good.
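For reference, the core of that example boils down to a few NCSDK v1 API calls. This is just a trimmed sketch with the preprocessing simplified: the exact resize and normalization depend on the network, and the 300x300 input size is my assumption for SSD-style models.

import numpy
import cv2
from mvnc import mvncapi as mvnc

# Open the first Neural Compute Stick found on the USB bus.
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a graph compiled earlier with mvNCCompile onto the stick.
with open('graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())

# One inference: resize and cast to what the network expects (fp16).
img = cv2.resize(cv2.imread('frame.jpg'), (300, 300)).astype(numpy.float16)
graph.LoadTensor(img, 'user object')
output, userobj = graph.GetResult()

graph.DeallocateGraph()
device.CloseDevice()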
The problem I have is that my trained network is based on "Faster-RCNN-Inception-V2" (http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz), and I don't know whether it is possible to run this kind of network on the NCS, or what steps to follow. I am running this network with tensorflow-cpu on a Raspberry Pi and it is very slow (8 minutes per inference… quite a lot, but I only need to do one inference), which is why I would like to move to a Movidius graph.
I have seen in some posts that this network is not supported, but I don't know whether that is still the case. If it is now supported, could you give me some guidance on how to export this frozen model to a Movidius graph?
Thanks in advance
- Tags:
- Tensorflow
Update: this is what I am trying to do. As a proof of concept I am taking two retrained models, one based on "Faster-RCNN-Inception-V2" and the other on "ssd_mobilenet_v1_coco_2017_11_17".
Compiling the Faster R-CNN checkpoint:
sudo mvNCCompile -s 12 model.ckpt-21306.meta -in=image_tensor -on=detection_boxes,detection_scores,detection_classes,num_detections
Throws:
Traceback (most recent call last):
File "/usr/local/bin/mvNCCompile", line 118, in <module>
create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
File "/usr/local/bin/mvNCCompile", line 104, in create_graph
net = parse_tensor(args, myriad_config)
File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 213, in parse_tensor
saver = tf.train.import_meta_graph(path, clear_devices=True)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1810, in import_meta_graph
**kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/meta_graph.py", line 660, in import_scoped_meta_graph
producer_op_list=producer_op_list)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 292, in import_graph_def
op_def = op_dict[node.op]
KeyError: 'ParallelInterleaveDataset'
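As far as I can tell, this KeyError comes from the tf.data input pipeline that training bakes into the .meta file; ParallelInterleaveDataset is a dataset op, not an inference op. The way around it is presumably to export an inference-only graph with the Object Detection API's exporter, which is how the frozen graph I use below was produced in the first place. A sketch, assuming the training pipeline.config sits in the working directory:

python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path pipeline.config \
    --trained_checkpoint_prefix model.ckpt-21306 \
    --output_directory inference_graph/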
In short, ParallelInterleaveDataset is not among the items contained in the default op_dict of protos, and I don't know how to dig further down the .meta route, so I will use the frozen graph:
sudo mvNCCompile -s 12 inference_graph/frozen_inference_graph_great.pb -in=image_tensor -on=detection_boxes,detection_scores,detection_classes,num_detections
Which throws:
[Error 13] Toolkit Error: Provided OutputNode/InputNode name does not exist or does not match with one contained in model file Provided: detection_boxes,detection_scores,detection_classes,num_detections:0
OK, this is because it's reading the comma-separated list as one node name rather than as separate output nodes, so let's try a single output node:
sudo mvNCCompile -s 12 inference_graph/frozen_inference_graph_great.pb -in=image_tensor -on=detection_boxes
Which throws:
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:766: DeprecationWarning: builtin type EagerTensor has no __module__ attribute
EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if d.decorator_argspec is not None), _inspect.getargspec(target))
Traceback (most recent call last):
File "/usr/local/bin/mvNCCompile", line 118, in <module>
create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
File "/usr/local/bin/mvNCCompile", line 104, in create_graph
net = parse_tensor(args, myriad_config)
File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 259, in parse_tensor
input_data = np.random.uniform(0, 1, shape)
File "mtrand.pyx", line 1302, in mtrand.RandomState.uniform
File "mtrand.pyx", line 242, in mtrand.cont2_array_sc
TypeError: 'NoneType' object cannot be interpreted as an integer
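The None here seems to be the frozen graph's input placeholder: as far as I can tell, the Object Detection API exports image_tensor with shape [None, None, None, 3], and mvNCCompile feeds that shape straight into np.random.uniform. It can be checked directly; a minimal TF 1.x sketch, using the same frozen graph path as above:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('inference_graph/frozen_inference_graph_great.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# Prints (?, ?, ?, 3): every unknown dimension becomes None,
# which is what np.random.uniform cannot interpret as an integer.
print(graph.get_tensor_by_name('image_tensor:0').shape)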
Let's move to a supported network, MobileNet SSD. Doing the same, I get basically the same outputs.
What am I missing? Thanks in advance
Same here, with RCNN_Resnet101.
$ mvNCCompile -s 12 rcnn_frozen_inference_graph.pb -in=image_tensor -on=detection_boxes
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:766: DeprecationWarning: builtin type EagerTensor has no __module__ attribute
EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if d.decorator_argspec is not None), _inspect.getargspec(target))
Traceback (most recent call last):
File "/usr/local/bin/mvNCCompile", line 118, in <module>
create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
File "/usr/local/bin/mvNCCompile", line 104, in create_graph
net = parse_tensor(args, myriad_config)
File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 259, in parse_tensor
input_data = np.random.uniform(0, 1, shape)
File "mtrand.pyx", line 1302, in mtrand.RandomState.uniform
File "mtrand.pyx", line 242, in mtrand.cont2_array_sc
TypeError: 'NoneType' object cannot be interpreted as an integer
@JoseSecmotic @azmath We haven't added any updates regarding Faster R-CNN support in the NCSDK. As an alternative, you can check out Intel's OpenVINO toolkit, which offers some support for Faster R-CNNs and is compatible with the NCS device: https://software.intel.com/en-us/articles/OpenVINO-RelNotes
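If you go that route, the conversion is done with the model optimizer's TensorFlow front end. Roughly like the line below; the flag names can differ between releases, so double-check them against the Model Optimizer docs, and note that the NCS needs FP16:

python3 mo_tf.py \
    --input_model frozen_inference_graph.pb \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support.json \
    --data_type FP16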
Thanks a lot @Tome_at_Intel, I will give it a try!
Hi again @Tome_at_Intel
I have successfully installed OpenVINO and tried to use their model optimizer to obtain a framework-agnostic graph. Nevertheless, it keeps failing. Is there any other solution?
For example, I would be interested in exporting my SSD MobileNet to a Movidius graph, but I get the same errors as with the Faster R-CNN model. Could you give me some guidance on how to achieve this?
Thanks in advance, and best regards
@JoseSecmotic If you are using SSD MobileNet for TensorFlow, we don't have support for that yet in the NCSDK. I may have been mistaken in what I said about Faster R-CNNs being supported by OpenVINO on the NCS; it isn't very clear yet which models are supported on which hardware in OpenVINO.
Thanks @Tome_at_Intel!
I decided to develop a new custom model with Caffe, which I guess won't be a problem to run on the NCS.
After a couple of weeks of research with OpenVINO, I wasn't able to export the TensorFlow model with the model optimizer they provide, and even then I wasn't sure whether it would run on the NCS afterwards…
If you have any hint or tutorial on how to develop a single-object classifier for the Movidius NCS, it would be fantastic.
Best regards
Hi @Tome_at_Intel
Regarding your last comment, you say that you don't have support for SSD MobileNet for TensorFlow. Could you tell me which TensorFlow networks can be retrained to perform custom object detection?
Thanks in advance
@JoseSecmotic Tiny YOLO v2 works via a Darkflow transformation, and we have a piece of sample code at https://github.com/movidius/ncappzoo/tree/ncsdk2/tensorflow/tiny_yolo_v2.
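The transformation step itself is Darkflow's flow tool with --savepb, which writes a frozen .pb (plus a .meta) into built_graph/. A sketch, with the cfg and weights paths as placeholders for your own files:

flow --model cfg/yolov2-tiny-voc.cfg --load yolov2-tiny-voc.weights --savepb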
Thanks @Tome_at_Intel! I will give it a shot and come back here to report the results.
Right now I am trying to go from Darknet with YOLOv2, then convert to Caffe, and then compile it with mvNCCompile. When I tried it in my app it gave me a MYRIAD error. I will keep you posted :)
Thanks again
I am a little lost by your link. It downloads cfg and weights files, but if you have custom images that you need recognized, don't you have to regenerate the weights file by training?
Do you have any tutorials on how to take a pre-configured network graph such as Inception V3 or YOLO v2 and retrain it?
Hi @chicagobob123
I'm in the same situation, and so far I have managed to retrain a YOLO v2 with a custom class using Darknet. Then I managed to export it to a frozen TensorFlow graph, and now I'm struggling with the export to Movidius, which keeps failing for now.
I used the following tutorial to retrain for a single custom class: https://timebutt.github.io/static/how-to-train-yolov2-to-detect-custom-objects/
Then, instead of downloading their weights and cfg, run Darkflow on the weights generated by Darknet and you'll have a .pb file.
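From there, compiling for the stick should just be the usual mvNCCompile call on the exported graph. Something like the line below; the input and output node names are Darkflow's defaults as far as I know, so verify them against your own .pb first:

mvNCCompile built_graph/yolov2-tiny-voc.pb -s 12 -in=input -on=output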
Hope that helps ;)
Hello @Tome_at_Intel
Regarding the repository at https://github.com/movidius/ncappzoo/tree/ncsdk2/tensorflow/tiny_yolo_v2 , it seems very useful.
I had to make some changes to get it running with Cython, but I get the following output when trying to compile or profile.
One thing to mention: I am training the network on a GPU/cuDNN cloud machine, and profiling/compiling on a local machine with NCSDK v2 installed.
It seems very difficult to install the SDK as non-root on the cloud machine, so I gave up on doing everything on the same machine…
Could you help me with this?
mvNCProfile -s 12 yolov2-tiny-voc.pb
mvNCProfile v02.00, Copyright @ Movidius Ltd 2016
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:766: DeprecationWarning: builtin type EagerTensor has no __module__ attribute
EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if d.decorator_argspec is not None), _inspect.getargspec(target))
...
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/tensor_util.py:509: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
return np.fromstring(tensor.tensor_content, dtype=dtype).reshape(shape)
2018-07-27 14:43:18.160581: E tensorflow/core/common_runtime/executor.cc:643] Executor failed to create kernel. Invalid argument: NodeDef mentions attr 'dilations' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_FLOAT]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: 0-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Pad, 0-convolutional/filter). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: 0-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Pad, 0-convolutional/filter)]]
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1323, in _do_call
return fn(*args)
....
ist-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'dilations' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_FLOAT]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: 0-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Pad, 0-convolutional/filter). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: 0-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Pad, 0-convolutional/filter)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/mvNCProfile", line 121, in <module>
profile_net(args.network, args.inputnode, args.outputnode, args.nshaves, args.inputsize, args.weights, args.device_no)
File "/usr/local/bin/mvNCProfile", line 104, in profile_net
....
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'dilations' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_FLOAT]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: 0-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Pad, 0-convolutional/filter). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: 0-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Pad, 0-convolutional/filter)]]
Caused by op '0-convolutional', defined at:
File "/usr/local/bin/mvNCProfile", line 121, in <module>
profile_net(args.network, args.inputnode, args.outputnode, args.nshaves, args.inputsize, args.weights, args.device_no)
File "/usr/local/bin/mvNCProfile", line 104, in profile_net
net = parse_tensor(args, myriad_config, file_gen=True)
File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 211, in parse_tensor
tf.import_graph_def(graph_def, name="")
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): NodeDef mentions attr 'dilations' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_FLOAT]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: 0-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Pad, 0-convolutional/filter). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: 0-convolutional = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Pad, 0-convolutional/filter)]]
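Update: from the message, I suspect the .pb was written by a newer TensorFlow (on the cloud machine) than the one the local NCSDK tools run against; the 'dilations' attr on Conv2D was only added in later TF versions. A workaround I am going to try is stripping that attribute from the GraphDef before profiling. A sketch, assuming the graph file name above:

import tensorflow as tf

graph_def = tf.GraphDef()
with open('yolov2-tiny-voc.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Drop the Conv2D 'dilations' attr that older TF runtimes don't know.
# This should be safe here only because every dilation in the dump
# above is [1, 1, 1, 1], i.e. a no-op.
for node in graph_def.node:
    if node.op == 'Conv2D' and 'dilations' in node.attr:
        del node.attr['dilations']

with open('yolov2-tiny-voc-stripped.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())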
First, THANKS for the how-to on retraining YOLO. Before this I created a .pb file using retrain.py and Inception V3 with Google Codelabs. I tested it in Codelabs on a static image and it worked, but I was never able to get that .pb file to compile with mvNCCompile. That is why I wanted an end-to-end tutorial. Movidius has been gracious enough to provide graphs that I have used for testing, but I cannot complete this project without a retrained graph.
To me, there is no real reason to create a new CNN from scratch, since Inception V3 works so well, so why bother?
In the end, if I have to, then I need to know what's allowed, since so many people are having mvNCCompile issues.
Also… you looked into OpenVINO. Did you use it under Linux? I gather you could not get it to work either, is that so?
Hi again @chicagobob123
It seems that we have the same goals: to retrain a model to recognize new classes and to run it on Movidius :)
I looked into OpenVINO and used it under Linux, but sadly I had no success with it.
I am looking forward to hearing whether anyone has managed to retrain a model with a custom class and run it on Movidius, but it seems pretty difficult.
Best regards
@chicagobob123 @JoseSecmotic Mr. @AshwinVijayakumar has a GitHub repo that retrains MobileNet on the flowers dataset at https://github.com/ashwinvijayakumar/ncappzoo/blob/flowers/apps/flowers/Makefile. The resulting model should work on the NCS.
Thanks @Tome_at_Intel! I will take a look at it!
OK Tome_at_Intel, I took a quick look and found a Makefile that runs some Python scripts.
I think I can make my way through it, but I hope Intel understands that an end-to-end process, from training to inference on the stick, including how to reuse and retrain current standard inference graphs, would go a LONG way toward making this product successful. I think the stick is a great idea, and it has already delivered me some mild success; so much so that I keep trying to use it. But I am currently pretty stuck, and Jose has mentioned he had no success with OpenVINO. So deeper spelunking I go.
Hi @chicagobob123
I agree that it would be very helpful to have more documented retraining processes for Movidius, with less restrictive scenarios.
I'm trying to reproduce the steps with the flowers dataset, and I already have my TFRecords generated, but so far I only manage to increase the loss. Do you know why this could happen?
Best regards
Hi @Tome_at_Intel
Regarding the flowers example, this is image classification, so I would need at least 2 classes.
In my case I have a single class, and I would like to perform object detection rather than object classification, so this is not the same scenario. Am I right?
