Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to use my own TF Object Detection API model with the NCSDK 2?

idata
Employee
716 Views

Hi everyone. I now have the following files from the TF Object Detection API: frozen_inference_graph.pb, model.ckpt.data-00000-of-00001, model.ckpt.index, and model.ckpt.meta.

 

What should I do next?
5 Replies
idata
Employee

Hi @YoucanBaby

 

Thank you for reaching out! Now that you have your TensorFlow model, you will need to convert it to an Intel Movidius Graph file that can be used with the Neural Compute Stick.

 

The Neural Compute SDK includes the mvNCCompile tool, which compiles a network into a graph file.

 

Take a look at the Tools provided with the NCSDK.
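As a rough sketch of what the compile step looks like (the file name and node names below are placeholders, not values from this thread; substitute your own model's input and output node names):

```shell
# Sketch only: paths and node names are placeholders for your own model.
#   -s        : number of SHAVE vector cores to compile for
#   -in / -on : input and output node names of the frozen graph
#   -is       : input image width and height
#   -o        : output Movidius graph file name
mvNCCompile frozen_inference_graph.pb -s 12 -in=input -on=output \
    -is 224 224 -o my_model.graph
```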

 

Once you have a graph file, you will need to write a C or Python program to load the graph into the Neural Compute Stick. I would start by looking at the examples included in the NCSDK and NCAPPZOO.

 

https://github.com/movidius/ncsdk/tree/master/examples

 

https://github.com/movidius/ncappzoo/tree/master/tensorflow
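For NCSDK 2, the Python flow in those examples looks roughly like the sketch below. This is untested illustration, not a definitive program: it assumes the mvnc module shipped with NCSDK 2, a plugged-in Neural Compute Stick, and a compiled graph file named my_model.graph (a placeholder name).

```python
# Sketch of the NCSDK 2 Python API flow; requires an attached NCS device.
import numpy as np
from mvnc import mvncapi

# Find and open the first Neural Compute Stick.
devices = mvncapi.enumerate_devices()
device = mvncapi.Device(devices[0])
device.open()

# Load the compiled graph file and allocate it with input/output FIFOs.
with open('my_model.graph', 'rb') as f:
    graph_buffer = f.read()
graph = mvncapi.Graph('my_graph')
fifo_in, fifo_out = graph.allocate_with_fifos(device, graph_buffer)

# Queue one inference on a preprocessed tensor, then read the result back.
input_tensor = np.zeros((224, 224, 3), dtype=np.float32)  # replace with a real image
graph.queue_inference_with_fifo_elem(fifo_in, fifo_out, input_tensor, 'user object')
output, user_obj = fifo_out.read_elem()
print(output)

# Clean up device resources.
fifo_in.destroy()
fifo_out.destroy()
graph.destroy()
device.close()
device.destroy()
```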

 

Hope this helps!

 

Regards,

 

Jesus
idata
Employee

Hi @Jesus_at_Intel

 

Thanks! It works! :) But I have run into an error. :'(

 

When I run mvNCCompile frozen_graph_slim_87%.pb -s 12 -in=input -on=InceptionV2/Predictions/Reshape_1 -is 224 224 -o inception_v2.graph, I get the following errors.

 

Caused by op 'InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise', defined at:
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 104, in create_graph
    net = parse_tensor(args, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 211, in parse_tensor
    tf.import_graph_def(graph_def, name="")
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'dilations' not in Op output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_input_0_0, InceptionV2/Conv2d_1a_7x7/depthwise_weights). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.)

[[Node: InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_input_0_0, InceptionV2/Conv2d_1a_7x7/depthwise_weights)]]

 

I used bazel to check the input/output nodes of my model. The information follows:

 

Found 1 possible inputs: (name=input, type=float(1), shape=[1,224,224,3])
No variables spotted.
Found 1 possible outputs: (name=InceptionV2/Predictions/Reshape_1, op=Reshape)
Found 10187114 (10.19M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 357 Const, 278 Identity, 70 Conv2D, 69 Relu, 68 FusedBatchNorm, 10 ConcatV2, 8 AvgPool, 5 MaxPool, 2 BiasAdd, 2 Reshape, 1 DepthwiseConv2dNative, 1 Placeholder, 1 Softmax, 1 Squeeze

To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/xuyifang/Desktop/ncs/frozen_graph_slim_87%.pb --show_flops --input_layer=input --input_layer_type=float --input_layer_shape=1,224,224,3 --output_layer=InceptionV2/Predictions/Reshape_1

 

Can you help me? Thank you very much!

idata
Employee

Hi @YoucanBaby

 

Could you share the model you are trying to compile? I would like to try this myself.

 

Which Neural Compute SDK version are you using? Are you using the Neural Compute Stick 1 or 2?

 

Regards,

 

Jesus
idata
Employee

Hi @Jesus_at_Intel

 

I used TensorFlow 1.5 to solve the last problem, but now I face a new one: every predicted probability is NaN. I don't know why, because the model still produces correct results on my computer.

 

This is the result:

 

------------prediction-----------------
prediction 0 (probability nan) is 3, label index is: 3
prediction 1 (probability nan) is 2, label index is: 2
prediction 2 (probability nan) is 1, label index is: 1
prediction 3 (probability nan) is 0, label index is: 0

 

Can you help me? Thanks a lot!
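For context on NaN results: the NCS computes in 16-bit floating point, so values that overflow float16's range (maximum about 65504), for example from unnormalized input images, can turn every probability into NaN. The thread does not confirm this is the cause here, but a small NumPy sketch shows the effect:

```python
import numpy as np

def softmax(x):
    # Standard max-subtraction softmax; still produces NaN if x contains inf.
    x = x - np.max(x)
    e = np.exp(x)
    return e / e.sum()

# A value that is fine in float32 overflows to inf in float16 (max ~65504).
print(np.float16(np.float32(90000.0)))  # inf

# Once one logit has overflowed to inf, the whole softmax collapses to NaN:
logits16 = np.array([90000.0, 1.0, -1.0], dtype=np.float16)
print(softmax(logits16))  # [nan nan nan]

# With values scaled into a sane range, float16 behaves normally:
logits16_scaled = np.array([9.0, 1.0, -1.0], dtype=np.float16)
print(softmax(logits16_scaled))  # finite probabilities summing to ~1
```

This is why most NCS examples normalize the input image (e.g. to [-1, 1]) before inference, matching the preprocessing used in training.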

idata
Employee

Hi @YoucanBaby

 

Can you share the model and steps to reproduce?

 

The following thread seems to be similar to the issue you are seeing.

 

https://ncsforum.movidius.com/discussion/598/tensorflow-nan-outputs-after-training-n-steps

 

Regards,

 

Jesus
Reply