Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

KeyError: "The name 'input:0' refers to a Tensor which does not exist

idata
Employee

Hi everyone. When I use mvNCCompile to compile my trained model, I run into a problem.

 

KeyError: "The name 'input:0' refers to a Tensor which does not exist. The operation, 'input', does not exist in the graph."

 

Can you tell me how to solve this problem?
idata
Employee

@guohua24, I took the liberty of editing your topic, so that it's easier for other members to figure out the problem statement.

 

The error is basically saying that mvNCCompile expects the input node to be named 'input', but your network probably uses a different name. Here are some examples of how to compile TF models (look at the Makefile in each project):

 

https://github.com/ashwinvijayakumar/ncappzoo/tree/tensorflow-template/tensorflow/inception

 

https://github.com/ashwinvijayakumar/ncappzoo/tree/tensorflow-template/tensorflow/mobilenets

 

If you are using a network that is not supported by TF's net factory, you can use either TensorBoard or summarize_graph to find the input node's name, and pass that name to mvNCCompile's -in option.
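For example, if your graph's input node turns out to be named 'image_tensor' and its output 'detection_out' (hypothetical names), the compile command would look roughly like this (file name is a placeholder; mvNCCompile also has a matching -on option for the output node):

$ mvNCCompile model.meta -in image_tensor -on detection_out -s 12 -o graph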

idata
Employee

Thanks a lot!

idata
Employee

@AshwinVijayakumar

 

Please elaborate! Your reply does not explain anything.

 

I have the same issue when compiling my model.

 

What IS the "input:0" ? How to solve it? And these links you provided have nothing to do with CUSTOM models. They only download preset stuff which I can't modify.

 

My model is my own trained MobileNet SSD v2 COCO model.

idata
Employee

@KoitRK

 

It is the name of the placeholder.

For example:

self.img_placeholder = tf.placeholder(dtype=tf.float32, shape=(227, 227), name="input")

The input of the model you are using is either unnamed or has a name other than "input".

You need to give your placeholder a name yourself and then generate the .pb or .meta.
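As a minimal TF 1.x sketch (the trivial identity "network" and file name are only placeholders for your own model), naming the input and freezing the graph looks like this:

import tensorflow as tf

# Give the input placeholder an explicit name so mvNCCompile's -in option can find it.
inp = tf.placeholder(tf.float32, shape=(1, 227, 227, 3), name="input")
# Stand-in for your real network; replace this with your model's output tensor.
out = tf.identity(inp, name="output")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Fold any variables into constants and write a frozen .pb.
    frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["output"])
    tf.train.write_graph(frozen, ".", "model.pb", as_text=False)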

 

Please convert the contents of .pb to .pbtxt with the script below and check the placeholder's name.

 

https://github.com/PINTO0309/MobileNet-SSD-RealSense/blob/master/tfconverter.py
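If you prefer not to use that script, the conversion itself is only a few lines of TF 1.x Python (file names are placeholders); search the resulting .pbtxt for "Placeholder" to find the input name:

import tensorflow as tf
from google.protobuf import text_format

graph_def = tf.GraphDef()
with tf.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Write the same graph in human-readable text form.
with open("model.pbtxt", "w") as f:
    f.write(text_format.MessageToString(graph_def))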
idata
Employee

@PINTO

 

The compiler no longer yells at the input.

 

But now I get: [Error 13] Toolkit Error: Provided OutputNode/InputNode name does not match with one contained in model file Provided: output:0

 

Any way to check for the output name as well?

 

By the way:

 

I used the script at the link below to create/freeze the inference graph: https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py
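For reference, the invocation of that script typically looks roughly like this (paths and the checkpoint number are placeholders):

$ python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/pipeline.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory exported_model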
idata
Employee

@KoitRK

 

As suggested by AshwinVijayakumar, it is easiest to visualize the network using Tensorboard or summarize_graph.
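If TensorBoard feels overwhelming, a quick alternative is to list candidate node names straight from the frozen .pb with a few lines of TF 1.x Python (the file name is a placeholder):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Placeholders are usually the inputs; the last node is often (but not always) the output.
for node in graph_def.node:
    if node.op == "Placeholder":
        print("possible input:", node.name)
print("last node:", graph_def.node[-1].name, "op:", graph_def.node[-1].op)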

idata
Employee

@PINTO

 

First questions first:

 

Is it even possible to compile 'mobilenet SSD' model trained with 'Tensorflow' with MULTIPLE outputs?

 

Because from the release notes it does seem like 'Mobilenet SSD' is only supported with Caffe. https://movidius.github.io/ncsdk/release_notes.html

 

If that's really the case, I'm doomed, because there are no beginner-friendly Caffe training tutorials (for Win10). I also tried Ubuntu, but couldn't get past the first step.

 

Also, when I tried to look at TensorBoard, I really had no idea what or where to look. There is a looong list of boxes connected to each other but no apparently useful information at first glance.

idata
Employee

@KoitRK

 

 

Is it even possible to compile 'mobilenet SSD' model trained with 'Tensorflow' with MULTIPLE outputs?

 

Unfortunately, no.

And this limitation is not specific to Tensorflow.

 

Also, when I tried to look at TensorBoard, I really had no idea what or where to look. There is a looong list of boxes connected to each other but no apparently useful information at first glance.

 

 

The following is an example procedure for checking the names of the input and output nodes.

 

1. Install Bazel (see the Usage section):

https://github.com/PINTO0309/Bazel_bin.git

Or install it another way.

2. Clone Tensorflow:

$ git clone -b v1.11.0 https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ git checkout -b v1.11.0

3. Build summarize_graph:

$ bazel build tensorflow/tools/graph_transforms:summarize_graph

4. Run summarize_graph (change the path to your .pb file):

$ bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=tensorflow_model.pb

 

For example, in the summarize_graph output below, 'input' is the input node's name, and 'InceptionV1/Logits/Predictions/Reshape_1' is the output node's name:

 

Found 1 possible inputs: (name=input, type=float(1), shape=[1,224,224,3])
No variables spotted.
Found 1 possible outputs: (name=InceptionV1/Logits/Predictions/Reshape_1, op=Reshape)
Found 6633279 (6.63M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 298 Const, 231 Identity, 114 Add, 114 Mul, 58 Conv2D, 57 Relu, 57 Rsqrt, 57 Sub, 13 MaxPool, 9 ConcatV2, 2 Reshape, 1 AvgPool, 1 BiasAdd, 1 Placeholder, 1 Softmax, 1 Squeeze
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=tensorflow_model.pb --show_flops --input_layer=input --input_layer_type=float --input_layer_shape=1,224,224,3 --output_layer=InceptionV1/Logits/Predictions/Reshape_1
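With those two names, the corresponding compile step would look roughly like this (whether mvNCCompile accepts the frozen .pb directly or needs a .meta checkpoint depends on the NCSDK version):

$ mvNCCompile tensorflow_model.pb -in input -on InceptionV1/Logits/Predictions/Reshape_1 -s 12 -o graph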

 

Since this part has nothing to do with the NCS itself, please refer to the official Tensorflow tutorials for details:

 

https://www.tensorflow.org/tutorials/
idata
Employee

@PINTO

 

So in conclusion, the NCS is garbage, as it cannot output both the bounding box and the class? Great, I have to pull out of the competition I was about to enter tomorrow!

I built a robot, programmed it to run autonomously, find objects and send pictures. I took pictures, annotated them, trained 3 models on different networks with Tensorflow, borrowed a GPU, had no sleep for the past weeks, AND IT WAS ALL A TOTAL WASTE AS I CAN NOT COMPILE THE F__ GRAPH!! Good thing there was no entry fee for the competition..

idata
Employee

Please tell that to the Intel employees.

idata
Employee

@PINTO

 

Sorry to bother you, but you're the only one active on this dead forum.

 

BTW how did this guy get it to output multiple outputs? https://www.pyimagesearch.com/2018/02/19/real-time-object-detection-on-the-raspberry-pi-with-the-movidius-ncs/

 

Caffe works with MSSD?

 

I tried to install Caffe on multiple Ubuntu devices and on Windows, but none of it worked. No tutorial works.
idata
Employee

@KoitRK

 

First, let me clarify the terminology.

What pyimagesearch means by "multiple outputs" -> multiple bounding boxes.

The result that is returned is a single multidimensional list.

But multiple separate output lists cannot be returned.

The values in the list are the probability for each class and the position of each bounding box.

The above rule is the same for both Tensorflow and Caffe.

This is a rule specific to the NCSDK.

I am running a similar sample (with Caffe):

 

https://github.com/PINTO0309/MobileNet-SSD-RealSense
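For reference, the NCSDK samples unpack that single output list roughly as in the sketch below; the 7-floats-per-detection layout is an assumption based on the MobileNet-SSD samples (pyimagesearch and the repository above), so verify it against your own model:

def parse_ssd_output(out):
    # 'out' is the flat float array the NCSDK returns for a MobileNet-SSD graph
    # (e.g. from graph.GetResult() in NCSDK v1).
    detections = []
    num_valid = int(out[0])            # out[0] holds the number of detections
    for i in range(num_valid):
        base = 7 * (i + 1)             # each detection occupies 7 floats
        confidence = out[base + 2]
        if confidence != confidence:   # skip NaN entries that sometimes appear
            continue
        class_id = int(out[base + 1])
        box = (out[base + 3], out[base + 4], out[base + 5], out[base + 6])  # normalized x1, y1, x2, y2
        detections.append((class_id, confidence, box))
    return detections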
idata
Employee

@PINTO

 

From what I understand from your post: the NCSDK can have only one output, which could be one list containing multiple values. But this actually confuses me even more, because why can't models trained on Tensorflow output such a list while Caffe can?

 

But that's not that important anymore.

 

What would you, as someone with obviously huge knowledge and skills, recommend for me to do?

 

- I need a graph file that takes 300x300 images as input and outputs (a list that contains?) bounding boxes and confidences.

- It has to be trained on my custom images.

 

Should I re-install RPI and ubuntu? Which tutorial to follow? Caffe, I presume?

 

And most importantly: Can I make all that in 8 hours, so I could train the model till the next morning to make it in time?

 

OR, maybe you know where I could find graph or prototxt files that can detect a great number of animals. (which, though, kinda sounds like cheating)

idata
Employee

@KoitRK

 

 

The NCSDK can have only one output, which could be one list containing multiple values.

 

Yes, you are correct.

 

But this actually confuses me even more because why can't models trained on Tensorflow output a list while Caffe can?

 

It's not "Because it is Tensorflow".

 

Rather, it is a matter of compatibility between the model being converted and NCSDK.

 

"Mobilenet ssd v2 coco" -> It does not work. I have confirmed in the past. Both Tensorflow and Caffe do not work.

 

"Mobilenet ssd v1 voc" -> With Caffe, It works. I have confirmed in the past. However, since there are many bugs in the Tensorflow SDK implementation, the behavior is unstable.

 

Intel does not seem to put much emphasis on the Tensorflow implementation.

Compared to converting Tensorflow models, converting Caffe models is stable.

 

Many engineers are complaining.

 

I do not recommend using Tensorflow with the NCSDK.

 

Should I re-install RPI and ubuntu? Which tutorial to follow? Caffe, I presume?

 

If Intel's installation tutorial succeeded for you, reinstalling the SDK, RPi, or Ubuntu is unnecessary.

 

All information is covered in my repository.

 

Can I make all that in 8 hours, so I could train the model till the next morning to make it in time?

 

Training the model, however, requires a certain amount of time.

If you use Caffe and retraining is necessary, I think it will be difficult to finish testing within 8 hours.

If you do not need to retrain on your own data with Caffe, then preparing the environment is almost all that is left.

 

idata
Employee

@KoitRK

 

I do not know whether this will help you, but the following file is a pre-compiled graph of MobileNet-SSD (VOC).

It classifies the 20 classes of the VOC dataset.

Although I have not verified it myself, it should work immediately with pyimagesearch's sample program:

 

https://github.com/PINTO0309/MobileNet-SSD-RealSense/raw/master/graph
idata
Employee

@PINTO

 

First of all, thank you very much for taking the time and helping a newbie out.

 

In case I was previously unclear: I have 8 hours today to set everything up to the point where I can start training (installing Caffe, NCSDK, CUDA, all that stuff). And then (if the setup succeeds) I have 10-15 hours to train the model.

About the graph file you sent: it doesn't work, unsupported format or something. Possibly it was made for NCSDK v2? I have v1 on my Raspberry Pi. Plus, if it has some random 20 classes, it probably doesn't have the 10 animal classes I need.

As for the setup: should I follow your Git README from start to end, or are there parts that are unnecessary for me, e.g. RealSense, MultiStick?

Also, which parts have to be done on the PC (Ubuntu) and which on the Raspberry Pi? It seemed like you used NCSDK v2, so should I upgrade to/install v2 on the Raspberry Pi? Does Caffe also have to be installed on the RPi?

And where do I put the test and training images, and how does the annotation work? Same as Tensorflow?
idata
Employee

@KoitRK

 

 

Possibly it is made for NCSDK v2?

 

Oh, no… That's right.

 

 

I am not sure it will be of much use, but the v1 version is below:

 

https://github.com/PINTO0309/MobileNet-SSD-RealSense/archive/v1.0.zip

 

 

As for the setup: should I follow your Git README from start to end, or are there parts that are unnecessary for me, e.g. RealSense, MultiStick?

Also, which parts have to be done on the PC (Ubuntu) and which on the Raspberry Pi? It seemed like you used NCSDK v2, so should I upgrade to/install v2 on the Raspberry Pi? Does Caffe also have to be installed on the RPi?

 

You guessed right -> the RealSense and MultiStick parts are unnecessary for you.

 

I will make another suggestion.

 

Well… the repository below is easier to finish the work with (RPi only):

 

https://github.com/PINTO0309/MobileNet-SSD/archive/v1.0.zip

 

And where do I put the test and training images, and how does the annotation work? Same as Tensorflow?

 

Caffe training involves more configuration files than Tensorflow, and it is quite troublesome…

You need to go through the contents of every file and change them to suit your setup.

The location of the training images is specified in train.prototxt.

The same is true for the location of the annotation files.

solver.prototxt -> references -> train.prototxt and test.prototxt

 

 

Please read "5. Generate lmdb file".

 

I'm sorry, I have to put my baby to bed, so I cannot reply any more…
