Error compiling SSD MobileNet v2 model to NCS


Dear colleagues,

I'm having trouble compiling an SSD MobileNet v2 model (TensorFlow format). Please find the error message below:

 

$ mvNCCompile frozen_inference_graph.pb -in input
/usr/local/bin/ncsdk/Controllers/Parsers/TensorFlowParser/Convolution.py:44: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert(False, "Layer type not supported by Convolution: " + obj.type)
mvNCCompile v02.00, Copyright @ Intel Corporation 2017
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 169, in <module>
    create_graph(args.network, args.image, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights, args.explicit_concat, args.ma2480, args.scheduler, args.new_parser, args)
  File "/usr/local/bin/mvNCCompile", line 148, in create_graph
    load_ret = load_network(args, parser, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/Scheduler.py", line 100, in load_network
    parse_ret = parse_tensor(arguments, myriad_conf)
  File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 212, in parse_tensor
    tf.import_graph_def(graph_def, name="")
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 570, in import_graph_def
    raise ValueError('No op named %s in defined operations.' % node.op)
ValueError: No op named NonMaxSuppressionV3 in defined operations.

 

According to the latest release notes, this model is supported by the NCS. Am I missing something?

 

I got this model from the official TensorFlow model zoo: https://github.com/tensorflow/models/blob/master/research/object_detecti...

I'm running a VM with Ubuntu 16.04 and NCAPI2.

 

Thanks in advance for the support!

 


Accepted Solutions

Hello FPART1,

 

The Neural Compute SDK recently added support for TensorFlow SSD networks. Take a look at the following documentation for additional information: https://movidius.github.io/ncsdk/tools/tf_ssd_config.html

 

You need to use the --tf-ssd-config parameter as follows:

 

mvNCCompile -s 12 frozen_inference_graph.pb --tf-ssd-config ssd.config

 

Regards,

Aroop


Thanks Aroop!

That parameter was definitely what I was missing! Unfortunately, I've now hit another error. Maybe it's related to the pre-trained model I'm using; it's from the TensorFlow zoo, but I'm not sure whether it's supported by Movidius. Any ideas?

 

$ mvNCCompile -s 12 frozen_inference_graph.pb --tf-ssd-config ssd.config -in image_tensor -on detection_classes
Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 35, in <module>
    from Controllers.Scheduler import load_myriad_config, load_network
  File "/usr/local/bin/ncsdk/Controllers/Scheduler.py", line 24, in <module>
    from Controllers.Optimizer import postParsingOptimizations, selectImplementations, streamEverythingSchedule, fixTensors, eliminateNoOps
  File "/usr/local/bin/ncsdk/Controllers/Optimizer.py", line 25, in <module>
    from Controllers.Parsers.Parser.Conversion import Conversion
  File "/usr/local/bin/ncsdk/Controllers/Parsers/__init__.py", line 3, in <module>
    from .Caffe import CaffeParser
  File "/usr/local/bin/ncsdk/Controllers/Parsers/Caffe.py", line 30, in <module>
    import caffe
  File "/opt/movidius/caffe/python/caffe/__init__.py", line 1, in <module>
    from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver
  File "/opt/movidius/caffe/python/caffe/pycaffe.py", line 15, in <module>
    import caffe.io
  File "/opt/movidius/caffe/python/caffe/io.py", line 2, in <module>
    import skimage.io
  File "/usr/local/lib/python3.5/dist-packages/skimage/__init__.py", line 158, in <module>
    from .util.dtype import *
  File "/usr/local/lib/python3.5/dist-packages/skimage/util/__init__.py", line 7, in <module>
    from .arraycrop import crop
  File "/usr/local/lib/python3.5/dist-packages/skimage/util/arraycrop.py", line 8, in <module>
    from numpy.lib.arraypad import _validate_lengths
ImportError: cannot import name '_validate_lengths'

 

 

Is there any repository of pre-trained object detection models that I could use for fine-tuning and that are supported by Movidius afterwards?

Thanks and regards,

Felipe

 


Hi FPART1,

 

Could you try uninstalling numpy from your system using this command:

pip3 uninstall numpy
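
For background, this particular ImportError is usually a version mismatch: numpy 1.16 removed the private numpy.lib.arraypad._validate_lengths helper that scikit-image 0.14.x still imports. A sketch of one common workaround after the uninstall, where the version pin is illustrative rather than an official SDK requirement:

```shell
# numpy >= 1.16 dropped numpy.lib.arraypad._validate_lengths, which
# scikit-image 0.14.x still imports at startup. Pinning numpy below
# 1.16 (or, alternatively, upgrading scikit-image) restores the import.
pip3 uninstall -y numpy
pip3 install "numpy<1.16"
```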

 

By the way, the Neural Compute SDK is no longer being maintained. Have you considered using the OpenVINO Toolkit? It has support for both the original Neural Compute Stick and the Intel Neural Compute Stick 2.

 

Regards,

Aroop


Hi Aroop,

Thanks for the support. Unfortunately, the numpy reinstallation didn't work, so I moved to OpenVINO as you recommended.

I have installed OpenVINO on my Raspberry Pi in order to run my MobileNet v2 SSD object detector. But I'm struggling to get this working: I've read in the documentation that the SSD object detection API doesn't work on the Movidius VPU sticks, so I would have to run my model from Python code through OpenCV, which then runs the inference on the VPU stick. But that wouldn't be using the Inference Engine, am I correct?

In order to try the Inference Engine, I searched for a pre-trained IR MobileNet SSD trained on ImageNet, but I only found one trained on COCO, which is available in the model zoo. Do you have any recommendation?

I have a pre-trained model from the TensorFlow zoo trained on ImageNet. How can I ensure that this model will be compatible with the VPU stick? Can I assume that any TensorFlow model can be read in OpenCV via CV.dnn.readnetwork? I have the OpenVINO framework successfully installed on my Raspberry Pi.

Thanks in advance for your support!

Regards

Felipe
