Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to fix RuntimeError: dimension (0) in node dim must be a positive integer

Ben__Hsu
Beginner
2,151 Views

I converted the PyTorch model (CIResNet22_RPN.pth) from the SiamDW project to an ONNX model, and the export produced the artifact (siamdw.onnx, attached) without errors.

After that, I followed a workaround (which may be incorrect) for the shape-inference conflict and converted the ONNX model (siamdw.onnx) to an OpenVINO IR model with the command below.

$ cd dldt/model-optimizer
$ python3 mo_onnx.py --input_model '~/SiamDW/models/siamdw.onnx' --log_level DEBUG

 

Then I got the following errors.

[ ERROR ]  Exception occurred during running replacer "fusing (<class 'extensions.middle.fusings.Fusing'>)": 
[ ERROR ]  Traceback (most recent call last):
  File "~/dldt/model-optimizer/mo/utils/class_registration.py", line 286, in apply_transform
    replacer.find_and_replace_pattern(graph)
  File "~/dldt/model-optimizer/extensions/middle/fusings.py", line 82, in find_and_replace_pattern
    for_graph_and_each_sub_graph_recursively(graph, fuse_mul_add_sequence)
  File "~/dldt/model-optimizer/mo/middle/pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively
    func(graph)
  File "~/dldt/model-optimizer/mo/middle/passes/fusing/fuse_linear_seq.py", line 145, in fuse_mul_add_sequence
    is_fused |= _fuse_linear_sequence(graph, node)
  File "~/dldt/model-optimizer/mo/middle/passes/fusing/fuse_linear_seq.py", line 75, in _fuse_linear_sequence
    assert (np.array_equal(get_tensor_in_port(fnodes[0]).data.get_shape(), fnodes[-1].out_port(0).data.get_shape()))
AssertionError
...

 

I read Linear Operations Fusing but had no idea how to fix this, so I disabled that optimization.

$ python3 mo_onnx.py --input_model '~/SiamDW/models/siamdw.onnx' --disable_fusing --log_level DEBUG

 

Continuing on, I got another error, shown below.
I am not sure whether it is related to the previous shape-inference workaround.

[ ERROR ]  Error while emitting attributes for layer 329/mean (id = 48). It usually means that there is unsupported pattern around this node or unsupported combination of attributes.
[ 2020-04-22 17:25:54,809 ] [ DEBUG ] [ main:317 ]  Traceback (most recent call last):
  File "~/dldt/model-optimizer/mo/back/ie_ir_ver_2/emitter.py", line 275, in serialize_node_attributes
    xml_ports(node, parent_element, edges)
  File "~/dldt/model-optimizer/mo/back/ie_ir_ver_2/emitter.py", line 181, in xml_ports
    xml_shape(node.graph.node['shape'], port)
  File "~/dldt/model-optimizer/mo/back/ie_ir_ver_2/emitter.py", line 123, in xml_shape
    'wrong.'.format(d))
mo.utils.error.Error: The value "-1" for shape is less 0. May be the input shape of the topology is wrong.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "~/dldt/model-optimizer/mo/back/ie_ir_ver_2/emitter.py", line 279, in serialize_node_attributes
    refer_to_faq_msg(3)).format(node.id)) from e
mo.utils.error.Error: Unable to create ports for node with id 48. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #3. 


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "~/dldt/model-optimizer/mo/back/ie_ir_ver_2/emitter.py", line 291, in serialize_node_attributes
    serialize_element(graph, node, s, parent_element, edges, unsupported)
  File "~/dldt/model-optimizer/mo/back/ie_ir_ver_2/emitter.py", line 245, in serialize_element
    serialize_node_attributes(graph, node, subelements, element, edges, unsupported)
  File "~/dldt/model-optimizer/mo/back/ie_ir_ver_2/emitter.py", line 298, in serialize_node_attributes
    ) from e
mo.utils.error.Error: Error while emitting attributes for layer 329/mean (id = 48). It usually means that there is unsupported pattern around this node or unsupported combination of attributes.

 

From the error message, I think the root cause is this line:

The value "-1" for shape is less 0. May be the input shape of the topology is wrong.

 

Looking into the code, I thought this check might be safe to ignore, so I made the following change (commented out the raise and added a pass).

source: dldt/model-optimizer/mo/back/ie_ir_ver_2/emitter.py

def xml_shape(shape: np.ndarray, element: Element):
    for d in shape:
        dim = SubElement(element, 'dim')
        if d < 0:
            # Workaround: skip the check instead of raising, so negative
            # (dynamic) dimensions are written into the IR as-is.
            # raise Error('The value "{}" for shape is less 0. May be the input shape of the topology is '
            #             'wrong.'.format(d))
            pass
        if int(d) != d:
            raise Error('The value "{}" for shape is not integer.'.format(d))
        if not isinstance(d, np.int64):
            log.warning('The element of shape is not np.int64 value. Converting the value "{}" to integer'.format(d))
            d = int(d)
        dim.text = str(d)

 

Finally, I got the desired OpenVINO IR model (siamdw.xml and siamdw.bin).

Next, I tried to run inference with the OpenVINO IR model using two random inputs for the Siamese network.

# Prepare network
from openvino.inference_engine import IECore, IENetwork
import numpy as np

model_xml = '~/SiamDW/models/siamdw.xml'
model_bin = '~/SiamDW/models/siamdw.bin'
device = 'CPU'
ie_core = IECore()
ie_net = IENetwork(model=model_xml, weights=model_bin)  # Error here
exec_net = ie_core.load_network(network=ie_net, device_name=device)

# Prepare dummy inputs (the Inference Engine expects numpy arrays, not torch tensors)
batch_size = 1
dummy_z = np.random.randn(batch_size, 3, 127, 127).astype(np.float32)
dummy_x = np.random.randn(batch_size, 3, 255, 255).astype(np.float32)

# Perform inference
ie_results = exec_net.infer(inputs={'template': dummy_z, 'search': dummy_x})

 

But I got the following error at the line that instantiates IENetwork.
I think it is related to the previous steps, but I still have no idea how to fix it. Too many unknowns...

...
File "ie_api.pyx", line 980, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: dimension (0) in node dim must be a positive integer: at offset 16089
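
Since the IR was produced with the emitter check disabled, the negative dimension survived into siamdw.xml, and the runtime now rejects it when loading the network. To locate the offending layer, one could scan the IR for non-positive `<dim>` values. This is only a sketch: `find_bad_dims` and the sample fragment are my own, using just the standard library.

```python
# Sketch: locate non-positive <dim> entries in an OpenVINO IR .xml file.
import xml.etree.ElementTree as ET

def find_bad_dims(ir_xml_text):
    """Return (layer_name, layer_id, dim) for every <dim> value <= 0."""
    root = ET.fromstring(ir_xml_text)
    bad = []
    for layer in root.iter('layer'):
        for dim in layer.iter('dim'):
            value = int(dim.text)
            if value <= 0:
                bad.append((layer.get('name'), layer.get('id'), value))
    return bad

# Tiny synthetic fragment standing in for the real siamdw.xml.
sample = """
<net name="siamdw" version="7">
  <layers>
    <layer id="48" name="329/mean" type="ReduceMean">
      <output><port id="0"><dim>1</dim><dim>-1</dim><dim>7</dim></port></output>
    </layer>
  </layers>
</net>"""

print(find_bad_dims(sample))  # -> [('329/mean', '48', -1)]
```

On the real siamdw.xml this should point at the same layer named in the Model Optimizer error (329/mean, id 48).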

 

Any help is appreciated.

 

Sincerely,
Ben.

6 Replies
SIRIGIRI_V_Intel
Employee

Thank you for your patience.

The Siamese network doesn’t seem to be a supported topology for PyTorch-to-ONNX conversion. We recommend using the supported topologies.

Regards,

Ram prasad

Ben__Hsu
Beginner

I checked that the operators inside siamdw.onnx are supported by OpenVINO, per the ONNX* Supported Operators list.

I think the problem is a shape conflict when the Model Optimizer does shape inference.
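
One way to see where the shapes go dynamic before the Model Optimizer even runs is to apply ONNX's own shape inference and list the tensors with unknown dimensions. This is a sketch: `unknown_dims` and `report_unknown_shapes` are my own helper names, and the second one assumes the `onnx` pip package is installed.

```python
def unknown_dims(dims):
    """Indices of dimensions that are dynamic (None) or non-positive."""
    return [i for i, d in enumerate(dims) if d is None or d <= 0]

def report_unknown_shapes(model_path):
    # Lazy import so unknown_dims stays usable without onnx installed.
    import onnx
    from onnx import shape_inference
    model = shape_inference.infer_shapes(onnx.load(model_path))
    for vi in model.graph.value_info:
        # dim_value is set for static dims; dynamic dims carry dim_param instead.
        dims = [d.dim_value if d.HasField('dim_value') else None
                for d in vi.type.tensor_type.shape.dim]
        if unknown_dims(dims):
            print(vi.name, dims)

# report_unknown_shapes('~/SiamDW/models/siamdw.onnx')  # run on the real model
```

Any tensor printed here is a candidate source of the -1 that later shows up in the IR.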

So I posted another issue:

Why Model Optimizer Gave Shape Conflict When Customized ONNX Model Converted to OpenVINO IR Model?

 

Any ideas are appreciated.

Ben.

 

SIRIGIRI_V_Intel
Employee

The individual layers should be supported by OpenVINO, since they are the same as in other supported topologies. However, OpenVINO does not currently support the Siamese network as a whole. Please keep an eye on the forum or stay tuned for updates.

Regards,

Ram prasad

Ben__Hsu
Beginner

@Ram,

  Thank you for the reply.

  I switched to ONNX Runtime with the OpenVINO execution provider and ran the ONNX model inference successfully. (The prediction accuracy is still to be verified.)

  I think the key difference from the OpenVINO IR path is nGraph.

  I am curious how nGraph will evolve, since the OpenVINO IR seems to be converted to nGraph IR in the end.
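
  For reference, scoring the ONNX model through ONNX Runtime looked roughly like this. This is a sketch, not exact code: the input names 'template' and 'search' come from my export, and the 'OpenVINOExecutionProvider' name assumes the onnxruntime build with OpenVINO support is installed.

```python
import numpy as np

def make_inputs(batch_size=1):
    """Random exemplar/search crops matching SiamDW's input sizes."""
    rng = np.random.default_rng(0)
    return {
        'template': rng.standard_normal((batch_size, 3, 127, 127)).astype(np.float32),
        'search': rng.standard_normal((batch_size, 3, 255, 255)).astype(np.float32),
    }

def run(model_path):
    # Lazy import: requires the onnxruntime package built with OpenVINO support.
    import onnxruntime as ort
    sess = ort.InferenceSession(
        model_path,
        providers=['OpenVINOExecutionProvider', 'CPUExecutionProvider'])
    return sess.run(None, make_inputs())

# outputs = run('~/SiamDW/models/siamdw.onnx')  # run on the real model
```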

 

Best Regards,

Ben.

ShivSD
Beginner

Hi Ben,

 

I'm having a similar issue. How did you run the ONNX model using ONNX Runtime with the OpenVINO backend? I'm new to this; any example code or link would be helpful.

 

thanks

shiv

 

SIRIGIRI_V_Intel
Employee

The Siamese network is not supported by OpenVINO. Keep an eye on the forum or stay tuned for updates.

Regards,

Ram prasad
