I tried the new model SINet (paper: https://arxiv.org/abs/1911.09099, GitHub: https://github.com/clovaai/ext_portrait_segmentation) and retrained it with a small modification to the network structure to make the ONNX export work: I changed the bilinear upsample to nearest-neighbour (roughly as sketched below), so that I can use opset_version=10 and meet the requirements of the ONNX Model Optimizer in OpenVINO 2020.2.
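The change is only the upsampling mode; a minimal sketch of what I mean (the class and attribute names here are illustrative, not the actual SINet code):

import torch.nn as nn

class UpsampleBlock(nn.Module):
    # Illustrative only: the bilinear upsample is swapped for nearest so the
    # ONNX export works with opset_version=10.
    def __init__(self):
        super().__init__()
        # original: nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
        self.up = nn.Upsample(scale_factor=2, mode='nearest')

    def forward(self, x):
        return self.up(x)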
The results from ONNX (run with ONNX Runtime) and PyTorch are identical. There are no errors or warnings when converting the ONNX model to IR, and the benchmark app also runs fine.
However, I get a different result from the Inference Engine via ExecutableNetwork.infer(), and also from cv2.dnn (OpenCV 4.2.0-openvino). The code is shown below.
System: Ubuntu 18.04
python: 3.6.9
Openvino: 2020.2
opencv: 4.2.0-openvino
pytorch: 1.4.0+cu100
Network: SINet(with DWConv, PReLU, view, transpose, nearest neighbour upsample, BN, AvgPool2d, Linear, add, max,...).
.onnx model and IR download link:
URL: https://pan.baidu.com/s/1ySK08ORMHm7AhN3NLG8XDA
Password: jjox
or download the attached file.
I converted the ONNX model with mo_onnx.py following the instructions (command shown below). I have also tried disabling all fusing and op optimization in the Model Optimizer via its arguments, and forcing generation of the old IR version (7), but I still get a wrong result.
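For reference, the conversion command was along these lines (reconstructed from the Model Optimizer log below; the disable/IR-v7 flags are the ones I understand to exist in 2020.2, please correct me if they are wrong):

python3 mo_onnx.py \
    --input_model /home/qqai-cv/yexing/workspace/ext_portrait_segmentation/Dnc_SINet.onnx \
    --input_shape [1,3,480,640] \
    --data_type FP32
# variants also tried:
#   --disable_fusing --disable_gfusing    # turn off fusing optimizations
#   --generate_deprecated_IR_V7           # force the old IR version 7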
The code for converting the PyTorch model to ONNX and testing PyTorch, ONNX Runtime, and OpenVINO inference is given below.
import os
import json
from argparse import ArgumentParser

import numpy as np
import torch

import models

os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

if __name__ == '__main__':
    parser = ArgumentParser()
    parser.add_argument('-c', '--config', type=str, default='./setting/SINet_Infer.json',
                        help='JSON file for configuration')
    parser.add_argument('--cuda', action='store_true', help='use gpu or not')
    args = parser.parse_args()

    with open(args.config) as fin:
        config = json.load(fin)
    test_config = config['test_config']
    data_config = config['data_config']

    Lovasz = test_config["loss"] == "Lovasz"
    num_classes = test_config["num_classes"] - 1 if Lovasz else test_config["num_classes"]
    p = test_config["p"]
    q = test_config["q"]
    upsample = test_config['Upsample']
    model_name = test_config['Model']
    chnn = test_config["chnn"]
    ckpt = test_config['weight_name']
    result_dir = test_config['result_dir']
    input_dir = data_config['img_dir']
    input_ext = data_config['img_ext']
    if not os.path.exists(result_dir):
        os.mkdir(result_dir)

    # build the model and load the trained weights
    model = models.__dict__[model_name](classes=num_classes, p=p, q=q, chnn=chnn, upsample=upsample)
    if torch.cuda.device_count() > 0 and args.cuda:
        model = model.cuda()
    if torch.cuda.is_available() and args.cuda:
        model.load_state_dict(torch.load(ckpt))
    else:
        model.load_state_dict(torch.load(ckpt, "cpu"))
    model.eval()

    dummy_input = (np.random.randn(1, 3, 480, 640) / 128).astype(np.float32)
    torch_input = torch.from_numpy(dummy_input)
    if torch.cuda.device_count() > 0 and args.cuda:
        torch_input = torch_input.cuda()

    # torch.onnx.export(model,
    #                   torch_input,
    #                   "{}.onnx".format(model_name),
    #                   export_params=True,
    #                   do_constant_folding=True,
    #                   keep_initializers_as_inputs=True,
    #                   verbose=True,
    #                   opset_version=10)
    # opset_version=10 only supports nearest upsample;
    # bilinear upsample with align_corners=False gives wrong results with opset_version=10,
    # and bilinear with align_corners=True is supported only from opset_version=11.

    # ONNX Runtime inference
    import onnx
    from onnx import optimizer
    import onnxruntime as ort

    ori_model = onnx.load("./Dnc_SINet.onnx")
    opt_model = optimizer.optimize(ori_model)
    onnx.save(opt_model, "./Dnc_SINet.onnx")
    ort_sess = ort.InferenceSession("./Dnc_SINet.onnx")
    torch_out = model(torch_input)
    onnx_out = ort_sess.run(None, {'input.1': torch_input.numpy()})
    print(torch_out)
    print(onnx_out)

    # OpenVINO Inference Engine inference
    from openvino.inference_engine import IECore

    model_xml = "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/Dnc_SINet.xml"
    model_bin = "/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/Dnc_SINet.bin"
    ie = IECore()
    net = ie.read_network(model_xml, model_bin)
    exec_net = ie.load_network(network=net, device_name='CPU')
    input_layer_name = next(iter(net.inputs))
    output_layer_name = next(iter(net.outputs))
    print(list(net.inputs.keys()))
    print(list(net.outputs.keys()))
    output = exec_net.infer(inputs={input_layer_name: dummy_input})
    print(output)
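The cv2.dnn check I mentioned above is essentially the following (a minimal sketch using the standard cv2.dnn API with the IR files from the link above, not the exact code from my test script):

import cv2
import numpy as np

# load the IR generated by the Model Optimizer
net = cv2.dnn.readNetFromModelOptimizer("Dnc_SINet.xml", "Dnc_SINet.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

# same kind of dummy input as in the script above (NCHW, float32)
dummy_input = (np.random.randn(1, 3, 480, 640) / 128).astype(np.float32)
net.setInput(dummy_input)
cv_out = net.forward()

# to quantify the mismatch, feed the same input as above and compare with
# np.abs(cv_out - onnx_out[0]).max()
print(cv_out.shape)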
I suspect that some ONNX structures or ops are not yet supported by the newest OpenVINO.
Does anyone know the reason? Thanks.
P.S.:
Output from mo_onnx.py:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/qqai-cv/yexing/workspace/ext_portrait_segmentation/Dnc_SINet.onnx
- Path for generated IR: /opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/.
- IR output name: Dnc_SINet
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,3,480,640]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
ONNX specific parameters:
Model Optimizer version: 2020.2.0-60-g0bc66e26ff
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/./Dnc_SINet.xml
[ SUCCESS ] BIN file: /opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/./Dnc_SINet.bin
[ SUCCESS ] Total execution time: 11.45 seconds.
[ SUCCESS ] Memory consumed: 93 MB.
Output of the code above:
Dnc_SINet
SB Net Enc bracnch num : 2
SB Net Enc chnn num: 1
SINet Enc bracnch num : 2
SINet Enc chnn num: 1
This module has [[3, 1], [5, 1]]
This module has [[3, 1], [3, 1]]
This module has [[3, 1], [5, 1]]
This module has [[3, 1], [3, 1]]
This module has [[5, 1], [3, 2]]
This module has [[5, 2], [3, 4]]
This module has [[3, 1], [3, 1]]
This module has [[5, 1], [5, 1]]
This module has [[3, 2], [3, 4]]
This module has [[3, 1], [5, 2]]
tensor([[[[ 0.3819, 0.5347, 0.5347, ..., 0.4762, 0.4762, 0.3106],
[ 0.7500, 1.0357, 1.0357, ..., 0.8123, 0.8123, 0.5036],
[ 0.7500, 1.0357, 1.0357, ..., 0.8123, 0.8123, 0.5036],
...,
[ 0.4873, 0.5107, 0.5107, ..., -0.5339, -0.5339, -0.4042],
[ 0.4873, 0.5107, 0.5107, ..., -0.5339, -0.5339, -0.4042],
[ 0.3734, 0.4535, 0.4535, ..., -0.3036, -0.3036, -0.2189]],
[[-0.3828, -0.5603, -0.5603, ..., -0.4949, -0.4949, -0.3234],
[-0.7407, -1.0680, -1.0680, ..., -0.8161, -0.8161, -0.5189],
[-0.7407, -1.0680, -1.0680, ..., -0.8161, -0.8161, -0.5189],
...,
[-0.4727, -0.5542, -0.5542, ..., 0.5203, 0.5203, 0.3691],
[-0.4727, -0.5542, -0.5542, ..., 0.5203, 0.5203, 0.3691],
[-0.3962, -0.5192, -0.5192, ..., 0.2657, 0.2657, 0.1787]]]],
grad_fn=<MkldnnConvolutionBackward>)
[array([[[[ 0.3818878 , 0.5346629 , 0.5346629 , ..., 0.47616172,
0.47616172, 0.31056982],
[ 0.750038 , 1.0357037 , 1.0357037 , ..., 0.8122927 ,
0.8122927 , 0.5036248 ],
[ 0.750038 , 1.0357037 , 1.0357037 , ..., 0.8122927 ,
0.8122927 , 0.5036248 ],
...,
[ 0.48733354, 0.5107298 , 0.5107298 , ..., -0.5339395 ,
-0.5339395 , -0.4042134 ],
[ 0.48733354, 0.5107298 , 0.5107298 , ..., -0.5339395 ,
-0.5339395 , -0.4042134 ],
[ 0.3734363 , 0.45352763, 0.45352763, ..., -0.30358696,
-0.30358696, -0.21885665]],
[[-0.38280687, -0.5603347 , -0.5603347 , ..., -0.4948506 ,
-0.4948506 , -0.32344514],
[-0.7407261 , -1.0680082 , -1.0680082 , ..., -0.8160989 ,
-0.8160989 , -0.51887286],
[-0.7407261 , -1.0680082 , -1.0680082 , ..., -0.8160989 ,
-0.8160989 , -0.51887286],
...,
[-0.47271097, -0.5542413 , -0.5542413 , ..., 0.5202785 ,
0.5202785 , 0.369121 ],
[-0.47271097, -0.5542413 , -0.5542413 , ..., 0.5202785 ,
0.5202785 , 0.369121 ],
[-0.39621764, -0.51916736, -0.51916736, ..., 0.26572707,
0.26572707, 0.17874075]]]], dtype=float32)]
['input.1']
['962']
{'962': array([[[[ 0.32619184, 0.39257142, 0.39257142, ..., 0.45147315,
0.45147315, 0.3005295 ],
[ 0.6733598 , 0.8392231 , 0.8392231 , ..., 0.7603874 ,
0.7603874 , 0.48128414],
[ 0.6733598 , 0.8392231 , 0.8392231 , ..., 0.7603874 ,
0.7603874 , 0.48128414],
...,
[ 0.9983263 , 1.3471146 , 1.3471146 , ..., 0.60972846,
0.60972846, 0.41599974],
[ 0.9983263 , 1.3471146 , 1.3471146 , ..., 0.60972846,
0.60972846, 0.41599974],
[ 0.7210743 , 1.0018882 , 1.0018882 , ..., 0.37571615,
0.37571615, 0.24157274]],
[[-0.31721306, -0.41440713, -0.41440713, ..., -0.4688219 ,
-0.4688219 , -0.3080279 ],
[-0.6602123 , -0.88080376, -0.88080376, ..., -0.7617766 ,
-0.7617766 , -0.48949003],
[-0.6602123 , -0.88080376, -0.88080376, ..., -0.7617766 ,
-0.7617766 , -0.48949003],
...,
[-0.9842702 , -1.3944788 , -1.3944788 , ..., -0.6041991 ,
-0.6041991 , -0.40403605],
[-0.9842702 , -1.3944788 , -1.3944788 , ..., -0.6041991 ,
-0.6041991 , -0.40403605],
[-0.7329229 , -1.0536808 , -1.0536808 , ..., -0.3506116 ,
-0.3506116 , -0.22082609]]]], dtype=float32)}
Hi Kevin,
SINet is not a tried-and-tested topology for the OpenVINO PyTorch-to-ONNX conversion path. Please refer to the Supported PyTorch* Models via ONNX Conversion list.
Best Regards,
Surya