Ref: https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_adapters.html
yolo_v3 - converting output of YOLO v3 family models to DetectionPrediction representation. Supported parameters (a minimal adapter section illustrating them follows the list):
- classes - number of detection classes (default 80).
- anchors - anchor values provided as a comma-separated list or one of the precomputed sets:
  - yolo_v3 - [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0]
  - tiny_yolo_v3 - [10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0]
- coords - number of bbox coordinates (default 4).
- num - the num parameter from the DarkNet configuration file (default 3).
- anchor_mask - mask of anchors used for each output layer (optional; if not provided, the default way of selecting anchors is used).
- threshold - minimal objectness score value for valid detections (default 0.001).
- input_width and input_height - network input width and height correspondingly (default 416).
- outputs - the list of output layer names.
- raw_output - enables additional preprocessing for the raw YOLO output format (default False).
- output_format - sets the output layer format: boxes first (BHW; default, also the default for generated IRs) or boxes last (HWB). Applicable only if the network output is not a 3D (4D with batch) tensor.
- cells - sets the grid size for each layer, according to the outputs field. Works only with do_reshape=True or when the output tensor dimensions are not equal to 3.
- do_reshape - forces reshaping of the output tensor to [B,Cy,Cx] or [Cy,Cx,B] format, depending on the output_format value ([B,Cy,Cx] by default). You may need to specify the cells value.
- transpose - transposes the output tensor to the specified format (optional).
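For reference, a minimal sketch of how these fields fit together in an adapter section, in the same JSON style as the launcher config further below. The output layer names are placeholders, and the anchors here are the tiny_yolo_v3 set quoted above; adjust all values for your own model:
"adapter": {
    "type": "yolo_v3",
    "classes": 80,
    "anchors": "10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0",
    "coords": 4,
    "num": 6,
    "threshold": 0.001,
    "outputs": ["<first_yolo_region_output>", "<second_yolo_region_output>"]
}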
May I ask about this part:
input_width and input_height - network input width and height correspondingly (default 416).
Why does OpenVINO 2021.3 report an error when I set these parameters during quantization?
"launchers": [
{
"framework": "dlsdk",
"adapter": {
"type": "yolo_v3",
"anchors": "14.0, 8.0, 20.0, 11.0, 29.0, 16.0, 21.0, 29.0, 47.0, 29.0, 73.0, 45.0",
"classes": 1,
"coords": 4,
"num": 6,
"input_width": 512,
"input_height": 288,
"threshold": 0.001,
"anchor_masks": [[3, 4, 5], [0, 1, 2]],
"outputs": ["detector/yolo-v4-tiny/Conv_17/BiasAdd/YoloRegion", "detector/yolo-v4-tiny/Conv_20/BiasAdd/YoloRegion"]
}
}
],
I remember that these parameters could be set in all previous versions of OpenVINO. Why can't the input size be specified in the latest version?
The official documentation for the latest version (2021.3) still describes this parameter.
The error is as follows:
File "/opt/intel/openvino_2021/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/config/config_validator.py", line 59, in raise_error
raise error
libs.open_model_zoo.tools.accuracy_checker.accuracy_checker.config.config_validator.ConfigError: Invalid value "{'type': 'yolo_v3', 'anchors': '14.0, 8.0, 20.0, 11.0, 29.0, 16.0, 21.0, 29.0, 47.0, 29.0, 73.0, 45.0', 'classes': 1, 'coords': 4, 'num': 6, 'input_width': 512, 'input_height': 288, 'threshold': 0.001, 'anchor_masks': [[3, 4, 5], [0, 1, 2]], 'outputs': ['detector/yolo-v4-tiny/Conv_17/BiasAdd/YoloRegion', 'detector/yolo-v4-tiny/Conv_20/BiasAdd/YoloRegion']}" for adapter.yolo_v3: adapter.yolo_v3 specifies unknown options: ['input_width', 'input_height']
Terminal: pot -c yolov4-tiny-spp-gray-custom-license_plate_prune_0.5_keep_0.01_288x512_0604_qtz.json --output-dir backup -e
When I remove the input size parameters, it works normally.
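In other words, the adapter section that passes validation is the same as above with only input_width and input_height removed:
"adapter": {
    "type": "yolo_v3",
    "anchors": "14.0, 8.0, 20.0, 11.0, 29.0, 16.0, 21.0, 29.0, 47.0, 29.0, 73.0, 45.0",
    "classes": 1,
    "coords": 4,
    "num": 6,
    "threshold": 0.001,
    "anchor_masks": [[3, 4, 5], [0, 1, 2]],
    "outputs": ["detector/yolo-v4-tiny/Conv_17/BiasAdd/YoloRegion", "detector/yolo-v4-tiny/Conv_20/BiasAdd/YoloRegion"]
}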
I suspect this is related to the question I asked before. Please help with a detailed answer, thank you!
Please refer to the attachment link!
Hello Peng Chang-Jan.
Thank you for your patience.
We have confirmed that these parameters were only needed when the Accuracy Checker tool could not extract the input resolution from the network. This has been fixed in our latest version: the adapter now detects input shapes automatically, so these parameters are no longer required. The documentation will be updated accordingly, and the changes will be reflected in the next release.
Regards,
Zulkifli
