When I run object_detection_demo_yolov3_async.py with a custom-trained YOLOv3-tiny model, I get the following error:
/opt/intel/openvino/deployment_tools/open_model_zoo/demos/python_demos/object_detection_demo_yolov3_async$ python3 object_detection_demo_yolov3_async.py -i 'cam' -m /home/user/Documents/Progetti/AI/ML_DL/YOLOv3_Keras_Cust_Model/YOLOv4/worker_safety_yolov3_tiny/IR_FP16/frozen_yolov3-tiny_workersafety_final_416.xml -d MYRIAD
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.7 of module 'openvino.inference_engine.ie_api' does not match runtime version 3.6
return f(*args, **kwds)
[ INFO ] Creating Inference Engine...
[ INFO ] Loading network
[ INFO ] Preparing inputs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference...
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
To switch between sync/async modes, press TAB key in the output window
object_detection_demo_yolov3_async.py:274: DeprecationWarning: shape property of IENetLayer is deprecated. Please use shape property of DataPtr instead objects returned by in_data or out_data property to access shape of input or output data on corresponding ports
out_blob = out_blob.reshape(net.layers[net.layers[layer_name].parents].shape)
[ INFO ] Layer detector/yolo-v3-tiny/Conv_12/BiasAdd/YoloRegion parameters:
[ INFO ] classes : 2
[ INFO ] num : 3
[ INFO ] coords : 4
[ INFO ] anchors : [10.0, 14.0, 23.0, 27.0, 37.0, 58.0]
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import fnmatch, glob, traceback, errno, sys, atexit, locale, imp, stat
Traceback (most recent call last):
File "object_detection_demo_yolov3_async.py", line 359, in <module>
sys.exit(main() or 0)
File "object_detection_demo_yolov3_async.py", line 280, in main
File "object_detection_demo_yolov3_async.py", line 151, in parse_yolo_region
File "object_detection_demo_yolov3_async.py", line 100, in scale_bbox
xmin = int((x - w / 2) * w_scale)
ValueError: cannot convert float NaN to integer
Ubuntu 18.04 LTS
OpenVino 2020 R3 LTS
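The ValueError above means the network output already contained NaN before scale_bbox converted it to int, which usually points at a mismatch between the IR and the parsing code rather than at scale_bbox itself. As a simplified sketch (not the demo's actual signature, which takes more parameters), a defensive version of the box-scaling step could discard non-finite values instead of crashing:

```python
import math

def scale_bbox(x, y, w, h, w_scale, h_scale):
    """Scale a normalized YOLO box to image coordinates.

    Returns None when any coordinate is NaN or infinite, so the
    caller can skip the detection instead of raising ValueError.
    """
    if not all(math.isfinite(v) for v in (x, y, w, h)):
        return None  # corrupt detection, likely a config/IR mismatch
    xmin = int((x - w / 2) * w_scale)
    ymin = int((y - h / 2) * h_scale)
    return {
        "xmin": xmin,
        "ymin": ymin,
        "xmax": int(xmin + w * w_scale),
        "ymax": int(ymin + h * h_scale),
    }
```

This only hides the symptom, of course; the NaN values themselves come from the converted model, so the conversion config still needs fixing.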
I trained the Darknet model on an Intel CPU and converted the weights and .pb to IR following these instructions:
I suspect the problem is in the .json file used with mo_tf.py to convert the .pb to IR, but so far I can't spot the issue.
Attached are the .cfg used to train the model and the .json file used with mo_tf.py.
Important note: using the same data and the same procedure to train a full (non-tiny) YOLOv3 model and convert its weights and .pb, everything works properly.
Do you have any suggestions?
We had previously trained a yolov3-tiny on a custom dataset, and our .json config has one small change compared to yours. We suggest modifying your .json to match the following; the only difference is the "mask" values. Basically, you would need to remove [3, 4, 5] and see if it makes a difference.
"mask": [0, 1, 2],
"anchors": [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319],
"entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4"]
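For context, the yolo_v3_tiny.json shipped with the Model Optimizer has roughly the shape below. This is a sketch, not the exact Intel-shipped file: the "classes" value here is adjusted to the 2-class model from the log above, and the stock file uses a nested "masks" list covering both output scales.

```json
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 2,
      "anchors": [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319],
      "coords": 4,
      "num": 6,
      "masks": [[3, 4, 5], [0, 1, 2]],
      "entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4"]
    }
  }
]
```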
I modified the .json as suggested, but running the Model Optimizer (mo_tf.py) produces the following error:
[ WARNING ] Use of deprecated cli option --tensorflow_use_custom_operations_config detected. Option use in the following releases will be fatal. Please use --transformations_config cli option instead
Model Optimizer arguments:
- Path to the Input Model: /home/usr/Documents/Progetti/AI/ML_DL/YOLOv3_Keras_Cust_Model/YOLOv4/worker_safety_yolov3_tiny/frozen_yolov3-tiny_workersafety_Intel_416_final.pb
- Path for generated IR: /opt/intel/openvino_2020.3.341/deployment_tools/model_optimizer/.
- IR output name: frozen_yolov3-tiny_workersafety_Intel_416_final
- Log level: ERROR
- Batch: 1
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/workersafety_yolo_v3_tiny_Intel.json
Model Optimizer version:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
[ ERROR ] Cannot infer shapes or values for node "detector/yolo-v3-tiny/Conv_12/BiasAdd/YoloRegion".
[ ERROR ] object of type 'int' has no len()
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function RegionYoloOp.regionyolo_infer at 0x7fae23e43d90>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ANALYSIS INFO ] Your model looks like YOLOv3 Model.
To generate the IR, provide TensorFlow YOLOv3 Model to the Model Optimizer with the following parameters:
Detailed information about conversion of this model can be found at
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "detector/yolo-v3-tiny/Conv_12/BiasAdd/YoloRegion" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
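The message "object of type 'int' has no len()" means the RegionYolo shape-inference code called len() on a value that turned out to be a bare integer. That is consistent with a mask entry in the .json being a plain int where the inference code expected a list, so it is worth double-checking how the mask/masks field is nested after the edit. A minimal reproduction of the underlying Python behavior:

```python
# len() works on sequences but raises TypeError on a bare int,
# which matches the error the RegionYolo shape inference reported.
mask_as_list = [0, 1, 2]
print(len(mask_as_list))  # 3

mask_as_int = 3
try:
    len(mask_as_int)
except TypeError as e:
    print(e)  # object of type 'int' has no len()
```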
Please let me know your opinion.
Thank you for your support.