This is mystic123's code:
with slim.arg_scope([slim.conv2d], normalizer_fn=slim.batch_norm,
                    normalizer_params=batch_norm_params,
                    biases_initializer=None,
                    activation_fn=lambda x: tf.nn.leaky_relu(x, alpha=_LEAKY_RELU)):
but I found another version's convert.py:
with slim.arg_scope([slim.conv2d],
                    normalizer_fn=slim.batch_norm,
                    normalizer_params=batch_norm_params,
                    biases_initializer=None,
                    activation_fn=lambda x: tf.nn.leaky_relu(x, alpha=0.1),
                    weights_regularizer=slim.l2_regularizer(weight_decay)):
weight_decay is 0.0005.
and this version:
with slim.arg_scope([slim.conv2d],
                    normalizer_fn=slim.batch_norm,
                    normalizer_params=batch_norm_params,
                    biases_initializer=None,
                    activation_fn=lambda x: tf.nn.leaky_relu(x, alpha=0.1),
                    weights_regularizer=slim.l2_regularizer(weight_decay),
                    weights_initializer=tf.truncated_normal_initializer(0.0, 0.01)):
So what do "weights_regularizer" and "weights_initializer" do, and do we need them?
Thanks!
Greetings,
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Sincerely,
Iffa
Greetings,
The YOLO input image is divided into a grid, and every grid cell runs the same detection algorithm. Initializers (in this case weights_initializer) set the starting values of a layer's weights before training; here tf.truncated_normal_initializer(0.0, 0.01) draws them from a narrow normal distribution.
weights_regularizer is a separate mechanism from batch normalization: slim.l2_regularizer(weight_decay) adds an L2 (weight-decay) penalty on the convolution weights to the training loss. Batch normalization itself is enabled through normalizer_fn=slim.batch_norm and is applied after the convolutional layers.
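Concretely, the two arguments play different roles: weights_initializer only picks starting kernel values, while weights_regularizer (slim.l2_regularizer, i.e. L2 weight decay, not batch norm itself) adds a penalty to the training loss. Here is a minimal numpy sketch of both ideas, assuming the stddev 0.01 and weight_decay 0.0005 from the snippets above (an illustration only, not slim's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# weights_initializer: tf.truncated_normal_initializer(0.0, 0.01) draws the
# starting kernel values from a normal distribution (stddev 0.01) and redraws
# any sample that falls more than 2 stddevs from the mean.
def truncated_normal(shape, mean=0.0, stddev=0.01):
    w = rng.normal(mean, stddev, size=shape)
    bad = np.abs(w - mean) > 2 * stddev
    while bad.any():
        w[bad] = rng.normal(mean, stddev, size=int(bad.sum()))
        bad = np.abs(w - mean) > 2 * stddev
    return w

# weights_regularizer: slim.l2_regularizer(weight_decay) contributes
# weight_decay * sum(w**2) / 2 to the training loss (tf.nn.l2_loss includes
# the factor 1/2), pulling weights toward zero during training.
def l2_penalty(w, weight_decay=0.0005):
    return weight_decay * np.sum(w ** 2) / 2.0

w = truncated_normal((3, 3, 3, 32))   # one 3x3 conv kernel, 3 -> 32 channels
loss_penalty = l2_penalty(w)
```

Note that both arguments only matter during training: when you load pretrained yolov3.weights and export a frozen graph for inference, they do not change the exported weights, which is presumably why converters that only load pretrained weights can omit them.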
The idea behind “batch norm” is that neural network layers work best when the data is clean. Ideally, the input to a layer has an average value of 0 and not too much variance. This should sound familiar to anyone who’s done any machine learning because we often use a technique called “feature scaling” or “whitening” on our input data to achieve this.
Batch normalization does a similar kind of feature scaling for the data in between layers. This technique really helps neural networks perform better because it stops the data from deteriorating as it flows through the network.
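The per-channel normalization described above can be sketched in plain numpy (an illustration of the idea, not slim.batch_norm itself; the epsilon matches the 1e-05 used in the converters):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-05):
    # Normalize each channel over the batch to zero mean / unit variance,
    # then apply the learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(1)
x = rng.normal(5.0, 3.0, size=(64, 8))  # a batch of 64 activations, 8 channels
y = batch_norm(x)                       # per-channel mean ~0, variance ~1
```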
You can find further info here:
https://machinethink.net/blog/object-detection-with-yolo/
Sincerely,
Iffa
Hi,
The OpenVINO guide uses https://github.com/mystic123/tensorflow-yolo-v3 to convert the YOLO model to .pb and then converts the .pb to IR.
mystic123 also sets the BN parameters like this:
_BATCH_NORM_DECAY = 0.9
_BATCH_NORM_EPSILON = 1e-05
# Set activation_fn and parameters for conv2d, batch_norm.
with slim.arg_scope([slim.conv2d, slim.batch_norm, _fixed_padding],
                    data_format=data_format, reuse=reuse):
    with slim.arg_scope([slim.conv2d], normalizer_fn=slim.batch_norm,
                        normalizer_params=batch_norm_params,
                        biases_initializer=None,
                        activation_fn=lambda x: tf.nn.leaky_relu(x, alpha=_LEAKY_RELU)):
but other TF1.x versions of the YOLO convert.py add one or two more parameters:
weights_regularizer=slim.l2_regularizer(weight_decay),
weights_initializer=tf.truncated_normal_initializer(0.0, 0.01)
Should I add these to mystic123's code to convert .weights to .pb and then convert the .pb to IR?
https://github.com/PINTO0309/OpenVINO-YoloV3 keeps getting updated, but it still uses the original (mystic123's) BN parameters. Why didn't OpenVINO adopt weights_regularizer and weights_initializer in the BN parameters,
and will they help YOLO perform better on OpenVINO?
Thanks!
First and foremost, if you intend to use OpenVINO, you should rely on the official OpenVINO tutorial here:
You definitely need the model https://github.com/mystic123/tensorflow-yolo-v3 together with coco.names and yolov3.weights.
Here is a video to make the steps clearer: https://www.youtube.com/watch?v=FaqVhvJ6-Uc
If you intend to use the TF1.x range, it is recommended to use TF1.14 or TF1.15.
For topologies trained manually in TensorFlow, you can see which are compatible with OpenVINO here:
TensorFlow API configs such as faster_rcnn_support_api_v1.14.json would optimize the performance of the YOLO model in OpenVINO; however, this is only supported at maximum performance.
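For reference, the two-step conversion usually looks like the sketch below. The script names and flags follow the mystic123 repo and the Model Optimizer docs of that era; treat the exact paths and flag names as assumptions and check them against your install:

```shell
# Step 1 (assumed flags): Darknet .weights -> frozen TF .pb, using
# convert_weights_pb.py from mystic123/tensorflow-yolo-v3.
python3 convert_weights_pb.py \
    --class_names coco.names \
    --data_format NHWC \
    --weights_file yolov3.weights

# Step 2 (assumed flags): frozen .pb -> OpenVINO IR via the Model Optimizer.
# Older releases used --tensorflow_use_custom_operations_config instead of
# --transformations_config; yolo_v3.json ships with the Model Optimizer.
python3 mo_tf.py \
    --input_model frozen_darknet_yolov3_model.pb \
    --transformations_config yolo_v3.json \
    --batch 1
```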
Sincerely,
Iffa
