Hello,
This is my first program using the MKL library, and I want to include a simple batch normalization call after a convolution. Does anyone have an idea what I did wrong and why the execution of my code is failing?
This is my code for the BN:
/*** BN1 section ***/
CHECK_ERR(dnnBatchNormalizationCreateForward_F64(&bn1, attributes, lt_conv1_output, 0), err);
resBn1[dnnResourceSrc] = resConv1[dnnResourceDst];
CHECK_ERR(dnnLayoutCreateFromPrimitive_F64(&lt_bn1_output, bn1, dnnResourceDst), err);
CHECK_ERR(dnnAllocateBuffer_F64((void **)&resBn1[dnnResourceDst], lt_bn1_output), err);
CHECK_ERR(init_conversion(&cv_relu1_to_user_output, &user_o, lt_user_output, lt_bn1_output, resBn1[dnnResourceDst]), err);
...............
CHECK_ERR(dnnExecute_F64(bn1, (void *)resBn1), err);
Hello Radjaradja,
Thank you for submitting the question about deep learning batch normalization. Do you have a small test case to reproduce the issue?
On the other hand, as you may know, given the strong demand for deep learning compute, we also provide a dedicated open-source library for this: MKL-DNN. Its GitHub address is https://github.com/intel/mkl-dnn
For example, you can find a complete sample at https://github.com/intel/mkl-dnn/blob/master/examples/simple_net.c; you can add the batch normalization part there (see the sketch below) and let us know whether it reproduces the problem.
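For reference, here is a minimal sketch of forward batch normalization with the MKL-DNN v0.x C API (mkldnn.h), written in the style of simple_net.c. The tensor sizes, epsilon, and initial statistics are placeholder assumptions, and it runs BN in inference mode with user-supplied mean/variance:

#include <stdio.h>
#include <stdlib.h>
#include "mkldnn.h"

#define CHECK(f) do { \
    mkldnn_status_t s_ = (f); \
    if (s_ != mkldnn_success) { \
        printf("[%s:%d] error: %d\n", __FILE__, __LINE__, (int)s_); \
        exit(2); \
    } \
} while (0)

enum { N = 1, C = 8, H = 7, W = 7 };

/* Wrap a user buffer in an MKL-DNN memory primitive. */
static mkldnn_primitive_t make_memory(const mkldnn_memory_desc_t *md,
                                      mkldnn_engine_t engine, void *data) {
    mkldnn_primitive_desc_t mpd;
    mkldnn_primitive_t mem;
    CHECK(mkldnn_memory_primitive_desc_create(&mpd, md, engine));
    CHECK(mkldnn_primitive_create(&mem, mpd, NULL, NULL));
    CHECK(mkldnn_memory_set_data_handle(mem, data));
    CHECK(mkldnn_primitive_desc_destroy(mpd));
    return mem;
}

int main(void) {
    mkldnn_engine_t engine;
    CHECK(mkldnn_engine_create(&engine, mkldnn_cpu, 0));

    /* Buffers: src/dst data, per-channel statistics, and scale/shift
     * (gamma in the first C entries, beta in the next C). */
    static float src[N * C * H * W], dst[N * C * H * W];
    static float mean[C], variance[C], scale_shift[2 * C];
    for (int c = 0; c < C; ++c) {
        mean[c] = 0.f;
        variance[c] = 1.f;
        scale_shift[c] = 1.f;     /* gamma */
        scale_shift[C + c] = 0.f; /* beta */
    }

    mkldnn_memory_desc_t data_md, stat_md, ss_md;
    mkldnn_dims_t data_dims = {N, C, H, W}, stat_dims = {C}, ss_dims = {2, C};
    CHECK(mkldnn_memory_desc_init(&data_md, 4, data_dims, mkldnn_f32, mkldnn_nchw));
    CHECK(mkldnn_memory_desc_init(&stat_md, 1, stat_dims, mkldnn_f32, mkldnn_x));
    CHECK(mkldnn_memory_desc_init(&ss_md, 2, ss_dims, mkldnn_f32, mkldnn_nc));

    mkldnn_primitive_t src_mem = make_memory(&data_md, engine, src);
    mkldnn_primitive_t dst_mem = make_memory(&data_md, engine, dst);
    mkldnn_primitive_t mean_mem = make_memory(&stat_md, engine, mean);
    mkldnn_primitive_t var_mem = make_memory(&stat_md, engine, variance);
    mkldnn_primitive_t ss_mem = make_memory(&ss_md, engine, scale_shift);

    /* Inference-mode BN: statistics are supplied by the user
     * (use_global_stats) and gamma/beta are applied (use_scaleshift). */
    mkldnn_batch_normalization_desc_t bn_desc;
    CHECK(mkldnn_batch_normalization_forward_desc_init(&bn_desc,
            mkldnn_forward_inference, &data_md, 1e-7f,
            mkldnn_use_global_stats | mkldnn_use_scaleshift));

    mkldnn_primitive_desc_t bn_pd;
    CHECK(mkldnn_primitive_desc_create(&bn_pd, &bn_desc, engine, NULL));

    /* Input order for inference with global stats and scale/shift:
     * src, mean, variance, scale_shift. */
    mkldnn_primitive_at_t bn_inputs[] = {
        mkldnn_primitive_at(src_mem, 0),
        mkldnn_primitive_at(mean_mem, 0),
        mkldnn_primitive_at(var_mem, 0),
        mkldnn_primitive_at(ss_mem, 0),
    };
    const_mkldnn_primitive_t bn_outputs[] = { dst_mem };

    mkldnn_primitive_t bn;
    CHECK(mkldnn_primitive_create(&bn, bn_pd, bn_inputs, bn_outputs));

    mkldnn_stream_t stream;
    CHECK(mkldnn_stream_create(&stream, mkldnn_eager));
    CHECK(mkldnn_stream_submit(stream, 1, &bn, NULL));
    CHECK(mkldnn_stream_wait(stream, 1, NULL));

    printf("BN executed: dst[0] = %f\n", dst[0]);
    /* Cleanup of primitives and the stream omitted for brevity. */
    return 0;
}

Note that in training mode (mkldnn_forward_training without mkldnn_use_global_stats), the mean and variance become outputs computed by the primitive instead of user-supplied inputs.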
Best Regards,
Ying
Hi Radjaradja,
Actually, the Deep Neural Network (DNN) component in Intel MKL is deprecated and will be removed in a future release, and the open-source MKL-DNN offers the same or better performance; for example, in MKL-DNN batch normalization + ReLU can be fused into the convolution.
I ran your case, and it returns:
ok
FAILED
Press any key to continue . . .
As you can see, the BN's inputs, such as the mean, variance, and weights (scale/shift), are required before executing the BN; the sketch below shows the missing allocations.
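For reference, with the deprecated mkl_dnn.h API the missing pieces would look roughly like this. It reuses the variable names from your post; the scale/shift buffer layout and initial values are assumptions, not tested code.

/* Allocate the BN resources beyond Src/Dst: the per-channel scale/shift
 * parameters and the workspace that holds the mean and variance. */
dnnLayout_t lt_bn1_scaleshift = NULL, lt_bn1_workspace = NULL;

CHECK_ERR(dnnLayoutCreateFromPrimitive_F64(&lt_bn1_scaleshift, bn1, dnnResourceScaleShift), err);
CHECK_ERR(dnnAllocateBuffer_F64((void **)&resBn1[dnnResourceScaleShift], lt_bn1_scaleshift), err);

CHECK_ERR(dnnLayoutCreateFromPrimitive_F64(&lt_bn1_workspace, bn1, dnnResourceWorkspace), err);
CHECK_ERR(dnnAllocateBuffer_F64((void **)&resBn1[dnnResourceWorkspace], lt_bn1_workspace), err);

/* Fill the scale/shift buffer (assumed 2 * nChannels doubles: scale first,
 * then shift) with e.g. 1.0 and 0.0 before calling
 * dnnExecute_F64(bn1, (void *)resBn1). */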
So something probably goes wrong in the BN resource setup. But as we are removing this component, I would suggest looking at https://github.com/intel/mkl-dnn and https://intel.github.io/mkl-dnn/structmkldnn_1_1batch__normalization__forward.html instead.
Best Regards,
Ying
