Hi,
I decided to try a simple convolution using DAAL:
void conv()
{
    auto input = new float[4 * 4];
    for (int i = 0; i < 1; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                input[i * 4 * 4 + j * 4 + k] = 1.0;

    auto kernel = new float[2 * 2];
    for (int i = 0; i < 1; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                kernel[i * 2 * 2 + j * 2 + k] = 1.0;

    size_t nDimInput = 4, dimsInput[] = { 1, 1, 4, 4 };
    size_t nDimKernel = 4, dimsKernel[] = { 1, 1, 2, 2 };

    SharedPtr<Tensor> inputData(new HomogenTensor<float>(nDimInput, dimsInput, input));
    SharedPtr<Tensor> kernelData(new HomogenTensor<float>(nDimKernel, dimsKernel, kernel));

    convolution2d::forward::Batch<> convolution2dLayerForward;
    convolution2dLayerForward.parameter.paddings = { 0, 0 };
    convolution2dLayerForward.parameter.strides = { 1, 1 };
    convolution2dLayerForward.input.set(forward::data, inputData);
    convolution2dLayerForward.input.set(forward::weights, kernelData);
    convolution2dLayerForward.compute();

    SharedPtr<convolution2d::forward::Result> forwardResult = convolution2dLayerForward.getResult();
    SharedPtr<Tensor> conv1_value = forwardResult->get(forward::value);
    printTensorAsArray(conv1_value, 9);
    printTensorAsArray(forwardResult->get(convolution2d::auxWeights), 4);
}
out:
-0.1580 -0.1580 -0.1580 -0.1580 -0.1580 -0.1580 -0.1580 -0.1580 -0.1580
-0.2707 -0.0542 -0.0544 0.4919
The problem is that both the output and the weights are wrong. What am I doing wrong?
How do I put several matrices into the input and kernel tensors? Is it the first or the second element in the dimensions array (1, 1, 4, 4)?
How do I set a mask on the input and kernel tensors so that only some of their matrices are used, rather than all of them?
Hi Alexander,
Your usage of tensors is correct. The input tensor has the following structure:
dim[0] = batch_size
dim[1] = number_of_channels_of_image
dim[2] = height_of_image
dim[3] = width_of_image
And for the weights:
dim[0] = number_of_kernels
dim[1] = number_of_channels_of_image
dim[2] = kernel_height
dim[3] = kernel_width
By default, convolution2d doesn’t use the weights that you set into the algorithm with
convolution2dLayerForward.input.set(forward::weights, kernelData);
Instead, they are generated with weightsInitializer, which is also a parameter of the layer. You can find a description of the common layer parameters here: https://software.intel.com/en-us/node/701669
There is a parameter weightsAndBiasesInitialized (false by default). If it is false, the layer calls the specified initializer to fill its weights and biases. Otherwise, the weights and biases from the input are used.
I slightly changed your code to get it to work.
void conv()
{
    auto input = new float[4 * 4];
    for (int i = 0; i < 1; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                input[i * 4 * 4 + j * 4 + k] = 1.0;

    auto kernel = new float[2 * 2];
    for (int i = 0; i < 1; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                kernel[i * 2 * 2 + j * 2 + k] = 1.0;

    auto biases = new float[1];
    biases[0] = 0.5;

    size_t nDimInput = 4, dimsInput[] = { 1, 1, 4, 4 };
    size_t nDimKernel = 4, dimsKernel[] = { 1, 1, 2, 2 };
    size_t nDimBiases = 1, dimsBiases[] = { 1 };

    SharedPtr<Tensor> inputData(new HomogenTensor<float>(nDimInput, dimsInput, input));
    SharedPtr<Tensor> kernelData(new HomogenTensor<float>(nDimKernel, dimsKernel, kernel));
    SharedPtr<Tensor> biasesData(new HomogenTensor<float>(nDimBiases, dimsBiases, biases));

    convolution2d::forward::Batch<> convolution2dLayerForward;
    convolution2dLayerForward.parameter.paddings = { 0, 0 };
    convolution2dLayerForward.parameter.strides = { 1, 1 };
    convolution2dLayerForward.parameter.weightsAndBiasesInitialized = true;
    convolution2dLayerForward.input.set(forward::data, inputData);
    convolution2dLayerForward.input.set(forward::weights, kernelData);
    convolution2dLayerForward.input.set(forward::biases, biasesData);
    convolution2dLayerForward.compute();

    SharedPtr<convolution2d::forward::Result> forwardResult = convolution2dLayerForward.getResult();
    SharedPtr<Tensor> conv1_value = forwardResult->get(forward::value);
    printTensorAsArray(conv1_value, 9);
    printTensorAsArray(forwardResult->get(convolution2d::auxWeights), 4);
}
And it prints expected results:
4.500000 4.500000 4.500000 4.500000 4.500000 4.500000 4.500000 4.500000 4.500000
1.000000 1.000000 1.000000 1.000000
Does it help you?
Hi Alexander,
I can reproduce your result. The settings look right, and in an earlier version (2017 Beta) this produced the correct result, so something may have changed in the latest version. We will investigate the problem and get back to you later.
The number of samples and the number of kernels are set with the first dimension.
2D Convolution Forward Layer
The forward two-dimensional (2D) convolution layer computes the tensor Y of values by applying a set of nKernels 2D kernels K of size m3 x m4 to the input tensor X. The library supports four-dimensional input tensors X ∈ R^(n1 x n2 x n3 x n4). Therefore, the following formula applies:

Y(r, i, j) = Σ_c Σ_a Σ_b K(r, c, a, b) · X(c, i + a, j + b) + B(r),

where i + a < n3, j + b < n4, and r is the kernel index.
Problem Statement
Without loss of generality, let's assume that convolution kernels are applied to the last two dimensions.
Given:
- Four-dimensional tensor X ∈ R^(n1 x n2 x n3 x n4) with input data
- Four-dimensional tensor K ∈ R^(nKernels x m2 x m3 x m4) with kernel parameters (the weights of the convolutions)
- One-dimensional tensor B ∈ R^nKernels with the bias of each kernel
For example, input dimensions for a batch of two single-channel 16x16 images:
inDims.push_back(2);
inDims.push_back(1);
inDims.push_back(16);
inDims.push_back(16);
Best Regards,
Ying
Thanks guys, it's exactly what I need.
Please help.
I get an exception in the compute() method when I use 2 kernels instead of 1.
void conv()
{
    auto input = new float[4 * 4];
    for (int i = 0; i < 1; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                input[i * 4 * 4 + j * 4 + k] = 1.0;

    auto kernel = new float[2 * 2 * 2];
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                kernel[i * 2 * 2 + j * 2 + k] = 1.0;

    auto biases = new float[1];
    biases[0] = 0.0;

    size_t nDimInput = 4, dimsInput[] = { 1, 1, 4, 4 };
    size_t nDimKernel = 4, dimsKernel[] = { 2, 1, 2, 2 };
    size_t nDimBiases = 1, dimsBiases[] = { 1 };

    SharedPtr<Tensor> inputData(new HomogenTensor<float>(nDimInput, dimsInput, input));
    SharedPtr<Tensor> kernelData(new HomogenTensor<float>(nDimKernel, dimsKernel, kernel));
    SharedPtr<Tensor> biasesData(new HomogenTensor<float>(nDimBiases, dimsBiases, biases));

    convolution2d::forward::Batch<> convolution2dLayerForward;
    convolution2dLayerForward.parameter.paddings = { 0, 0 };
    convolution2dLayerForward.parameter.strides = { 1, 1 };
    convolution2dLayerForward.parameter.weightsAndBiasesInitialized = true;
    convolution2dLayerForward.input.set(forward::data, inputData);
    convolution2dLayerForward.input.set(forward::weights, kernelData);
    convolution2dLayerForward.input.set(forward::biases, biasesData);
    convolution2dLayerForward.compute();

    SharedPtr<convolution2d::forward::Result> forwardResult = convolution2dLayerForward.getResult();
    SharedPtr<Tensor> conv1_value = forwardResult->get(forward::value);
    printTensorAsArray(conv1_value, 9);
    printTensorAsArray(forwardResult->get(convolution2d::auxWeights), 8);
}
What am I doing wrong?
Hi Alexander,
The problem may be in the bias, which is a one-dimensional tensor of size nKernels.
You may try
size_t nDimKernel = 4, dimsKernel[] = { 2, 1, 2, 2 };
size_t nDimBias = 1, dimsBias[] = { 2 };
Thanks, Ying, now it works.
Please help again.
I get the same error when the kernel size is 3x3 instead of 2x2.
void conv1()
{
    auto input = new float[4 * 4];
    for (int i = 0; i < 1; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                input[i * 4 * 4 + j * 4 + k] = 1.0;

    auto kernel = new float[3 * 3];
    int kInd = 0;
    for (int j = 0; j < 3; j++)
        for (int k = 0; k < 3; k++)
        {
            kernel[kInd] = 1.0;
            kInd++;
        }

    float *biases = new float[1];
    biases[0] = 0.0;

    size_t nDimInput = 4, dimsInput[] = { 1, 1, 4, 4 };
    size_t nDimKernel = 4, dimsKernel[] = { 1, 1, 3, 3 };
    size_t nDimBiases = 1, dimsBiases[] = { 1 };

    SharedPtr<Tensor> inputData(new HomogenTensor<float>(nDimInput, dimsInput, input));
    SharedPtr<Tensor> kernelData(new HomogenTensor<float>(nDimKernel, dimsKernel, kernel));
    SharedPtr<Tensor> biasesData(new HomogenTensor<float>(nDimBiases, dimsBiases, biases));

    convolution2d::forward::Batch<> convolution2dLayerForward;
    convolution2dLayerForward.parameter.paddings = { 0, 0 };
    convolution2dLayerForward.parameter.strides = { 1, 1 };
    convolution2dLayerForward.parameter.weightsAndBiasesInitialized = true;
    convolution2dLayerForward.input.set(forward::data, inputData);
    convolution2dLayerForward.input.set(forward::weights, kernelData);
    convolution2dLayerForward.input.set(forward::biases, biasesData);
    convolution2dLayerForward.compute();

    SharedPtr<convolution2d::forward::Result> forwardResult = convolution2dLayerForward.getResult();
    SharedPtr<Tensor> conv1_value = forwardResult->get(forward::value);
    printTensorAsArray(conv1_value, 4);
    printTensorAsArray(forwardResult->get(convolution2d::auxWeights), 9);
}
Hi Alexander,
You’ve changed the kernel size, but you haven’t set the kernelSizes algorithm parameter, which is {2, 2} by default. Just add the following:
convolution2dLayerForward.parameter.kernelSizes = { 3, 3 };
It should help.
-Daria
Thanks, Daria, that helped.