Intel® oneAPI Data Analytics Library
Learn from community members on how to build compute-intensive applications that run efficiently on Intel® architecture.

Struggling to set up a simple convolution layer in a NN

joachim_d_
Beginner
469 Views

Hi 

Sorry for this extreme beginner question, but I am having a hard time setting up a very basic 2D convolution forward/backward layer in a neural network. All the example codes provided build explicit forward and backward layers.

Here is my Neural network layer configuration code:

Collection<LayerDescriptor> configureSimpleNet()
{
   Collection<LayerDescriptor> configuration;
   SharedPtr<layers::convolution2d::Batch<> > conv2d(new layers::convolution2d::Batch<>);
   conv2d->parameter.kernelSize = layers::convolution2d::KernelSize(5,5);
   conv2d->parameter.nKernels = 6;
   conv2d->parameter.spatialDimensions = layers::convolution2d::SpatialDimensions(32,32);
   configuration.push_back(LayerDescriptor(0, conv2d, NextLayers()));
   return configuration;
}

First question: Do I have to set the spatialDimensions parameter, and if so, what does it reflect? The documentation is a bit vague on this.

I assume that with a 5x5 convolution kernel and 6 different kernels, this step outputs a 28x28x6 tensor, correct?

My input to the network is a HomogenTensor with dimensions 32x32x1; here is the code:

float* testData = new float[32*32];
float* testData2 = new float[1];
testData2[0] = 1;
for (int j = 0; j < 32; j++)
    for (int i = 0; i < 32; i++)
        testData[i + j * 32] = float(i) * float(j);

size_t nDim = 3, dims[] = {32,32,1};
size_t dims2[] = {1};
         
SharedPtr<Tensor> bogusData(new HomogenTensor<float>(nDim, dims, (float*)testData));
SharedPtr<Tensor> bogusData2(new HomogenTensor<float>(1,dims2,(float*)testData2));

And this is how I set up my network

training::Batch<> trainingNet;

Collection<LayerDescriptor> layersConfiguration = configureSimpleNet();
trainingNet.initialize(bogusData->getDimensions(), layersConfiguration);

trainingNet.input.set(training::data, bogusData);
trainingNet.input.set(groundTruth, bogusData2);

trainingNet.parameter.optimizationSolver->parameter.learningRateSequence =
        SharedPtr<NumericTable>(new HomogenNumericTable<>(1, 1, NumericTable::doAllocate, 0.001));

trainingNet.parameter.nIterations = 6000;

trainingNet.compute();

It all compiles fine, but when I run it I get this:

terminate called after throwing an instance of 'daal::services::interface1::Exception'
  what():  Convolution layer internal error

What am I doing wrong?

 

5 Replies
Ilya_B_Intel
Employee

Hi, Joachim

Thanks for your interest.

At the moment we have not yet added full support for the spatialDimensions parameter.

Currently the convolution layer expects a 4-D tensor, as follows:

size_t nDim = 4, dims[] = {1,1,32,32};

The first dimension is the batch dimension, the second is the channels dimension, and the last two are spatial.

If you compile your code with the -DDAAL_CHECK_PARAMETER define, additional run-time checks will be enabled, which may provide more detail in the error message.

joachim_d_
Beginner

ILYA B. (Intel) wrote:

Currently convolution layer expects 4-d tensor as follows:

size_t nDim = 4, dims[] = {1,1,32,32};

The first dimension is the batch dimension, the second is the channels dimension, and the last two are spatial.

If you compile your code with the -DDAAL_CHECK_PARAMETER define, additional run-time checks will be enabled, which may provide more detail in the error message.

Ah, I see; thank you, good to know. Yes, that resolved the problem, and thanks for the tip about the compiler switch, which may indeed come in handy. So I guess if I concatenate n 32x32 input data blocks into a single tensor and treat each one as a 32x32 spatial input to the convolution layer,

I would have to change it to dims[] = {n,1,32,32}, right?

Ilya_B_Intel
Employee

Yes, that should work

joachim_d_
Beginner

Excellent, okay. Sorry to bombard you with another follow-up newbie question.

But what if I attach a maximum_pooling2d layer after it? Do I also need to specify 4 dimensions in the Batch constructor, and then set firstIndex = 2 and secondIndex = 3 in its parameter member?

VictoriyaS_F_Intel

Yes, in this case max pooling will be performed over the same spatial dimensions as the convolution.
