Intel® oneAPI Data Analytics Library

Some accuracy questions about neural network in DAAL

Hanxi_F_
Beginner

Here I use this data set: http://archive.ics.uci.edu/ml/datasets/Statlog+(Landsat+Satellite)

There are classes 1-7, but class 6 is empty, so I changed the class labels to 0-5.
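
For clarity, the remapping I applied is equivalent to this small sketch (remapLabel is just an illustrative helper, not a function from my attached code):

    #include <cstddef>

    /* Map the original Satellite labels {1, 2, 3, 4, 5, 7} onto {0, ..., 5};
       class 6 never occurs in this dataset */
    std::size_t remapLabel(std::size_t original)
    {
        return (original == 7) ? 5 : original - 1;  /* 1..5 -> 0..4, 7 -> 5 */
    }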

My code is attached. The problem is that the prediction accuracy is poor; it cannot even reach 50%.

So maybe there is some mistake in my code. Thanks for your help.

By the way, are parameters like accuracyThreshold and nIterations in the optimization solver perhaps not used? When I changed these parameters, the prediction results stayed the same. Is there a stopping criterion we can use?

Ying_H_Intel
Employee

Hi Hanxi,

You are right, the present version of the library does not use parameters such as the number of iterations and the accuracy threshold for neural network training. You may change the batch size or the learningRate to see if there is any difference; a rough sketch of where to set them follows.
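
This sketch follows the neural network training examples shipped with the library; the value 0.001f is a placeholder, not a recommendation, and net stands for your training algorithm:

    #include "daal.h"

    using namespace daal;
    using namespace daal::algorithms;
    using namespace daal::algorithms::neural_networks;
    using namespace daal::data_management;
    using namespace daal::services;

    /* Configure the SGD solver with an explicit learning rate,
       passed as a 1x1 numeric table */
    SharedPtr<optimization_solver::sgd::Batch<float> > sgdAlgorithm(
        new optimization_solver::sgd::Batch<float>());
    sgdAlgorithm->parameter.learningRateSequence = NumericTablePtr(
        new HomogenNumericTable<>(1, 1, NumericTable::doAllocate, 0.001f));

    /* Attach the configured solver to the neural network training algorithm */
    training::Batch<> net;
    net.parameter.optimizationSolver = sgdAlgorithm;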

We will investigate the problem and get back to you later.

Best Regards,

Ying

Daria_K_Intel
Employee

Hi Hanxi,

We have reproduced the same low accuracy with your original code on our side, so there are no mistakes in your code. However, you can get better accuracy on the Satellite dataset with this neural network topology by applying the following techniques:

  1. Change the neural network initializer from uniform to Xavier:
    fullyConnectedLayer1->parameter.weightsInitializer.reset(new initializers::xavier::Batch<>());
    fullyConnectedLayer2->parameter.weightsInitializer.reset(new initializers::xavier::Batch<>());

  2. Increase hiddenNeurons from 10 to 20 and the batchSize parameter from 10 to 128:
    const int hiddenNeurons = 20;
    const size_t batchSize  = 128;

  3. Split the dataset into batches and run additional training epochs:
/* trainingDataArray[i] and trainingGroundTruthArray[i] are assumed to hold
   the data and ground truth subtensors for the i-th batch */
for (size_t epoch = 0; epoch < 400; epoch++)
{
    for (size_t i = 0; i < nBatches; i++)
    {
        net.input.set(training::data, trainingDataArray[i]);
        net.input.set(training::groundTruth, trainingGroundTruthArray[i]);
        net.compute();
    }
}

In addition, you can shuffle the training data, which also contributes to better accuracy; a minimal sketch follows. As a result, we were able to achieve an accuracy of 0.795. Please find the modified example and datasets attached.
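
The sketch below is plain C++, not a DAAL API; order then defines the row order in which you form the batches:

    #include <algorithm>
    #include <cstddef>
    #include <numeric>
    #include <random>
    #include <vector>

    /* Shuffle the row indices of the training set once per epoch and read
       the rows in that order when forming the batches */
    const std::size_t nRows = 4435;                  /* training set size */
    std::vector<std::size_t> order(nRows);
    std::iota(order.begin(), order.end(), 0);

    std::mt19937 rng(777);
    std::shuffle(order.begin(), order.end(), rng);   /* order[i] = shuffled row index */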

Let us know if this does not help.

Thank you and happy holidays!

-Daria

Hanxi_F_
Beginner

I really appreciate your help.

I'm working on your feedback and will follow up in a few days.

Hanxi_F_
Beginner

Hi Daria,

I've followed your code and tested a few things; the accuracy I get on my machine is 0.7915.

1. I added two lines to set the biases initializer:
    fullyConnectedLayer1->parameter.biasesInitializer.reset(new initializers::xavier::Batch<>());
    fullyConnectedLayer2->parameter.biasesInitializer.reset(new initializers::xavier::Batch<>());
    The accuracy dropped to 0.7905. Why?

2. How do I choose an appropriate batchSize? I've tested a few values; here are the results (with no biases initializer):
    batchSize    accuracy
    10           0.7940
    20           0.8000
    50           0.7905
    128          0.7915
    500          0.7610
    4435         0.7435

3. I want to randomize the neural network initialization, so I added these two lines:
    srand(time(NULL));
    sgdAlgorithm->parameter.seed = rand();

    But I see no change in the results. Is this parameter unused, or have I made a mistake?

Thanks a lot for your help.

Best Regards,

Hanxi

Daria_K_Intel
Employee

Hi Hanxi,

The quality of neural network training is determined by multiple parameters, including the initialization, learning rate, batch size, etc. Searching for the optimal values of those parameters is an important stage in training the model and requires some effort, as the results of your experiments also confirm. By default, the parameters of the neural network (weights and biases) are initialized with the uniform initializer on (-0.5, 0.5). Keeping the default initializer for the biases gives better accuracy in your topology and on your dataset; an explicit version of that configuration is sketched below.
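
If you want to make that configuration explicit rather than rely on the defaults, it would look roughly like this (the bounds -0.5 and 0.5 are the defaults of the uniform initializer):

    /* Xavier initialization for the weights; default uniform(-0.5, 0.5)
       initialization for the biases */
    fullyConnectedLayer1->parameter.weightsInitializer.reset(
        new initializers::xavier::Batch<>());
    fullyConnectedLayer1->parameter.biasesInitializer.reset(
        new initializers::uniform::Batch<>(-0.5, 0.5));

    fullyConnectedLayer2->parameter.weightsInitializer.reset(
        new initializers::xavier::Batch<>());
    fullyConnectedLayer2->parameter.biasesInitializer.reset(
        new initializers::uniform::Batch<>(-0.5, 0.5));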

Similarly, a batch size of 20 results in better training accuracy. A common observation when choosing the batch size is the following: with a smaller batch size you may get lower accuracy, while a bigger batch size can result in longer training time. The model parameters are updated once per batch on each iteration by the optimization solver, such as SGD (thus, if the batch size equals the size of the dataset, only one parameter update is performed per pass). For example, with your 4435 training samples, a batch size of 20 gives about 221 updates per epoch, while a batch size of 4435 gives exactly one.

Neither the seed nor the nIterations and accuracyThreshold parameters are used during the parameter update (see also https://software.intel.com/en-us/forums/intel-data-analytics-acceleration-library/topic/705133).

Please use the learningRate parameter of the optimization solver to influence the neural network model training; a sketch follows.
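
A rough sketch of the two forms the learningRateSequence table can take (the values are placeholders to experiment with, not recommendations; sgdAlgorithm is the solver attached to your network):

    /* Constant learning rate: a 1x1 numeric table */
    sgdAlgorithm->parameter.learningRateSequence = NumericTablePtr(
        new HomogenNumericTable<>(1, 1, NumericTable::doAllocate, 0.0005f));

    /* Alternatively, one learning rate per iteration (a 1 x nIterations table);
       here a short decaying schedule as an illustration */
    float rates[] = { 0.01f, 0.005f, 0.0025f, 0.00125f, 0.000625f };
    sgdAlgorithm->parameter.learningRateSequence = NumericTablePtr(
        new HomogenNumericTable<float>(rates, 5, 1));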

Let me know about the results of your neural network training experiments, or if you have additional comments or questions.

-Daria

Daria_K_Intel
Employee

Hi Hanxi,

Please have a look at the additional details on using the learningRate parameter during neural network training.

Hope it helps,

-Daria 
