Intel® oneAPI Data Analytics Library

DAAL svm learning very slow

Ian_Watson
Beginner
772 Views

I am building svm models on two class problems (Cheminformatics), and am comparing results with what we get from svm_lite.

Considering two dimensions of performance: quality of results, and time to train models.

Training Time: Everything is serial for now. For smaller datasets (a few thousand items), DAAL is mostly comparable with svm_lite, and often faster. But I have some larger datasets, 22k items, where DAAL takes 6.5 hours and svm_lite takes 12 minutes. This is completely mysterious to me - I believe the same data is being presented to both tools, and I am using a decent cache size for both. Any ideas??

Quality of Results: Here things are murkier. Generally DAAL is underperforming svm_lite, but I have a custom kernel function built into svm_lite. However, the differences seem larger than what I would expect from the difference in kernel functions alone. So, if I am going to do an apples-to-apples comparison on quality of results, I need to figure out how to implement our custom kernel function in DAAL. I have not seen any examples of how to do that, and looking at daal/include/algorithms/kernel_function/ does not give me anything concrete (at least as far as I can understand). Any suggestions/pointers would be very welcome.

And yes, these training runs that take 6.5 hours produce results that are good, but not quite as good as svm_lite's. Both svm_lite and DAAL are noticeably better than Naive Bayes, so good models are being built.

But the first big problem is the training run-time problem...

Again, any help would be much appreciated.

Thanks

Ian

12 Replies
VictoriyaS_F_Intel

Hello Ian,

Please provide additional details on your SVM use case: the library version, the build/link line, and the SVM configuration, including the kernel type, a description of the input matrix (sparse or dense, number of features and feature vectors), the cache size, and other parameters passed to the Intel DAAL version of SVM.

Answering your question on the custom kernel function:

The library is designed to support the creation of user-defined algorithms, including kernel functions, that follow the conventions of the Intel DAAL API. However, the present version of the library does not cover this use case in its documentation and examples.

We can create an example for you that demonstrates how to develop a user-defined kernel function and use it with SVM. Would this option work?

Best regards,

Victoriya

Ian_Watson
Beginner

Hi Victoriya

Thanks very much for replying

I have this version of the compiler

icpc (ICC) 16.0.3 20160415

and am using the version of DAAL that came with that. The file daal/include/services/library_version_info.h contains

#define __INTEL_DAAL_BUILD_DATE 20160413

#define __INTEL_DAAL__ 2016
#define __INTEL_DAAL_MINOR__ 0
#define __INTEL_DAAL_UPDATE__ 3

While I had initially found this problem in my own code, I replicated the 6.5 hour run time by just changing

daal/examples/cpp/source/svm/svm_two_class_csr_batch.cpp

to read my input file rather than what was in that file initially. So, all the settings are whatever they are in that example file - default settings which had worked well for my other (smaller) jobs. I built the example with

make libintel64 example=svm_two_class_csr_batch compiler=intel threading=sequential mode=build

In terms of the data, it consists of 22069 observations, and the number of features is 263791 - the largest number of features associated with any observation is around 700. The input labels are either -1 or 1, and all the values are small positive integers - these are molecular fingerprints.
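For scale, a quick upper-bound estimate shows why the CSR form stays manageable here (this treats ~700 as the per-row maximum of non-zeros, and assumes double values with 64-bit indices - an assumption, since the actual on-disk storage types may differ):

```cpp
#include <cassert>
#include <cstdint>

// Dense storage for a rows x cols matrix of doubles.
std::uint64_t dense_bytes(std::uint64_t rows, std::uint64_t cols) {
    return rows * cols * sizeof(double);
}

// Upper-bound CSR storage: values + column indices per non-zero,
// plus one row-offset entry per row (and one terminating offset).
std::uint64_t csr_bytes(std::uint64_t rows, std::uint64_t max_nnz_per_row) {
    std::uint64_t nnz = rows * max_nnz_per_row;
    return nnz * (sizeof(double) + sizeof(std::uint64_t))  // values + column indices
         + (rows + 1) * sizeof(std::uint64_t);             // row offsets
}
```

For 22069 x 263791 that is about 46.6 GB dense versus under 250 MB as CSR even at the per-row maximum, which is consistent with the 54 MB file.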

I would be happy to share the input files, but the CSR file is quite large, 54 MB. If there is some means of transferring such a large file, I would be very happy to do so...

In terms of writing a custom kernel function, yes, if you could give me some pointers towards how one might create a custom kernel function, that would be great. We did get a performance increase when we switched to a Tanimoto variant type kernel with svm_lite, so I would be very curious to see how that might impact DAAL models too...
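For reference, the standard Tanimoto (Jaccard) similarity on count vectors is k(x, y) = <x,y> / (<x,x> + <y,y> - <x,y>). A minimal dense sketch of that kernel (the exact variant built into our svm_lite may differ in details):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Tanimoto similarity between two dense count-fingerprint vectors:
// k(x, y) = <x, y> / (<x, x> + <y, y> - <x, y>)
double tanimoto_kernel(const std::vector<double> &x, const std::vector<double> &y) {
    double xy = 0.0, xx = 0.0, yy = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        xy += x[i] * y[i];
        xx += x[i] * x[i];
        yy += y[i] * y[i];
    }
    double denom = xx + yy - xy;
    return denom > 0.0 ? xy / denom : 1.0;  // two all-zero vectors count as identical
}
```

For a real DAAL custom kernel this arithmetic would need to be wrapped in the library's kernel function interface, which is exactly the part I am asking about.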

Let me know what you would like me to do to help move this forward...

Thank you

Ian

Gennady_F_Intel
Moderator

Ian, you may attach the file to this thread, or just upload it somewhere to the cloud and give us the link.

Andrey_N_Intel
Employee

Hi Ian, please also provide details on the OS and CPU you used in your experiments. Do you use the 64-bit version of the library? Andrey

Ian_Watson
Beginner

Hi, thanks for getting back to me.

I am on a somewhat old Linux system:

Linux bunyip 2.6.32-504.16.2.el6.x86_64 #1 SMP Tue Mar 10 17:01:00 EDT 2015 x86_64 GNU/Linux

model name      : Intel(R) Xeon(R) CPU           X5675  @ 3.07GHz


I am attaching a .tar.bz2 file containing 3 files.

ex2.train.0.csr # training data

ex2.train.0.labels   # training labels

ex2.train.svml  # the same information in svm-lite input format

I am pretty sure I have the same information in the svm-lite file. The files compressed well; the .tar.bz2 file is around 20 MB.

If you could figure out why we are seeing such dramatically different run-times between the DAAL svm learner and what we see with svm_lite that would be great. Very happy to provide any more info you might need, I am very curious about this...

Thank you

Ian

VictoriyaS_F_Intel

Hello Ian,

Thanks for providing the detailed information about your environment and the dataset.

Per our analysis of the Intel DAAL SVM algorithm on your dataset, we found that its performance is not optimal.

According to our experiments, the training time for this dataset can be reduced to ~85 seconds on a 12-core Intel(R) Xeon(R) CPU X5675 @ 3.07GHz.

In addition to the code optimizations in the library on our side, the application needs to use the parallel version of the library and a 4GB cache size to reach this level of performance.

Is it okay for your application to use those settings?

 

We plan to include the performance improvements in the SVM algorithm in Intel DAAL 2017 Update 1, which is expected later this year.

Depending on the importance and urgency of these code modifications for your application, we can explore options to make a library version with these tunings available to you earlier. Please suggest when you would like to have the library with the SVM performance improvements.

Best regards,

Victoriya

Ian_Watson
Beginner

Hi Victoriya

This is a good outcome. Thank you very much for taking the time to look into this for me. Performance looks very good. One thing I do like about the DAAL architecture is the easy switching between parallel and serial execution modes, so we can devote whatever hardware resources are needed - which will depend on things like the number of jobs being run, or whether people are waiting for an answer. Nice.

This is not urgent, I will wait for your new version to make its way through the regular release cycle. No need for anything special right now...

Actually, one thing that would still be good: someone earlier mentioned giving me some pointers on writing a custom kernel function for the SVM learner. If that too is something that will soon show up in the regular release cycle, then there is no urgency with that either.

Overall, thanks very much to the Intel DAAL team for being responsive to the performance issue. Impressive.

Thanks

Ian

Harvey_S_
Beginner

Sorry for the hijack, but do you have any recommendations on the size of the cache for SVM? I've been setting it to feature_count*row_count*sizeof_feature, bounded by the amount of RAM available, which seems to work, but I have no idea if it's optimal.

Cheers Harvey

 

VictoriyaS_F_Intel

Hello Harvey,

For the best two-class SVM performance, please use a cache size equal to row_count * row_count * sizeof(feature_data_type).
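For the dataset discussed in this thread (22069 feature vectors of double-precision values), that works out to roughly 3.9 GB, consistent with the 4GB cache size mentioned earlier. A quick sketch of the computation:

```cpp
#include <cassert>
#include <cstdint>

// Cache size needed to hold the full kernel matrix for two-class SVM:
// row_count * row_count * sizeof(feature_data_type)
std::uint64_t svm_cache_bytes(std::uint64_t row_count, std::uint64_t elem_size) {
    return row_count * row_count * elem_size;  // 64-bit to avoid overflow
}
```

With row_count = 22069 and 8-byte doubles this gives 3,896,326,088 bytes, i.e. about 3.9 GB.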

Best regards,

Victoriya

VictoriyaS_F_Intel

Ian,

We are working on code that shows how to provide a custom kernel to the SVM classifier. The code will be available in a couple of days.

Victoriya

VictoriyaS_F_Intel

Hello Ian,

We have prepared instructions on how to implement a custom kernel function and use it with DAAL SVM.

I am attaching an archive that contains the following files:

  • Custom kernel function.pdf - the instructions
  • user_defined_kernel_function.h - example code that shows the steps required to implement a custom kernel function
  • svm_two_class_csr_user_defined_batch.cpp - example code that shows how to use a custom kernel function with DAAL SVM

user_defined_kernel_function.h contains a partial implementation of a custom kernel function, intended to show the basic idea of how a custom DAAL algorithm can be implemented. You will need to implement the missing parts to get a fully functional kernel function that can be used with SVM.

In case you have problems implementing the missing parts, I also attach the user_defined_kernel_function_full.h and svm_two_class_csr_user_defined_batch_full.cpp files, which form a complete example of a custom kernel function.

Best regards,

Victoriya

Ian_Watson
Beginner

Got it.

Thank you very much! Will see what it does to prediction accuracy.

Thx

Ian
