The linear decision function of an SVM model is defined as D(x) = w* ∙ x + b, and the hyperplane normal w* for a linear SVM is computed by the formula below:
w* = ∑k yk ak xk,
where xk are the support vectors and ck = yk ak are the classification coefficients, both computed during the training of the SVM model.
You can read them using the methods svm::Model::getSupportVectors() and svm::Model::getClassificationCoefficients(), respectively.
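As a minimal sketch of that formula, the snippet below reconstructs w* from hypothetical support vectors and coefficients and evaluates D(x). The numeric values are purely illustrative — in practice you would read the real ones via getSupportVectors() and getClassificationCoefficients():

```python
import numpy as np

# Hypothetical support vectors x_k and classification coefficients c_k = y_k * a_k
# (illustrative values; in practice obtained from the trained model)
support_vectors = np.array([[1.0, 0.0],
                            [0.0, 1.0]])
coefficients = np.array([1.0, -1.0])

# w* = sum_k c_k * x_k
w = coefficients @ support_vectors

b = 0.5  # bias term of the trained model (illustrative)

def decision(x):
    """Linear decision function D(x) = w* . x + b."""
    return w @ x + b

print(decision(np.array([2.0, 2.0])))
```

The sign of D(x) gives the predicted class; its magnitude is proportional to the distance from the separating hyperplane.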
For additional details, please see section 2.1 of B. E. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144–152.
Thanks for explaining.
There is one more question I would like to ask.
When using a linear SVM, there is no need to append a constant bias term to every feature vector, i.e. the bias term is taken care of by the algorithm itself.
Is that right?
Yes, there is no need to add a bias term to each feature vector.
The SVM training algorithm computes the bias term as part of the model; the svm::Model::getBias() method returns its value.
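To see why the manual augmentation is redundant, the sketch below (with made-up parameter values) shows that the model's decision function D(x) = w* ∙ x + b is identical to what you would get by appending a constant-1 feature to x and folding b into the weight vector:

```python
import numpy as np

# Hypothetical learned parameters (illustrative values, not from a real model)
w = np.array([0.5, -2.0, 1.0])  # hyperplane normal
b = 0.75                        # bias, as would be returned by svm::Model::getBias()

x = np.array([1.0, 2.0, 3.0])

# Option 1: bias handled by the model, as linear SVM training does internally
d1 = w @ x + b

# Option 2: manually augmenting the feature vector with a constant 1
x_aug = np.append(x, 1.0)
w_aug = np.append(w, b)
d2 = w_aug @ x_aug

assert np.isclose(d1, d2)  # identical, so the augmentation adds nothing
```

Since the two formulations are algebraically equivalent, adding the constant feature yourself would only increase the dimensionality without changing the decision function.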