In this case it is a simple linear function. I am not sure how to get the hyperplane normal from the training results.
Hello,
The linear decision function of an SVM model is defined as D(x) = w* ∙ x + b, and the hyperplane normal w* for a linear SVM is computed with the formula

w* = ∑_k y_k a_k x_k,

where x_k are the support vectors and c_k = y_k a_k are the classification coefficients, both computed during training of the SVM model.
You can read them using the methods svm::Model::getSupportVectors() and svm::Model::getClassificationCoefficients(), respectively.
For additional details, please see Section 2.1 of B. E. Boser, I. Guyon, and V. N. Vapnik, "A Training Algorithm for Optimal Margin Classifiers," Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144–152.
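To illustrate, here is a minimal NumPy sketch of the formula w* = ∑_k y_k a_k x_k. The arrays below are hypothetical toy values standing in for the outputs of svm::Model::getSupportVectors() and svm::Model::getClassificationCoefficients(); they are not produced by the library.

```python
import numpy as np

# Hypothetical toy values (not real library output):
# rows of X_sv stand in for the support vectors x_k,
# coeffs stands in for the classification coefficients c_k = y_k * a_k.
X_sv = np.array([[1.0, 2.0],
                 [3.0, 0.5],
                 [0.0, 1.5]])
coeffs = np.array([0.4, -0.7, 0.3])
b = -0.25  # bias term of the trained model (also hypothetical)

# Hyperplane normal: w* = sum_k c_k * x_k
w = coeffs @ X_sv

# Linear decision function D(x) = w* . x + b
def decision(x):
    return w @ x + b

x = np.array([2.0, 1.0])
print(w)            # -> [-1.7  0.9]
print(decision(x))  # -> -2.75
```

The sign of decision(x) gives the predicted class; its magnitude is proportional to the distance of x from the separating hyperplane.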
Best regards,
Victoriya
Thanks for explaining.
There is one more question I would like to ask.
When using a linear SVM, there is no need to append a constant bias term to every feature vector, i.e., the bias term is handled by the algorithm itself.
Is that right?
Yes, there is no need to add a bias term to each feature vector.
The SVM training algorithm computes the bias term as part of the model; the svm::Model::getBias() method returns its value.
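A small sketch of why the extra constant feature is unnecessary: folding the bias into an augmented weight vector gives exactly the same decision value as keeping it separate. The values of w and b below are hypothetical, not real model output.

```python
import numpy as np

# Hypothetical trained parameters: w is the hyperplane normal,
# b the bias term that the model stores separately.
w = np.array([-1.7, 0.9])
b = -0.25

x = np.array([2.0, 1.0])

# Decision with the bias handled by the model: D(x) = w . x + b
d_model = w @ x + b

# Equivalent "manual bias" formulation: append a constant 1 to the
# feature vector and fold b into the weight vector. The value is
# identical, which is why adding the constant feature yourself
# is redundant.
x_aug = np.append(x, 1.0)
w_aug = np.append(w, b)
d_manual = w_aug @ x_aug

print(d_model, d_manual)  # -> -2.75 -2.75
```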