31 SVM

Linear SVMs

The maximum margin linear classifier is the simplest kind of SVM (often called an LSVM).
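As a rough illustration (my own sketch, not the chapter's example; the dataset and parameter values are assumed), a maximum-margin linear classifier can be fit with scikit-learn by using a linear kernel and a large C:

```python
# Minimal sketch of a maximum-margin linear classifier (assumed synthetic data).
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated blobs, so a (nearly) hard margin is achievable.
X, y = make_blobs(n_samples=100, centers=2, cluster_std=1.0, random_state=0)

# A linear kernel with a very large C approximates the hard-margin LSVM.
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

print("weights:", clf.coef_, "bias:", clf.intercept_)
print("number of support vectors:", len(clf.support_vectors_))
```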

Non-Linear SVMs

The original feature space can always be mapped to some higher-dimensional feature space where the training data set is separable.

Overfitting can be controlled with the soft-margin approach.
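A minimal sketch of both ideas, assuming a synthetic two-circle dataset: the RBF kernel implicitly maps the points into a space where they become separable, while C controls how soft the margin is:

```python
# Illustrative sketch (assumed data): a non-linearly separable problem
# solved by an RBF kernel, compared with a linear kernel.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear_clf = SVC(kernel="linear").fit(X_train, y_train)
rbf_clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)  # soft margin via C

print("linear kernel test accuracy:", linear_clf.score(X_test, y_test))
print("RBF kernel test accuracy:   ", rbf_clf.score(X_test, y_test))
```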

Choice of kernel

A Gaussian (RBF) or polynomial kernel is the usual default choice.
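For reference, these two kernels have the standard forms below (gamma, r, and d are the tunable kernel parameters; these are the usual textbook definitions, not taken verbatim from this chapter):

```latex
% Gaussian (RBF) kernel
K(\mathbf{x}, \mathbf{x}') = \exp\!\left(-\gamma \,\lVert \mathbf{x} - \mathbf{x}' \rVert^{2}\right)

% Polynomial kernel of degree d
K(\mathbf{x}, \mathbf{x}') = \left(\gamma\, \mathbf{x}^{\top} \mathbf{x}' + r\right)^{d}
```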

Optimization Criteria: Hard margin vs soft margin.
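The two criteria correspond to the standard primal objectives (standard formulations, stated here only for reference):

```latex
% Hard margin (separable data):
\min_{\mathbf{w},\, b}\ \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2}
\quad \text{s.t.} \quad y_i\,(\mathbf{w}^{\top}\mathbf{x}_i + b) \ge 1 \quad \forall i

% Soft margin (slack variables \xi_i allow violations, weighted by C):
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}}\ \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_i \xi_i
\quad \text{s.t.} \quad y_i\,(\mathbf{w}^{\top}\mathbf{x}_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0
```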

Parameters

C behaves as a regularization parameter in the SVM.

Low C -> larger margin, more tolerance for misclassified training points
High C -> smaller margin, fewer training misclassifications (and a higher risk of overfitting)
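One way to see this effect (an illustrative sketch on assumed synthetic data) is to compare how many support vectors each model keeps, since a wider, softer margin generally leaves more points on or inside the margin:

```python
# Illustrative sketch: the effect of C on the margin (assumed synthetic data).
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.0, random_state=1)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # More support vectors usually indicates a wider, softer margin.
    print(f"C={C:>6}: {len(clf.support_vectors_)} support vectors, "
          f"train accuracy={clf.score(X, y):.2f}")
```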

The gamma parameter defines how far the influence of a single training example reaches.

High gamma -> close reach (only nearby points influence the decision boundary)
Low gamma -> far reach (distant points also influence the decision boundary)
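The sketch below (again on assumed synthetic data) illustrates how a very large gamma can fit the training set almost perfectly while generalizing worse, whereas a small gamma yields a smoother boundary:

```python
# Illustrative sketch: the effect of gamma on an RBF SVM (assumed synthetic data).
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for gamma in (0.01, 1.0, 100.0):
    clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X_train, y_train)
    print(f"gamma={gamma:>6}: train={clf.score(X_train, y_train):.2f}, "
          f"test={clf.score(X_test, y_test):.2f}")
```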

Use a loop (or a grid search with cross-validation) over different values of C and gamma to make sure you select good values, as sketched below.
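For example, a cross-validated grid search (a sketch assuming scikit-learn and a synthetic dataset) automates this loop:

```python
# Illustrative sketch: selecting C and gamma with a cross-validated grid search.
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.3, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```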

 


 
