If GitHub is unable to render a Jupyter notebook, copy the link to the notebook and paste it into nbviewer: https://nbviewer.jupyter.org/
SVM is a discriminative learning model that makes an assumption about the form of the discriminant (decision boundary) between the classes.
In a binary classification problem, the SVM discriminant function is modeled as the widest possible boundary between the classes. That is why SVM is known as a large margin classifier.
The goal of the large margin SVM classifier is to maximize the margin between the two classes. However, it has to respect a constraint:
- The margin should be maximized while making sure that the data are correctly classified (i.e., data points belonging to the two classes lie off the margin).
Hence, the SVM classification problem can be modeled as a constrained optimization problem.
Depending on the nature of the data, the SVM constrained optimization algorithm varies.
To understand different SVM algorithms and approaches, we will consider several cases in this notebook series on SVM.
- Linearly Separable Data
  - No Outlier
  - Outlier
- Linearly Non-Separable Data
  - Feature Augmentation
  - Kernelized SVM: Polynomial Kernel
  - Kernelized SVM: Gaussian Radial Basis Function (RBF) Kernel
There are 7 notebooks on SVM-based classifiers.
- Support Vector Machine-0-Bird's-eye View
  - A bird's-eye view representation of the main algorithms of the Support Vector Machine (SVM) model for solving binary classification problems.
- Support Vector Machine-1-Linearly Separable Data
  - Hard margin & soft margin classifier using the LinearSVC model
- Support Vector Machine-2-Nonlinear Data
  - Polynomial models with LinearSVC and Kernelized SVM (Polynomial & Gaussian RBF kernel)
- Support Vector Machine-3-Gaussian RBF Kernel
  - In-depth investigation of the Gaussian RBF kernel (how to fine-tune the hyperparameters)
- Support Vector Machine-4-Multiclass Classification
  - Multiclass classification using the SVC class, which implements the One-versus-One (OvO) technique
- Support Vector Machine-5-Stochastic Gradient Descent-Linear Data
  - The SGD algorithm for the SVM model to solve a binary classification problem on a linearly separable dataset
- Support Vector Machine-6-Stochastic Gradient Descent-Nonlinear Data
  - The SGD algorithm for the SVM model to solve a binary classification problem on a linearly non-separable dataset
Finally, we will apply SVM to two application scenarios. We will see that these two applications require two very different SVM algorithms (linear and complex models). We will conduct in-depth investigations of these two models in the context of these two applications.
- Application 1 - Image Classification (Gaussian RBF model performs well & why)
- Application 2 - Text Classification (LinearSVC performs well & why)
There are at least two very different ways to find the maximum margin decision boundary.
- Modeling the max margin problem as a constrained optimization problem and solving it using a Quadratic Programming (QP) solver
- Modeling the max margin problem as an unconstrained optimization problem and solving it using Gradient Descent / Coordinate Descent
We can model the max margin problem as a constrained optimization problem in two ways.
- Primal Problem (computationally expensive for large feature dimension)
- Dual Problem
The SVM finds the max margin decision boundary by solving the following constrained optimization problem:

$$\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;\; \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{i=1}^{N}\xi_i$$

Subject to the following constraints:

$$y_i(\mathbf{w}^T\mathbf{x}_i + b) \geq 1 - \xi_i, \qquad \xi_i \geq 0, \qquad i = 1, \dots, N$$

Here:
- $\xi$: slack variable that controls margin violation. #($\xi > 0$) = the number of non-separable points (a measure of error/misclassification).
- C: regularization/penalty parameter. Controls the trade-off between margin maximization and error minimization.
This convex optimization problem is known as the primal problem, and its complexity depends on the feature dimension (d).
We can use a Quadratic Programming (QP) solver to find the optimal $\mathbf{w}$ and $b$.
Due to the computational complexity of the primal optimization (minimization) problem, we transform it into a form whose complexity no longer depends on the feature dimension but instead on the size of the data. This new form is known as the dual form, and we solve the dual optimization (maximization) problem:

$$\max_{\boldsymbol{\alpha}} \;\; \sum_{i=1}^{N}\alpha_i - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_i y_j\, \mathbf{x}_i^T\mathbf{x}_j$$

Subject to the constraints:

$$0 \leq \alpha_i \leq C, \qquad \sum_{i=1}^{N}\alpha_i y_i = 0$$

Here, the $\alpha_i$ are the Lagrange multipliers; the training points with $\alpha_i > 0$ are the support vectors.
The complexity of the dual problem depends on the size of the training data (N), not on the feature dimension. Thus, for high-dimensional data, solving the dual problem is much more efficient than solving the primal problem.
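For illustration, here is a minimal sketch of solving this dual problem with a generic QP solver. It uses the cvxopt library (not part of scikit-learn) and a plain linear kernel; the function name and tolerances are placeholders, and it assumes a small dataset with labels in {-1, +1}.

```python
# Minimal sketch: soft-margin SVM dual solved with a generic QP solver (cvxopt).
# Assumes X is an (N, d) float array and y contains labels in {-1, +1}.
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_qp(X, y, C=1.0):
    N = X.shape[0]
    y = y.astype(float)

    # Dual written as a minimization: (1/2) a^T P a - 1^T a
    K = X @ X.T                          # linear kernel (Gram matrix)
    P = matrix(np.outer(y, y) * K)
    q = matrix(-np.ones(N))

    # Box constraints 0 <= alpha_i <= C  ->  G a <= h
    G = matrix(np.vstack([-np.eye(N), np.eye(N)]))
    h = matrix(np.hstack([np.zeros(N), C * np.ones(N)]))

    # Equality constraint: sum_i alpha_i y_i = 0
    A = matrix(y.reshape(1, -1))
    b = matrix(0.0)

    solvers.options["show_progress"] = False
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])

    # Recover w and b from the support vectors (tolerance 1e-6 is arbitrary)
    w = (alpha * y) @ X
    on_margin = (alpha > 1e-6) & (alpha < C - 1e-6)
    b_hat = np.mean(y[on_margin] - X[on_margin] @ w)
    return w, b_hat, alpha
```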
We can implement a gradient descent (GD), stochastic gradient descent (SGD), or coordinate descent (CD) based approach to find the optimal $\mathbf{w}$ and $b$.
To apply these iterative optimization approaches to the SVM, we define the cost function as follows:

$$J(\mathbf{w}, b) = \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{i=1}^{N} h\!\left(y_i(\mathbf{w}^T\mathbf{x}_i + b)\right)$$

Here:
- $h(z)$: the Hinge loss function: $h(z) = \max(0, 1 - z)$
- C: regularization/penalty parameter. Controls the trade-off between margin maximization and error minimization.
The Hinge loss function is 0 when $z \geq 1$ and grows linearly as $1 - z$ when $z < 1$.
Observe that the Hinge loss-based cost function of the SVM is similar to the regularized cost functions of Linear Regression and Logistic Regression.
In case of SVM:
- The first term is the regularization/penalty term
- The second term is the loss objective function
Unlike Linear/Logistic Regression, the regularization/penalty parameter (C) multiplies the loss term rather than the regularization term.
It's a hyperparameter that controls the trade-off between margin maximization and error minimization.
- If C is too large, we have a high penalty for nonseparable points, and we may store many support vectors and overfit.
- If C is too small, we may find too simple solutions that underfit.
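As a concrete reference, here is a small sketch of the hinge loss and the cost function defined above, assuming labels $y_i \in \{-1, +1\}$ and a linear score $\mathbf{w}^T\mathbf{x} + b$; the function names are illustrative.

```python
# Minimal sketch: hinge loss and the SVM cost function described above.
# Assumes labels y in {-1, +1} and a linear score w^T x + b.
import numpy as np

def hinge(z):
    # h(z) = max(0, 1 - z): zero when z >= 1, linear in (1 - z) otherwise
    return np.maximum(0.0, 1.0 - z)

def svm_cost(w, b, X, y, C=1.0):
    # First term: regularization; second term: C times the total hinge loss
    margins = y * (X @ w + b)
    return 0.5 * np.dot(w, w) + C * np.sum(hinge(margins))
```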
Scikit-Learn provides four SVM models to perform classification:
- SVC (Solves the dual optimization problem. Used to implement kernelized SVM, such as the polynomial kernel and the Gaussian Radial Basis Function (RBF) kernel)
- LinearSVC (Uses the Coordinate Descent approach. Similar to SVC with a linear kernel)
- NuSVC (Nu-Support Vector Classification. Similar to SVC, but uses a parameter to control the number of support vectors)
- SGDClassifier (Uses the Stochastic Gradient Descent approach)
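To make the four options concrete, here is how each class is typically instantiated in scikit-learn; the hyperparameter values below are placeholders, not recommendations.

```python
from sklearn.svm import SVC, LinearSVC, NuSVC
from sklearn.linear_model import SGDClassifier

# Kernelized SVM (solves the dual problem); kernel can be "linear", "poly", "rbf", ...
svc_rbf = SVC(kernel="rbf", gamma="scale", C=1.0)

# Linear SVM via liblinear (no kernel trick, scales roughly O(Nd))
lin_svc = LinearSVC(C=1.0)

# Like SVC, but nu (0 < nu <= 1) bounds the fraction of support vectors
nu_svc = NuSVC(nu=0.5, kernel="rbf", gamma="scale")

# Linear SVM trained with stochastic gradient descent on the hinge loss
sgd_svm = SGDClassifier(loss="hinge", alpha=1e-4)
```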
We will investigate both SVC and LinearSVC in greater detail. Also, for the image classification application, we will use the SGDClassifier.
- SVC: $O(N^2 d)$ ~ $O(N^3 d)$
- LinearSVC: $O(Nd)$

N: number of training instances
d: number of features
The LinearSVC class is based on the liblinear library, which implements an optimized algorithm for linear SVMs. It does not support the kernel trick, but it scales almost linearly with the number of training instances and the number of features ($O(Nd)$). Moreover, the LinearSVC class has more flexibility in the choice of penalties (l2 & l1) and loss functions.
The SVC class is based on the libsvm library, which implements an algorithm that supports the kernel trick. Due to its complexity, between $O(N^2 d)$ and $O(N^3 d)$, it does not scale well with the size of the training set.
Model selection is done by hyperparameter tuning. We can choose both the algorithm (LinearSVC, SVC with varying kernels) and the optimal hyperparameters via cross-validated grid search.
However, brute-force grid search is time-consuming. We should have a high-level understanding of the suitability of the algorithms based on the dataset. Then, we can fine-tune the hyperparameters.
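As an illustration, a cross-validated grid search over an RBF-kernel SVC might look like the following sketch; it assumes scaled training arrays X_train and y_train already exist, and the grid values are placeholders.

```python
# Minimal sketch: cross-validated grid search for an RBF-kernel SVC.
# Assumes X_train, y_train already exist and are scaled.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [0.001, 0.01, 0.1, 1],
}
grid = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, n_jobs=-1)
grid.fit(X_train, y_train)

print(grid.best_params_, grid.best_score_)
```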
So, before doing any Machine Learning with SVM, we should address these questions.
- How do we choose the most suitable model between LinearSVC and SVC?
- If SVC is suitable, then how do we choose the optimal kernel (usually between polynomial and RBF)?
As a rule of thumb:
- N is very large but d is small ($N > d$): LinearSVC
- d is large relative to N ($d \geq N$): LinearSVC
- N is small to medium and d is small ($N > d$): SVC with Gaussian RBF kernel
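These rules of thumb can be captured in a small, purely illustrative helper; the threshold used for "very large N" below is an assumption, not a prescribed value.

```python
# Hypothetical helper encoding the rules of thumb above; the 50_000 threshold is illustrative.
def suggest_svm_model(n_samples, n_features):
    if n_features >= n_samples:
        return "LinearSVC"                      # d is large relative to N
    if n_samples > 50_000:
        return "LinearSVC (or SGDClassifier)"   # N is very large, d is small
    return "SVC with Gaussian RBF kernel"       # N small to medium, d small
```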
In this notebook we will classify a linearly separable dataset. Both N and d are small in this dataset.
Thus, we will use the LinearSVC model.
- The "loss" hyperparameter should be set to "hinge".
- The hyperparameter "C" controls the penalty for the error (margin violation). It should be selected via grid search. We will investigate its effect shortly.
- Finally, for better performance we should set the "dual" hyperparameter to False, unless there are more features than training instances.
The SVM classification is influenced by the varying scales of the features.
SVMs try to fit the largest possible “street” between the classes. So if the training set is not scaled, the SVM will tend to neglect small features.
Thus, we should standardize the data before training.
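Putting this together, a minimal training sketch for this notebook might look as follows; it assumes the arrays X and y are already loaded, and C=1 is a placeholder to be tuned later.

```python
# Minimal sketch: standardize the features, then fit a hinge-loss LinearSVC.
# Assumes X, y are already loaded; C=1 is a placeholder to be tuned by grid search.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

svm_clf = Pipeline([
    ("scaler", StandardScaler()),
    # Note: with loss="hinge", liblinear solves the dual formulation (dual=True).
    ("linear_svc", LinearSVC(C=1.0, loss="hinge")),
])
svm_clf.fit(X, y)
```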
We will consider two cases.
- Data doesn't have outliers
- Data has outliers