# What Is a Support Vector Machine and How It Works

## Introduction

In this blog, we will discuss what a support vector machine is and how it works. A support vector machine (SVM) is a supervised learning algorithm that can be used for both classification and regression tasks. The goal of an SVM is to find the best line (or, more generally, hyperplane) that separates the data into classes. In classification, the data is usually split into two classes (e.g. spam and non-spam).

The SVM finds the line that maximizes the distance between the two classes; this line is also known as the decision boundary. In the case of regression, the SVM instead fits a line that minimizes the error between the data points and the line. The algorithm is versatile: with different kernel functions, it can be applied to many different types of problems.
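To make the two tasks concrete, here is a minimal sketch (assuming scikit-learn is installed, and using tiny made-up toy datasets) showing that the same family of algorithms covers both classification (`SVC`) and regression (`SVR`):

```python
from sklearn.svm import SVC, SVR

# Toy classification data: two well-separated groups on a number line.
X_cls = [[0.0], [1.0], [4.0], [5.0]]
y_cls = [0, 0, 1, 1]
clf = SVC(kernel="linear")
clf.fit(X_cls, y_cls)
print(clf.predict([[0.5], [4.5]]))  # one point near each group

# Toy regression data: y roughly follows 2 * x.
X_reg = [[0.0], [1.0], [2.0], [3.0]]
y_reg = [0.0, 2.0, 4.0, 6.0]
reg = SVR(kernel="linear")
reg.fit(X_reg, y_reg)
print(reg.predict([[1.5]]))  # an estimate near 2 * 1.5
```

The classifier assigns each new point to the group on its side of the boundary, while the regressor returns a continuous value.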

## What is SVM?

The algorithm is a discriminative classifier: it finds a decision boundary between two classes by maximizing the margin between them. SVMs are more flexible than a plain linear classifier such as logistic regression because, with kernels, they can solve non-linear classification problems. They are also effective in high-dimensional spaces and can handle data that is not linearly separable. In other words, we are looking for the line (or plane) that gives the largest margin between the two classes. Once we have found this hyperplane, we can use it to make predictions for new data points.

If a new data point falls on one side of the hyperplane, we predict that it belongs to the first class; if it falls on the other side, we predict that it belongs to the second class. A support vector machine can be used for both linear and nonlinear classification. In the linear case, we look for a separating hyperplane directly in the original feature space. In the nonlinear case, we first map the data into a higher-dimensional space (via a kernel function) and look for a separating hyperplane there.
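The "which side of the hyperplane" rule can be seen directly in code. In this sketch (assuming scikit-learn, with a made-up 2-D toy dataset), the sign of `decision_function` tells us the side, and `predict` maps that sign to a class label:

```python
from sklearn.svm import SVC

# Two clusters: class 0 near the origin, class 1 in the upper right.
X = [[1, 1], [2, 1], [6, 5], [7, 6]]
y = [0, 0, 1, 1]
clf = SVC(kernel="linear")
clf.fit(X, y)

new_points = [[1, 2], [7, 5]]
# Negative values fall on the class-0 side, positive on the class-1 side.
print(clf.decision_function(new_points))
print(clf.predict(new_points))
```

The magnitude of `decision_function` is the (scaled) distance from the hyperplane, so points far from the boundary get values far from zero.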

## Working on Support Vector Machine

The main idea behind an SVM is to find a hyperplane (a decision boundary) that maximizes the margin between the two classes. In other words, we want a line (or decision boundary) that separates the two classes with as much room to spare as possible. The data points closest to this boundary, the ones that would shift it if they were removed, are called the support vectors; the margin is the distance between the boundary and these nearest points, and the SVM positions the boundary so that this distance is as large as possible. The decision boundary is then used to classify new data points. There are a few things to note about SVMs.

First, they are very effective when the data is linearly separable (i.e. the two classes can be separated by a line). Second, they remain effective when the data is not linearly separable but can be transformed into a higher-dimensional space where it is. This transformation is done using a kernel function. There are many different types of kernel functions, but the most common are the linear, polynomial, and RBF (Radial Basis Function) kernels.
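Swapping kernels is a one-line change in practice. The sketch below (assuming scikit-learn, with a hand-made XOR-style toy dataset that no straight line can separate) fits an SVM with each of the three common kernels and reports training accuracy:

```python
from sklearn.svm import SVC

# XOR pattern: diagonal pairs share a class, so the data is not
# linearly separable in this 2-D space.
X = [[0, 0], [1, 1], [0, 1], [1, 0]]
y = [0, 0, 1, 1]

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, gamma="scale")
    clf.fit(X, y)
    print(kernel, clf.score(X, y))
```

The linear kernel cannot fit this pattern, while the RBF kernel, by implicitly working in a higher-dimensional space, separates it cleanly.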

The linear kernel is the simplest: it performs no transformation at all and works with the data in its original space. The polynomial kernel is a bit more complex and can capture curved, non-linear decision boundaries. The RBF kernel is the usual choice when the data is not linearly separable and has no obvious polynomial structure. Once the decision boundary is found, the SVM can classify new data points: a new point is mapped into the same space as the training data and classified according to which side of the boundary it falls on. SVMs are a very powerful tool and have been used in many different applications such as face recognition, text classification, and handwritten digit recognition.

## Advantages of support vector machines

- They are effective in high-dimensional spaces.
- They are versatile and can be used for both classification and regression tasks.
- They can be used with data that is not linearly separable.

## Disadvantages of support vector machines

- SVMs can be sensitive to noise in the data, particularly when the features are not scaled.
- SVMs can be slow to train on large datasets.
- SVMs can be difficult to interpret.
- SVMs can be less accurate than other methods on some datasets.
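The sensitivity to unscaled features is easy to mitigate. Because SVMs reason about distances, a feature with a large numeric range can dominate the margin; a common remedy, sketched here with scikit-learn on a hypothetical dataset where the second feature's scale dwarfs the first's, is to standardize features in a pipeline before the SVM:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: the first feature separates the classes,
# but the second feature's scale is roughly 100x larger.
X = [[1.0, 1000.0], [2.0, 1100.0], [8.0, 1050.0], [9.0, 950.0]]
y = [0, 0, 1, 1]

# StandardScaler rescales each feature to zero mean and unit variance
# before the SVM ever sees it.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict([[1.5, 1020.0], [8.5, 1000.0]]))
```

Wrapping the scaler and classifier in one pipeline also ensures that new points at prediction time are rescaled with the statistics learned from the training data.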

Also read: What is KMeans Clustering and its Working
