Have you ever tried to use SVM (Support Vector Machine) models, i.e. a regressor or a classifier? In this recipe we will use both, each on a different dataset.
So this recipe is a short example of how to use an SVM Classifier and an SVM Regressor in Python.
from sklearn import datasets
from sklearn import metrics
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use("ggplot")
from sklearn.svm import SVC, SVR
Here we have imported various modules like datasets, SVC, SVR and train_test_split from different libraries. We will understand the use of these later while using them in the code snippets.
For now just have a look at these imports.
Here we have used datasets to load the inbuilt breast cancer dataset, and we have created the objects X and y to store the data and the target values respectively.
dataset = datasets.load_breast_cancer()
X = dataset.data
y = dataset.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
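If you want to sanity-check what was loaded before fitting anything, the dataset object also exposes the feature and class names; a minimal sketch:
# Optional: inspect the loaded breast cancer data
print(X.shape)                     # (569, 30): 569 samples, 30 numeric features
print(dataset.feature_names[:5])   # first few feature names
print(dataset.target_names)        # ['malignant' 'benign']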
Here, we are using Support Vector Classifier (SVC) as a Machine Learning model to fit the data.
model = SVC()
model.fit(X_train, y_train)
print(model)
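SVC() above relies entirely on the default hyperparameters. If you want to control the kernel or the regularization yourself, you can pass them explicitly; the values below are only illustrative, not tuned:
# Illustrative (not tuned) hyperparameters for SVC
model = SVC(kernel="rbf",    # radial basis function kernel (the default)
            C=1.0,           # regularization strength (smaller C = stronger regularization)
            gamma="scale")   # kernel coefficient; "scale" is the default in recent scikit-learn
model.fit(X_train, y_train)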
Now we have predicted the output by passing X_test to the model, and we have also stored the real target values in expected_y.
expected_y = y_test
predicted_y = model.predict(X_test)
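If you just want a single summary number before the full report, accuracy_score from the same metrics module works on these two arrays:
# Overall fraction of correct predictions on the test set
print(metrics.accuracy_score(expected_y, predicted_y))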
Here we have printed the classification report and the confusion matrix for the classifier.
print(metrics.classification_report(expected_y, predicted_y))
print(metrics.confusion_matrix(expected_y, predicted_y))
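In the output at the end of this recipe, the classifier scores 0.00 on class 0, which is typical when an RBF-kernel SVM is fit on unscaled features, since SVMs are sensitive to feature scale. A common fix is to standardize the inputs first; a minimal sketch using a scikit-learn Pipeline:
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Scale features to zero mean / unit variance before the SVM sees them
scaled_model = make_pipeline(StandardScaler(), SVC())
scaled_model.fit(X_train, y_train)
print(metrics.classification_report(y_test, scaled_model.predict(X_test)))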
Here we have used datasets to load the inbuilt Boston housing dataset, and we have created the objects X and y to store the data and the target values respectively.
dataset = datasets.load_boston()
X = dataset.data
y = dataset.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
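Note that load_boston was deprecated in scikit-learn 1.0 and removed in 1.2. If you are on a recent version, you can substitute another regression dataset such as the California housing data and leave the rest of the recipe unchanged; a sketch:
# Alternative for scikit-learn >= 1.2, where load_boston no longer exists
from sklearn.datasets import fetch_california_housing

dataset = fetch_california_housing()   # downloads the data on first use
X = dataset.data
y = dataset.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)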
Here, we are using Support Vector Regressor (SVR) as a Machine Learning model to fit the data.
model = SVR()
model.fit(X_train, y_train)
print(model)
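As with the classifier, SVR() above uses the defaults. The most commonly adjusted knobs are the kernel, C and epsilon (the width of the tube around the regression line inside which errors are not penalized); the values below are only illustrative:
# Illustrative (not tuned) hyperparameters for SVR
model = SVR(kernel="rbf",   # radial basis function kernel (the default)
            C=1.0,          # regularization strength
            epsilon=0.1)    # errors smaller than epsilon are not penalized
model.fit(X_train, y_train)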
Now we have predicted the output by passing X_test to the model, and we have also stored the real target values in expected_y.
expected_y = y_test
predicted_y = model.predict(X_test)
Here we have printed the R2 score and the mean squared log error for the regressor.
print(metrics.r2_score(expected_y, predicted_y))
print(metrics.mean_squared_log_error(expected_y, predicted_y))
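You can compute other common regression metrics from the same two arrays. Mean absolute error and mean squared error are often easier to interpret than the log-error variant (and note that mean_squared_log_error only accepts non-negative values):
# Additional error metrics, expressed in the target's own units
print(metrics.mean_absolute_error(expected_y, predicted_y))
print(metrics.mean_squared_error(expected_y, predicted_y))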
Finally, we plot the predicted values against the actual targets to see how well the regressor tracks the data.
plt.figure(figsize=(10, 10))
sns.regplot(x=expected_y, y=predicted_y, fit_reg=True, scatter_kws={"s": 100})
plt.show()
As an output we get:

SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape="ovr", degree=3, gamma="auto_deprecated",
    kernel="rbf", max_iter=-1, probability=False, random_state=None,
    shrinking=True, tol=0.001, verbose=False)

              precision    recall  f1-score   support

           0       0.00      0.00      0.00        47
           1       0.67      1.00      0.80        96

   micro avg       0.67      0.67      0.67       143
   macro avg       0.34      0.50      0.40       143
weighted avg       0.45      0.67      0.54       143

[[ 0 47]
 [ 0 96]]

SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1,
    gamma="auto_deprecated", kernel="rbf", max_iter=-1, shrinking=True,
    tol=0.001, verbose=False)

-0.014799656287679985
0.16697303856023255