After training a model, we need a measure of its performance. There are many scoring metrics with which we can evaluate a model; of these, we will use the F1 score. We will also use cross-validation to test the model on multiple splits of the data.
This data science Python source code does the following:
1. Uses a classification metric (the F1 score) to validate the model.
2. Separates the data into training and testing sets (a plain train_test_split baseline is sketched below; cross-validation then handles this splitting internally).
3. Runs cross-validation on the model and reports the final result using the F1 score.
So this is the recipe for checking a model's F1 score using cross-validation in Python.
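Before turning to cross-validation, here is a minimal sketch of the single-split baseline mentioned in step 2 above, assuming scikit-learn's train_test_split and f1_score (the test_size of 0.25 is an illustrative choice, not part of the recipe):

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score

# One held-out split instead of cross-validation
X, y = make_classification(n_samples = 10000, n_features = 3, n_informative = 3, n_redundant = 0, n_classes = 2, random_state = 42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 42)
model = DecisionTreeClassifier().fit(X_train, y_train)
print(f1_score(y_test, model.predict(X_test)))  # F1 on the single test split

Cross-validation simply repeats this measurement over several such splits, which is what the rest of the recipe does.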
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
We have imported cross_val_score, DecisionTreeClassifier, and make_classification from their respective scikit-learn modules. We generate a dataset with the make_classification function, which builds a classification dataset according to the passed parameters.
X, y = make_classification(n_samples = 10000,
                           n_features = 3,
                           n_informative = 3,
                           n_redundant = 0,
                           n_classes = 2,
                           random_state = 42)
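As a quick sanity check (assuming NumPy is available; np.bincount here is an illustrative choice), we can confirm the shape and rough class balance of the generated data:

import numpy as np

print(X.shape)         # (10000, 3): 10000 samples, 3 features
print(np.bincount(y))  # samples per class; roughly balanced by default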
We are using DecisionTreeClassifier as the model. cross_val_score trains it with cross-validation, fitting the model on each training fold and computing the F1 score on the corresponding test fold.
We print the F1 score for every split in the cross-validation, along with the mean and standard deviation of those scores.
dec_tree = DecisionTreeClassifier()

# Run 7-fold cross-validation once and reuse the scores, so the
# per-fold values, mean, and std all come from the same fits
scores = cross_val_score(dec_tree, X, y, scoring="f1", cv = 7)
print(scores)
print(scores.mean())
print(scores.std())
So the output comes as:

[0.92254013 0.91392582 0.93802817 0.92426367 0.93614035 0.92210526 0.9260539 ]
0.9257145721528974
0.006172506932493186

(The exact numbers will vary slightly from run to run, since DecisionTreeClassifier is not seeded here.)
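As a side note, when a classifier and an integer cv are passed, cross_val_score builds stratified folds under the hood. A minimal sketch making this explicit, reusing dec_tree, X, and y from above:

from sklearn.model_selection import StratifiedKFold, cross_val_score

# Equivalent to cv = 7 for a classifier: 7 stratified, unshuffled folds
cv = StratifiedKFold(n_splits = 7)
scores = cross_val_score(dec_tree, X, y, scoring="f1", cv = cv)
print(scores.mean(), scores.std())

Passing a fold object like this is useful when you want to control shuffling or reuse the exact same folds across several models.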