# How to use nearest neighbours for Regression?

This recipe helps you use nearest neighbours for Regression

Often, while working on a dataset with a Machine Learning model, we don't know which set of hyperparameters will give the best result. Passing every set of hyperparameters through the model manually and checking the results is tedious and may not be feasible.

To find the best set of hyperparameters we can use Grid Search. Grid Search passes every combination of hyperparameters into the model one by one and checks the result. Finally, it returns the set of hyperparameters that gives the best result.
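
For intuition, scikit-learn's ParameterGrid can be used to list the combinations that such a search would try. Here is a minimal sketch with a small hypothetical grid (not the grid used later in this recipe):

```
# A minimal illustration of "all combinations": ParameterGrid enumerates
# every candidate setting that GridSearchCV would evaluate.
from sklearn.model_selection import ParameterGrid

toy_grid = {"n_neighbors": [2, 3, 5], "weights": ["uniform", "distance"]}
for candidate in ParameterGrid(toy_grid):  # 3 x 2 = 6 candidate settings
    print(candidate)
```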

So this recipe is a short example of how we can use nearest neighbours for regression.

```
from sklearn import decomposition, datasets
from sklearn import neighbors
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.preprocessing import StandardScaler
```

Here we have imported various modules such as decomposition, datasets, neighbors, Pipeline, StandardScaler and GridSearchCV from scikit-learn. We will understand the use of each of these while using them in the code snippets below.

For now, just have a look at these imports.

Here we have created a regression dataset with scikit-learn's make_regression function.
```
dataset = datasets.make_regression(n_samples=1000, n_features=20, n_informative=10,
                                   n_targets=1, bias=0.0, effective_rank=None,
                                   tail_strength=0.5, noise=0.0, shuffle=True,
                                   coef=False, random_state=None)
X = dataset[0]
y = dataset[1]
```

StandardScaler is used to scale the data so that each feature has a mean of 0 and a standard deviation of 1 (note that it does not remove outliers). So we are creating an object std_slc to use StandardScaler.
```
std_slc = StandardScaler()
```
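
To see what this does, here is a small illustrative sketch (the toy array below is made up and is not part of the recipe's data):

```
# Illustration only: after fit_transform, each column has (approximately)
# zero mean and unit standard deviation.
import numpy as np

demo = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
scaled = StandardScaler().fit_transform(demo)
print(scaled.mean(axis=0))  # ~[0. 0.]
print(scaled.std(axis=0))   # ~[1. 1.]
```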

We are also using Principal Component Analysis (PCA), which will reduce the dimensionality of the features by creating new features that retain most of the variance of the original data.
```
pca = decomposition.PCA()
```
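
If you want to check how much variance the new components retain, here is a quick illustrative look, assuming the X created above:

```
# Illustration only: fit PCA on the standardized data and inspect how much
# of the total variance each principal component explains.
pca_demo = decomposition.PCA().fit(std_slc.fit_transform(X))
print(pca_demo.explained_variance_ratio_[:5])
```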

Here, we are using KNeighborsRegressor as the Machine Learning model to tune with GridSearchCV. So we have created an object KNN.
```
KNN = neighbors.KNeighborsRegressor()
```

Pipeline helps us pass the modules one by one through GridSearchCV, for which we want to find the best parameters. So we are making an object pipe to create a pipeline of the three objects std_slc, pca and KNN.
```
pipe = Pipeline(steps=[("std_slc", std_slc),
                       ("pca", pca),
                       ("KNN", KNN)])
```
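
The step names matter: GridSearchCV refers to each step's parameters with the "step name__parameter" convention used below. A quick way to check the names:

```
# The keys of named_steps are the step names used in the parameter grid,
# e.g. "pca__n_components" and "KNN__n_neighbors" below.
print(pipe.named_steps.keys())  # dict_keys(['std_slc', 'pca', 'KNN'])
```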

Now we have to define the parameters that we want to optimise for these three objects.

StandardScaler does not require any parameters to be optimised by GridSearchCV.

Principal Component Analysis requires a parameter "n_components" to be optimised. "n_components" signifies the number of components to keep after reducing the dimension.
```
n_components = list(range(1,X.shape[1]+1,1))
```

KNeighborsRegressor has two parameters, "n_neighbors" and "algorithm", to be optimised by GridSearchCV. So we have set these two parameters as lists of values from which GridSearchCV will select the best value for each parameter.
```
n_neighbors = [2, 3, 5, 10]
algorithm = ["auto", "ball_tree", "kd_tree", "brute"]
```

Now we are creating a dictionary to set all the parameter options for the different objects.
```
parameters = dict(pca__n_components=n_components,
                  KNN__n_neighbors=n_neighbors,
                  KNN__algorithm=algorithm)
```

Before using GridSearchCV, let's have a look at its important parameters.

- estimator: the model or pipeline on which we want to run GridSearchCV.
- param_grid: a dictionary (or list of dictionaries) of parameter values from which GridSearchCV has to select the best.
- scoring: the evaluation metric used to decide the best hyperparameters; if not specified, the estimator's default score method is used.

```
clf = GridSearchCV(pipe, parameters)
clf.fit(X, y)
```
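
If you want to set the evaluation explicitly rather than relying on the defaults, scoring and cv can also be passed to GridSearchCV. A minimal variant (the values here are illustrative, not part of the recipe above):

```
# Same search, but with the scoring metric, number of CV folds and
# parallelism set explicitly (illustrative choices).
clf_explicit = GridSearchCV(pipe, parameters, scoring="r2", cv=5, n_jobs=-1)
clf_explicit.fit(X, y)
```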

Now we are using print statements to print the results. These give the best hyperparameter values found by the grid search, along with the cross-validation scores.
```
print("Best Number Of Components:", clf.best_estimator_.get_params()["pca__n_components"])
print(); print(clf.best_estimator_.get_params()["KNN"])
CV_Result = cross_val_score(clf, X, y, cv=3, n_jobs=-1, scoring="r2")
print(); print(CV_Result)
print(); print(CV_Result.mean())
print(); print(CV_Result.std())
```

As an output we get:
Best Number Of Components: 20
KNeighborsRegressor(algorithm="auto", leaf_size=30, metric="minkowski",
                    metric_params=None, n_jobs=None, n_neighbors=10, p=2,
                    weights="uniform")
[0.60800965 0.53874633 0.57159348]
0.5727831547316445
0.028289142973677704
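
Once fitted, the grid search object refits the best pipeline on the full data (with the default refit=True), so it can be used directly for prediction. A minimal sketch:

```
# Predict with the best pipeline found by the grid search
# (illustrative: predicting on the first five training rows).
predictions = clf.predict(X[:5])
print(predictions)
```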

