What are Laplacian derivatives of an Image in OpenCV

Recipe Objective: What are Laplacian derivatives of an Image in OpenCV?

Let us take this recipe to understand the Laplacian derivatives of an image.

Step 1: Import the libraries and read the image.

Let us first import the necessary libraries and read the image. The image that we are using here is the one shown below.

Input image

import numpy as np
import cv2
from matplotlib import pyplot as plt

# read the image in grayscale mode (flag 0)
image = cv2.imread('chess.jpg', 0)

Step 2: Understanding image derivatives and Sobel Operator

Before we start extracting Laplacian derivatives, let us first take a moment to understand what image derivatives are and why they are helpful. Image derivatives are widely used in detecting the edges of an image. They locate the places where the pixel intensity changes drastically, which lets us map out the edges of the image. The Sobel operator is one such operator that can be used to find the derivative of an image.

Step 3: Calculating the derivative of an image using Laplacian Operator

The Sobel operator calculates the first derivative of the image. At an edge, the first derivative reaches a peak, which means the second derivative crosses zero there. Detecting these zero crossings is the principle behind Laplacian derivatives, and the Laplacian operator makes use of the Sobel operator internally. It is important to note that zero crossings do not appear only at edges; noise can produce them in other places as well. We can overcome this by applying one of the appropriate smoothing filters available in OpenCV before extracting the derivative.

The Laplacian derivative can be calculated in Python using the cv2.Laplacian() function, which takes the following arguments.

  • src: The input image
  • ddepth: The data type of the output image
  • ksize: (Optional) The size of the kernel matrix

We already know that the data type of our input image is uint8. The derivatives of black-to-white transitions are positive, while white-to-black transitions are negative. Hence it is highly recommended to choose a higher-order output data type such as cv2.CV_64F and then convert the result to a uint8 array, to avoid missing any edges.

lap_1 = cv2.Laplacian(image, cv2.CV_64F)
lap_1_abs = np.uint8(np.absolute(lap_1))

In the above chunk of code, we calculate the Laplacian derivative of the input image and store it in the lap_1 variable. We then take its absolute value and convert it to a uint8 array using the np.absolute() and np.uint8() functions, respectively, both available in the NumPy package.

Let us also try altering the kernel size and see how the results come out.

lap_2 = cv2.Laplacian(image, cv2.CV_64F, ksize=7)
lap_2_abs = np.uint8(np.absolute(lap_2))

Step 4: Displaying the Output

Let us display the results using matplotlib.

titles = ['Original Image', 'Laplacian derivative with default ksize', 'Laplacian derivative with ksize=7']
images = [image,lap_1_abs,lap_2_abs]
plt.figure(figsize=(13,5))
for i in range(3):
    plt.subplot(1,3,i+1)
    plt.imshow(images[i],'gray')
    plt.title(titles[i])
    plt.xticks([])
    plt.yticks([])
plt.tight_layout()
plt.show()

Output:

Laplacian derivative
