Real-Time Streaming of Twitter Sentiments AWS EC2 NiFi

Learn to 1) perform Twitter sentiment analysis using Spark Streaming, NiFi, and Kafka, and 2) build an interactive data visualization for the analysis using Python Plotly.


Each project comes with 2-5 hours of micro-videos explaining the solution.


Code & Dataset

Get access to 50+ solved projects with IPython notebooks and datasets.


Project Experience

Add project experience to your LinkedIn/GitHub profiles.

Customer Love

Read All Reviews

Camille St. Omer

Artificial Intelligence Researcher; Quora 'Most Viewed Writer' in 'Data Mining'

I came to the platform with no experience and now I am knowledgeable in Machine Learning with Python. No easy thing I must say, the sessions are challenging and go to the depths. I looked at graduate... Read More


Swati Patra

Systems Advisor, IBM

I have 11 years of experience and work with IBM. My domain is Travel, Hospitality, and Banking - sectors that process lots of data. The way the projects were set up and the mentors' explanation was... Read More

What will you learn

Understanding the project and how to use AWS EC2 Instance
Understanding the basics of Containers, sentiment analysis, and their application
Visualizing the complete Architecture of the system
Introduction to Docker
Usage of docker-compose and starting all tools
Exploring dataset and bucketizing dataset for labelling
Training the model and saving it
Installing NiFi and using it for data ingestion
Installing Kafka and using it for creating topics
Publishing tweets using NiFi
Integration of NiFi and Kafka
Installing Spark and using it for data processing
Integration of Kafka and Spark
Extracting schema from the stream of tweets
Reading data from Kafka
Analyzing sentiments in tweets in Spark
Integration of Spark and MongoDB
Continuously loading data in MongoDB for aggregated results
Integrating MongoDB and Plotly and Dash
Displaying live stream results using Python Plotly and Dash
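Several of the steps above revolve around publishing tweets to a Kafka topic (from NiFi) and consuming them as a stream (in Spark). As a toy stand-in for that pattern, in pure Python with all names illustrative:

```python
from collections import defaultdict, deque

# Toy stand-in for the Kafka publish/consume pattern used in this project:
# NiFi publishes tweets to a topic; Spark later consumes them as a stream.
# In the real pipeline these are Kafka topics, not in-memory queues.
topics = defaultdict(deque)

def publish(topic, message):
    """Append a message to the end of a named topic."""
    topics[topic].append(message)

def consume(topic):
    """Drain messages from a topic in publication order."""
    while topics[topic]:
        yield topics[topic].popleft()

publish("tweets", "first tweet")
publish("tweets", "second tweet")
received = list(consume("tweets"))
```

The ordering guarantee shown here (messages consumed in the order they were published) is what Kafka provides per partition.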

Project Description

What is Twitter Sentiment?

Twitter sentiment analysis refers to analyzing the sentiment expressed in tweets generated by users on the social media platform Twitter. In most projects, Twitter sentiment is analyzed by parsing tweet text. Analyzing user sentiment on Twitter is valuable to companies whose products depend on social media trends, user opinions, and the future outlook of the online community.


Data Pipeline:

A data pipeline is a system for moving data from one system to another. The data may or may not be transformed, and it may be processed in real time (streaming) rather than in batches. The pipeline covers everything from extracting or capturing data with various tools, storing the raw data, cleaning and validating it, and transforming it into a query-worthy format, to visualizing KPIs and orchestrating all of the above.
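The extract-transform-load flow described above can be sketched as a minimal chain of stages in plain Python. All names here are illustrative; the real project wires these stages together with NiFi, Kafka, Spark, and MongoDB instead of plain functions:

```python
# Minimal extract -> transform -> load sketch of a data pipeline.
# Function and variable names are illustrative, not from the project.

def extract(raw_records):
    """Capture raw data from a source (e.g., an API or a log file)."""
    return [r.strip() for r in raw_records if r.strip()]

def transform(records):
    """Clean and reshape records into a query-worthy format."""
    return [{"text": r, "length": len(r)} for r in records]

def load(rows, sink):
    """Store the transformed rows in a target system."""
    sink.extend(rows)
    return sink

store = []
load(transform(extract(["hello world ", "", " streaming data"])), store)
```

Each stage only depends on the previous stage's output, which is what lets a real pipeline swap in a streaming source or a different sink without touching the other stages.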


What is the Agenda of the project?

The agenda of the project is real-time streaming of Twitter sentiments with a visualization web app. We first launch an EC2 instance on AWS and install Docker on it, along with tools such as Apache Spark, Apache NiFi, Apache Kafka, JupyterLab, MongoDB, Plotly, and Dash. Then a supervised classification model is created: the dataset is explored, bucketized, stratified-sampled, and split; features are extracted using tokenization, stop-word removal, TF-IDF, etc.; a pipeline is created; and the model is trained, evaluated with a binary classification evaluator, and saved. This is followed by extraction using Apache NiFi and Apache Kafka, then transformation and load using MongoDB, and finally visualization using Python Plotly and Dash with graph and table app callbacks.
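A docker-compose file for such a stack might look roughly like the fragment below. The image names, versions, and ports are assumptions for illustration; the project's actual compose file may differ:

```yaml
# Hypothetical docker-compose fragment; services, images, and ports are assumptions.
version: "3"
services:
  zookeeper:
    image: zookeeper:3.7
  kafka:
    image: bitnami/kafka:latest
    depends_on: [zookeeper]
    ports: ["9092:9092"]
  nifi:
    image: apache/nifi:latest
    ports: ["8080:8080"]
  mongodb:
    image: mongo:latest
    ports: ["27017:27017"]
  jupyter:
    image: jupyter/pyspark-notebook:latest
    ports: ["8888:8888"]
```

Running everything under one compose file is what lets a single `docker-compose up` start all the tools together on the EC2 instance.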


Usage of Dataset:

Here we are going to use Twitter sentiments data in the following ways:

- Extraction: During the extraction process, NiFi processors and connections are set up, followed by the creation of a Twitter app in a Twitter developer account. The data is streamed from the Twitter API using NiFi; topics are then created and the tweets are published to Apache Kafka.

- Transformation and Load: During the transformation and load process, a schema is extracted from the stream of tweets; the data is read from Apache Kafka as a streaming DataFrame; the Twitter data is extracted and cleansed; and the sentiments in the tweets are analyzed. The data is then written to MongoDB for visualization in Dash.
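As a toy illustration of the extract-and-cleanse step, independent of Spark: the `created_at` and `text` fields follow the Twitter API's payload convention, while the cleaning rules below are illustrative assumptions, not the project's exact logic:

```python
import json
import re

# Toy version of the extract-and-cleanse step: pull the fields of interest
# out of a raw tweet JSON payload and strip noise from the text.
# The specific cleaning rules here are illustrative assumptions.
def parse_tweet(raw_json):
    tweet = json.loads(raw_json)
    text = tweet.get("text", "")
    text = re.sub(r"http\S+", "", text)        # drop URLs
    text = re.sub(r"[@#]\w+", "", text)        # drop mentions/hashtags
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return {"created_at": tweet.get("created_at"), "text": text}

raw = '{"created_at": "Mon Jan 01 00:00:00 +0000 2024", "text": "Loving #Spark streaming! https://t.co/x"}'
clean = parse_tweet(raw)
```

In the real pipeline the same field selection is expressed as a Spark schema applied to the Kafka stream rather than per-record `json.loads` calls.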


Data Analysis:

  • From the given website, data is downloaded containing the review text, product rating, and review summary. The data is bucketized to label the features, followed by partitioning of the data into a homogeneous sample.

  • The dataset is split in appropriate ratios, followed by feature extraction using tokenization and TF-IDF, and classification using logistic regression.

  • A data pipeline is created to train the model and evaluate it with a binary classification evaluator, followed by saving the trained classification model.

  • The extraction process is done using NiFi and Kafka: data is streamed from the Twitter API using NiFi, then topics are created and tweets are published using Kafka.

  • In the transformation and load process, a schema is extracted from the Twitter streams and the data is read from Kafka as a streaming DataFrame.

  • The Twitter data is extracted and cleansed, followed by sentiment analysis of the tweets.

  • Finally, data is continuously loaded into MongoDB and visualized using scatter graph and table definitions in Python Plotly and Dash.
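The bucketizing step above (turning a numeric product rating into a sentiment label for training) can be sketched as follows. The thresholds are illustrative assumptions; the project may choose different cut-offs:

```python
# Toy bucketizer: map a 1-5 product rating to a binary sentiment label,
# as used to label training data. Thresholds are illustrative assumptions.
def bucketize(rating):
    if rating >= 4:
        return 1   # positive
    if rating <= 2:
        return 0   # negative
    return None    # neutral reviews dropped from training

reviews = [(5, "great"), (1, "awful"), (3, "meh"), (4, "good")]
labeled = [(bucketize(r), text) for r, text in reviews
           if bucketize(r) is not None]
```

Dropping the neutral middle band keeps the two classes well separated, which is why binary classification evaluation fits this labeling scheme.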

Similar Projects

In this Hadoop project, you will use a sample application log file from an application server to demonstrate a scaled-down server-log processing pipeline.

The goal of this Apache Kafka project is to process application log entries in real time, using Kafka as the streaming architecture in a microservices setting.

In this Spark Streaming project, we are going to build the backend of an IT job-ad website by streaming data from Twitter for analysis in Spark.

Curriculum For This Mini Project

Agenda and Architecture
Environment Setup Part 1
Environment Setup Part 2
Classification model creation
Dataset exploration and Bucketizing
Stratified sampling and Dataset splitting
Feature extraction and Pipeline creation
Model training and Evaluation
Saving model and Evaluation
Defining NiFi and its extraction process
Twitter app creation
Setting up NiFi
Defining Kafka and its extraction process
Topic and publishing messages
Schema extraction in transform and load
Reading data in transform and load
Extraction and Cleansing in transform and load
Sentiment analysis and Writing in transform and load
Introduction to Dash
Code explanation in visualization
Code walkthrough and running notebooks