Real-Time Log Processing in Kafka for Streaming Architecture

The goal of this Apache Kafka project is to process log entries from applications in real time, using Kafka as the backbone of a streaming architecture in a microservices setting.

Videos

Each project comes with 2-5 hours of micro-videos explaining the solution.

Code & Dataset

Get access to 50+ solved projects with iPython notebooks and datasets.

Project Experience

Add project experience to your LinkedIn/GitHub profiles.

What will you learn

Understanding the roadmap of the project
Kafka as a real-time application and data pipeline builder
Understanding service-oriented architecture and microservices
The role of log files in businesses
Restating the case for real-time processing of log files
Running through our application and real-time log collection using Flume Log4j appenders
Creating events in Flume
Ingesting data into Kafka by integrating Flume and Kafka
Choosing between Kafka and Flume appenders
Handling massive data in batch and stream processing using the Lambda architecture
Kafka Streams and Kafka Connect
Starting ZooKeeper before starting Kafka
Processing data on the Kafka platform
Steps to use Kafka for Streaming Architecture in Microservices
Parsing Kafka streams and transforming them into objects
Storing the final processed data (HBase, Cassandra, MongoDB)
Extending our architecture in a microservice world
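The parsing step listed above, turning raw log lines into typed objects, can be sketched in plain Java. This is our own minimal illustration of the idea, not the course's actual LogParserProcessor; the class and field names, and the Apache common log format assumed here, are placeholders:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch: parse Apache common-log-format lines into typed objects.
// Class and field names are illustrative assumptions, not the course's code.
class LogEntry {
    // e.g. 127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
    private static final Pattern LINE = Pattern.compile(
        "^(\\S+) \\S+ \\S+ \\[([^\\]]+)\\] \"(\\S+) (\\S+) [^\"]*\" (\\d{3}) (\\d+|-)$");

    final String host;
    final String timestamp;
    final String method;
    final String resource;
    final int status;

    private LogEntry(String host, String timestamp, String method,
                     String resource, int status) {
        this.host = host;
        this.timestamp = timestamp;
        this.method = method;
        this.resource = resource;
        this.status = status;
    }

    // Parse one raw log line; returns null for lines that do not match.
    static LogEntry parse(String line) {
        Matcher m = LINE.matcher(line);
        if (!m.matches()) return null;
        return new LogEntry(m.group(1), m.group(2), m.group(3), m.group(4),
                Integer.parseInt(m.group(5)));
    }

    public static void main(String[] args) {
        LogEntry e = parse("127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "
                + "\"GET /index.html HTTP/1.0\" 200 2326");
        // prints: 127.0.0.1 GET /index.html 200
        System.out.println(e.host + " " + e.method + " " + e.resource + " " + e.status);
    }
}
```

In the course itself this transformation is applied inside a Kafka Streams topology, so that each record flowing through a KStream is mapped from a raw string into such an object before business logic is applied.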

Project Description

In our previous Spark project, Real-Time Log Processing using Spark Streaming Architecture, we built on the topic of log processing using the speed layer of the Lambda architecture. We performed real-time processing of log entries from applications using Spark Streaming and stored the final data in an HBase table.

In this Kafka project, we will pursue the same objectives using a different set of real-time technologies. The idea is to compare the two approaches to real-time data processing, which is fast becoming mainstream across industries.

We will use Kafka as the streaming backbone in a microservices architecture.

The major highlight of this big data project is that students get to compare the Spark Streaming approach with the Kafka-only approach. This is a great session for developers and analysts as much as for architects.

Note: The Cloudera QuickStart VM does not ship with Kafka. We will work around that, so come prepared to install Kafka in the Cloudera QuickStart VM yourself.
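For reference, that workaround boils down to unpacking a Kafka release tarball inside the VM and using the scripts that ship with every standard Kafka distribution. This is a minimal sketch; the version number and the app-logs topic name are placeholder assumptions:

```shell
# Unpack a Kafka release inside the VM (version/path are assumptions)
tar -xzf kafka_2.11-0.10.2.0.tgz
cd kafka_2.11-0.10.2.0

# Start ZooKeeper, then the Kafka broker (each in its own terminal)
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties

# Smoke test: create a topic for application logs and list it back
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic app-logs
bin/kafka-topics.sh --list --zookeeper localhost:2181
```

The --zookeeper flag matches the older Kafka versions contemporary with the Cloudera QuickStart VM; newer releases use --bootstrap-server instead.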

Similar Projects

In this big data project, we will see how data ingestion and loading are done with the Kafka Connect API, while transformation is done with the Kafka Streams API.

Spark Project - Real-time monitoring of taxis in a city. The real-time data stream is simulated using Flume, and ingestion is done with Spark Streaming.

In this big data Spark project, we will do Twitter sentiment analysis using Spark Streaming on the incoming streaming data.

Curriculum For This Mini Project

Agenda for the Project (07m)
What is Kafka? (04m)
Microservices and Their Architecture (06m)
Why Do Businesses Need Logs? (03m)
Making a Case for Real-Time Log Processing (10m)
Run Through the Application Using Flume Log4j Appenders (10m)
Using Flume for Events (12m)
Getting Data into Kafka (08m)
Download and Install Kafka (20m)
Kafka and Flume Integration (12m)
Lambda Architecture (26m)
Recap of the Previous Session (05m)
Kafka Streams and Kafka Connect (01m)
Starting Kafka Agents - ZooKeeper (13m)
Kafka Streams (01m)
Kafka as a Processing Platform (06m)
Steps to Use Kafka for Streaming Architecture in Microservices (05m)
Kafka Streaming Application (01m)
LogParserProcessor (09m)
KStream (03m)
Applying Business Logic on KStream (11m)
Parsing the Stream and Transforming into Objects (09m)
Processed Logs (12m)
Resource Counter (05m)
Storing the Data into the Destination - HBase, Cassandra, MongoDB (00m)
Using Kafka Connect (05m)
Example on How to Use Kafka Connect (43m)
Discussion on Using Kafka for Microservices (01m)
Resource Counter Process (07m)