Real-Time Log Processing using Spark Streaming Architecture

In this Spark project, we bring processing to the speed layer of the lambda architecture, which opens up new capabilities: monitoring application performance in real time, measuring user experience in real time, and raising real-time alerts in case of a security breach.

What will you learn

  • Making a case for real-time processing of log files
  • Getting logs in real time using Flume Log4j appenders
  • Making a case for Kafka for log aggregation
  • Storing log events as time-series datasets in HBase
  • Integrating Hive and HBase for data retrieval using queries
  • Troubleshooting
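For context on the second item, a typical way to ship application logs to Flume is through the Flume Log4j appender. The fragment below is an illustrative `log4j.properties` sketch; the hostname and port are assumptions, not values from the project:

```properties
# Route application logs to a local Flume agent via its Log4j appender
# (hostname/port are placeholders; the Flume agent must expose an Avro source there)
log4j.rootLogger=INFO, flume

log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=localhost
log4j.appender.flume.Port=41414
# Avoid blocking the application if the Flume agent is down
log4j.appender.flume.UnsafeMode=true
```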

Project Description

A while back, we did web server access log processing using Spark and Hive. However, that was batch processing, which in the lambda architecture only lets us operate in the batch and serving layers.
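The batch work parses web server access logs line by line. As a reminder of what that parsing involves, here is a minimal sketch in plain Python (field names are my own; the course itself does this inside Spark) for the Common Log Format used by datasets such as the NASA access logs:

```python
import re

# Common Log Format: host ident authuser [date] "request" status bytes
CLF_PATTERN = re.compile(
    r'^(?P<host>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+|-)$'
)

def parse_clf_line(line):
    """Parse one Common Log Format line into a dict, or return None if malformed."""
    m = CLF_PATTERN.match(line)
    if m is None:
        return None
    rec = m.groupdict()
    rec["status"] = int(rec["status"])
    # A '-' size means no bytes were sent
    rec["size"] = 0 if rec["size"] == "-" else int(rec["size"])
    return rec

# A sample line in the NASA dataset's format:
line = ('199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] '
        '"GET /history/apollo/ HTTP/1.0" 200 6245')
```

The same function can be applied per record in a Spark `map`, in batch or streaming mode alike.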

In this big data project, we go one step further by bringing processing to the speed layer of the lambda architecture, which opens up more capabilities. One such capability is the ability to monitor application performance in real time, measure user experience in real time, or alert in real time in case of a security breach.
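To illustrate the kind of logic the speed layer runs, the sketch below (plain Python; the threshold and status codes are illustrative assumptions, not the project's rules) flags a micro-batch in which authentication failures dominate, the sort of burst that could signal a break-in attempt:

```python
from collections import Counter

def security_alert(status_codes, threshold=0.3):
    """Return True if the fraction of auth-related errors (401/403) in one
    micro-batch of HTTP status codes exceeds `threshold` (illustrative values)."""
    if not status_codes:
        return False
    counts = Counter(status_codes)
    suspicious = counts[401] + counts[403]
    return suspicious / len(status_codes) > threshold

# A normal batch vs. one dominated by failed authentication attempts:
normal_batch = [200, 200, 304, 200, 404, 200]
attack_batch = [401, 403, 401, 200, 401, 403]
```

In the streaming job, a check like this would run once per micro-batch over the parsed status codes.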

These capabilities will be explored using Spark Streaming in a streaming architecture.

Note: the Cloudera QuickStart VM does not ship with Kafka. We will still make the case for using Kafka for log aggregation, but our implementation will not use it; instead, we will integrate the Flume log agent directly with Spark Streaming in this big data project.
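The Flume-to-Spark wiring looks roughly like the sketch below. It is a sketch only: it assumes a running Spark cluster and a Flume agent whose Avro sink points at the given host and port, and it relies on the `FlumeUtils` connector that shipped with Spark 1.x/2.x (removed in Spark 3.x). Host, port, and batch interval are placeholders:

```python
# Sketch: push-based Flume integration with Spark Streaming (Spark 1.x/2.x only)
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

sc = SparkContext(appName="LogSpeedLayer")
ssc = StreamingContext(sc, batchDuration=10)  # 10-second micro-batches

# Each Flume event arrives as (headers, body); the body is the raw log line
events = FlumeUtils.createStream(ssc, "localhost", 41414)
lines = events.map(lambda event: event[1])
lines.count().pprint()  # e.g. number of log events per window

ssc.start()
ssc.awaitTermination()
```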

Curriculum For This Mini Project

  Web Server Log Processing in Batch Mode and the Concept of Rollover
  Downloading NASA Dataset
  Understanding the Contents of the Log File - Common and Combined Log Formats
  Making a case for real-time processing of log files
  Getting logs in real time using Flume Log4j Appenders
  Making a case for Kafka for Log Aggregation
  Starting Flume Agent for Log Processing in Real-Time
  Analyse Data before Storing to HBase - Cracking the Design
  Discussion on the topics for next session
  Recap of previous session
  Difference between Cassandra and HBase
  Agenda for the Session
  Why HBase?
  HBase Design
  How to store EDGAR log file dataset?
  Understanding the Streaming Application Code
  Hive and HBase Integration
  Architectural Extensions
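On the "HBase Design" topic above: a common row-key scheme for time-series data in HBase is to prefix the key with the source and append a reversed timestamp, so a prefix scan on one host returns its newest events first. The sketch below illustrates that general pattern; the key layout is an assumption, not the course's exact design:

```python
MAX_LONG = 2**63 - 1  # Java Long.MAX_VALUE, the usual base for timestamp reversal

def make_row_key(host, ts_millis):
    """Build an HBase row key '<host>|<reversed-ts>' so that, under HBase's
    lexicographic key ordering, newer events sort before older ones
    (illustrative scheme, zero-padded to a fixed width)."""
    reversed_ts = MAX_LONG - ts_millis
    return "%s|%019d" % (host, reversed_ts)

older = make_row_key("in24.inetnebr.com", 1000)
newer = make_row_key("in24.inetnebr.com", 2000)
```

With keys shaped like this, a Hive table mapped onto the HBase table (the "Hive and HBase Integration" session) can query recent events for a host with a simple key-prefix predicate.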