Concept of the lambda architecture and batch processing for web server log processing
Downloading the necessary Dataset
Understanding the dataset and its variables
Integrating the complete system for Real-time Log tracking
Fetching real-time log files using Flume Log4j appenders
Using Kafka for Log Aggregation
Real-Time Log Processing using Flume and integrating it with Kafka
Performing data analysis before storing the data in HBase in time order
Understanding Cassandra and HBase: differences, similarities, and their use in different scenarios
Understanding the components of a database and related terminologies
Understanding an HBase design
Variables of the EDGAR data files and their descriptions
Storing the EDGAR log file dataset
Selecting the row key by combining different variables for saving in the database
Understanding the Streaming Application Code
Integrating Hive and HBase for data retrieval using query
Applying the same architecture in different sectors
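To make the row-key idea from the outline above concrete, here is a minimal sketch in Python. The field names, separator, and timestamp ceiling are assumptions for illustration, not the project's actual schema: combining a client IP with a reversed timestamp keeps rows for the same host sorted newest-first under HBase's lexicographic key ordering.

```python
from datetime import datetime, timezone

# Illustrative ceiling for reversing timestamps; any value larger than
# every expected epoch-millisecond timestamp works.
MAX_TS = 10**13

def make_row_key(ip, date_str, time_str):
    """Combine log variables into an HBase-style row key.

    Subtracting the epoch milliseconds from MAX_TS reverses the sort
    order, so the most recent events appear first when HBase scans
    keys lexicographically.
    """
    dt = datetime.strptime(
        f"{date_str} {time_str}", "%Y-%m-%d %H:%M:%S"
    ).replace(tzinfo=timezone.utc)
    epoch_millis = int(dt.timestamp() * 1000)
    reversed_ts = MAX_TS - epoch_millis
    # Zero-pad so keys of equal IP compare correctly as strings.
    return f"{ip}|{reversed_ts:013d}"

key = make_row_key("101.81.133.7", "2017-06-30", "00:00:00")
```

Because the reversed timestamp is zero-padded to a fixed width, a later event always produces a lexicographically smaller key for the same IP, which is what gives the newest-first scan order.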
A while back, we did web server access log processing using Spark and Hive. However, that was batch processing, and within the lambda architecture it lets us operate only in the batch and serving layers.
In this big data project, we go one step further by bringing processing to the speed layer of the lambda architecture, which opens up more capabilities. One such capability is the ability to monitor application performance in real time, measure real-time user experience with applications, or raise real-time alerts in case of a security breach.
These abilities and functionalities will be explored using Spark Streaming in a streaming architecture.
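As a rough illustration of what one micro-batch of such a Spark Streaming job might do, the plain-Python sketch below parses Common Log Format access-log lines and counts HTTP status codes. The regex, function name, and sample lines are assumptions for illustration, not the project's actual code; in the real job the same map/count step would run over each DStream micro-batch.

```python
import re
from collections import Counter

# Common Log Format: host, identd, user, [timestamp], "request",
# status, response size. Group names here are illustrative.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

def process_batch(lines):
    """Count HTTP status codes in one micro-batch of log lines,
    mirroring a map/reduceByKey step in a Spark Streaming job."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m:
            counts[m.group("status")] += 1
    return dict(counts)

# Hypothetical micro-batch of two access-log lines.
batch = [
    '101.81.133.7 - - [30/Jun/2017:00:00:00 +0000] '
    '"GET /index.html HTTP/1.1" 200 1043',
    '107.23.85.0 - - [30/Jun/2017:00:00:01 +0000] '
    '"GET /missing HTTP/1.1" 404 512',
]
```

Counting per batch like this is the building block for the real-time alerts mentioned above, e.g. flagging a spike in 404 or 401 responses as a possible security probe.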
Note: the Cloudera QuickStart VM does not ship with Kafka. In line with our objective, we will still make the case for using Kafka, but our implementation will not use it; instead, we will integrate the log agent directly with Spark Streaming in this big data project.
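A minimal Flume agent configuration for this log-agent-to-Spark-Streaming integration might look like the fragment below. The agent name, source/sink names, hosts, and ports are all assumptions for illustration: the Log4j appender pushes events into the Avro source, and the Avro sink forwards them to the host/port where the Spark Streaming Flume receiver listens (the push-based integration model).

```properties
# Illustrative Flume agent config; names and ports are assumptions.
agent1.sources  = log4j-source
agent1.channels = mem-channel
agent1.sinks    = spark-sink

# Avro source: the application's Flume Log4j appender sends here.
agent1.sources.log4j-source.type = avro
agent1.sources.log4j-source.bind = 0.0.0.0
agent1.sources.log4j-source.port = 41414
agent1.sources.log4j-source.channels = mem-channel

# In-memory channel buffering events between source and sink.
agent1.channels.mem-channel.type = memory
agent1.channels.mem-channel.capacity = 10000

# Avro sink: pushes events to the Spark Streaming Flume receiver.
agent1.sinks.spark-sink.type = avro
agent1.sinks.spark-sink.hostname = localhost
agent1.sinks.spark-sink.port = 9988
agent1.sinks.spark-sink.channel = mem-channel
```

If Kafka were available, the Avro sink would typically be replaced by a Kafka sink so that multiple consumers could share the aggregated log stream.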