Big Data Hadoop Project-Visualize Daily Wikipedia Trends

In this big data project, we'll work with Apache Airflow and write a scheduled workflow that downloads data from the Wikipedia archives, uploads it to S3, processes it in Hive and finally analyzes it in Zeppelin notebooks.

Videos

Each project comes with 2-5 hours of micro-videos explaining the solution.

Code & Dataset

Get access to 50+ solved projects with iPython notebooks and datasets.

Project Experience

Add project experience to your LinkedIn/GitHub profiles.

What will you learn

Creating your own virtual environment in Python
Installing the Dependencies in the environment
Understanding Workflows and their uses
Installing Apache Airflow, the Airflow web server and the Airflow scheduler
Creating Tasks in Airflow and setting up downstream dependencies (see the minimal sketch after this list)
Working with Qubole and S3
Creating page table in Hive using SQL dumps
Registering the Database and Extracting the desired Data
Understanding how to design schemas and perform inner joins, double joins, etc.
Visualizing and executing paths in Airflow
Fetching Incoming Data and putting it on S3
Filtering Data Via Hive and Hadoop
Mapping the filtered data with the SQL data
Creating your own Airflow scheduler on Qubole for automatic task completion
Final Charting via Zeppelin Notebooks
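
To make the task and dependency ideas in the list above concrete, here is a minimal, hedged sketch of an Airflow DAG with two placeholder tasks. The DAG id, task ids and callables are hypothetical (they are not taken from this project's solution), and import paths differ between Airflow versions; this assumes the Airflow 1.x-style layout.

# Minimal sketch (hypothetical names): two tasks wired so that the second
# only runs after the first succeeds. Import paths assume Airflow 1.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def download_pagecounts():
    print("download the day's pagecounts file here")


def upload_to_s3():
    print("copy the downloaded file to S3 here")


dag = DAG(
    dag_id="wiki_trends_minimal",
    start_date=datetime(2016, 9, 1),
    schedule_interval=None,  # trigger manually while experimenting
)

download = PythonOperator(
    task_id="download_pagecounts",
    python_callable=download_pagecounts,
    dag=dag,
)

upload = PythonOperator(
    task_id="upload_to_s3",
    python_callable=upload_to_s3,
    dag=dag,
)

# "upload" is downstream of "download"; equivalent to download.set_downstream(upload)
download >> upload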

Project Description

In this big data project, we build a live workflow for a real project using Apache Airflow, a new-age workflow management platform. We will go through the use cases of workflows, the different tools available to manage them, important workflow features such as the CLI and UI, and how Airflow is different. We will install Airflow and run some simple workflows.

In this big data Hadoop project, we will download the raw page counts data from the Wikipedia archive and process it via Hadoop. We will then map that processed data to the raw SQL data to identify the most viewed pages of a given day. Finally, we will visualize the processed data via Zeppelin notebooks to identify the daily trends. We will use Qubole to power up Hadoop and the notebooks.
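
As an illustration of the Hive side of this pipeline, here is a hedged sketch of the kind of HiveQL the filtering and mapping steps might run, kept as Python strings so an Airflow task can submit them later. The bucket, paths, table and column names are illustrative assumptions, and the column layout assumes the classic space-separated pagecounts-raw format (project, page title, view count, bytes transferred).

# Hypothetical HiveQL for the filtering / mapping steps, held as Python
# strings so a downstream Airflow task can submit them to Hive on Qubole.
# Bucket, table and column names are illustrative only.

PAGECOUNTS_DDL = """
CREATE EXTERNAL TABLE IF NOT EXISTS pagecounts (
    project     STRING,
    page_title  STRING,
    views       BIGINT,
    bytes_sent  BIGINT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
LOCATION 's3://my-bucket/wiki/pagecounts/';
"""

TOP_PAGES_QUERY = """
-- Map the filtered page counts to the page table loaded from the
-- Wikipedia SQL dump and rank pages by total views for the day.
SELECT p.page_title,
       SUM(c.views) AS daily_views
FROM   pagecounts c
JOIN   page p
  ON   c.page_title = p.page_title
WHERE  c.project = 'en'
GROUP  BY p.page_title
ORDER  BY daily_views DESC
LIMIT  100;
"""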

All steps, like downloading the data, copying it to S3, creating tables and processing them via Hadoop, will be tasks in Airflow, and we will learn how to craft a scheduled workflow in Airflow.
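
Putting the scheduling together, here is a hedged sketch of how those steps might be chained as daily Airflow tasks. Everything here is an assumption for illustration: the URL, bucket and file paths are placeholders, the download and copy commands assume wget and the AWS CLI are available, and the two Hive tasks simply echo a message where a query like PAGECOUNTS_DDL or TOP_PAGES_QUERY from the earlier sketch would be submitted to Hive on Qubole.

# Illustrative daily workflow: download the archive, copy it to S3, then
# run the Hive steps. Names, paths and commands are placeholders; import
# paths assume Airflow 1.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

dag = DAG(
    dag_id="wikipedia_daily_trends",
    start_date=datetime(2016, 9, 1),
    schedule_interval="@daily",  # one run per day of pagecounts data
)

# {{ ds_nodash }} is the run's execution date (YYYYMMDD), so each run
# works on that day's archive file.
download = BashOperator(
    task_id="download_pagecounts",
    bash_command=(
        "wget -q -O /tmp/pagecounts-{{ ds_nodash }}.gz "
        "https://dumps.wikimedia.org/other/pagecounts-raw/"  # placeholder: day-specific path omitted
    ),
    dag=dag,
)

copy_to_s3 = BashOperator(
    task_id="copy_to_s3",
    bash_command=(
        "aws s3 cp /tmp/pagecounts-{{ ds_nodash }}.gz "
        "s3://my-bucket/wiki/pagecounts/{{ ds }}/"
    ),
    dag=dag,
)

create_table = BashOperator(
    task_id="create_pagecounts_table",
    bash_command="echo 'submit PAGECOUNTS_DDL to Hive on Qubole here'",
    dag=dag,
)

top_pages = BashOperator(
    task_id="compute_top_pages",
    bash_command="echo 'submit TOP_PAGES_QUERY to Hive on Qubole here'",
    dag=dag,
)

# Each step only runs once the previous one has succeeded.
download >> copy_to_s3 >> create_table >> top_pages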

Similar Projects

In this big data project, we will talk about Apache Zeppelin. We will write code, write notes, build charts and share it all in a single data analytics environment using Hive, Spark and Pig.

In this Hadoop Hive project, you will work on Hive and HQL to analyze movie ratings using the MovieLens dataset for better movie recommendations.

In this big data project, we will continue from a previous Hive project, "Data engineering on Yelp Datasets using Hadoop tools", and do the entire data processing using Spark.
