Airline Dataset Analysis using Hadoop, Hive, Pig and Impala

Hadoop Project: Perform basic big data analysis on the airline dataset using big data tools such as Pig, Hive and Impala.

Videos

Each project comes with 2-5 hours of micro-videos explaining the solution.

Code & Dataset

Get access to 50+ solved projects with iPython notebooks and datasets.

Project Experience

Add project experience to your LinkedIn/GitHub profiles.

What will you learn

Introduction to Data Infrastructure
Methods for ingestion of data (Backend Service, Data Warehouse)
Tackling the small file problem
Roadmap of the project and business problem
Hive JDBC and Impala ODBC drivers
Extracting and loading the data in Cloudera VMware
Data preprocessing with Pig
Writing Queries in Hue Hive for creating tables
Hive vs. MPP database systems (Hive vs. Impala/Drill)
Basic EDA using Hive
Hive/Impala partitioning and clustering
Writing data from Pig to Hive directly using HCatLoader
Data compression, tuning and query optimization using Parquet (see the sketch after this list)
Using database views to represent data
Clustering, Sampling and Bucketed Tables
Building time series data model
Impala COMPUTE STATS and file formats
Visualizing data using Microsoft Excel via ODBC
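
As a taste of the partitioning and Parquet items above, here is a minimal HiveQL sketch. It is only an illustration under assumed names: flights_raw, flights_parquet and their columns are placeholders, not the exact schema built in the project.

    -- Sketch only: store cleaned flight records as a Parquet table
    -- partitioned by year and month (assumed table/column names).
    CREATE TABLE IF NOT EXISTS flights_parquet (
      flight_date STRING,
      carrier     STRING,
      origin      STRING,
      dest        STRING,
      dep_delay   INT,
      arr_delay   INT
    )
    PARTITIONED BY (year INT, month INT)
    STORED AS PARQUET;

    -- Dynamic partitioning routes each row to the right partition automatically.
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;

    INSERT OVERWRITE TABLE flights_parquet PARTITION (year, month)
    SELECT flight_date, carrier, origin, dest, dep_delay, arr_delay, year, month
    FROM flights_raw;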

Project Description

Before data on any platform becomes an asset to an organization, it has to pass through a processing stage that ensures its quality and availability. Afterward, that data has to be made available to its users, both human and system users. The availability of quality data in an organization is what guarantees the value that data science (in general) will bring to that organization.

We are using the airline on-time performance dataset (flights data CSV) to demonstrate these principles and techniques in this Hadoop project, and we will proceed to answer the following questions (a sample query follows the list):

  • When is the best time of day/day of week/time of year to fly to minimize delays?
  • Do older planes suffer more delays?
  • How does the number of people flying between different locations change over time?
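
The first question, for instance, comes down to a simple aggregation. This is a hedged sketch that assumes a Hive table named flights with the standard on-time performance columns DayOfWeek and ArrDelay; the project's own table and column names may differ.

    -- Average arrival delay per day of week; the smallest value suggests
    -- the best day to fly (assumed table and column names).
    SELECT DayOfWeek,
           AVG(ArrDelay) AS avg_arrival_delay
    FROM   flights
    WHERE  ArrDelay IS NOT NULL
    GROUP BY DayOfWeek
    ORDER BY avg_arrival_delay ASC;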

We will also transform the data access model into a time series and demonstrate how clients can access data in our big data infrastructure using a simple tool such as a Microsoft Excel spreadsheet.
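
One way to picture that last step, as a sketch under assumed names (flights_parquet and monthly_flight_counts are illustrative, not the project's exact objects): expose a monthly time series through a database view, which Excel can then read over the Impala ODBC or Hive JDBC driver.

    -- Sketch: a monthly flight-count time series exposed as a view for BI tools.
    CREATE VIEW IF NOT EXISTS monthly_flight_counts AS
    SELECT year,
           month,
           origin,
           dest,
           COUNT(*) AS num_flights
    FROM   flights_parquet
    GROUP BY year, month, origin, dest;

Pointing Excel at a view like this through the ODBC driver gives analysts a familiar, spreadsheet-friendly window into the warehouse without writing any HiveQL themselves.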

Similar Projects

In this Hive project, you will design a data warehouse for e-commerce environments.

In this big data project, we will continue from a previous Hive project, "Data engineering on Yelp Datasets using Hadoop tools", and do the entire data processing using Spark.

In this Hadoop project, we are going to continue the series on data engineering by discussing and implementing various ways to solve the Hadoop small file problem.

Curriculum For This Mini Project

Introduction to Data Infrastructure
07m
Methods to ingest data in a data infrastructure
06m
Messaging Layer Example
11m
Small File Problem
03m
Business problem overview and topics covered
02m
Hive JDBC and Impala ODBC drivers
02m
Data Pre-processing
06m
Data Extraction and Loading
03m
Setting up the Data Warehouse
13m
Creating Data Table
02m
Impala Architecture
14m
Working with Hive versus Impala & File Formats
08m
Hive query for Airline data analysis + Parquet - 1
21m
Hive query for Airline data analysis + Parquet - 2
05m
Hive query for Airline data analysis + Parquet - 3
16m
Read and write data to tables
16m
Parquet data compression
06m
Calculate average flight delay
10m
Partitioning Basics
02m
Where to do the data processing - Hive or Impala?
10m
Partitioning Calculations
15m
Dynamic Partitioning
04m
Clustering, Sampling, Bucketed Tables
13m
Hive Compression and Execution Engine
15m
Impala COMPUTE STATS and File Formats
13m
Using database views to represent data
15m
Using Excel or QlikView for Visualization
31m