Implementing Slowly Changing Dimensions in a Data Warehouse using Hive and Spark

Hive Project - Understand the various types of SCDs and implement these slowly changing dimensions in Hadoop Hive and Spark.



What will you learn

Getting the overview of the project
Understanding data warehousing using Hive
What is a slowly changing dimension (SCD)
Types of slowly changing dimensions
Parquet and ORC: similarities, differences, and their uses
Downloading the AdventureWorks dataset
Transferring the data to Hive using Sqoop
Denormalizing the data for analysis
Saving the data as Parquet and running Sqoop jobs
Viewing the tables created in Hive using Hue
Understanding the Changing Dimensions in Customers Demographics
ELT and ETL: similarities, differences, and their uses
Data Lake as a Storage Repository for saving structured, semi-structured, and unstructured data
Creating customer tables with SCD Type 2
Transformations for SCD Type 1 on the credit card table
Tuning and Configuring Hive for SCD
Implementing SCD Types 2 and 3 in Hive and Spark
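To make the SCD Type 2 step above concrete, here is a minimal sketch of the row-versioning idea in plain Python; the table layout, column names (`start_date`, `end_date`, `is_current`), and the tracked `city` attribute are illustrative assumptions, not taken from the project itself:

```python
from datetime import date

# SCD Type 2 keeps full history: a change closes the current row
# (sets end_date and is_current=False) and appends a new current row.
def scd_type2_upsert(dim_rows, incoming, key="customer_id", tracked=("city",)):
    today = date.today().isoformat()
    for new in incoming:
        current = next((r for r in dim_rows
                        if r[key] == new[key] and r["is_current"]), None)
        if current is None:
            # brand-new customer: insert as the current version
            dim_rows.append({**new, "start_date": today,
                             "end_date": None, "is_current": True})
        elif any(current[c] != new[c] for c in tracked):
            # a tracked attribute changed: expire old row, add new version
            current["end_date"] = today
            current["is_current"] = False
            dim_rows.append({**new, "start_date": today,
                             "end_date": None, "is_current": True})
    return dim_rows

dim = [{"customer_id": 1, "city": "Seattle", "start_date": "2020-01-01",
        "end_date": None, "is_current": True}]
scd_type2_upsert(dim, [{"customer_id": 1, "city": "Portland"}])
# dim now holds two rows for customer 1: the expired Seattle row
# and a new current Portland row.
```

In Hive this same pattern is typically expressed as a rewrite of the dimension table (or, on ACID tables, a MERGE), but the expire-and-insert logic is the same.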

Project Description

One of the broadest uses of Hadoop today is building a data warehousing platform on top of a data lake. And in building a data warehouse, the traditions left to us by Kimball and Inmon are still very much in play.

While not every legacy rule should be implemented as-is on a big data platform, the issue of slowly changing dimensions remains a front-burner concern.

A slowly changing dimension is a warehouse dimension that rarely changes. However, when it does change, there should be a systematic approach to capturing that change. Typical examples of SCDs are customer and product information.
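The simplest systematic approach, SCD Type 1, captures a change by overwriting the old value in place, keeping no history. A minimal Python sketch of the idea (the key and column names here are illustrative assumptions):

```python
# SCD Type 1: overwrite in place; no history of old values is kept.
def scd_type1_update(dim_rows, incoming, key="credit_card_id"):
    index = {row[key]: row for row in dim_rows}
    for new in incoming:
        if new[key] in index:
            index[new[key]].update(new)   # overwrite changed attributes
        else:
            dim_rows.append(dict(new))    # unseen key: plain insert
    return dim_rows

cards = [{"credit_card_id": 7, "card_type": "Vista", "exp_year": 2024}]
scd_type1_update(cards, [{"credit_card_id": 7, "exp_year": 2026}])
# cards[0]["exp_year"] is now 2026; the old value 2024 is lost.
```

Type 2, by contrast, preserves history by versioning rows; choosing between them is the central design decision this project works through.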

In this Hive project, we will look at the various types of SCDs and learn to implement them in Hive and Spark.

Similar Projects

In this project, we will take a look at different SQL-on-Hadoop engines: Hive, Phoenix, Impala, and Presto.

In this spark project, we will continue building the data warehouse from the previous project Yelp Data Processing Using Spark And Hive Part 1 and will do further data processing to develop diverse data products.

Hive Project - Learn to write a Hive program to find the first unique URL, given 'n' URLs.

Curriculum For This Mini Project

Project Overview
What is Data Warehousing?
Difference between Parquet and ORC
What is a Slowly Changing Dimension?
Working with AdventureWorks Dataset to Understand SCD
Copying Data to Hive using Sqoop
Denormalize Data
Example to understand SCD
Running the Sqoop Job
Hive Querying to View the Data using Hue
Understanding the Changing Dimensions in Customer Demographics
Understanding Different Types of SCDs
Discussion on ELT vs ETL
Data Warehouse vs Data Lake
Data Lakes from a Data Architecture Perspective
Create Customer Table with SCD Type 2
Create Customer Demo Table with SCD Type 4 and Credit Card Table with SCD Type 1
Transformations for SCD Type 1 on Credit Card Table
Hive Configurations for SCD
Transformations for SCD Type 1 Continued
Transformations for SCD Type 4 with example
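The SCD Type 4 transformations in the last lesson rest on one idea: the dimension table holds only the current version of each row, while superseded versions move to a separate history table. A minimal Python sketch, with illustrative table and column names rather than the project's actual schema:

```python
# SCD Type 4: keep only the latest version in the dimension table,
# and push every superseded version into a separate history table.
def scd_type4_update(current, history, incoming, key="customer_id"):
    index = {row[key]: row for row in current}
    for new in incoming:
        old = index.get(new[key])
        if old is not None and old != {**old, **new}:
            history.append(dict(old))     # archive the outgoing version
            old.update(new)               # dimension keeps current state only
        elif old is None:
            current.append(dict(new))
    return current, history

current = [{"customer_id": 1, "education": "High School"}]
history = []
scd_type4_update(current, history,
                 [{"customer_id": 1, "education": "Bachelors"}])
# current: one row showing "Bachelors"; history: the archived
# "High School" version.
```

Splitting current state from history this way keeps the main dimension table small and fast to join, at the cost of a second table to maintain, which is why Type 4 is often used for rapidly changing attributes such as customer demographics.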