Implementing Slowly Changing Dimensions in a Data Warehouse using Hive and Spark

Hive Project - Understand the various types of SCDs and implement these slowly changing dimensions in Hadoop Hive and Spark.


Each project comes with 2-5 hours of micro-videos explaining the solution.

Code & Dataset

Get access to 50+ solved projects with iPython notebooks and datasets.

Project Experience

Add project experience to your Linkedin/Github profiles.

Customer Love


Shailesh Kurdekar

Solutions Architect at Capital One

I have worked for more than 15 years in Java and J2EE and have recently developed an interest in Big Data technologies and Machine learning due to a big need at my workspace. I was referred here by a...

Mohamed Yusef Ahmed

Software Developer at Taske

Recently I became interested in Hadoop as I think it's a great platform for storing and analyzing large structured and unstructured data sets. The experts did a great job not only explaining the...

What will you learn

Getting the overview of the project
Understanding data warehousing using Hive
What is a slowly changing dimension (SCD)
Types of slowly changing dimensions
What Parquet and ORC are: similarities, differences, and their uses
Downloading the AdventureWorks Dataset
Transferring the data to Hive using Sqoop
Denormalizing the Data for data analysis
Saving data as Parquet and running Sqoop jobs
Viewing the tables created in Hive using Hue
Understanding the Changing Dimensions in Customers Demographics
What ELT and ETL are: similarities, differences, and their uses
Data Lake as a Storage Repository for saving structured, semi-structured, and unstructured data
Creating customer tables with SCD Type 2
Transformations for SCD Type 1 on the credit card table
Tuning and Configuring Hive for SCD
Implementing SCD 2 & 3 in Hive and Spark

Project Description

One of the broadest uses of Hadoop today is building a data warehousing platform on top of a data lake. And in building a data warehouse, the traditions left to us by Kimball and Inmon are still very much in play.

While not every one of the legacy rules should be implemented as-is on a big data platform, the issue of slowly changing dimensions is still a front-burner concern.

A slowly changing dimension is a warehouse dimension that is said to rarely change. However, when it does change, there should be a systematic approach to capturing that change. Examples of SCDs are customer and product information.
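For instance, an SCD Type 2 change is typically captured by closing out the current row and inserting a new one, so the full history survives. Below is a minimal pure-Python sketch of that merge logic, not the project's Hive implementation; the column names (`customer_id`, `is_current`, `start_date`, `end_date`) are illustrative assumptions:

```python
from datetime import date

def scd2_upsert(dimension, key, new_attrs, today):
    """Apply an SCD Type 2 change: expire the current row for `key`
    and append a new current row carrying the updated attributes."""
    for row in dimension:
        if row["customer_id"] == key and row["is_current"]:
            if row["attrs"] == new_attrs:
                return dimension  # nothing changed, keep the row as-is
            # close out the old version instead of overwriting it
            row["is_current"] = False
            row["end_date"] = today
    dimension.append({
        "customer_id": key,
        "attrs": new_attrs,
        "start_date": today,
        "end_date": None,
        "is_current": True,
    })
    return dimension

# A customer moves city: the old row is preserved with an end date,
# and a new current row records the change.
dim = [{"customer_id": 1, "attrs": {"city": "Lagos"},
        "start_date": date(2020, 1, 1), "end_date": None, "is_current": True}]
scd2_upsert(dim, 1, {"city": "Abuja"}, date(2021, 6, 1))
```

In Hive the same effect is usually achieved with a union of "expired" and "new" rows rewritten into the dimension table, which is what the project walks through.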

In this Hive project, we will look at the various types of SCDs and learn to implement them in Hive and Spark.

Similar Projects

In this big data project, we will be performing an OLAP cube design using AdventureWorks database. The deliverable for this session will be to design a cube, build and implement it using Kylin, query the cube and even connect familiar tools (like Excel) with our new cube.

In this project, we are going to talk about insurance forecasting using regression techniques.

In this project, we will walk through the various classes of NoSQL databases and try to establish where each is the best fit.

Curriculum For This Mini Project

Project Overview
What is Data Warehousing?
Difference between Parquet and ORC
What is a Slowly Changing Dimension?
Working with AdventureWorks Dataset to Understand SCD
Copying Data to Hive Using Sqoop
Denormalize Data
Example to understand SCD
Running the Sqoop Job
Hive Querying to View the Data using Hue
Understanding the Changing Dimensions in Customer Demographics
Understanding Different Types of SCDs
Discussion on ELT vs ETL
Data Warehouse vs Data Lake
Data Lakes from a Data Architecture Perspective
Create Customer Table with SCD-Type 2
Create Customer Demo Table SCD-Type 4 and CreditCard Table with SCD Type 1
Transformations for SCD Type 1 on Credit Card Table
Hive Configurations for SCD
Transformations for SCD Type 1 Continued
Transformations for SCD Type 4 with example
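As a quick reference for the SCD types covered above: Type 1 simply overwrites the attribute and keeps no history, while Type 3 keeps one level of history in a dedicated "previous value" column. A minimal sketch under those definitions (the `card_type` field and `previous_` prefix are illustrative, not taken from the project dataset):

```python
def scd1_update(row, column, new_value):
    """SCD Type 1: overwrite the attribute in place; no history is kept."""
    row[column] = new_value
    return row

def scd3_update(row, column, new_value):
    """SCD Type 3: stash the old value in a 'previous_' column, then overwrite."""
    row["previous_" + column] = row[column]
    row[column] = new_value
    return row

card = {"card_type": "Standard"}
scd1_update(dict(card), "card_type", "Gold")  # -> {'card_type': 'Gold'}
scd3_update(dict(card), "card_type", "Gold")  # also keeps 'previous_card_type': 'Standard'
```

Type 2, by contrast, adds a whole new row per change, which is why it needs the extra effective-date and current-flag columns discussed in the customer table sessions.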