Tough engineering choices with large datasets in Hive Part - 1

Explore efficient Hive usage in this Hadoop Hive project using various file formats such as JSON, CSV, ORC, and Avro, and compare their relative performance.

Videos

Each project comes with 2-5 hours of micro-videos explaining the solution.

Code & Dataset

Get access to 50+ solved projects with iPython notebooks and datasets.

Project Experience

Add project experience to your LinkedIn/GitHub profiles.

Customer Love


James Peebles

Data Analytics Leader, IQVIA

This is one of the best of investments you can make with regards to career progression and growth in technological knowledge. I was pointed in this direction by a mentor in the IT world who I highly...

Shailesh Kurdekar

Solutions Architect at Capital One

I have worked for more than 15 years in Java and J2EE and have recently developed an interest in Big Data technologies and Machine learning due to a big need at my workspace. I was referred here by a...

What will you learn

Understanding the roadmap of the project
Setting up the virtual environment in the Cloudera QuickStart VM (VMware)
Downloading the Airline On-Time Performance dataset
Understanding the use of Hive as a transformation layer
Various uses of Hive (partitioning, clustering, integration, etc.)
Creating a Star Schema for the Dataset
Creating a database and tables in HQL
Performing Statistical Data Analysis and Visualizing the data
How to use and interpret Hive's EXPLAIN command
File formats and their relative performance (Text, JSON, SequenceFile, Avro, ORC, and Parquet)
Comparing Apache Hive, Apache Pig, Apache Spark, and Hadoop MapReduce
Understanding Distributed Computing via MapReduce
Using Spark and Hive for transformations
Improving query performance on the dataset using partitioning
Using HCatalog to prevent information loss during partitioning
Improving query times, including sampling and map-side joins, using clustering (see the HiveQL sketch after this list)
Compression
Execution engines and performance
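
The topics above can be grounded with a small, purely illustrative HiveQL sketch. It assumes a hypothetical airline on-time star schema (the database, table, and column names such as airline_ontime, fact_flights, and dim_carrier are invented for illustration, not the project's actual scripts); it shows a dimension table, a fact table that is partitioned and clustered (bucketed), and an EXPLAIN over a typical join.

-- Hypothetical star-schema sketch for the airline on-time dataset.
CREATE DATABASE IF NOT EXISTS airline_ontime;
USE airline_ontime;

-- Dimension table for carriers.
CREATE TABLE IF NOT EXISTS dim_carrier (
  carrier_code STRING,
  carrier_name STRING
)
STORED AS ORC;

-- Fact table partitioned by year and clustered (bucketed) by carrier,
-- which helps with partition pruning, sampling, and join performance.
CREATE TABLE IF NOT EXISTS fact_flights (
  flight_date  STRING,
  carrier_code STRING,
  origin       STRING,
  dest         STRING,
  dep_delay    INT,
  arr_delay    INT
)
PARTITIONED BY (flight_year INT)
CLUSTERED BY (carrier_code) INTO 16 BUCKETS
STORED AS ORC;

-- Inspect the query plan for a typical star-schema join.
EXPLAIN
SELECT c.carrier_name, AVG(f.arr_delay) AS avg_arr_delay
FROM fact_flights f
JOIN dim_carrier c ON f.carrier_code = c.carrier_code
WHERE f.flight_year = 2008
GROUP BY c.carrier_name;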

Project Description

The use of Hive and the Hive metastore is so ubiquitous in big data engineering that efficient use of the tool is a factor in the success of many big data projects. Whether integrating with Spark or using Hive as an ETL tool, many big data projects fail or succeed as they grow in scale and complexity because of decisions made early in the lifecycle of the analytics project.

In this Hive project, we will explore how to use Hive efficiently. This big data project will follow an exploratory pattern rather than a project-building pattern: the goal of these sessions is to explore Hive in uncommon ways, working towards mastery.

We will be using different sample datasets for Hive in this series of real-time Hive projects, exploring Hadoop file formats such as text, CSV, JSON, ORC, Parquet, Avro, and SequenceFile. We will look at compression with different codecs, and examine the performance of each format when integrating with either Spark or Impala; a short illustrative sketch of staging the same data in several formats follows below. The idea of this Hadoop Hive project is to explore enough that we can make a reasonable argument about what to do, or not do, in any given scenario.
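
As a rough sketch of the file-format comparison, one way to proceed is to stage the same data in several formats and time identical queries against each copy. The snippet below is an assumption-laden illustration: the source table flights_text and the derived table names are hypothetical, and the exact formats and codecs used in the sessions may differ.

-- Stage the same (hypothetical) text-format table in several formats.
CREATE TABLE flights_orc     STORED AS ORC     AS SELECT * FROM flights_text;
CREATE TABLE flights_parquet STORED AS PARQUET AS SELECT * FROM flights_text;
CREATE TABLE flights_avro    STORED AS AVRO    AS SELECT * FROM flights_text;

-- Enable compressed output for subsequent jobs; codec choice is one of the
-- trade-offs the project compares.
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;

-- Note: ORC and Parquet usually control compression via table properties
-- instead, e.g. TBLPROPERTIES ('orc.compress'='SNAPPY').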

Similar Projects

In this hive project, you will design a data warehouse for e-commerce environments.

In this hadoop project, we are going to be continuing the series on data engineering by discussing and implementing various ways to solve the hadoop small file problem.

In this Apache Spark SQL project, we will go through provisioning data for retrieval using Spark SQL.

Curriculum For This Mini Project

Overview of the Project
07m
Datasets used for the Project
02m
Downloading IBM Analytics DemoCloud
02m
Logging in to IBM Analytics DemoCloud
07m
Downloading the Airline On-Time Performance Dataset
12m
Introduction to Hive
04m
General Discussion on the Purpose of the Project
07m
Agenda for the Project
15m
Star Schema
03m
Run Scripts to Create Database
17m
Data Exploration
04m
Data Analysis
03m
Why is Hive still the Swiss Army Knife of Big Data?
34m
Data Analysis Continuation
02m
Quick Recap of the Previous Session
01m
Partitioning
37m
Using Hive Integration to Read Data - Hive Metastore
14m
Partitioning using HCatalog
10m
Partitioning - Alter, Drop, Move Partitions (Notes)
09m
Clustering
21m
Explain and Statistics
28m
Different Types of Explain
03m
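
To make the partition-management and statistics sessions above more concrete, here is a small, hedged HiveQL sketch reusing the hypothetical fact_flights table from the earlier example; partition values and table names are illustrative, not taken from the course scripts.

-- Add and drop partitions (illustrative values).
ALTER TABLE fact_flights ADD IF NOT EXISTS PARTITION (flight_year = 2009);
ALTER TABLE fact_flights DROP IF EXISTS PARTITION (flight_year = 1999);

-- Gather table- and column-level statistics so the optimizer and EXPLAIN
-- have better estimates to work with.
ANALYZE TABLE fact_flights PARTITION (flight_year = 2008) COMPUTE STATISTICS;
ANALYZE TABLE fact_flights PARTITION (flight_year = 2008) COMPUTE STATISTICS FOR COLUMNS;

-- Compare the level of detail in different EXPLAIN variants.
EXPLAIN EXTENDED
SELECT COUNT(*) FROM fact_flights WHERE flight_year = 2008;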