Hive, and the Hive metastore in particular, is so ubiquitous in big data engineering that using the tool efficiently is a factor in the success of many projects. Whether integrating with Spark or using Hive as an ETL tool, many projects succeed or fail as they grow in scale and complexity because of decisions made early on.
In this big data project on Hive, we will explore how to use Hive efficiently. The format is exploratory rather than project-building: the goal is to probe Hive in uncommon ways on the path to mastery.
We will work with several datasets across these sessions, exploring Hadoop file formats such as text, CSV, JSON, ORC, Parquet, Avro, and SequenceFile. We will also look at compression with different codecs and compare the performance of each format when integrating with Spark or Impala.
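As a taste of what this looks like in practice, here is a minimal HiveQL sketch (the table name, columns, and dataset are hypothetical) of how a storage format and compression codec are declared when a table is created. These are exactly the kinds of choices whose performance we will be comparing.

```sql
-- ORC with Snappy compression; orc.compress is a standard ORC table property
-- (table name and columns are illustrative only)
CREATE TABLE trips_orc (
  trip_id   BIGINT,
  pickup_ts TIMESTAMP,
  fare      DOUBLE
)
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'SNAPPY');

-- The same schema stored as Parquet; parquet.compression selects the codec
CREATE TABLE trips_parquet (
  trip_id   BIGINT,
  pickup_ts TIMESTAMP,
  fare      DOUBLE
)
STORED AS PARQUET
TBLPROPERTIES ('parquet.compression' = 'SNAPPY');

-- Plain-text tables are compressed at the session level rather than via
-- TBLPROPERTIES, using standard Hadoop output-compression settings
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec;
```

Note that columnar formats like ORC and Parquet bake the codec into the table definition, while row-oriented text output inherits it from job configuration; this difference alone has real consequences for downstream readers such as Spark and Impala.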
The idea is to explore enough that we can make a reasoned argument about what to do, and what to avoid, in any given big data scenario.