
Tough engineering choices with large datasets in Hive Part - 1

Explore efficient Hive usage in this Hadoop Hive project using various file formats such as JSON, CSV, ORC and Avro, and compare their relative performance.


What will you learn

  • Common misuses of Hive
  • How to use and interpret Hive's EXPLAIN command (see the sketch after this list)
  • File formats and their relative performance (Text, JSON, SequenceFile, Avro, ORC and Parquet)
  • Compression
  • Spark and Hive for transformation
  • Hive and Impala - making choices
  • Execution engines and performance
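
As a taste of plan interpretation, here is a minimal sketch of Hive's EXPLAIN command, assuming a hypothetical orders table with an order_status column; any existing Hive table works the same way:

    -- Print the query plan for a simple aggregation
    -- (the orders table and its columns are placeholders).
    EXPLAIN
    SELECT order_status, COUNT(*) AS cnt
    FROM orders
    GROUP BY order_status;

    -- EXPLAIN EXTENDED adds file paths and detailed operator attributes.
    EXPLAIN EXTENDED
    SELECT order_status, COUNT(*) AS cnt
    FROM orders
    GROUP BY order_status;

Reading the operator tree (TableScan, Group By Operator, Reduce Output Operator) is how we will compare plans across file formats and engines.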

What will you get

  • Access to recordings of the complete project
  • Access to all material related to the project, such as data files and solution files

Prerequisites

  • A fair working knowledge of Hadoop and Hive is expected.
  • An installation of the Cloudera QuickStart VM or any other Hadoop cluster.
  • Exploring the Tez execution engine would also be worthwhile. Tez is currently available in the Hortonworks HDP sandbox, so students are encouraged to download and set up that sandbox as well; it is not mandatory, but it is complementary (see the sketch after this list).
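
For reference, switching engines in Hive is a one-line session setting. This is a minimal sketch, assuming a cluster where both engines are installed and a placeholder orders table:

    -- Run the same query on the classic MapReduce engine...
    SET hive.execution.engine=mr;
    SELECT COUNT(*) FROM orders;

    -- ...then on Tez (requires a Tez-enabled cluster such as the HDP sandbox).
    SET hive.execution.engine=tez;
    SELECT COUNT(*) FROM orders;

Comparing the wall-clock time and the EXPLAIN output of the two runs is the simplest way to see what the execution engine changes.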

Project Description

The use of Hive, or of the Hive metastore, is so ubiquitous in big data engineering that using the tool efficiently is a factor in the success of many big data projects. Whether integrating with Spark or using Hive as an ETL tool, many big data projects succeed or fail as they grow in scale and complexity because of decisions made early in the analytics project's lifecycle.

In this Hive project, we will explore how to use Hive efficiently. The format will be exploratory rather than project-building: the goal of these sessions is to probe Hive in uncommon ways on the way to mastery.

Throughout this series of Hive projects, we will use different sample datasets to explore the Hadoop file formats (text, CSV, JSON, ORC, Parquet, Avro and SequenceFile), examine compression and the available codecs, and measure the performance of each format when integrating with Spark or Impala. The idea of this Hadoop Hive project is to explore enough that we can make a reasoned argument about what to do, or what to avoid, in any given scenario.
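
To make the comparison concrete, here is a minimal sketch of how candidate formats are declared in HiveQL; the table name orders and its two columns are placeholders, not part of the project's dataset:

    -- Plain delimited text (Hive's default storage).
    CREATE TABLE orders_text (id INT, status STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      STORED AS TEXTFILE;

    -- ORC with ZLIB compression, set per table.
    CREATE TABLE orders_orc (id INT, status STRING)
      STORED AS ORC
      TBLPROPERTIES ('orc.compress'='ZLIB');

    -- Parquet with Snappy compression, set per session.
    SET parquet.compression=SNAPPY;
    CREATE TABLE orders_parquet (id INT, status STRING)
      STORED AS PARQUET;

    -- Copy data across formats, then compare on-disk size and scan time.
    INSERT INTO TABLE orders_orc SELECT * FROM orders_text;

The same pattern (STORED AS AVRO, SEQUENCEFILE, or JSONFILE where the Hive version supports it) extends to the other formats in the list.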

Instructors

 
Michael

Big Data & Enterprise Software Engineer

I am passionate about software development, databases, data analysis and the Android platform. My native language is Java, but no one has stopped me so far from learning and using Angular and Node.js. Data and data analysis are thrilling, and so are my experiences with SQL on Oracle, Microsoft SQL Server, Postgres and MySQL.

Curriculum For This Mini Project

 
  • 21-Oct-2017 (02:52:57)
  • 22-Oct-2017 (02:35:52)