Each project comes with 2-5 hours of micro-videos explaining the solution.
Get access to 50+ solved projects with iPython notebooks and datasets.
Add project experience to your LinkedIn/GitHub profiles.
This Hive project aims to build a Hive data warehouse from a raw dataset stored in HDFS and present the data in a relational structure so that querying it is natural. The dataset for this big data project is the MovieLens open dataset of movie ratings.
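As a rough sketch of the kind of Hive DDL this involves (the table names, columns, delimiter, and HDFS path below are illustrative assumptions, not the project's actual scripts; MovieLens ratings are typically (userId, movieId, rating, timestamp)):

```sql
-- Sketch only: HDFS location and delimiter are assumptions.
CREATE EXTERNAL TABLE IF NOT EXISTS raw_ratings (
  user_id  INT,
  movie_id INT,
  rating   FLOAT,
  rated_at BIGINT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/movielens/ratings';  -- assumed HDFS path

-- Project the raw data into a relational structure for natural querying
CREATE TABLE IF NOT EXISTS ratings AS
SELECT user_id,
       movie_id,
       rating,
       from_unixtime(rated_at) AS rated_time
FROM raw_ratings;
```

The external table leaves the raw files in place on HDFS, while the derived table gives analysts a clean relational view to query.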
The Spark project makes use of some advanced concepts in Spark programming and stores its final output incrementally in Hive tables built on the Parquet storage format. We will also demonstrate some complex queries on these tables using Hive and Impala. The Spark application will be written in Scala, and the build process will be automated using the Scala Build Tool (sbt).
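To illustrate the Hive/Impala side of this (all table and column names here are assumptions for the sketch, and the incremental append that the Spark job would perform is expressed in plain HiveQL):

```sql
-- Sketch: a Parquet-backed Hive table that receives incremental loads.
CREATE TABLE IF NOT EXISTS ratings_agg (
  movie_id   INT,
  avg_rating DOUBLE,
  num_votes  BIGINT
)
PARTITIONED BY (load_date STRING)
STORED AS PARQUET;

-- Each run appends a new partition (the Spark job would do the equivalent
-- via its DataFrame writer; shown here as a HiveQL insert for illustration):
INSERT INTO ratings_agg PARTITION (load_date = '2024-01-01')
SELECT movie_id, AVG(rating), COUNT(*)
FROM ratings
GROUP BY movie_id;

-- A more complex analytic query over the Parquet table (Hive or Impala):
SELECT movie_id,
       avg_rating,
       RANK() OVER (ORDER BY avg_rating DESC) AS rating_rank
FROM ratings_agg
WHERE load_date = '2024-01-01'
LIMIT 10;
```

Partitioning by load date is one common way to make the loads incremental: each run writes only its own partition, and queries can prune to the partitions they need.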
The data warehouse is built by loading, extracting and transforming the dataset into structures that will provide the basis for data scientists to perform different forms of model discovery.
We will use the following tools in this project:
In this NoSQL project, we will use two NoSQL databases (HBase and MongoDB) to store Yelp business attributes and learn how to retrieve this data for processing or querying.
In this Hive project, you will work on denormalizing the JSON data and create Hive scripts using the ORC file format.
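A minimal sketch of what such a script might look like, assuming a staging table `raw_business` with a single `json` STRING column per record (the target table, columns, and JSON paths are illustrative, not the project's actual schema):

```sql
-- Sketch: denormalized ORC table; names are illustrative assumptions.
CREATE TABLE IF NOT EXISTS business_flat (
  business_id STRING,
  name        STRING,
  city        STRING,
  category    STRING
)
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'SNAPPY');

-- Flatten the nested JSON: one output row per business/category pair,
-- using Hive's get_json_object to pull fields out of the raw string.
INSERT INTO business_flat
SELECT get_json_object(json, '$.business_id'),
       get_json_object(json, '$.name'),
       get_json_object(json, '$.city'),
       cat
FROM raw_business
LATERAL VIEW explode(split(get_json_object(json, '$.categories'), ', ')) c AS cat;
```

The `LATERAL VIEW explode(...)` is what does the denormalizing here, turning each business's category list into separate rows so the result is a flat, query-friendly ORC table.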
In this Spark project, we will continue building the data warehouse from the previous project, Yelp Data Processing Using Spark and Hive Part 1, and do further data processing to develop diverse data products.