Each project comes with 2-5 hours of micro-videos explaining the solution.
Get access to 50+ solved projects with IPython notebooks and datasets.
Add project experience to your LinkedIn/GitHub profiles.
A notebook is a code execution environment for creating, sharing, and running code alongside visualizations and other text (such as markup). It enables interactive computing for data exploration and analysis. It is analogous to a shareable Grunt shell for Pig, the Scala or PySpark shells for Spark, or Beeline for Hive, but with visualization, discovery, and collaboration built in.
In this big data project, we will look at one of these notebooks: Apache Zeppelin. With Zeppelin, we will carry out a series of data analyses, answering questions about a crime dataset using Hive, Spark, and Pig. We will prepare charts to better present our results and finally share them using the notebook's collaboration features.
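To give a feel for the workflow, a Zeppelin paragraph begins with an interpreter directive (such as %hive) followed by the query itself; Zeppelin then renders the result as a table or chart. The sketch below is a hypothetical example: the table name (crimes) and columns (district) are placeholders, not the actual project dataset schema.

```sql
%hive
-- Hypothetical: table and column names are illustrative placeholders.
SELECT district, COUNT(*) AS incidents
FROM crimes
GROUP BY district
ORDER BY incidents DESC
LIMIT 10;
```

Running this in Zeppelin would produce a result set that can be switched between table, bar chart, and pie chart views directly in the notebook.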
On completing this big data project using Zeppelin, participants will know what Zeppelin is and be able to install new interpreters, use Zeppelin to perform data analysis, and share results with colleagues. Participants will also be introduced to other notebooks in the data ecosystem, such as Jupyter and the Databricks cloud notebooks.
This project continues the previous Hive project, "Tough engineering choices with large datasets in Hive Part - 1", in which we work on processing large datasets using Hive.
In this big data project, we will design an OLAP cube using the AdventureWorks database. The deliverables for this session are to design the cube, build and deploy it using Apache Kylin, query the cube, and connect familiar tools (such as Excel) to it.
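Once a cube is built, Kylin answers standard SQL against it by reading pre-aggregated cuboids instead of scanning raw fact rows. The query below is a hedged sketch of that step; the fact and dimension table names follow the usual AdventureWorks data warehouse naming, but the exact columns in your Kylin model may differ.

```sql
-- Hypothetical cube query: table/column names assume a Kylin model
-- built on AdventureWorks-style fact and dimension tables.
SELECT d.CalendarYear,
       SUM(f.SalesAmount) AS total_sales
FROM FactInternetSales f
JOIN DimDate d ON f.OrderDateKey = d.DateKey
GROUP BY d.CalendarYear;
```

Because the year/sales aggregation is materialized in the cube at build time, queries like this return in sub-second time, which is what makes connecting interactive tools like Excel practical.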
In this project, we will evaluate and demonstrate how to handle unstructured data using Spark.
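A common pattern for unstructured data in Spark is to read raw text, apply a parsing function to each line, and filter out records that fail to parse. The sketch below shows that parsing step in plain Python so it is self-contained; the log format, field names, and regex are assumptions for illustration, not the project's actual dataset. In Spark the same function would be applied with something like sc.textFile(path).map(parse_line).filter(lambda r: r is not None).

```python
import re

# Hypothetical access-log format; the actual project dataset may differ.
LINE_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3})'
)

def parse_line(line):
    """Turn one raw log line into a dict of fields, or None if malformed."""
    m = LINE_RE.match(line)
    if not m:
        return None  # drop malformed lines before downstream aggregation
    record = m.groupdict()
    record["status"] = int(record["status"])  # cast for numeric analysis
    return record

lines = [
    '127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200',
    'not a log line',
]
# Keep only lines that parsed successfully.
records = [r for r in (parse_line(l) for l in lines) if r is not None]
```

The key design choice is returning None for malformed input instead of raising, so a single bad line in a large dataset does not fail the whole distributed job.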