Data Analysis and Visualisation using Spark and Zeppelin

In this big data project, we will explore Apache Zeppelin. We will write code, take notes, build charts and share everything in a single data analytics environment using Hive, Spark and Pig.

What will you learn

Apache Zeppelin: what it is and how it works
Installing Zeppelin interpreters
Running Spark, Hive and Pig code in your notebook (a short sketch follows this list)
Writing markdown notes and narrative text
Collaborating on and sharing your notebook with others
Other notebook alternatives, such as Jupyter and Databricks notebooks
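
As a minimal sketch of how interpreter selection works in a Zeppelin note: each paragraph begins with an interpreter directive, so a %md paragraph holds narrative text while a %spark paragraph runs Scala against the SparkSession that Zeppelin's Spark interpreter exposes as spark. The code below is a trivial placeholder, not part of the project dataset.

    %md
    ## Crime dataset exploration
    Narrative notes, headings and links go in markdown paragraphs.

    %spark
    // Zeppelin's Spark interpreter provides a ready-made SparkSession named `spark`
    val sample = spark.range(0, 5).toDF("id")
    sample.show()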

Project Description

A notebook is a code execution environment that allows you to create and share code, its execution results, visualizations and other text (such as markup). It enables interactive computing for data exploration and analysis. It is analogous to a sharable Grunt shell for Pig, the Scala and PySpark shells for Spark, or Beeline for Hive, but with visualization, discovery and collaboration built in.

In this big data project, we will talk about one such notebook: Apache Zeppelin. With Zeppelin, we will carry out a number of analyses by answering questions on the crime dataset using Hive, Spark and Pig. We will build charts to better represent our results, and finally share those results using the notebook's collaboration and sharing features, as sketched below.
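
As a sketch of what such an analysis paragraph might look like, assume the crime dataset is a headered CSV at /tmp/crimes.csv with a primary_type column (both the path and the column name are hypothetical here). A %spark paragraph can aggregate the data and hand the result to Zeppelin's z.show, which renders it as a table that can be toggled to a bar or pie chart for the charting step.

    %spark
    import org.apache.spark.sql.functions.desc

    // Hypothetical path and column name -- adjust to your copy of the crime dataset
    val crimes = spark.read
      .option("header", "true")
      .csv("/tmp/crimes.csv")

    // Top 10 crime types by count; z.show displays the DataFrame in Zeppelin's
    // display system, where it can be switched between table and chart views
    val topTypes = crimes
      .groupBy("primary_type")
      .count()
      .orderBy(desc("count"))
      .limit(10)
    z.show(topTypes)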

On completing this big data project using Zeppelin, participants will know what Zeppelin is, be able to install new interpreters, use Zeppelin to perform data analysis, and share results with friends or colleagues. Participants will also be introduced to other notebooks in the data ecosystem, such as Jupyter and the Databricks cloud notebooks.

Similar Projects

This project is a continuation of the previous Hive project, "Tough engineering choices with large datasets in Hive Part - 1", where we work on processing big datasets using Hive.

In this big data project, we will design an OLAP cube using the AdventureWorks database. The deliverable for this session is to design the cube, build and deploy it using Kylin, query the cube, and connect familiar tools (like Excel) to it.

In this project, we will evaluate and demonstrate how to handle unstructured data using Spark.

Curriculum For This Mini Project

17-Mar-2018: 02h 50m
18-Mar-2018: 02h 54m