Data Analysis and Visualisation using Spark and Zeppelin

In this big data project, we will explore Apache Zeppelin. We will write code, take notes, build charts, and share everything in a single data analytics environment using Hive, Spark and Pig.

Videos

Each project comes with 2-5 hours of micro-videos explaining the solution.

Code & Dataset

Get access to 50+ solved projects with IPython notebooks and datasets.

Project Experience

Add project experience to your LinkedIn/GitHub profiles.

What will you learn

Apache Zeppelin: What it is and how it works
Installing Zeppelin interpreters
Running Spark, Hive and Pig code in your notebook (see the sketch after this list)
Writing markdown notes or narrative text
Collaborating on and sharing your notebook with others
Discussing other notebook alternatives, such as Jupyter or Databricks notebooks
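
To make the interpreter model concrete, here is a minimal sketch of two Zeppelin paragraphs: a markdown note followed by a Scala paragraph run by the Spark interpreter. The CSV path is a hypothetical placeholder, and the sketch assumes a Zeppelin build where the %spark interpreter exposes a spark session variable.

%md
## Crime data exploration
Narrative notes written here are rendered as formatted text next to the code.

%spark
// Scala paragraph executed by the Spark interpreter.
// "/data/crimes.csv" is a placeholder path, not the project's actual dataset location.
val crimes = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/data/crimes.csv")
crimes.printSchema()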

Project Description

A notebook is a code execution environment that lets you create and share code, its execution results, visualizations, and other text (such as markup). It enables interactive computing for data exploration and analysis. Logically, it is a shareable Grunt shell for Pig, a Scala or PySpark shell for Spark, or Beeline for Hive, but with visualization, discovery and collaboration built in.
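
As a rough illustration of that analogy, a query you would normally type at the Beeline prompt can instead run as a notebook paragraph, with the result displayed interactively. The table name is a hypothetical placeholder, and depending on the Zeppelin version the prefix may be %hive or %jdbc(hive).

%hive
-- Equivalent of running this query in beeline, but the result
-- appears in the notebook with built-in table and chart views.
SELECT crime_type, COUNT(*) AS total
FROM crimes
GROUP BY crime_type
ORDER BY total DESC;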

In this big data project, we will look at one such notebook: Apache Zeppelin. With Zeppelin, we will carry out a number of data analyses by answering questions on a crime dataset using Hive, Spark and Pig. We will prepare charts to better represent our results and finally share them using the collaboration and sharing features of the notebook.
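
As a hedged sketch of the kind of analysis this project performs (the column name "crime_type" is an assumption, not taken from the actual dataset), a Spark paragraph can aggregate the data and hand the result to Zeppelin's display system, which offers table, bar and pie chart views:

%spark
import org.apache.spark.sql.functions.desc

// Count incidents per crime type; "crime_type" is an assumed column name.
val byType = crimes.groupBy("crime_type").count().orderBy(desc("count"))

// z.show renders a DataFrame as an interactive table with chart toggles.
z.show(byType)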

On completing this big data project using Zeppelin, participants will know what Zeppelin is, be able to install new interpreters, use Zeppelin to perform data analysis, and share results with friends or colleagues. Participants will also be introduced to other notebooks in the data ecosystem, such as Jupyter or the Databricks cloud notebooks.
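
For the interpreter installation step, Zeppelin's net-install binary distribution ships an install script; a typical invocation looks like the following (available interpreter names vary by Zeppelin version, so treat this as a sketch):

./bin/install-interpreter.sh --name jdbc,pig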

Similar Projects

In this PySpark project, you will simulate a complex real-world data pipeline based on messaging. This project is deployed using the following tech stack: NiFi, PySpark, Hive, HDFS, Kafka, Airflow, Tableau and AWS QuickSight.

Explore Hive usage efficiently in this Hadoop Hive project using various file formats such as JSON, CSV, ORC and AVRO, and compare their relative performances.

Use the Hadoop ecosystem to glean valuable insights from the Yelp dataset. You will analyze the different patterns that can be found in the Yelp dataset to come up with various approaches to solving a business problem.

Curriculum For This Mini Project

17-Mar-2018: 02h 50m
18-Mar-2018: 02h 54m