Each project comes with 2-5 hours of micro-videos explaining the solution.
Get access to 102+ solved projects with IPython notebooks and datasets.
Add project experience to your LinkedIn/GitHub profiles.
In this Databricks Azure tutorial project, you will use Spark SQL to analyse the MovieLens dataset to provide movie recommendations. As part of this, you will deploy Azure Data Factory and data pipelines, and visualise the analysis.
In this PySpark project, you will simulate a complex real-world data pipeline based on messaging. This project is deployed using the following tech stack - NiFi, PySpark, Hive, HDFS, Kafka, Airflow, Tableau and AWS QuickSight.
Hive Project - Learn to write a Hive program to find the first unique URL, given 'n' URLs.
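The core of this exercise can be sketched in plain Python; this is a hypothetical stand-in for the Hive program, which would typically combine a GROUP BY ... HAVING COUNT(*) = 1 with the earliest row position:

```python
from collections import Counter

def first_unique_url(urls):
    """Return the first URL that occurs exactly once in the input.

    Mirrors the Hive approach: group URLs, keep those whose count
    is exactly 1, and pick the one appearing earliest in the input.
    """
    counts = Counter(urls)
    for url in urls:
        if counts[url] == 1:
            return url
    return None  # no unique URL exists

# Invented sample data for illustration
urls = ["a.com/home", "b.com/cart", "a.com/home",
        "c.com/about", "b.com/cart"]
print(first_unique_url(urls))  # -> c.com/about
```

In Hive itself the same idea would be expressed with a row-number column plus a HAVING clause; the Python version just makes the two-pass logic explicit.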
Learn to design Hadoop Architecture and understand how to store data using data acquisition tools in Hadoop.
Bitcoin Mining on AWS - Learn how to use AWS Cloud for building a data pipeline and analysing Bitcoin data.
In this Hive project, you will design a data warehouse for e-commerce environments.
In this Hadoop project, learn about the features in Hive that allow us to perform analytical queries over large datasets.
Hive Project - Understand the various types of SCDs and implement these slowly changing dimensions in Hadoop Hive and Spark.
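As a quick illustration, a Type 2 SCD change (the variant that preserves history) can be sketched in plain Python; the row layout (`key`, `attrs`, `start_date`, `end_date`, `is_current`) is an assumed example, not the project's actual schema:

```python
def scd_type2_upsert(dim_rows, key, new_attrs, today):
    """Apply a Type 2 slowly changing dimension change: end-date the
    current row for `key` and append a new current row, preserving
    full history (a plain-Python stand-in for a Hive/Spark MERGE)."""
    for row in dim_rows:
        if row["key"] == key and row["is_current"]:
            if row["attrs"] == new_attrs:
                return dim_rows  # attributes unchanged: no new version
            row["is_current"] = False
            row["end_date"] = today  # close out the old version
    dim_rows.append({
        "key": key, "attrs": new_attrs,
        "start_date": today, "end_date": None, "is_current": True,
    })
    return dim_rows

# A customer moves city: the old row is kept, just end-dated.
dim = [{"key": 1, "attrs": {"city": "Pune"},
        "start_date": "2020-01-01", "end_date": None, "is_current": True}]
dim = scd_type2_upsert(dim, key=1, new_attrs={"city": "Delhi"},
                       today="2023-06-01")
print(len(dim))  # -> 2 (history preserved)
```

A Type 1 SCD would instead overwrite the attributes in place, losing the old value; the project covers the trade-offs between these variants.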
In this Hadoop project, we continue the data engineering series by discussing and implementing various ways to solve the Hadoop small file problem.
In this Apache Spark SQL project, we will go through provisioning data for retrieval using Spark SQL.
In this big data project, we will continue from a previous Hive project, "Data engineering on Yelp Datasets using Hadoop tools", and do the entire data processing using Spark.
In this big data project, we will embark on real-time data collection and aggregation from a simulated real-time system using Spark Streaming.
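To give a feel for the kind of aggregation involved, here is a minimal plain-Python sketch of a tumbling-window count, the sort of computation Spark Streaming performs over micro-batches; the event tuples and window size below are made-up examples:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_sec):
    """Aggregate (timestamp, key) events into fixed, non-overlapping
    tumbling windows; window boundaries fall on multiples of
    window_sec. A toy model of a Spark Streaming windowed count."""
    out = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_sec) * window_sec
        out[(window_start, key)] += 1
    return dict(out)

# Invented events: (timestamp_seconds, event_type)
events = [(1, "click"), (4, "click"), (12, "view"), (14, "click")]
print(tumbling_window_counts(events, 10))
# -> {(0, 'click'): 2, (10, 'view'): 1, (10, 'click'): 1}
```

In Spark Streaming the same result would come from a windowed `reduceByKey` over a DStream, with the framework handling batching and state; the sketch only shows the bucketing arithmetic.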
The goal of this Hadoop project is to apply data engineering principles to the Yelp dataset in the areas of processing, storage, and retrieval.
Hadoop Project - Perform basic big data analysis on an airline dataset using big data tools - Pig, Hive and Impala.
In this Spark project, we will continue building the data warehouse from the previous project, Yelp Data Processing Using Spark And Hive Part 1, and do further data processing to develop diverse data products.
Explore efficient Hive usage in this Hadoop Hive project using various file formats such as JSON, CSV, ORC and AVRO, and compare their relative performance.
Analyze clickstream data of a website using Hadoop Hive to increase sales by optimizing every aspect of the customer experience on the website from the first mouse click to the last.
In this Hive project, you will work on denormalizing the JSON data and creating Hive scripts with the ORC file format.
In this big data project, we will discover songs by artists associated with different cultures across the globe.
In this NoSQL project, we will use two NoSQL databases (HBase and MongoDB) to store Yelp business attributes and learn how to retrieve this data for processing or querying.
The goal of this IoT project is to build an argument for generalized streaming architecture for reactive data ingestion based on a microservice architecture.
Learn to write a Hadoop Hive program for real-time querying.
In this big data project, we will look at how to mine and make sense of connections in a simple way by building a Spark GraphX Algorithm and a Network Crawler.
In this big data project, we will talk about Apache Zeppelin. We will write code, take notes, build charts and share them, all in a single data analytics environment, using Hive, Spark and Pig.
In this big data project, we'll work through a real-world scenario using the Cortana Intelligence Suite tools, including the Microsoft Azure Portal, PowerShell, and Visual Studio.
In this project, we will walk through the various classes of NoSQL databases and try to establish where each is the best fit.
In this big data project, we will be performing an OLAP cube design using AdventureWorks database. The deliverable for this session will be to design a cube, build and implement it using Kylin, query the cube and even connect familiar tools (like Excel) with our new cube.
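For intuition, the precomputation behind a cube can be sketched as aggregating a measure over every subset of the dimensions; the `region`/`year`/`sales` fields below are hypothetical examples, not the actual AdventureWorks schema:

```python
from itertools import combinations
from collections import defaultdict

def olap_cube(rows, dims, measure):
    """Aggregate `measure` over every subset of `dims` (a tiny CUBE
    operator); '*' marks a rolled-up dimension. This sketches the
    kind of precomputation a cube engine like Kylin performs."""
    cube = defaultdict(float)
    for r in rows:
        for k in range(len(dims) + 1):
            for subset in combinations(dims, k):
                key = tuple(r[d] if d in subset else "*" for d in dims)
                cube[key] += r[measure]
    return dict(cube)

# Invented sample rows for illustration
rows = [{"region": "EU", "year": 2020, "sales": 10.0},
        {"region": "US", "year": 2020, "sales": 5.0}]
cube = olap_cube(rows, ("region", "year"), "sales")
print(cube[("*", "*")])   # grand total: 15.0
print(cube[("EU", "*")])  # EU across all years: 10.0
```

Precomputing all 2^n groupings is exactly why cube queries from tools like Excel come back fast: the aggregates are looked up, not recomputed.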
In this project, we will take a look at four different SQL-on-Hadoop engines - Hive, Phoenix, Impala and Presto.
In this project, we will evaluate and demonstrate how to handle unstructured data using Spark.
In this project, we will look at two database platforms - MongoDB and Cassandra - and examine the philosophical differences in how these databases work and perform analytical queries.
In this project, we will show how to build an ETL pipeline on streaming datasets using Kafka.
“How and where can I get projects in Hadoop, Hive, Pig or HBase to get more exposure to the big data tools and technologies?”
ProjectPro’s mini projects on Hadoop are designed to give big data beginners and experienced professionals a better understanding of the complex Hadoop architecture and its components, with practice on big data sets across diverse business domains - Retail, Travel, Banking, Finance, Media and more.
For big data beginners who want to get started with the basics of the Hadoop ecosystem, ProjectPro has interesting Hadoop project ideas that will help them learn Hadoop through 10 projects -
Today every organization needs a data infrastructure that can help it deliver contextual experiences to its customers in real-time - be it the language of a transactional email, an advertisement shown on Facebook, or the home screen of a mobile application. There is often a requirement to process data in real-time for optimum results and to ensure quick response times when needed. Several big data tools in the Hadoop ecosystem enable big data developers to process data in real-time. You can master these big data tools by practising and working on these hands-on real-time big data projects.
In this project, you will get a log file that contains details about users who have visited various pages on a particular site. The aim is to implement a Hadoop job to answer queries such as "Which page did user C visit more than four times a day?" and "Which pages were visited by users exactly ten times in a day?"
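The aggregation behind such queries can be sketched in plain Python; the log tuples below are invented sample data, and in the actual Hadoop job the same counting would be done in MapReduce with (user, page, day) as the key:

```python
from collections import defaultdict

# Hypothetical parsed log records: (user, page, date) tuples standing
# in for the log file; in the project these would be read from HDFS.
logs = [
    ("C", "/home", "2023-01-01"), ("C", "/home", "2023-01-01"),
    ("C", "/home", "2023-01-01"), ("C", "/home", "2023-01-01"),
    ("C", "/home", "2023-01-01"), ("A", "/cart", "2023-01-01"),
]

# Count visits per (user, page, day) - the aggregation a MapReduce
# job would perform with the triple as the map output key.
counts = defaultdict(int)
for user, page, day in logs:
    counts[(user, page, day)] += 1

# "Which page did user C visit more than four times a day?"
pages_c = {page for (user, page, day), n in counts.items()
           if user == "C" and n > 4}
print(pages_c)  # -> {'/home'}
```

The second query ("visited exactly ten times in a day") filters the same counts with `n == 10` instead of `n > 4`.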
If you want to get some hands-on big data experience in building an ETL pipeline on streaming datasets using Kafka as a tool and get exposure to using KSQL, this project is a good choice.
Once you have explored quite a few Apache Hadoop projects, be sure to go ahead and build your skills even further by having a look at some more end-to-end projects with source code provided by ProjectPro.