8 Common Hadoop Projects and Spark Projects



Divya Sistla

Divya is a Senior Big Data Engineer at Uber. She holds a Master's degree in Data Science, earned with distinction from BITS Pilani, and has over eight years of experience at companies such as Amazon and Accenture.

Big data has taken over many aspects of our lives, and as it continues to grow and expand it creates the need for better, faster data storage and analysis. Apache Hadoop and Apache Spark fulfil this need, as is evident from the wide variety of projects built on these two frameworks. Apache Hadoop projects mostly centre on migration, integration, scalability, data analytics and streaming analysis, while Apache Spark projects tend to involve link prediction, cloud hosting, data analysis and speech analysis. These projects are proof of how far Apache Hadoop and Apache Spark have come, and of how they are making big data analysis a profitable enterprise.

As we step into the latter half of the present decade, we can’t help but notice the way Big Data has entered all crucial technology-powered domains such as banking and financial services, telecom, manufacturing, information technology, and operations and logistics.

With Big Data came a need for programming languages and platforms that could provide fast computing and processing capabilities. The parallel emergence of cloud computing put the emphasis on distributed computing, and there was a need for programming languages and software libraries that could store and process data locally (minimizing the hardware required to maintain high availability). Apache™, an open source software development organization, came up with open source software for reliable computing that was distributed and scalable.

Hadoop and Spark are two solutions from the Apache stable that aim to provide developers around the world with a fast, reliable computing solution that is easily scalable. Built to support local computing and storage, these platforms do not demand massive hardware infrastructure to deliver high uptime. At the bottom lies a library designed to treat failures at the application layer itself, which results in a highly reliable service on top of a distributed set of computers, each capable of functioning as a local storage point.

Why Apache Hadoop?

Apache houses a number of Hadoop projects developed to deliver different solutions: Hadoop Common houses the common utilities that support the other modules; Hadoop Distributed File System (HDFS™) provides high-throughput access to application data; Hadoop YARN is a job scheduling framework responsible for cluster resource management; and Hadoop MapReduce facilitates parallel processing of large data sets. A number of big data Hadoop projects have been built on this platform, as it has fundamentally changed many of the assumptions we had about data. Hadoop looks at architecture in an entirely different way: Hadoop projects make optimum use of the ever-increasing parallel processing capabilities of processors and expanding storage space to deliver cost-effective, reliable solutions.
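To make the MapReduce paradigm concrete, here is a minimal word-count sketch in the Hadoop Streaming style, where any executable that reads stdin and writes stdout can act as mapper or reducer. The script names and paths are illustrative assumptions, not part of any particular project.

```python
#!/usr/bin/env python3
# mapper.py -- emits a (word, 1) pair per word; Hadoop Streaming feeds it
# input splits on stdin and shuffles its stdout by key.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- sums counts per word; the shuffle phase delivers lines
# sorted by key, so a running total per key suffices.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")
```

Submitted through the Hadoop Streaming jar with hypothetical input and output directories, the framework itself handles input splitting, shuffling and fault tolerance; the two scripts only express the per-record logic.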

Why Apache Spark?

Owned by the Apache Software Foundation, Apache Spark is an open source data processing framework. It sits within the Apache Hadoop umbrella of solutions and facilitates the fast development of end-to-end Big Data applications. It plays a key role in streaming and interactive analytics on Big Data projects.

It is an improvement over Hadoop’s two-stage MapReduce paradigm. By providing multi-stage, in-memory primitives, Apache Spark improves performance many fold, at times by a factor of 100. It can interface with a wide variety of solutions both within and outside the Hadoop ecosystem.
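A minimal PySpark sketch of where that speed-up comes from: once a dataset is cached in executor memory, subsequent computations reuse it instead of re-reading from disk as each MapReduce stage would. The input path and column names here are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cache-demo").getOrCreate()

events = spark.read.json("hdfs:///data/events")  # hypothetical input path
events.cache()  # keep the parsed dataset in memory after the first action

daily = events.groupBy("date").count()                 # first pass: reads and caches
by_user = events.groupBy("user_id").agg(F.count("*"))  # second pass: served from memory

daily.show()
by_user.show()
```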

Hadoop Projects and Spark Projects


Apache has gained popularity around the world, and a very active community is continuously building new solutions, sharing knowledge and innovating to support the movement. Often, developers feel they are working on a really cool project when, in reality, they are doing something that thousands of developers around the world are already doing. The aim of this article is to describe some very common projects involving Apache Hadoop and Apache Spark.

1. Migration

Being open source, Apache Hadoop and Apache Spark have been the preferred choice of many organizations looking to replace old, legacy software tools that demanded a heavy licence fee to procure and a considerable fraction of that again for maintenance. Unlike years ago, open source platforms now have a large talent pool from which managers can choose to help design better, more accurate and faster solutions. The Hadoop ecosystem also has a very desirable ability to blend with popular programming and scripting platforms such as SQL, Java and Python, which makes migration projects easier to execute.

2. Integration

Businesses seldom start big. Most of them start as isolated, individual entities and grow over a period of time. The digital explosion of the present century has seen businesses undergo exponential growth. Given the operation and maintenance costs of centralized data centres, they often choose to expand in a decentralized, dispersed manner. And given the constraints imposed by time, technology, resources and the available talent pool, they end up choosing different technologies for different geographies; when it comes to integration, they find the going tough.

That is where Apache Hadoop and Apache Spark come in. Given their ability to transfer, process and store data from heterogeneous sources in a fast, reliable and cost-effective manner, they have been the preferred choice for integrating systems across organizations.
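A hedged sketch of that integration pattern: Spark reads from heterogeneous stores (here, a relational database over JDBC and flat-file CSV exports) and lands a unified dataset in one place. The connection string, table, column names and paths are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("integration").getOrCreate()

# Source 1: a legacy RDBMS exposed over JDBC (hypothetical host and table)
orders_db = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://legacy-host:5432/sales")
             .option("dbtable", "orders")
             .option("user", "etl").option("password", "***")
             .load())

# Source 2: flat-file exports from a regional office (hypothetical path)
orders_csv = spark.read.option("header", True).csv("hdfs:///exports/emea/orders/")

# Align schemas and types, then merge into one consolidated table
unified = (orders_db
           .select("order_id", F.col("amount").cast("double").alias("amount"))
           .unionByName(orders_csv
                        .select("order_id", F.col("amount").cast("double").alias("amount"))))

unified.write.mode("overwrite").parquet("hdfs:///warehouse/orders_unified")
```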

3. Scalability

As mentioned earlier, scalability is a huge plus with Apache Spark. Its ability to expand systems and build scalable solutions in a fast, efficient and cost-effective manner outperforms a number of alternatives. Apache Spark is built to run on top of the Hadoop framework (for parallel processing of MapReduce jobs). As data volumes grow, processing times increase noticeably, which adversely affects performance; Hadoop can then be used to carry out data processing using either the traditional (map/reduce) approach or the Spark-based approach, which provides an interactive platform for processing queries in real time.
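As a minimal sketch of this, the same PySpark job can be scaled out on a YARN-managed Hadoop cluster purely through configuration, with no code changes. The specific executor counts and sizes below are illustrative assumptions, not tuning recommendations.

```python
from pyspark.sql import SparkSession

# Requires a Hadoop/YARN cluster and HADOOP_CONF_DIR on the submitting machine
spark = (SparkSession.builder
         .appName("scalable-job")
         .master("yarn")
         .config("spark.executor.instances", "20")   # scale out by adding executors
         .config("spark.executor.cores", "4")
         .config("spark.executor.memory", "8g")
         .config("spark.dynamicAllocation.enabled", "true")  # let YARN grow/shrink the job
         .getOrCreate())
```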


4. Link Prediction

Link prediction is a recently recognized problem that finds application across a variety of domains, the most prominent being social media. Given a graph of relations between entities, an algorithm must be developed that predicts which two nodes are most likely to be connected. This can be applied in the financial services industry, where an analyst needs to determine which kinds of fraud a potential customer is most likely to commit. It can also be applied to social media, where an algorithm takes a number of inputs such as age, location, schools and colleges attended, workplace and pages liked, and suggests friends to users.

Given Spark’s ability to process real-time data at a greater pace than conventional platforms, it is used to power a number of critical, time-sensitive calculations and can serve as a global standard for advanced analytics.
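One simple, classical approach to the problem described above is common-neighbours scoring: node pairs that share many neighbours but are not yet connected are ranked as likely future links. Here is a hedged PySpark sketch of that idea; the edge-list path and column names are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("link-prediction").getOrCreate()

edges = spark.read.option("header", True).csv("hdfs:///graph/edges.csv")  # columns: src, dst

# Treat the graph as undirected by adding reversed edges
undirected = edges.union(edges.select(F.col("dst").alias("src"), F.col("src").alias("dst")))

# Nodes a and b share neighbour n when edges (a, n) and (n, b) both exist
e1, e2 = undirected.alias("e1"), undirected.alias("e2")
candidates = (e1.join(e2, F.col("e1.dst") == F.col("e2.src"))
                .where(F.col("e1.src") < F.col("e2.dst"))   # drop self-pairs and mirror duplicates
                .groupBy(F.col("e1.src").alias("a"), F.col("e2.dst").alias("b"))
                .agg(F.count("*").alias("common_neighbours")))

# Remove pairs already connected, then rank by shared neighbours
existing = undirected.select(F.col("src").alias("a"), F.col("dst").alias("b"))
predictions = (candidates.join(existing, ["a", "b"], "left_anti")
                         .orderBy(F.desc("common_neighbours")))
predictions.show(10)
```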

5. Cloud Hosting

Apache Hadoop is equally adept at hosting data on on-site, customer-owned servers or in the cloud. Cloud deployment saves a lot of time, cost and resources: organizations are no longer required to spend heavily on procuring servers and associated hardware infrastructure and then hiring staff to maintain it. Instead, cloud service providers such as Google, Amazon and Microsoft provide hosting and maintenance services at a fraction of the cost. Cloud hosting also allows organizations to pay only for the space actually utilized, whereas with physical storage, companies have to anticipate growth and procure more space than currently required.

6. Specialized Data Analytics

Organizations often choose to store data in separate locations in a distributed manner rather than at one central location. Besides risk mitigation (the primary objective on most occasions), there can be other factors behind this, such as audit and regulatory requirements or the advantages of localization.

It is only logical to extract just the relevant data from warehouses, to reduce the time and resources required for transmission and hosting. In financial services, for example, a number of workloads require fast data processing: time series analysis, risk analysis, liquidity risk calculation, Monte Carlo simulations and so on.

Hadoop and Spark facilitate faster data extraction and processing to give actionable insights to users. Separate systems are built to carry out problem specific analysis and are programmed to use resources judiciously.
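A hedged sketch of that problem-specific extraction: pull only the columns and date range a risk calculation needs, rather than moving the whole warehouse. The table layout, column names and paths are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("risk-extract").getOrCreate()

trades = spark.read.parquet("hdfs:///warehouse/trades")  # hypothetical warehouse table

# Extract only what the downstream risk model needs
risk_input = (trades
              .where(F.col("trade_date") >= "2016-01-01")
              .select("trade_date", "instrument_id", "notional", "currency"))

risk_input.write.mode("overwrite").parquet("hdfs:///analytics/risk/input")
```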

7. Streaming Analytics

To set the context, streaming analytics is quite different from streaming. Streaming analytics is real-time analysis of data streams that must (almost instantaneously) report abnormalities and trigger suitable actions. For example, when a password hack is attempted on a bank’s server, the bank is better served by acting instantly rather than detecting the breach hours later by combing through gigabytes of server logs!

Streaming analytics requires high-speed data processing, which can be facilitated by Apache Spark or Apache Storm running over a data store such as HBase. Streaming analytics is not a one-stop analytics solution, however, as organizations still need to go through historical data for trend analysis, time series analysis, predictive analysis and the like.
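A minimal Spark Structured Streaming sketch of the failed-login example above: authentication events are windowed, and an alert row is emitted whenever one source IP fails repeatedly within a minute. The Kafka topic, broker address, field names and the threshold of five failures are all assumptions for illustration (and the Kafka source requires the spark-sql-kafka package).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, TimestampType

spark = SparkSession.builder.appName("login-monitor").getOrCreate()

schema = (StructType()
          .add("ts", TimestampType())
          .add("ip", StringType())
          .add("outcome", StringType()))

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
       .option("subscribe", "auth-events")                 # hypothetical topic
       .load())

events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

alerts = (events
          .where(F.col("outcome") == "FAILED")
          .withWatermark("ts", "2 minutes")
          .groupBy(F.window("ts", "1 minute"), "ip")
          .count()
          .where(F.col("count") >= 5))  # 5+ failures in a minute looks like an attack

query = alerts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```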

8. Speech Analysis

Computer telephony integration has revolutionized the call centre industry. Speech analytics is still a niche field but is gaining popularity owing to its huge potential. Consider a situation where a customer uses foul language, or words associated with emotions such as anger, happiness or frustration, over a call. Instead of having someone go through huge volumes of audio files, or relying on the call-handling executive to flag such calls, why not have an automated solution?

Hadoop and Spark excel in conditions where such fast-paced solutions are required. This reduces manual effort many fold, and when analysis is required, calls can be sorted by the flags assigned to them for better, more accurate and efficient analysis.
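A hedged sketch of that call-flagging idea: given call transcripts (assumed to come from an upstream speech-to-text step), flag calls containing words from an emotion lexicon so analysts can sort by flag instead of listening to every recording. The path, columns and word list are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("call-flagging").getOrCreate()

transcripts = spark.read.json("hdfs:///calls/transcripts")  # columns: call_id, text (assumed)

anger_words = ["furious", "angry", "unacceptable"]          # toy lexicon
pattern = "(?i)\\b(" + "|".join(anger_words) + ")\\b"       # case-insensitive whole words

flagged = transcripts.withColumn("anger_flag", F.col("text").rlike(pattern))

# Analysts can now pull just the flagged calls for review
flagged.where("anger_flag").select("call_id").show()
```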


Hadoop and Spark real-time mini-project examples:

Real-time project 1: Hive Project - Visualising Website Clickstream Data with Apache Hadoop

Problem: E-commerce and other commercial websites track where visitors click and the paths they take through the website. This data can be analysed using big data analytics to maximise revenue and profits.

Big Data technologies used: AWS EC2, AWS S3, Flume, Spark, Spark SQL, Tableau, Airflow

Big Data Architecture: This implementation is deployed on AWS EC2 and uses Flume for ingestion, S3 as a data store, Spark SQL tables for processing, Tableau for visualisation and Airflow for orchestration.
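A minimal sketch of the processing step in this pipeline: clickstream events landed in S3 by Flume are registered as a Spark SQL table and queried for the most visited pages. The bucket name, event schema and column names are assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("clickstream").getOrCreate()

clicks = spark.read.json("s3a://clickstream-bucket/raw/")  # hypothetical Flume sink
clicks.createOrReplaceTempView("clicks")

top_pages = spark.sql("""
    SELECT page_url, COUNT(*) AS visits
    FROM clicks
    GROUP BY page_url
    ORDER BY visits DESC
    LIMIT 20
""")
top_pages.show()  # this result would feed the Tableau visualisation
```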


Real-time project 2: MovieLens Dataset Analysis using Hive for Movie Recommendations

Problem: The MovieLens dataset contains a large number of movies, with information on actors, ratings, duration and more. We need to analyse this data and answer a few queries, such as which movies were popular.

 

Big Data technologies used: Microsoft Azure, Azure Data Factory, Azure Databricks, Spark

 

Big Data Architecture: This project starts off by creating a resource group in Azure. To this group we add a storage account and move the raw data. Then we create and run Azure Data Factory (ADF) pipelines. Following this, we spin up an Azure Databricks Spark cluster to perform transformations on the data using Spark SQL. This makes the data ready for visualizations that answer our analysis questions.
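A hedged sketch of the Databricks transformation step: Spark SQL over the MovieLens ratings and movies tables to answer "which movies were popular". The mount point is hypothetical; the column names follow the public MovieLens CSV layout, and the 1,000-rating cutoff is an illustrative choice.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("movielens").getOrCreate()

ratings = spark.read.option("header", True).option("inferSchema", True) \
                    .csv("/mnt/movielens/ratings.csv")
movies = spark.read.option("header", True).option("inferSchema", True) \
                   .csv("/mnt/movielens/movies.csv")
ratings.createOrReplaceTempView("ratings")
movies.createOrReplaceTempView("movies")

popular = spark.sql("""
    SELECT m.title, COUNT(*) AS num_ratings, AVG(r.rating) AS avg_rating
    FROM ratings r JOIN movies m ON r.movieId = m.movieId
    GROUP BY m.title
    HAVING COUNT(*) > 1000        -- only widely rated movies
    ORDER BY avg_rating DESC
    LIMIT 10
""")
popular.show(truncate=False)
```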

 


 

