Top 5 Apache Spark Use Cases

Divya Sistla

Divya is a Senior Big Data Engineer at Uber. She graduated with distinction with a Masters in Data Science from BITS, Pilani, and has over 8 years of experience at companies such as Amazon and Accenture.

To survive the competitive struggles of the big data marketplace, every fresh open source technology, whether it is Hadoop, Spark or Flink, must find valuable use cases. Any new technology that emerges should offer an approach that is clearly better than its alternatives.

The creators of Apache Spark ran a survey asking why companies use an in-memory computing framework like Apache Spark, and the results are overwhelming –

  • 91% use Apache Spark because of its performance gains.
  • 77% use Apache Spark as it is easy to use.
  • 71% use Apache Spark due to the ease of deployment.
  • 64% use Apache Spark to leverage advanced analytics.
  • 52% use Apache Spark for real-time streaming.

Fast data processing capabilities and developer convenience have made Apache Spark a strong contender for big data computations. In 2014 Apache Spark set the “Daytona Gray” category world record by sorting 100 TB of data on 207 machines in 23 minutes, while Hadoop MapReduce took 72 minutes on 2,100 machines. Fast data processing with Spark has toppled Apache Hadoop from its big data throne, giving developers a Swiss army knife for real-time analytics. Speed is critical in many business models, and even a single minute of delay can disrupt a model that depends on real-time analytics. In this blog, we will explore some of the most prominent Apache Spark use cases and some of the top companies using Apache Spark to add business value to real-time applications.

“Only large companies, such as Google, have had the skills and resources to make the best use of big and fast data. There are many examples…where anybody can, for instance, crawl the Web or collect these public data sets, but only a few companies, such as Google, have come up with sophisticated algorithms to gain the most value out of it. Spark was designed to address this problem. Spark brings the top-end data analytics, the same performance level and sophistication that you get with these expensive systems, to commodity Hadoop cluster. It runs in the same cluster to let you do more with your data.”- said Matei Zaharia, the creator of Spark and CTO of commercial Spark developer Databricks.

Apache Spark Use Cases

Apache Spark is the shiny new big data technology, rapidly gaining mainstream adoption. From startups to Fortune 500 companies, organizations are adopting Apache Spark to build, scale and innovate their big data applications. Here are some industry-specific Spark use cases that demonstrate its ability to build and run fast big data applications -

Spark Use Cases in Finance Industry

Banks are using the Hadoop alternative, Spark, to access and analyse social media profiles, call recordings, complaint logs, emails, forum discussions, and more, to gain insights that help them make the right business decisions for credit risk assessment, targeted advertising and customer segmentation.

Your credit card is swiped for $9000 and the receipt has been signed, but it was not you who swiped the card, because your wallet was lost. This might be credit card fraud. Financial institutions are leveraging big data to find out when and where such frauds are happening so that they can stop them. They need to resolve fraudulent charges as early as possible, detecting fraud from the first minor discrepancy. They already have models to detect fraudulent transactions, but most of them are deployed in batch environments. With Apache Spark on Hadoop, financial institutions can detect fraudulent transactions in real time, based on previous fraud footprints. All incoming transactions are validated against a database of known fraud; if there is a match, a trigger is sent to the call centre, and call centre personnel immediately check with the credit card owner to validate the transaction before any fraud can happen.
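The matching step described above can be sketched in a few lines. This is a minimal plain-Python illustration of the logic, not the institutions' actual code; in production it would run inside a Spark Streaming job, and the transaction fields and fraud "fingerprint" store here are hypothetical.

```python
# Minimal sketch of the fraud-matching logic described above.
# In production this would run inside a Spark Streaming job; here the
# fraud fingerprint store is a plain in-memory set for illustration.

# Hypothetical fingerprints of previously seen fraudulent activity:
# (card_id, merchant_country) pairs flagged by the batch models.
known_fraud_fingerprints = {
    ("card-1234", "RU"),
    ("card-5678", "NG"),
}

def check_transaction(txn, fingerprints):
    """Return an alert dict if the incoming transaction matches a
    known fraud fingerprint, otherwise None."""
    key = (txn["card_id"], txn["merchant_country"])
    if key in fingerprints:
        # A match triggers a call-centre alert so staff can confirm
        # the charge with the card owner before it settles.
        return {"card_id": txn["card_id"], "action": "call_owner"}
    return None

alert = check_transaction(
    {"card_id": "card-1234", "merchant_country": "RU", "amount": 9000},
    known_fraud_fingerprints,
)
```

In a real deployment the fingerprint set would be a shared lookup table refreshed by the batch models, and each micro-batch of transactions would be checked against it.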

Companies Using Spark in the Finance Industry

  • A financial institution with retail banking and brokerage operations is using Apache Spark to reduce its customer churn by 25%. The institution has separate platforms for retail, banking, trading and investment, but wants a 360-degree view of each customer, whether a company or an individual. To get this consolidated view, the bank uses Apache Spark as the unifying layer. Apache Spark helps the bank automate analytics with machine learning by accessing the data about each customer from every repository; the data is then correlated into a single customer file and sent to the marketing department.
  • Another financial institution is using Apache Spark on Hadoop to analyse the text inside the regulatory filings of its own reports and its competitors’ reports. The firm uses the analytic results to discover patterns in what is happening, the marketing around it, and how strong the competition is.
  • A multinational financial institution has implemented a real-time monitoring application that runs on Apache Spark and the MongoDB NoSQL database. To provide supreme service across its online channels, the application helps the bank continuously monitor client activity and identify potential issues.

The Apache Spark ecosystem can be leveraged in the finance industry to achieve best-in-class results in risk-based assessment, by collecting all the archived logs and combining them with other external data sources (such as information about compromised accounts or other data breaches).

Spark Use Cases in e-commerce Industry

Information about real-time transactions can be passed to streaming algorithms such as k-means clustering, or to collaborative filtering algorithms such as alternating least squares. The results can then be combined with data from other sources, such as social media profiles, product reviews on forums and customer comments, to enhance recommendations to customers based on new trends.
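To make the clustering idea concrete, here is a single k-means assignment step in plain Python. This is only a sketch of the algorithm's core; a real pipeline would use Spark MLlib's streaming k-means, and the customer feature vectors and centroids below are invented for illustration.

```python
import math

# One assignment step of k-means, the clustering algorithm mentioned
# above. Each customer is a feature vector (here, hypothetical values
# for purchases per month and average basket size), and each is
# assigned to its nearest cluster centroid.
centroids = [(1.0, 10.0), (8.0, 50.0)]  # e.g. "casual" vs "heavy" shoppers

def nearest_centroid(point, centroids):
    """Return the index of the centroid closest to the point."""
    distances = [math.dist(point, c) for c in centroids]
    return distances.index(min(distances))

customers = [(0.5, 8.0), (9.0, 55.0), (7.5, 48.0)]
assignments = [nearest_centroid(c, centroids) for c in customers]
```

In a streaming setting, the centroids themselves are also updated incrementally as each micro-batch of transactions arrives, which is what MLlib's streaming k-means adds on top of this assignment step.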

Companies Using Spark in e-commerce Industry

Shopify wanted to analyse the kinds of products its customers were selling, in order to identify eligible stores for business partnerships. Its data warehousing platform could not address this problem, as it kept timing out while running data mining queries on millions of records. Using Apache Spark, Shopify processed 67 million records in minutes and successfully created a list of stores for partnership.

Apache Spark at Alibaba

Alibaba Taobao, one of the world’s largest e-commerce platforms, runs some of the largest Apache Spark jobs in the world in order to analyse hundreds of petabytes of data on its platform. Some of these Spark jobs, which perform feature extraction on image data, run for several weeks. Millions of merchants and users interact with the platform, and each of these interactions is represented as a large, complicated graph; Apache Spark is used for fast processing of sophisticated machine learning on this data.

Apache Spark at eBay

eBay uses Apache Spark to provide targeted offers, enhance the customer experience, and optimize overall performance. Apache Spark is leveraged at eBay through Hadoop YARN, which manages all the cluster resources needed to run generic tasks. eBay’s Spark users leverage Hadoop clusters in the range of 2,000 nodes, 20,000 cores and 100 TB of RAM through YARN.


Spark Use Cases in Healthcare

As healthcare providers look for novel ways to enhance the quality of healthcare, Apache Spark is slowly becoming the heartbeat of many healthcare applications. Many healthcare providers are using Apache Spark to analyse patient records along with past clinical data to identify which patients are likely to face health issues after being discharged from the clinic. This helps hospitals reduce re-admissions, as they can deploy home healthcare services to the identified patients, saving costs for both hospitals and patients.
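The readmission-risk idea above amounts to scoring each discharged patient and flagging those above a threshold for follow-up care. The sketch below uses a logistic-style score in plain Python; the features, weights and threshold are entirely invented for illustration, and a real system would learn such a model from historical clinical records, for example with Spark MLlib.

```python
import math

# Hypothetical logistic-style readmission risk score. The feature
# weights and threshold below are invented; a real system would learn
# them from historical clinical records.
WEIGHTS = {"prior_admissions": 0.8, "age_over_65": 0.5, "chronic_conditions": 0.6}
BIAS = -2.0
THRESHOLD = 0.5

def readmission_risk(patient):
    """Map a patient's features to a probability via the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_home_care(patient):
    """Flag patients whose risk exceeds the threshold, so home
    healthcare services can be deployed as described above."""
    return readmission_risk(patient) > THRESHOLD

high_risk = {"prior_admissions": 3, "age_over_65": 1, "chronic_conditions": 2}
low_risk = {"prior_admissions": 0, "age_over_65": 0, "chronic_conditions": 0}
```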

Apache Spark is also used in genomic sequencing to reduce the time needed to process genome data. Earlier, it took several weeks to organize all the chemical compounds with genes, but now with Apache Spark on Hadoop it takes just a few hours. This use case of Spark may not be as real-time as the others, but it delivers considerable benefits to researchers over earlier implementations of genomic sequencing.

Companies Using Spark in Healthcare Industry

Apache Spark at MyFitnessPal

MyFitnessPal, one of the largest health and fitness communities, helps people achieve a healthy lifestyle through better diet and exercise. MyFitnessPal uses Apache Spark to clean the data entered by users, with the end goal of identifying high-quality food items. Using Spark, MyFitnessPal has been able to scan through the food calorie data of about 80 million users. Earlier, MyFitnessPal used Hadoop to process 2.5 TB of data, and it took several days to identify any errors or missing information.

Spark Use Cases in Media & Entertainment Industry

Apache Spark is used in the gaming industry to identify patterns from real-time in-game events and respond to them to harvest lucrative business opportunities such as targeted advertising, auto-adjustment of game difficulty, player retention and many more.

Some video sharing websites use Apache Spark along with MongoDB to show relevant advertisements to their users based on the videos they view, share and browse.

Companies Using Spark in Media & Entertainment Industry

Apache Spark at Yahoo for News Personalization

Yahoo uses Apache Spark to personalize its news webpages and for targeted advertising. Machine learning algorithms running on Apache Spark find out what kinds of news users are interested in reading, and categorize news stories to determine which users would be interested in reading each category of news.

Earlier, the machine learning algorithm for news personalization required 15,000 lines of C++ code; with Spark, it takes just 120 lines of Scala code. The algorithm was ready for production use after just 30 minutes of training on a hundred million datasets.

Apache Spark at Conviva

Conviva, one of the largest streaming video companies, uses Apache Spark to deliver quality of service to its customers by reducing screen buffering and learning in detail about network conditions in real time. This information is stored in the video player to manage live video traffic coming from close to 4 billion video feeds every month and ensure maximum play-through. Apache Spark has helped Conviva greatly reduce customer churn by providing its customers with a smooth video viewing experience.

Apache Spark at Netflix

Netflix uses Apache Spark for real-time stream processing to provide online recommendations to its customers. Streaming devices at Netflix send events that capture all member activities and play a vital role in personalization. Netflix processes 450 billion such events per day; they flow to server-side applications and are directed to Apache Kafka.

Apache Spark at Pinterest

Pinterest uses Apache Spark to discover trends in high-value user engagement data, so that it can react to developing trends in real time with an in-depth understanding of user behaviour on the website.

Spark Use Cases in Travel Industry

Companies Using Spark in Travel Industry

Apache Spark at TripAdvisor

TripAdvisor, a leading travel website that helps users plan the perfect trip, uses Apache Spark to speed up its personalized customer recommendations. It provides advice to millions of travellers by comparing hundreds of websites to find the best hotel prices for its customers, and uses Apache Spark to quickly read and process hotel reviews into a readable format.

Apache Spark at OpenTable

OpenTable, an online real-time reservation service with about 31,000 restaurants and 15 million diners a month, uses Spark for training its recommendation algorithms and for NLP of restaurant reviews to generate new topic models. OpenTable has achieved 10x speed enhancements by using Apache Spark, which has helped reduce the run time of machine learning algorithms from a few weeks to just a few hours, resulting in improved team productivity.

The spike in the number of Spark use cases is just beginning, and 2016 will make Apache Spark the big data darling of many more companies, as they start using Spark to make prompt decisions based on real-time processing through Spark Streaming. These are just some of the use cases of the Apache Spark ecosystem. If you know of any other companies using Spark for real-time processing, feel free to share with the community in the comments below.


Spark use-case with code:

Spark project 1: Create a data pipeline based on messaging using Spark and Hive
Problem: A data pipeline is used to transport data from source to destination through a series of processing steps. The data source could be other databases, APIs, JSON, CSV files, etc., and the final destination could be another process or a visualization tool. In between, the data is transformed into a more intelligent and readable format.

Technologies used: AWS, Spark, Hive, Scala, Airflow, Kafka. 

Solution Architecture: This implementation has the following steps: first, writing events in the context of a data pipeline; then designing a data pipeline based on messaging; then executing the file pipeline utility. After this, we load data from a remote URL and perform Spark transformations on it before moving it to a table. Finally, Hive is used for data access.
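The transform step in the architecture above (load raw data, clean it, reshape it before it lands in a table) can be sketched in plain Python. In the actual project these would be Spark DataFrame operations feeding a Hive table; the column names and sample data below are hypothetical.

```python
import csv
import io

# Sketch of the pipeline's transform step: parse raw CSV (as fetched
# from a remote URL), drop bad records, and normalise fields before
# the rows are written to a table. In the real project these would be
# Spark DataFrame operations; the columns here are invented.
raw = "event_id,user,amount\n1,alice,10.5\n2,bob,\n3,carol,7.0\n"

def transform(raw_csv):
    rows = csv.DictReader(io.StringIO(raw_csv))
    out = []
    for row in rows:
        if not row["amount"]:          # drop records missing an amount
            continue
        out.append({
            "event_id": int(row["event_id"]),   # cast to proper types
            "user": row["user"].upper(),        # normalise casing
            "amount": float(row["amount"]),
        })
    return out

cleaned = transform(raw)
```

The same shape of logic (filter, cast, normalise) is what the Spark job applies per partition before the result is registered as a Hive table.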


Spark Project 2: Building a Data Warehouse using Spark on Hive 
Problem: Large companies usually have multiple storehouses of data. All this data must be moved to a single location to make it easy to generate reports. A data warehouse is that single location. 

Technologies used: HDFS, Hive, Sqoop, Databricks Spark, DataFrames.

Solution Architecture: In the first layer of this Spark project, data is moved to HDFS and Hive tables are built on top of it. The data arrives through batch processing and is ingested using Sqoop; DataFrames are used for storage instead of RDDs. In the second layer, we normalize and denormalize the data tables, transformation is done using Spark SQL, and the transformed data is moved back to HDFS. In the final, third layer, the data is visualized.
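The second layer's denormalisation step is essentially a join that widens normalised tables back out for reporting. Here is a plain-Python sketch of that idea; in the project itself this would be a Spark SQL join, and the two tables below are invented for illustration.

```python
# Sketch of the second layer's denormalisation: joining two normalised
# tables into one wide table for reporting. The real project does this
# with a Spark SQL join; the tables below are hypothetical.
customers = [
    {"customer_id": 1, "name": "Acme"},
    {"customer_id": 2, "name": "Globex"},
]
orders = [
    {"order_id": 10, "customer_id": 1, "total": 250.0},
    {"order_id": 11, "customer_id": 2, "total": 99.0},
]

def denormalise(orders, customers):
    """Inner-join orders with customers on customer_id."""
    by_id = {c["customer_id"]: c for c in customers}
    return [
        {**o, "name": by_id[o["customer_id"]]["name"]}
        for o in orders
        if o["customer_id"] in by_id
    ]

wide = denormalise(orders, customers)
```

In Spark SQL the equivalent would be a `SELECT ... FROM orders JOIN customers ON orders.customer_id = customers.customer_id`, producing the wide table that the visualization layer reads.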

Relevant Projects

Tough engineering choices with large datasets in Hive Part - 1
Explore Hive usage efficiently in this Hadoop Hive project using various file formats such as JSON, CSV, ORC and AVRO, and compare their relative performances.

Create A Data Pipeline Based On Messaging Using PySpark And Hive - Covid-19 Analysis
In this PySpark project, you will simulate a complex real-world data pipeline based on messaging. This project is deployed using the following tech stack - NiFi, PySpark, Hive, HDFS, Kafka, Airflow, Tableau and AWS QuickSight.

Data processing with Spark SQL
In this Apache Spark SQL project, we will go through provisioning data for retrieval using Spark SQL.

Implementing Slow Changing Dimensions in a Data Warehouse using Hive and Spark
Hive Project- Understand the various types of SCDs and implement these slowly changing dimensions in Hadoop Hive and Spark.

Tough engineering choices with large datasets in Hive Part - 2
This is in continuation of the previous Hive project "Tough engineering choices with large datasets in Hive Part - 1", where we will work on processing big data sets using Hive.

Hive Project - Visualising Website Clickstream Data with Apache Hadoop
Analyze clickstream data of a website using Hadoop Hive to increase sales by optimizing every aspect of the customer experience on the website from the first mouse click to the last.

Movielens dataset analysis for movie recommendations using Spark in Azure
In this Databricks Azure tutorial project, you will use Spark Sql to analyse the movielens dataset to provide movie recommendations. As part of this you will deploy Azure data factory, data pipelines and visualise the analysis.

Airline Dataset Analysis using Hadoop, Hive, Pig and Impala
Hadoop Project- Perform basic big data analysis on airline dataset using big data tools -Pig, Hive and Impala.

Spark Project-Analysis and Visualization on Yelp Dataset
The goal of this Spark project is to analyze business reviews from the Yelp dataset and ingest the final output of data processing into Elasticsearch. Also, use the visualisation tool in the ELK stack to visualize various kinds of ad-hoc reports from the data.

PySpark Tutorial - Learn to use Apache Spark with Python
PySpark Project-Get a handle on using Python with Spark through this hands-on data processing spark python tutorial.