Event Date
Feb - 2017
07:00pm - 09:30pm PST

What will you learn

  • What the small file problem in Hadoop is
  • How it arises (batch and streaming modes)
  • Solution (Streaming): Using Flume
  • Solution (Streaming): Preprocessing and storing in a NoSQL database
  • Solution (Batch): Merging before storing in HDFS
  • Solution (Batch): SequenceFile
  • Solution (Batch): Compression
  • Solution (Batch): CombineFileInputFormat
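To make the batch-side ideas concrete before the session, here is a minimal local sketch of the "merge before storing" approach: many small files are packed into one container keyed by file name, much as a Hadoop SequenceFile stores (key, value) records. The class name and the in-memory map are illustrative only; real code would write an org.apache.hadoop.io.SequenceFile to HDFS instead.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

public class SmallFilePacker {

    // Read every regular file under `dir` into one (name -> bytes) map:
    // the in-memory analogue of a single container file, replacing one
    // NameNode entry per small file with one entry overall.
    static Map<String, byte[]> pack(Path dir) throws IOException {
        Map<String, byte[]> container = new LinkedHashMap<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
            for (Path f : files) {
                if (Files.isRegularFile(f)) {
                    container.put(f.getFileName().toString(), Files.readAllBytes(f));
                }
            }
        }
        return container;
    }

    public static void main(String[] args) throws IOException {
        // Simulate an ingest that produced many tiny files.
        Path dir = Files.createTempDirectory("small-files");
        for (int i = 0; i < 5; i++) {
            Files.write(dir.resolve("part-" + i + ".txt"),
                        ("record " + i).getBytes(StandardCharsets.UTF_8));
        }
        Map<String, byte[]> container = pack(dir);
        System.out.println(container.size()); // five small files, one container
    }
}
```

The same idea underlies the SequenceFile and compression solutions covered in the session: fewer, larger objects mean less NameNode metadata and fewer map tasks.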

Project Description

Hadoop's distributed file system was engineered to favor a few large files over many small ones. In practice, however, we rarely control how data arrives: much of what is ingested into a data infrastructure comes in small pieces, and whether or not we are implementing a data lake on HDFS, we have to deal with these inputs.

In this Hackerday, we continue the data engineering series by discussing and implementing various ways to solve Hadoop's small file problem.

We will start by defining the problem and showing how easily it arises, then cover how to identify bottlenecks in a cluster caused by small files, and finally work through a variety of ways to solve them.
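As a taste of the identification step, the sketch below counts how many files fall below the block size on a local directory tree. The 128 MB figure (the HDFS default) and the directory walk are illustrative assumptions; on a real cluster you would ask the NameNode instead, for example via FileSystem.listFiles or `hdfs dfs -count`.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class SmallFileAudit {
    // HDFS default block size; files much smaller than this waste
    // NameNode memory and spawn under-filled map tasks.
    static final long BLOCK_SIZE = 128L * 1024 * 1024;

    // Count regular files under `root` smaller than the block size.
    static long countSmallFiles(Path root) throws IOException {
        try (Stream<Path> files = Files.walk(root)) {
            return files.filter(Files::isRegularFile)
                        .filter(p -> sizeOf(p) < BLOCK_SIZE)
                        .count();
        }
    }

    // Files.size throws a checked exception, so wrap it for stream use.
    static long sizeOf(Path p) {
        try {
            return Files.size(p);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A high ratio of small files to total files from an audit like this is the usual first sign that one of the merging, SequenceFile, or CombineFileInputFormat techniques is needed.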




Senior Developer at Entelect
Cloudera Certified Spark and Hadoop Developer

I am passionate about software development, databases, data analysis and the Android platform. My native language is Java, but no one has stopped me so far from learning and using Angular and Node.js. Data and data analysis are thrilling, and so are my experiences with SQL on Oracle, Microsoft SQL Server, Postgres and MyS…

What is Hackerday?

Stay updated in technology trends by working on projects

Live online coding sessions led by industry experts

Build 2-4 projects a month, each lasting 6 hours and designed to teach you advanced concepts

Code in groups and connect with your community