Emerging Technology Resumes: How to make a lasting impact

A good Hadoop big data resume might not be enough to get you selected, but a bad one is enough to get you rejected. Many big data professionals consider writing a Hadoop resume an exercise in psychological warfare. Are you one of them? If you want to move your big data Hadoop resume from the slush pile to the "YES" pile, you must follow some important guidelines to ensure it does not land in the "NO" pile of CVs. This article provides some tips to make your big data Hadoop resume stand out from the stack and help you land the Hadoop job you want.

For the past couple of years, I have been training aspiring big data enthusiasts across the globe on the Hadoop stack. Two really common questions that pop up midway into the course, or close to the end, are: "How do I tailor my resume to land a job?" and "I am learning this for the first time — how do I showcase that learning to get a job?"

While these are valid and critical questions, the answer is rather complex in today's IT scenario. There are several articles by eminent and experienced recruiters and hiring managers on what they like or dislike about resumes. These articles are quite comprehensive, and clearly cover the aesthetic hygiene, flow, and relevance of a resume. Therefore, I am not going to talk about how to write a resume and get noticed; good resources for that information include LinkedIn Pulse articles and careercup.com, among other sources.

Build hands-on projects in Big Data and Hadoop

Tips for Writing an Effective Big Data Hadoop Resume

Apart from tailoring your resume, there are four steps you must take if you are trying to get a job in emerging technology domains, including but not restricted to big data, mobile development, and cloud computing.

1. Carefully outline the roles and responsibilities:

The space of designation nomenclature has become really creative and innovative in the last few years. There is no way to generalize a Software Engineer or an ETL Architect in the industry today. Therefore, it requires a bit of searching and introspection to zero in on the job profiles you wish to apply for. Research the roles and responsibilities and shortlist potential positions. The introspection is needed to figure out whether you have the necessary skills, or the capacity to learn them, to take up the new role.

2. Make your resume highlight the required core skills:

Every designation you come across on job portals will be searching for 'demi-gods' among tech professionals: multiple programming languages, multiple software tools, multiple technology platforms; there is no end to the list. Identify the skills you already have from the list of desired skills and highlight them on your profile. Try to figure out which skills are most important for the role and make an attempt to learn the ones you lack.


3. Document each and every step of your efforts:

This is possibly one of the most important areas to focus on. There are several online platforms that allow you to showcase your skills while you contribute and collaborate. Getting shortlisted for a job interview takes much more than just having the skills. Here is what I have seen work time and again for professionals in my network.

I. Active experimentation and blogging about newly learned skills:

You could use WordPress or Blogger, or send your blogs to manisha@dezyre.com and we will publish them. Add the blog links to your resume.


II. Answer questions on forums:

If you have figured out certain aspects of working with new technologies, actively search for and help answer questions on forums like Stack Overflow on the same topics.

III. Maintain a code base and collaborate on GitHub:

Maintain all your experimental code on GitHub and contribute to projects that interest you. Get some friends to work on the project with you. Mention the GitHub project link on your resume.
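The GitHub workflow above can be sketched in a few commands. This is a minimal, hedged example; the repository name, file, and remote URL are all hypothetical placeholders you would replace with your own:

```shell
# Minimal sketch: publishing an experiment to GitHub.
# "hadoop-wordcount" and the remote URL below are hypothetical examples.
set -e
mkdir -p hadoop-wordcount && cd hadoop-wordcount
git init -q
echo "# Hadoop WordCount experiments" > README.md
git add README.md
# Identity flags are set inline here for illustration; normally you would
# configure user.name/user.email once with `git config --global`.
git -c user.name="Your Name" -c user.email="you@example.com" \
    commit -q -m "Add README for WordCount experiment"
# Point the local repo at GitHub and push (uncomment once the remote exists):
# git remote add origin https://github.com/<your-username>/hadoop-wordcount.git
# git push -u origin main
```

Once the repository is public, the same URL you would pass to `git remote add` is the link to mention on your resume.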


4. Purposefully Network:

Be genuine and connect with people in the technical domain you are trying to get into. Engage in meaningful conversations and share your work. Collect feedback, and be open to assisting and consulting for free.

The above steps should help you make the transition smoothly. If you are looking for further guidance or training on emerging technologies, check www.dezyre.com or send an email to manisha@dezyre.com.


Relevant Projects

Design a Hadoop Architecture
Learn to design Hadoop Architecture and understand how to store data using data acquisition tools in Hadoop.

Event Data Analysis using AWS ELK Stack
This Elasticsearch example deploys the AWS ELK stack to analyse streaming event data. Tools used include NiFi, PySpark, Elasticsearch, Logstash, and Kibana for visualisation.

Tough engineering choices with large datasets in Hive Part - 1
Explore efficient Hive usage in this Hadoop Hive project using various file formats such as JSON, CSV, ORC, and Avro, and compare their relative performance.

Tough engineering choices with large datasets in Hive Part - 2
This is in continuation of the previous Hive project "Tough engineering choices with large datasets in Hive Part - 1", where we will work on processing big data sets using Hive.

Spark Project-Analysis and Visualization on Yelp Dataset
The goal of this Spark project is to analyze business reviews from the Yelp dataset and ingest the final output of data processing into Elasticsearch. Also, use the visualisation tool in the ELK stack to visualize various kinds of ad hoc reports from the data.

Create A Data Pipeline Based On Messaging Using PySpark And Hive - Covid-19 Analysis
In this PySpark project, you will simulate a complex real-world data pipeline based on messaging. This project is deployed using the following tech stack - NiFi, PySpark, Hive, HDFS, Kafka, Airflow, Tableau and AWS QuickSight.

Yelp Data Processing using Spark and Hive Part 2
In this spark project, we will continue building the data warehouse from the previous project Yelp Data Processing Using Spark And Hive Part 1 and will do further data processing to develop diverse data products.

PySpark Tutorial - Learn to use Apache Spark with Python
PySpark Project-Get a handle on using Python with Spark through this hands-on data processing spark python tutorial.

Airline Dataset Analysis using Hadoop, Hive, Pig and Impala
Hadoop project: perform basic big data analysis on an airline dataset using the big data tools Pig, Hive, and Impala.

Hadoop Project-Analysis of Yelp Dataset using Hadoop Hive
The goal of this hadoop project is to apply some data engineering principles to Yelp Dataset in the areas of processing, storage, and retrieval.