The role is primarily responsible for the hands-on coding and programming of Hadoop applications.
The candidate will be responsible for installing, configuring, and maintaining multiple Hadoop clusters across the organization. As a Big Data Administrator, you will work with development teams to optimize Hadoop services and deploy code to multiple environments. This is a unique opportunity for a highly motivated individual to work on a next-generation decision support system.
- 4+ years of Python or Java/J2EE development experience
- 1+ years of experience with Hadoop and big data projects
- 1+ years of experience with Hadoop administration
- Ability to write MapReduce jobs
- Ability to set up, maintain, and implement Kafka topics and processes
- Understanding of and hands-on experience implementing Flume processes
- Good knowledge of database structures, theories, principles, and practices
- Ability to develop code in an environment secured with a local KDC (Kerberos) and OpenLDAP
- Hands-on experience loading data with Sqoop
- Ability to implement workflows and schedulers in Oozie
- Experience working with AWS components (EC2, S3, SNS, SQS)
- Analytical and problem-solving skills, applied to the Big Data domain
- Proven understanding and hands on experience with Hadoop, Hive, Pig, Impala, and Spark
- Strong grasp of multi-threading and concurrency concepts
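As a rough illustration of the MapReduce skill listed above, the sketch below simulates a word-count job in plain Python, with no Hadoop dependencies. The function names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative, not part of any Hadoop API; in a real cluster these roles correspond to the Mapper, the framework's shuffle/sort step, and the Reducer.

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs for each input line, as a Hadoop Mapper would
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def shuffle(pairs):
    # Group values by key, mirroring the framework's shuffle/sort step
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts for each word, as a Hadoop Reducer would
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["the"])  # 3
```

Candidates comfortable with this model should be able to express the same logic as Hadoop Mapper/Reducer classes in Java or via Hadoop Streaming in Python.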