
Big Data Developer

Company Name: Perficient
Location: US-CO-Denver
Date Posted: 26th Jun, 2017
Description:
  • Participate in technical planning and requirements-gathering phases; design, code, test, troubleshoot, and document engineering software applications.
  • Ensure that the technical software development process is followed on the project, and stay familiar with industry best practices for software development.
  • Demonstrate the ability to adapt and work with team members of various experience levels.
  • Design end-to-end cloud data solutions and data streams with the tools of the trade: Hadoop, Spark, Storm, Hive, Pig, AWS (EMR, Redshift, S3, etc.) / Azure (HDInsight), and data lake design.
  • Build large-scale, fault-tolerant systems with scalability and high data throughput, using tools like Kafka, Storm, Flink, and Spark, and NoSQL platforms like HBase and DataStax (see the illustrative sketch after this list).
  • Build NoSQL solutions with an understanding of graph, key-value, tuple-store, columnar, and in-memory functionality.
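
The Kafka-plus-Spark streaming stack named above is a common pairing. Purely as an illustration (none of the code below comes from the posting), here is a minimal Spark Structured Streaming sketch in Scala that reads from a hypothetical Kafka topic called "events" and counts records per minute; it assumes the spark-sql-kafka connector is on the classpath and a broker at localhost:9092.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object EventStreamSketch {
      def main(args: Array[String]): Unit = {
        // Local session for illustration; a real deployment would run on EMR or HDInsight.
        val spark = SparkSession.builder()
          .appName("event-stream-sketch")
          .master("local[*]")
          .getOrCreate()

        // Subscribe to an assumed Kafka topic named "events".
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load()

        // Kafka values arrive as bytes; cast to string and count records per 1-minute window.
        val counts = raw
          .selectExpr("CAST(value AS STRING) AS value", "timestamp")
          .groupBy(window(col("timestamp"), "1 minute"))
          .count()

        // Print running counts to the console; a production job would write to S3, HDFS, or a NoSQL store.
        counts.writeStream
          .outputMode("complete")
          .format("console")
          .start()
          .awaitTermination()
      }
    }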

Qualifications:

Required Qualifications:

  • 3+ years of experience in end-to-end cloud data solutions and data stream design, with experience in the tools of the trade: Hadoop, Storm, Hive, Pig, AWS (EMR, Redshift, S3, etc.) / Azure (HDInsight), and data lake design
  • 3+ years of experience building large-scale, fault-tolerant systems with scalability and high data throughput, using tools like Kafka, Storm, Flink, and Spark, and NoSQL platforms like HBase and DataStax
  • 3+ years of experience building and managing hosted big data architectures, with toolkit familiarity in Hadoop along with Oozie, Sqoop, Pig, Hive, Flume, HBase, Avro, Parquet, Storm, Spark, and NiFi
  • 3+ years of experience in data queuing and transactional management using tools like Kafka and ZooKeeper (a minimal producer sketch follows this list)
  • 3+ years of experience with NoSQL solutions and an understanding of graph, key-value, tuple-store, columnar, and in-memory functionality
  • 2+ years of experience with Team Foundation Server, JIRA, GitHub, and other code management toolsets
  • Strong hands-on knowledge of languages such as Java, Scala, Ruby, Perl, and Python
  • Hands-on experience with DevOps solutions like Jenkins, Puppet/Chef/Ansible, CloudFormation, Docker, and microservices architectures
  • Operations support with a solid understanding of release, incident, and change management
  • A self-starter and team player, capable of working with a team of strategists, co-developers, and business/data analysts
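
As a companion to the Kafka/ZooKeeper requirement above, the following is a minimal, hypothetical Kafka producer sketch in Scala using the standard Java client (the broker address localhost:9092, the topic name "events", and the record contents are illustrative assumptions, not details from the posting).

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    object ProducerSketch {
      def main(args: Array[String]): Unit = {
        // Minimal producer configuration; the broker address is assumed.
        val props = new Properties()
        props.put("bootstrap.servers", "localhost:9092")
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        // Wait for acknowledgement from all in-sync replicas for durability.
        props.put("acks", "all")

        val producer = new KafkaProducer[String, String](props)
        try {
          // Send a single JSON record keyed by a device id to the assumed "events" topic.
          producer.send(new ProducerRecord[String, String]("events", "device-42", """{"reading": 17.3}"""))
          producer.flush()
        } finally {
          producer.close()
        }
      }
    }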

Preferred Qualifications:

  • Participate in and lead design sessions, demos, prototype sessions, testing, and training workshops with business users and other IT associates
  • Ability to learn and develop skills as needed to support services outside of this job description, demonstrated with prior examples
  • Hands-on experience with DevOps solutions like Puppet, CloudFormation, Docker, and microservices
  • Familiarity with ETL and data transfer technologies like Talend, Informatica, SnapLogic, and/or NiFi
  • Ability to work collaboratively within a team, mentoring and training junior team members
  • Working knowledge of building and working with data governance standards
  • Passion for working with open-source technologies as well as commercialized platforms

Preferred Education/Skills:

  • Bachelor's degree with a minimum of 3 years of relevant experience, or equivalent
  • Certification in any big data technology is preferred