
Big Data - Software Engineer

Company Name: JCPenney
Location: Plano, Texas
Date Posted: 22nd Nov, 2016
Description:

• Devise scalable, maintainable and reliable services that process very large quantities of structured and unstructured data
• Expertise in data loading, data mining & analysis, and developing and fine-tuning algorithms based on data insights on the Big Data platform
• Key deliverables include processing, integrating, monitoring and alerting modules that fit into a unified and reliable Big Data infrastructure
• Architecture, application design, and software development to build algorithms that drive a customized customer experience and to build the commerce and catalog platforms in digital channels
• Performance engineering of data storage and retrieval from the Big Data platform to drive the digital customer experience

Qualification:

Core Competencies & Accomplishments:

• Understanding of software engineering best practices, object-oriented analysis & design, design patterns, and machine learning algorithms
• Expertise in building algorithms on NoSQL and Big Data platforms
• Expertise in data loading, data mining & analysis, and building algorithms based on data insights in Big Data platforms
• Expertise in Cassandra or the Hadoop platform is critical. Hands-on experience with data modeling, data loading, optimization, and warehousing techniques and technologies such as IBM Netezza, Hive, Pig, NoSQL, MongoDB, Vertica, or Cassandra
• Ability to “think” in MapReduce: write parallel algorithms using the MapReduce paradigm and build applications that chain multiple MapReduce jobs (see the sketch after this list)
• Expertise in Java, a good understanding of the design of Hadoop, and knowledge of Bayesian statistics, k-means clustering, and neural networks
• Familiarity with the following Hadoop components: Hadoop Common, HDFS, HBase, and YARN
• Ability to troubleshoot problems with MapReduce applications and diagnose performance bottlenecks in MapReduce jobs
• Knowledge of in-memory high-speed cluster computing technologies such as Apache Spark or Storm
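
As a rough illustration of the chained-MapReduce requirement above, the sketch below wires two Hadoop jobs together in Java: a standard word count followed by a second job that regroups words by frequency. This is a minimal, hypothetical example that assumes the stock org.apache.hadoop.mapreduce API; the class names (ChainedWordCount, TokenizerMapper, SumReducer, InvertMapper) and the three command-line paths are illustrative only and are not part of the posting.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainedWordCount {

    // Job 1, map side: emit (word, 1) for every token in the input line.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Job 1, reduce side (also used as a combiner): sum the counts per word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    // Job 2, map side: invert "word <tab> count" lines to (count, word) so the
    // shuffle groups and sorts words by frequency.
    public static class InvertMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] parts = value.toString().split("\t");
            if (parts.length == 2) {
                context.write(new IntWritable(Integer.parseInt(parts[1])), new Text(parts[0]));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path(args[0]);
        Path intermediate = new Path(args[1]);   // hand-off directory between the two jobs
        Path output = new Path(args[2]);

        Job count = Job.getInstance(conf, "word count");
        count.setJarByClass(ChainedWordCount.class);
        count.setMapperClass(TokenizerMapper.class);
        count.setCombinerClass(SumReducer.class);
        count.setReducerClass(SumReducer.class);
        count.setOutputKeyClass(Text.class);
        count.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(count, input);
        FileOutputFormat.setOutputPath(count, intermediate);

        // Chain: submit the second job only after the first one succeeds; its
        // input is the first job's output directory.
        if (!count.waitForCompletion(true)) {
            System.exit(1);
        }

        Job invert = Job.getInstance(conf, "group by frequency");
        invert.setJarByClass(ChainedWordCount.class);
        invert.setMapperClass(InvertMapper.class);
        invert.setNumReduceTasks(1);             // identity reduce; one reducer gives a global sort by count
        invert.setOutputKeyClass(IntWritable.class);
        invert.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(invert, intermediate);
        FileOutputFormat.setOutputPath(invert, output);

        System.exit(invert.waitForCompletion(true) ? 0 : 1);
    }
}

The only chaining mechanism here is that the second job reads the first job's output directory and is submitted only after waitForCompletion reports success; for longer pipelines, Hadoop's JobControl or an external scheduler such as Apache Oozie would typically manage the dependency graph instead.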