Big Data Engineer

Company Name: Apple
Location: Austin, Texas
Date Posted: March 8, 2018
Description:

Key Responsibilities

  • You will build systems that every iPhone, iPad, and Mac has interacted with.
  • Apple’s engineering and operations teams will utilize your systems to build the next insanely great product. Do something amazing and be a critical part of a company that everyone recognizes and loves.
  • In this role, you will manage a large-scale, highly available Big Data infrastructure supporting multiple petabytes of data and growing rapidly.
  • You will provide support for both analytics and operational platforms. Hadoop, Spark, Kafka, and object stores are just a few of the technologies involved.
  • You will also write applications to answer complex analytical and real-time operational questions. You will be a key part of the design, architecture, and building of a data platform over a variety of Big Data technologies.
  • As a member of a cross-functional team, you'll have the opportunity to solve challenging big data engineering problems across a broad range of Apple manufacturing services.
  • You will lead innovation by exploring, investigating, recommending, benchmarking, and implementing data-centric technologies for the platform. You will also work closely with DevOps and other teams in SDS.
Qualification:

Key Qualifications

  • Passion for Computer Science and Big Data technologies, and a flexible, creative approach to problem solving.
  • 3+ years of hands-on experience with the Hadoop stack (MapReduce Programming Paradigm, Hive, and Spark).
  • 3 to 5+ years of experience in Java programming.
  • Solid track record of building large-scale systems utilizing Big Data technologies.
  • Experience building large-scale server-side systems with distributed processing algorithms.
  • Experience supporting Hadoop developers and assisting in troubleshooting and optimizing MapReduce jobs.
  • Experience providing hardware architecture guidance, planning and estimating cluster capacity, and creating roadmaps for Hadoop cluster deployment.
  • Extensive experience in the design and development of large-scale applications.
  • Experience with Kafka is a plus.
  • Excellent problem-solving and programming skills; proven technical leadership and communication skills.
  • Aptitude for independently learning new technologies.
  • Plus: Have made active contributions to open source projects such as Apache Hadoop, Spark, Kafka, etc.