Data Ingestion Engineer
Company Name: AT&T
Location: El Segundo, California
Date Posted: 3rd Jun, 2017
- Strong knowledge base in Java, J2EE, Microservices, Kafka, Hadoop, Spark, Ranger, Knox, and big data technologies.
- Mentoring junior developers to achieve technology goals.
- Working with various cross-functional teams to deliver robust technology solutions.
- Reducing technical debt and providing input to product backlog grooming sessions.
- Experience working with complex software in a parallel processing environment, gained through a combination of academic study and work experience.
- Experience in the following technologies:
- Unix based OS (RHEL/OEL is mandatory)
- Hadoop, Spark, NiFi, Ranger, Knox
- General Kafka architecture: topics (partitions, segments, the Kafka log, replication, assignments, ISR, partition leaders, under-replicated state, offsets), brokers, producers, consumers, Kafka MirrorMaker, and monitoring a Kafka cluster
- Kafka console tools, such as the console consumer/producer, simple consumer, offset checker, dump log segment, and GetOffsetShell; ZooKeeper, znodes, zkCli, ZooKeeper ensembles (leader, follower, fault tolerance), and monitoring ZooKeeper
- Shell/Bash scripting
- DevOps tools: Ansible, Docker
- Graphite/Grafana stack, Kafka Manager, Apache Ambari, tools for monitoring Kafka lag (Burrow, Kafka Offset Monitor, Kafka Lag Monitor), and tools to collect OS metrics, such as collectd
- TCP/IP and understanding how to troubleshoot network connectivity between hosts, how to check local and destination port availability, etc.
- AWS, Rackspace cloud
- CI/CD (Jenkins, Ansible)
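
As a sketch of the Kafka console tooling listed above, the following commands show a typical create/produce/consume/offset-check round trip. This assumes a running broker reachable at localhost:9092, and the topic name "events" is illustrative only; exact flag names (e.g. --bootstrap-server vs. --broker-list/--zookeeper) vary across Kafka versions.

```shell
# Create a test topic with 3 partitions and replication factor 1
# (assumes a broker at localhost:9092; "events" is an example topic name).
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic events --partitions 3 --replication-factor 1

# Produce a message from stdin with the console producer.
echo "hello" | kafka-console-producer.sh \
  --bootstrap-server localhost:9092 --topic events

# Consume from the beginning of the topic with the console consumer.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic events --from-beginning --max-messages 1

# Inspect the latest offset per partition with GetOffsetShell.
kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic events
```

The per-partition offsets printed by GetOffsetShell, compared against committed consumer-group offsets, are the basis of the lag monitoring that tools like Burrow automate.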
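
As one small example of the port-availability checks mentioned above, here is a minimal Bash sketch (the helper name check_port is my own) that probes TCP connectivity to a host:port pair using Bash's built-in /dev/tcp pseudo-device, so no nc or telnet is required:

```shell
#!/usr/bin/env bash
# check_port: report whether a TCP port on a host accepts connections.
# Relies on Bash's /dev/tcp redirection and the coreutils `timeout` command.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Example: probe a port that is almost certainly closed on this machine.
check_port 127.0.0.1 1
```

In practice you would run this from the source host against the destination host and port (e.g. a broker's 9092) to separate network-connectivity problems from application-level ones.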
Good knowledge of the following technologies:
- Elasticsearch
- Confluent Kafka Connect
- Hadoop, HDFS