Big Data DevOps Engineer
In this Big Data DevOps Engineer role, you will:
* Build and scale the data infrastructure for a global user base of 100s of millions.
* Play an essential role in developing and implementing data pipelines and machine learning training, testing, and inference jobs.
* Create tools to manage data pipelines and R&D projects
* Perform advanced troubleshooting and monitoring of the systems to ensure SLAs are met
* Deploy code as needed
* Work with our amazing data team
What we're looking for:
* Spark experience, ideally (experience with Hadoop and other parts of the Hadoop ecosystem will suffice if you're excited to learn)
* Experience with YARN, Mesos, or other schedulers
* The ability to work on systems at large scale
* AWS experience, including but not limited to EC2, CloudFormation, and S3
* Comfort wrangling JVM-based systems
* Systems knowledge within the Linux ecosystem
* Knowledge of SQL (Redshift, Postgres) and NoSQL (DynamoDB, Mongo, Redis) databases, for both transaction processing and analytics, is a strong plus
What's in it for you:
* Be a part of an early-stage startup with incredible growth opportunities.
* Comprehensive health coverage, competitive salary, 401(k) match, and meaningful equity.
* Unlimited vacation and flexible working hours.
* Daily catered lunches, endless snack supply, kombucha, cold brew, and a variety of beers and wine on tap.
* Basketball court, ping pong, yoga and fitness classes, and many other fun activities.
* Holiday celebrations, beach parties, happy hours, and more.
* Fully customized computer equipment to fit your needs.
* Great amenities within a great building, and a colorful, creative work environment.