Data Software Engineer

Postmates

(Bellevue, Washington)
Full Time
Job Posting Details
About Postmates
Postmates is transforming the way local goods move around a city by enabling anyone to get any product delivered in under an hour. Available for iPhone, Android and on the web, the on-demand logistics service connects customers with local couriers, who purchase and deliver goods from any restaurant or store in a city, 24/7.
Summary
As a Data Engineer, you’ll be part of a team responsible for the integrity and accessibility of all of Postmates’ business-critical data. You’ll contribute to our data pipelines, our analytics tools, and our data science and machine learning infrastructure, and help design and scale our architecture to meet future needs. You’ll work with teams across the organization, making sure that engineers have the tools to generate and store data and that business and data science consumers have the information they need at their fingertips. We’re looking for engineers with a proven track record of shipping high-impact tools. We care much more that you understand how to build simple, clear, and reliable tools than that you have experience with any given toolset or pattern. We love learning, and we expect that you will learn new things and teach us new things as we build out the Postmates data infrastructure.
Responsibilities
* Design and build reliable, easy-to-use data pipelines and data systems
* Roll out new tools and features on existing big data storage, processing, and machine learning systems
* Triage, identify, and fix scaling challenges
* Perform cost-benefit analyses of short-term needs vs. long-term data scaling and company growth
* Educate product managers, analysts, and other engineers about how best to use our systems to answer hard business questions and make better decisions using data
Ideal Candidate
**Requirements**

* You have a curiosity about how things work
* You possess strong computer science fundamentals: data structures, algorithms, programming languages, distributed systems, and information retrieval
* You’ve built large-scale data pipeline and ETL tooling before, and have strong opinions about writing beautiful, maintainable, understandable code
* You’ve worked professionally with both streaming and batch data processing tools, and understand the tradeoffs
* You understand the challenges of working with both schema-based and unstructured data, and enjoy the challenge of collecting data flexibly and accurately
* You have extensive experience with at least one RDBMS platform (Postgres, SQL Server, MySQL, etc.)
* You are a strong communicator. Explaining complex technical concepts to product managers, support, and other engineers is no problem for you
* You love it when things work, you understand that things break, and when things do fail you dive in to understand the root causes of failure and fix whatever needs work

**Bonus Points**

* A Master’s degree (or higher) in a technical field (C.S., Math, Physics, Engineering…)
* AWS development and operations experience (EMR, S3, data pipelines, etc.)
* Experience with the Apache ecosystem: Kafka, Spark, Storm, ZooKeeper, etc.
* Experience with the Amazon Redshift data warehouse
* A solid math and statistics background
Compensation and Working Conditions
Benefits included

Additional Notes on Compensation

Competitive salary and generous stock option plan. Medical, dental and vision insurance. We'll provide equipment you need to work efficiently and creatively. Paid parental leave, vacation time and sick time.
