Senior Data Engineer

Slack, San Francisco

Slack's cloud-based collaboration tools and services are used worldwide.

Slack is looking for expert data engineers to join our Data Engineering team. In this role, you will work cross-functionally with business domain experts, analytics, and engineering teams to design and implement our Data Warehouse model. You will design, implement, and scale data pipelines that transform billions of records into actionable data models that drive insights.

You will lead initiatives to formalize data governance and management practices and to rationalize our information lifecycle and key company metrics. You will provide mentorship and hands-on technical support to build trusted, reliable domain-specific datasets and metrics.

The ideal candidate will have deep technical skills and be comfortable contributing to a nascent data ecosystem and building a strong data foundation for the company. They will be a self-starter, detail- and quality-oriented, and passionate about having a huge impact at Slack.


  • Translate business requirements into data models that are easy to understand and used by different disciplines across the company. Design, build, and scale pipelines that deliver data with measurable quality within agreed SLAs
  • Partner with business domain experts, data analysts and engineering teams to build foundational data sets that are trusted, well understood, aligned with business strategy and enable self-service
  • Champion the overall strategy for data governance, security, privacy, quality, and retention that satisfies business policies and requirements
  • Own and document foundational company metrics with a clear definition and data lineage
  • Identify, document and promote best practices


  • Bachelor's degree in Computer Science, Engineering or related field, or equivalent training, fellowship, or work experience
  • 5+ years of experience in data architecture, data modeling, master data management, and metadata management
  • A consistent track record of collaborating closely with business partners and crafting data solutions to meet their needs
  • Strong experience scaling and optimizing schemas and performance-tuning SQL queries and ETL pipelines in OLTP, OLAP, and data warehouse environments
  • Deep understanding of relational and NoSQL data stores, methods, and approaches (logging, columnar storage, star and snowflake schemas, dimensional modeling)
  • Proficiency with object-oriented and/or functional programming languages is a big plus (e.g., Java, Scala, Python, Go)
  • Hands-on experience with Big Data technologies (e.g., Hadoop, Hive, Spark)
  • Excellent written and verbal communication and interpersonal skills, able to effectively collaborate with technical and business partners
  • Excellent understanding of engineering trade-offs and how to weigh them
  • Demonstrated ability to move between the big picture and implementation details

About Slack

Empathy. Courtesy. Playfulness. Craftsmanship. Solidarity — these are some of the values we live by as a company. We work by them, too: we’re building a platform and products we believe in — knowing there is real value to be gained from helping people, wherever they are, simplify whatever it is that they do and bring more of themselves to their work.

We’re building a strong, diverse team of curious, creative people who want to find a purpose in their work and support each other in the process. We work hard and we play to win… within normal business hours. And then we go home.

That balance is important: It enables us to truly do the best work of our lives. As a result, we create a place where all kinds of work happens — and happens well — all while working alongside people we respect and admire.

Want to learn more about Slack? Visit Slack's website.