Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!
Customer Operations Engineers work at the intersection of our client services and engineering teams and drive customer success by helping identify and resolve critical business issues. In this role you’ll interact directly with our customers to provide software development and operations expertise, leveraging deep knowledge of best practices in the use of Apache Kafka, the broader Confluent Platform, and complementary systems. You will be hands-on in fixing issues seen by Confluent customers and contributing fixes back to the open source community. Throughout all of these interactions, you’ll build strong relationships with customers, ensuring exemplary support and timely resolution of customer requests.
A typical week at Confluent in this role may involve:
Working with customers to resolve a wide range of issues with their Confluent deployments
Contributing to process development: we’re a small team, so we’re looking for people who want to help us lay the foundation for growing efficiently and with a best-in-class culture
Communicating with our core engineering team to provide real-time product feedback from the field
Improving product documentation and authoring knowledge base articles
Creating and reviewing product demos and internal tooling
Working closely with the team behind Apache Kafka!
Required skills and experience:
Excitement about learning streaming data and becoming a domain expert in Apache Kafka
Experience in diagnosing, reproducing, and resolving customer issues
Desire to make customers successful through direct interaction
Two out of these three:
Experience troubleshooting applications running on Linux (resource contention, network bottlenecks, etc.)
Operational knowledge of Java applications (jstack, jmap, etc.)
Experience with at least one mainstream distributed system (e.g., Kafka, Hadoop, or Cassandra)
Culture is a huge part of Confluent. We’re searching for the best people, who not only excel at their role but also contribute to the health, happiness, and growth of the company. Inclusivity and openness are important traits, and we hold regular company-wide and team events. Here are some of the personal qualities we’re looking for:
Smart, humble and empathetic
Hard-working, you get things done
Hungry to learn in an ever-evolving field
Adaptable to the myriad challenges each day can present
Inquisitive and not afraid to ask all the questions, no matter how basic
Ready to roll up your sleeves and help others, getting involved in projects where you feel you can add value
Striving for excellence in your work, your team, and the company
Come and build with us. We are one of the fastest-growing software companies in the market, built on the tenets of transparency, direct communication, and inclusivity. Come meet the streams dream team and have a direct impact on how we shape Confluent.
Founded by the team that built Apache Kafka®, Confluent builds a streaming platform that enables companies to easily access data as real-time streams.
Every byte of data has a story to tell, something of significance that will inform the next thing to be done. In a data-driven enterprise, how we move our data becomes nearly as important as the data itself. With greater speed and agility, data’s value increases exponentially.
From its very early days, we have open-sourced Apache Kafka® and led it to impressive industry-wide adoption across several thousand companies. Now we are focused on building a streaming platform that helps other companies get easy access to enterprise data as real-time streams.