Uncubed

Principal Big Data Engineer

Uptake, 600 W Chicago Ave, Chicago, IL 60654



What we do

Uptake harnesses the power of underutilized data to empower businesses to make informed decisions. We partner with industry leaders to build a predictive analytics software platform that grows smarter in one industry because of what we learn in another. The result is a powerful platform that identifies problems before they happen, ultimately saving money, time and lives.

Why Work Here

Uptake is a values-driven organization, and we are excited about what we do. We’re flexible, honest, hardworking, and collaborative. As a team, we bring our diverse backgrounds, beliefs, and experiences together to solve tough, important problems. We support and challenge one another to bring out the best in each of us, and we might have a little fun along the way. We’re also proud to be one of Chicago’s best places to work in 2018 according to Forbes and Great Place to Work Institute.

We offer generous benefits including health, dental, vision, parental leave, 401K match, and unlimited vacation. We are lifelong learners, and our Uptake University program offers training and professional development on a wide variety of topics. We also have many employee-led community groups. Learn more at https://www.uptake.com/careers.

What you’ll do:

As a Big Data Engineer, you’ll be responsible for the architecture of a complex analytics platform that is already changing the way large industrial companies manage their assets. You understand cutting-edge tools and frameworks and can determine the best tool for any given task. You will enable and work with our other developers to apply new technologies in fields such as distributed systems, data ingestion and mapping, and machine learning. We also strongly encourage engineers to tinker with existing tools, and to stay up to date and test new technologies—all with the aim of ensuring that our existing systems don’t stagnate or deteriorate.

Responsibilities:

As a Big Data Engineer, your responsibilities may include, but are not limited to, the following:

● Build a scalable Big Data Platform designed to serve many different use-cases and requirements
● Build a highly scalable framework for ingesting, transforming and enhancing data at web scale
● Develop data structures and processes using components of the Hadoop ecosystem such as Avro, Hive, Parquet, Impala, HBase, Kudu, Tez, etc.
● Establish automated build and deployment pipelines
● Implement machine learning models that enable customers to glean hidden insights about their data

Qualifications:

● Bachelor's degree in Computer Science or related field
● 6+ years of system building experience
● 4+ years of programming experience using JVM based languages
● A passion for DevOps and an appreciation for continuous integration/deployment
● A passion for QA and an understanding that testing is not someone else’s responsibility
● Experience automating infrastructure and build processes
● Outstanding programming and problem solving skills
● Strong passion for technology and building great systems
● Excellent communication skills and ability to work using Agile methodologies
● Ability to work quickly and collaboratively in a fast-paced, entrepreneurial environment
● Experience with service-oriented (SOA) and event-driven (EDA) architectures
● Experience using big data solutions in an AWS environment
● Experience with JavaScript or associated frameworks

Preferred skills:

We value these qualities, but they’re not required for this role:

● Master’s or Ph.D. in a related field
● Experience as an open source contributor
● Experience with Akka, stream processing technologies and concurrency frameworks
● Experience with data modeling
● Experience with Chef, Puppet, Ansible, Salt or equivalent
● Experience with Docker, Mesos and Marathon
● Experience with distributed messaging services, preferably Kafka
● Experience with distributed data processors, preferably Spark
● Experience with Angular, React, Redux, Immutable.js, Rx.js, Node.js or equivalent
● Experience with Reactive and/or Functional programming
● Understanding of Thrift, Avro or protocol buffers

About Uptake

Did you know that as little as 1% of industrial data is being used today? (Source: McKinsey & Company.) With the rise of commoditized sensors, connected technology, massive storage capacity and growing processing power, every asset in every industry is capable of generating valuable data at incredible scale. This key information can answer the most critical questions across your operations and open the door to unprecedented business advantages. At Uptake, our purpose-built products ingest and analyze sensor and enterprise data, transforming it into actionable insights and immediate outcomes. Together with our customers, we drive real business value and set new standards for productive, secure, safe and reliable operations. We believe companies and people should love the technology they experience. We are field engineers, technologists and data scientists who deliver great software that is easy to use.

Want to learn more about Uptake? Visit Uptake's website.