
Full Stack Developer

Cerebri AI, Austin, TX; Toronto, ON; or Washington, DC

Turn data into revenue


The Cerebri AI CVX platform uses the best Artificial Intelligence (AI), Operations Research (OR), and software to provide what the digital age requires: putting a value on a customer's commitment to a brand and its related products. It then uses these insights to drive the sale of products and services. We use AI to answer the fundamental questions of the digital age: Who talks to the customer? Who understands the customer? How do we do this at scale when we have millions of customers?
 
The Cerebri AI CVX platform uses a 10-stage AI software pipeline that manages and processes data from intake through to producing insights and actions, presenting them via our APIs, in our customers' systems, or in our UX. The pipeline's first five stages manage data intake, a crucial step in producing great insights. One customer journey (CJ) per customer means all models targeting CX and revenue KPIs, and the related next best actions (NBAs), use the same journeys.
 
We work with companies selling to over 200 million consumers and have filed 24 patents on the Cerebri AI platform. We now have 40 employees across three offices in Austin, Toronto, and Washington, DC. Over 80% of the staff are in technical roles in data science and software engineering.
 
How do we do this? We hire the best data scientists, mathematicians, and software developers and work as a cross-disciplinary team/gang/clan. We work hard, laugh hard, and impress our peers and clients. Because we can. And because we want to. To learn more, visit cerebriai.com. In the meantime, if you think you have what it takes, give us a spin and upload your resume.
 
"Cerebri AI was recognized as 2019 Cool Vendor for Customer Journey Analytics by Gartner"

"Cerebri AI was named a 2019 Cool Vendor in Artificial Intelligence for Customer Analytics by Gartner"


Role: As a Full Stack Developer (Scala/Java), you will play an integral role in the development of our flagship AI product offerings for the enterprise. You will be part of a small, focused team working in a fast-paced environment.

Responsibilities

  • Developing reactive applications that manage large datasets in conjunction with machine learning models trained against that data.
  • Maintaining automated test coverage for all code you produce.
  • Contributing to design discussions related to the product.
  • Building and maintaining Continuous Integration (CI) pipelines to maximize efficiency and ensure quality in the development process.
  • Learning about the latest and greatest advancements in machine learning and data engineering while simultaneously looking for opportunities to apply them in our products.
  • Meeting hard product deliverable deadlines set in a rapidly evolving startup environment.

Qualifications

  • Excellent Java programming skills with two (2) or more years of experience.
  • Experience in Python/PySpark.
  • Machine learning and/or ETL experience with Apache Spark.
  • Working knowledge of relational databases (Postgres, Oracle), distributed clusters (Hive, Ignite), graph databases (OrientDB, Neo4j), etc.
  • Experience setting up automated tests that provide full code coverage and building/maintaining Continuous Integration (CI) pipelines (e.g. Jenkins, Travis CI, CircleCI).
  • Experience operating in a “full stack” type role, with the ability to be flexible with the tasks you work on day-to-day.
  • Familiarity with Agile methodology and Scrum framework for managing processes.
  • Proficiency in managing software projects in Git.
  • Excellent verbal and written communication skills.
  • Bachelor's Degree in Computer Science (or related area).

Nice to haves...

  • Experience with Scala, AngularJS, React, D3, Lightbend Reactive Platform (Play and Akka), HTML, CSS, Grunt.
  • Understanding of basic machine learning model configurations (e.g. Random Forest, Naïve Bayes, Neural Networks) and common API frameworks that can be used to deploy them (e.g. Spark MLlib, Python scikit-learn, TensorFlow); see the sketch after this list.
  • Experience in deploying statistical models for use in applications.
  • Familiarity with common neural network configurations and the problems they can be used to solve.
  • Experience with the Atlassian suite (JIRA, Confluence, BitBucket).
  • Any other related experience with Big Data, artificial intelligence, natural language processing, machine learning and/or deep learning, or predictive analytics.
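
To make the nice-to-haves above concrete, here is a minimal, purely illustrative sketch of training and persisting a Random Forest model with Spark MLlib in Scala. The input file, column names, and model path are hypothetical placeholders, not Cerebri AI specifics.

    // Illustrative only: a small Spark MLlib pipeline in Scala.
    // The CSV file, feature columns, and output path below are hypothetical.
    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.classification.RandomForestClassifier
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.sql.SparkSession

    object ChurnModelSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("ChurnModelSketch").getOrCreate()

        // One row per customer: numeric features plus a 0/1 label.
        val data = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("customers.csv")

        // Spark ML estimators expect the features in a single vector column.
        val assembler = new VectorAssembler()
          .setInputCols(Array("recency", "frequency", "monetary"))
          .setOutputCol("features")

        val rf = new RandomForestClassifier()
          .setLabelCol("label")
          .setFeaturesCol("features")
          .setNumTrees(100)

        val pipeline = new Pipeline().setStages(Array(assembler, rf))

        val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)
        val model = pipeline.fit(train)

        // Area under ROC on the held-out split.
        val auc = new BinaryClassificationEvaluator()
          .setLabelCol("label")
          .evaluate(model.transform(test))
        println(s"Test AUC: $auc")

        // Persist the fitted pipeline so a serving application can load it later.
        model.write.overwrite().save("models/churn-rf")

        spark.stop()
      }
    }

A fitted pipeline persisted this way can later be loaded with PipelineModel.load and exposed behind an API, for example from a Play or Akka HTTP service, which is one common way statistical models end up deployed inside applications.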

About Cerebri AI

Cerebri AI provides AI and machine learning solutions that help enterprises grow top-line revenue by giving them a 1:1 relationship with their customers. We do this by processing internal and external customer data and determining the dollar value a customer places on a vendor, its products, assets, etc. We also monetize a critical variable in any revenue situation, the customer's ability to pay, so things such as up-selling opportunities can be clearly scoped and delivered. We call the results Customer Value Indexes (CVIs) for brands, vendors, assets, and financing.
