Head of Security

OpenAI, San Francisco

Discovering and enacting the path to safe artificial general intelligence.

OpenAI is pushing artificial intelligence to unprecedented scale. We have a large cloud footprint and run some of the biggest Kubernetes clusters in the world. As our scale has grown, so has the surface area we need to protect. While advanced AI can benefit the world, in the wrong hands, it can also be used maliciously.

Your job will be to protect our work from those who seek to misuse it.

We’re looking for a Head of Security to build out security engineering efforts across our organization. You will define our security strategy for application, cloud, and corporate security, create models for both external and insider threats, hire a small but focused team, and work alongside them to protect our unique information assets.

We’re a small company and we want to stay small. We need a hands-on leader who is excited to work on our hardest technical problems and who executes effectively and relentlessly to keep our systems secure.

You will

  • Define OpenAI’s security roadmap
  • Get hands-on with implementation, design, and execution
  • Create threat models for both external and insider threats
  • Work directly with our research teams to protect our core assets
  • Lead audits of our internal security policies
  • Build and evangelize security policies and best practices

You may be a fit for this role if you have

  • 3+ years of operational experience leading security teams in a fast-paced environment.
  • Strong experience protecting at least one cloud platform and a willingness to become an expert in Azure and our application infrastructure.
  • Deep knowledge of attack surfaces for enterprise systems and services.
  • Expertise in thinking through insider threat scenarios.
  • Ability to work productively across different groups to promote effective security across research and development teams, people operations, and executive leadership.

About OpenAI

OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence (AGI). Our mission is to build safe AGI that leads to a good outcome for humans, and to ensure AGI’s benefits are as widely and evenly distributed as possible. We expect AI technologies to be hugely impactful in the short term, but their impact will be outstripped by that of the first AGIs. Our full-time staff of 60 researchers and engineers works toward this mission regardless of the opportunities for selfish gain that arise along the way.

We focus on long-term research, working on problems that require us to make fundamental advances in AI capabilities. By being at the forefront of the field, we can influence the conditions under which AGI is created. As Alan Kay said, "The best way to predict the future is to invent it." We publish at top machine learning conferences, open-source software tools for accelerating AI research, and release blog posts to communicate our research. We will not keep information private for private benefit, but in the long term we expect to create formal processes for keeping technologies private when there are safety concerns.

We believe that unreasonably great results are best delivered by a highly creative group working in concert. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

This position is subject to a background check for convictions directly related to its duties and responsibilities. Only job-related convictions will be considered, and they will not automatically disqualify a candidate. Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

Benefits

  • Health, dental, and vision insurance for you and your family
  • Unlimited time off (we encourage 4+ weeks per year)
  • Parental leave
  • Flexible work hours
  • Lunch and dinner each day
  • 401(k) plan
