
Social Science Researcher

OpenAI, San Francisco

Discovering and enacting the path to safe artificial general intelligence.


As a social science researcher at OpenAI, you'll explore the human side of AI safety algorithms that learn by asking humans questions. Eventually these algorithms will involve both humans and AI, but we believe we can gain knowledge early by modeling them with humans alone. This will involve conducting experiments that test whether the algorithms are likely to work with people, and improving the human aspects of the algorithms to make them work better.

For more details, see AI safety needs social scientists.

A caveat: while we believe social scientists are likely to have an important role to play in long-term AI safety, this would be a new line of research involving significant uncertainty. The safety algorithms being modeled may change over time (due to knowledge gained on either the ML side or the human side), and the research may need to shift accordingly. The specific research questions may be quite different from traditional social science work, though we expect skills and knowledge from other areas to be important. Candidates for this role should be cognizant of this uncertainty. The flip side of this uncertainty is a field-building opportunity: this is a new area to be explored, mapped, and improved.

Requirements

  • Track record of rigorous experiments with humans in a field of social science (possibilities include experimental psychology, cognitive science, economics, political science, social psychology, etc.), as demonstrated by one or more first-author publications or projects.
  • Experience with statistics for experimental design and analysis.
  • Proven interest in long-term AI safety and AI alignment.

Responsibilities

  • Collaborate with machine learning researchers to invent and tune AI alignment algorithms that are well matched to the behavior of real people.
  • Conduct human-only experiments testing whether these algorithms perform well with humans, replacing AI agents with humans for modeling purposes.  This includes detailed experiment design, assembling and organizing people to participate in experiments, overseeing or building any necessary tool support for experiments, and analyzing results.
  • Plan future research in this area, anticipating issues that might arise due to known or suspected human biases or weaknesses.
  • As ML capabilities progress, work with ML researchers to merge knowledge gained on the human side with the ML side.

About OpenAI

We’re building safe Artificial General Intelligence (AGI), and ensuring it leads to a good outcome for humans. We believe that unreasonably great results are best delivered by a highly creative group working in concert.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Benefits
  • Health, dental, and vision insurance for you and your family
  • Unlimited time off (we encourage 4+ weeks per year)
  • Parental leave
  • Flexible work hours
  • Lunch and dinner each day
  • 401(k) plan

About OpenAI

OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence. Our mission is to build safe AGI and ensure AGI's benefits are as widely and evenly distributed as possible. We expect AI technologies to be hugely impactful in the short term, but their impact will be outstripped by that of the first AGIs.

Our full-time staff of 60 researchers and engineers is dedicated to working towards our mission regardless of the opportunities for selfish gain which arise along the way. We focus on long-term research, working on problems that require us to make fundamental advances in AI capabilities. By being at the forefront of the field, we can influence the conditions under which AGI is created. As Alan Kay said, "The best way to predict the future is to invent it."

We publish at top machine learning conferences, open-source software tools for accelerating AI research, and release blog posts to communicate our research. We will not keep information private for private benefit, but in the long term we expect to create formal processes for keeping technologies private when there are safety concerns.
