But the governance of the most powerful systems, as well as decisions regarding their deployment, must be subject to strong public oversight. We believe that people around the world should democratically decide the limits and defaults of AI systems. We don’t yet know how to design such a mechanism, but we plan to experiment with its development. We continue to believe that within these broad limits, individual users should have a lot of control over how the AI they use behaves.
Given the risks and difficulties, it is worth considering why we are building this technology.
At OpenAI we have two fundamental reasons. First, we believe it will lead to a much better world than we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces many problems that we will need much more help to solve; this technology can improve our societies, and the creativity that everyone brings to these new tools will surely amaze us. The resulting economic growth and improvements in quality of life will be staggering.
Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. The upsides are enormous, the cost of building it falls every year, the number of actors building it is rising rapidly, and it is inherently part of the technological path we are on. Stopping it would therefore require something like a global surveillance regime, and even that is not guaranteed to work. So we have to get it right.