Agentic AI systems (AI systems that can pursue complex goals with limited direct supervision) are likely to be broadly useful if we can integrate them responsibly into our society. While these systems have substantial potential to help people achieve their own goals more efficiently and effectively, they also create risks of harm. In this whitepaper, we suggest a definition of agentic AI systems and of the parties in the agentic AI system lifecycle, and highlight the importance of agreeing on a set of baseline responsibilities and security best practices for each of these parties. As our main contribution, we offer an initial set of practices for keeping agents' operations safe and accountable, which we hope can serve as building blocks in the development of agreed-upon core best practices. We enumerate the questions and uncertainties around operationalizing each of these practices that must be addressed before such practices can be codified. We then highlight categories of indirect impacts arising from the large-scale adoption of agentic AI systems, which are likely to require additional governance frameworks.