Former National Security Agency chief retired Gen. Paul Nakasone will join the board of artificial intelligence company OpenAI, the company announced Thursday afternoon. He will also serve on the board's “safety and security” subcommittee.
The high-profile addition is likely aimed at satisfying critics who think OpenAI is moving faster than would be prudent for its customers, and possibly for humanity, putting out models and services without adequately evaluating their risks or locking them down.
Nakasone brings decades of experience from the Army, US Cyber Command and the NSA. Whatever one thinks of the practices and decision-making in those organizations, he certainly cannot be accused of a lack of experience.
As OpenAI increasingly establishes itself as an AI provider not only to the tech industry but also to government, defense and large enterprises, this kind of institutional knowledge is valuable both for its own sake and for appeasing worried shareholders. (No doubt the connections he brings in the state and military apparatus are also welcome.)
“OpenAI's dedication to its mission closely aligns with my own values and experience in public service,” Nakasone said in a press release.
That certainly seems true: Nakasone and the NSA recently defended the practice of purchasing data of questionable provenance to feed their surveillance networks, arguing that there was no law against it. OpenAI, for its part, has simply taken, rather than bought, large amounts of data from the internet, arguing when it is caught that there is no law against it. They seem to be on the same page when it comes to asking for forgiveness rather than permission, if they ever actually ask at all.
The OpenAI statement also says:
Nakasone's insights will also contribute to OpenAI's efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats. We believe that AI has the potential to offer significant benefits in this area for many institutions that are frequently targeted by cyberattacks, such as hospitals, schools, and financial institutions.
So this is also a play for a new market.
Nakasone will join the board's safety and security committee, which is “responsible for making recommendations to the entire Board on critical safety and security decisions for OpenAI projects and operations.” What this newly created entity actually does and how it will operate remain unclear: several of the senior people working on safety (as it relates to AI risk) have left the company, and the committee itself is in the middle of a 90-day evaluation of company processes and safeguards.