“Given the discrepancy between their public comments and reports of OpenAI’s actions, we are requesting information about OpenAI’s whistleblower and conflict of interest protections in order to understand whether federal intervention may be necessary,” Warren and Trahan wrote in a letter shared exclusively with The Verge.
The lawmakers cited several instances where OpenAI’s safety procedures have been called into question. For example, they said that in 2022, an unreleased version of GPT-4 was being tested in a new version of Microsoft’s Bing search engine in India before receiving approval from OpenAI’s safety board. They also recalled Altman’s brief ouster from the company in 2023, a result of the board’s concerns, in part, “about commercializing advances before understanding the consequences.”
Warren and Trahan’s letter to Altman comes as the company faces a litany of safety concerns, which often contradict its public statements. For example, an anonymous source told The Washington Post that OpenAI rushed through safety testing, the Superalignment team (which was partly responsible for safety) was disbanded, and a safety executive quit, claiming that “safety culture and processes have taken a backseat to shiny products.” OpenAI spokesperson Lindsey Held denied the claims in a statement to The Washington Post, saying the company “didn’t cut corners on our safety process, though we recognize the launch was stressful for our teams.”
Other lawmakers have also sought answers about the company’s safety practices, including a group of senators led by Brian Schatz (D-Hawaii) in July. Warren and Trahan asked for further clarity on OpenAI’s responses to that group, including on its creation of a new “Integrity Line” for employees to report concerns.
Meanwhile, OpenAI appears to be on the offensive. In July, the company announced a partnership with Los Alamos National Laboratory to explore how advanced AI models can safely aid in bioscience research. Last week, Altman announced via X that OpenAI is collaborating with the US AI Safety Institute and emphasized that 20 percent of the company’s computing resources will be dedicated to safety (a promise originally made to the now-defunct Superalignment team). In the same post, Altman also said that OpenAI has removed non-disparagement clauses for employees and provisions allowing for the cancellation of vested equity, a key theme in the lawmakers’ letter.
Warren and Trahan asked Altman to provide information about how OpenAI’s new AI safety hotline for employees is being used and how the company tracks reports. They also asked for “a detailed accounting” of all the times OpenAI products have “circumvented safety protocols” and under what circumstances a product would be allowed to skip a safety review. The lawmakers are also seeking information about OpenAI’s conflicts of interest policy. They asked Altman whether he has been required to divest from any outside holdings and “what specific protections are in place to protect OpenAI from his financial conflicts of interest.” They asked Altman to respond by Aug. 22.
Warren also points out how vocal Altman has been about his concerns regarding AI. Last year, testifying before the Senate, Altman warned that AI’s capabilities could be “significantly destabilizing to public safety and national security” and emphasized the impossibility of anticipating every potential abuse or failure of the technology. These warnings seemed to resonate with lawmakers: in OpenAI’s home state of California, state Sen. Scott Wiener is pushing a bill to regulate large language models, including restrictions that would make companies legally liable if their AI is used in harmful ways.