OpenAI seems to be in the news every day, and this time it’s for a double dose of security concerns. The first issue centers on ChatGPT’s Mac app, while the second suggests broader concerns about how the company is handling its cybersecurity.
Earlier this week, Swift engineer and developer Pedro José Pereira Vieito dug into the ChatGPT Mac app and found that it stored users' conversations locally in plain text rather than encrypting them. The app is available only from OpenAI's website, and since it isn't distributed through the App Store, it doesn't have to comply with Apple's sandboxing requirements. After Vieito's findings attracted attention, OpenAI released an update that added encryption to locally stored chats.
For non-developers, sandboxing is a security practice that isolates applications from one another, so that vulnerabilities and flaws in one app can't spread to the rest of the machine. And for non-security experts, storing local files in plain text means the data is unencrypted, so other applications or malware can easily read potentially sensitive information.
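To make the risk concrete, here is a minimal sketch of why plain-text storage matters. The file path and JSON format below are purely illustrative assumptions, not OpenAI's actual storage schema; the point is that any unsandboxed process running as the same user needs no keys or special entitlements to read another app's unencrypted files.

```python
import json
import tempfile
from pathlib import Path

def read_plaintext_chats(path: Path) -> list[dict]:
    """Read an unencrypted conversation file -- no decryption step required."""
    return json.loads(path.read_text())

# Simulate an app writing a chat log to disk in plain text
# (hypothetical location and structure, for illustration only).
store = Path(tempfile.mkdtemp()) / "conversations.json"
store.write_text(json.dumps([{"role": "user", "content": "my password is hunter2"}]))

# A second, unrelated process can trivially recover the sensitive content.
chats = read_plaintext_chats(store)
print(chats[0]["content"])
```

With encryption at rest, that second process would recover only ciphertext unless it also obtained the key, which is exactly the gap OpenAI's update closed.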
The second issue occurred in 2023, and its consequences have had a ripple effect that continues to this day. Last spring, a hacker obtained information about OpenAI after illegally accessing the company's internal messaging systems, The New York Times reported. OpenAI technical program manager Leopold Aschenbrenner reportedly raised security concerns with the company's board, arguing that the attack implied internal vulnerabilities that foreign adversaries could exploit.
Aschenbrenner now says he was fired for disclosing information about OpenAI and for raising concerns about the company's security. An OpenAI representative told The New York Times, "While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work," adding that his departure was not the result of whistleblowing.
Application vulnerabilities are something every tech company has experienced. Breaches committed by hackers are also sadly common, as are contentious relationships between whistleblowers and their former employers. But between how widely ChatGPT has been adopted across services and how chaotic the company's practices have been, these recent issues are starting to paint a more worrying picture of whether OpenAI can safeguard its data.