OpenAI has announced new details about why it took ChatGPT offline on Monday, now saying some users’ payment information may have been exposed during the incident.
According to a company blog post, a bug in an open source library called redis-py created a caching issue that may have shown some active users the last four digits and expiration date of another user’s credit card, along with their first and last name, email address and payment address. Users may also have seen snippets of others’ chat histories.
This isn’t the first time a caching issue has let users see each other’s data; famously, on Christmas Day 2015, Steam served some users pages containing information from other users’ accounts. There’s some irony in the fact that OpenAI puts a lot of focus and research into discovering the potential safety and security ramifications of its AI, yet got caught out by a well-known class of security issue.
The company says the payment information leak may have affected around 1.2 percent of ChatGPT Plus subscribers who used the service between 4 a.m. and 1 p.m. ET on March 20.
In other words, you were only affected if you were using the app during that window.
There are two scenarios that could have caused payment data to be displayed to an unauthorized user, according to OpenAI. If a user went to the My Account > Manage Subscription screen during that window, they may have seen information belonging to another ChatGPT Plus user who was actively using the service at the time. The company also says that some subscription confirmation emails sent during the incident went to the wrong person and included the last four digits of another user’s credit card number.
The company says it’s possible both scenarios also occurred before the 20th, though it has no confirmation that they did. OpenAI has reached out to users who may have had their payment information exposed.
As for how this all happened, it apparently came down to caching. The company has a full technical explanation in its post, but the TL;DR is that it uses software called Redis to cache user information. Under certain circumstances, a canceled Redis request could result in corrupted data being returned for a different request (which shouldn’t happen). Usually, the app would get that data, effectively say “this isn’t what I asked for,” and throw an error.
But if the other request was for the same type of data (if a user was trying to load their account page and the returned data was someone else’s account information, for example), the app decided all was well and showed it to them.
That’s why people were seeing other users’ payment information and chat history: they were getting cached data that was actually meant for someone else, delivered to the wrong person because of a canceled request. It’s also why only active users were affected; people who weren’t using the app at the time wouldn’t have had their data cached.
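To make that failure mode concrete, here’s a minimal sketch. It is not OpenAI’s actual code: the AccountInfo class and fetch_account function are hypothetical, and a plain Python deque stands in for a Redis connection whose request/response pairing has drifted out of sync after a canceled request. Because the only validation is a type check, a response that belongs to a different user sails straight through.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class AccountInfo:
    user_id: str
    email: str
    card_last4: str

# Stand-in for a Redis connection knocked out of sync by a canceled request:
# the next response "on the wire" actually answers a different user's request.
responses = deque([AccountInfo("user-bob", "bob@example.com", "4242")])

def fetch_account(requesting_user: str) -> AccountInfo:
    data = responses.popleft()  # take whatever response happens to come back next
    # The only validation is "is this account-shaped data?", which passes even
    # though the record belongs to someone else entirely.
    if not isinstance(data, AccountInfo):
        raise TypeError("unexpected data type from cache")
    return data

print(fetch_account("user-alice"))  # prints Bob's details: the bug in miniature
```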
Making matters worse, on the morning of March 20, OpenAI made a change to its server that inadvertently caused a spike in canceled Redis requests, increasing the chances that the bug would return someone else’s cached data to a user.
OpenAI says the bug, which appeared in a specific version of the redis-py library, has now been fixed, and that the project’s maintainers have been “fantastic collaborators.” It also says it’s making changes to its own software and practices to prevent this sort of thing from happening again, including adding “redundant checks” to make sure the data being delivered actually belongs to the user requesting it and reducing the probability that its Redis cluster will fail under heavy load.
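A sketch of what such a redundant check might look like, again hypothetical and reusing the same stand-in types from the earlier sketch rather than anything from OpenAI’s codebase: the record is rejected unless its owner matches the user who asked for it, even when the type check passes.

```python
from dataclasses import dataclass

@dataclass
class AccountInfo:
    user_id: str
    email: str
    card_last4: str

def serve_account(requesting_user: str, data: object) -> AccountInfo:
    if not isinstance(data, AccountInfo):
        raise TypeError("unexpected data type from cache")
    # The redundant check: refuse to serve a record that isn't owned by the
    # user who asked for it, even when it has exactly the right shape.
    if data.user_id != requesting_user:
        raise PermissionError("cached data does not belong to the requesting user")
    return data

# The mixed-up response from the earlier sketch is now rejected instead of
# being shown to the wrong person.
try:
    serve_account("user-alice", AccountInfo("user-bob", "bob@example.com", "4242"))
except PermissionError as exc:
    print(exc)
```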
While I’d argue those controls should have been there in the first place, it’s good that OpenAI has added them now. Open source software is essential to the modern web, but it also presents its own set of challenges: because anyone can use it, a single bug can affect a large number of services and businesses at once. And if a malicious actor knows what software a specific company uses, they can potentially target that software in an attempt to introduce an exploit. There are checks that make this harder, but as companies like Google have shown, it’s better to work to make sure it doesn’t happen in the first place and to be prepared for it if it does.