Facebook’s murky history of letting third-party apps help themselves to user data, which you might remember erupted into a major global privacy scandal in 2018 (aka Cambridge Analytica), tanking the company’s share price, leading to its founder being hauled in front of Congress and, ultimately, in mid-2019, to a $5BN settlement with the FTC over what were sometimes euphemistically reported as ‘privacy lapses’, appears to be coming back to haunt it via unsealed legal discovery.
Internal documents that surfaced late last month in a related privacy lawsuit have prompted the US Senate Select Committee on Intelligence’s chairman, Mark Warner, and vice chairman, Marco Rubio, to write a letter to Meta’s Mark Zuckerberg asking fresh questions about what he and his company knew about how much user data the platform was leaking at the time. And what security implications such leaks may have had.
The thing is, per the disclosed documents, the company now known as Meta appears to have suspected that developers in high-risk jurisdictions whose authoritarian regimes are known to “collect data for intelligence and cyberespionage purposes,” including North Korea, Russia, China and Iran, were among the thousands that also accessed Facebook users’ personal data via the same kind of friends’ data permissions route through which the contracted developer, GSR, mined the Cambridge Analytica data set.
“It appears from these documents that Facebook has known, since at least September 2018, that hundreds of thousands of developers in countries that Facebook characterized as “high risk,” including the People’s Republic of China (PRC), had access to significant amounts of sensitive user data,” they write.
“As Chairman and Vice Chairman of the Senate Select Committee on Intelligence, we have serious concerns about the extent to which this access could have enabled foreign intelligence activity, ranging from foreign malign influence to targeting and counterintelligence activity,” the pair add, pressing Meta to answer a series of questions about how it acted after its internal audit found that thousands of developers located in high-risk jurisdictions may have accessed user data.
It’s fair to say that Meta doesn’t like dwelling on a data access/policy enforcement failure scandal that led to its founder sitting on a booster seat in Congress, being hounded with questions by angry US lawmakers. Quite possibly because it paid $5 billion to the FTC to make the whole scandal go away, via a deal that conveniently gave its executives blanket immunity for any known or unknown privacy violations.
But the problem with Meta wanting the entire episode shelved as ‘forever resolved’ is that it has never actually answered all the questions lawmakers asked at the time. Nor in subsequent years, as additional details have emerged.
It hasn’t even released the results of the third-party app audit Zuckerberg promised back in 2018. (Though we did find out, indirectly, in 2021, that a deal it struck with the UK’s privacy watchdog included a gag clause preventing the commissioner from speaking publicly about the investigation.)
Yet this as-yet-unpublished audit of third-party apps formed the cornerstone of Facebook’s PR response to the crisis at the time: a promised comprehensive accounting that successfully shielded Zuckerberg and his company from deeper scrutiny, at exactly the moment the pressure was greatest for him to explain how the information of millions of users had been extracted from his platform, without their knowledge or consent, by a developer with bona fide access to its tools.
The price tag for this shielding has probably been quite high, both to Meta’s reputation (the company, after all, felt the need to undertake an expensive corporate rebrand and try to reframe its business around virtual reality); and in future compliance costs (which obviously won’t just affect Meta), as various laws drafted in the years since the scandal seek to put new operating limits on platforms. Limits that are often justified by framing that foregrounds Big Tech’s perceived lack of accountability. (See, for example, the UK’s Online Safety Act, which in a recent addition even includes criminal sanctions for CEOs who breach its requirements; or the EU’s Digital Services Act and Digital Markets Act.)
Still, Meta has been largely successful in avoiding the kind of deep scrutiny of its internal processes, policies and decision-making that paved the way for Cambridge Analytica, and potentially dozens of similar data heists, to take place under Zuckerberg’s watch, at least going by the details that keep emerging through legal discovery.
Which is why the specter of Facebook’s failed accountability coming back around makes for a compelling apparition. (See also: a privacy suit that Meta finally opted to settle last year, in an eleventh-hour move that apparently spared Zuckerberg and former COO Sheryl Sandberg from having to appear in person to testify after being subpoenaed, for a settlement price that was not disclosed.)
Whether anything substantial emerges from this latest visit by the ghost of Facebook’s unresolved privacy scandals remains to be seen. But Meta now has a long new list of uncomfortable questions from lawmakers. And if it tries to duck substantive answers, its executives could face fresh subpoenas to appear before the committee for public questioning. (It’s never the crime, it’s the cover-up, etc, etc.)
Here’s what the committee is asking Meta to respond to regarding the findings of its internal investigation:
1) The unsealed document notes that Facebook conducted separate reviews of developers based in the PRC [People’s Republic of China] and Russia “given the risk associated with those countries.”
- What additional reviews were conducted of these developers?
- When was this additional review completed and what were the main conclusions?
- What percentage of the developers located in the PRC and Russia was Facebook able to definitively identify?
- What communications, if any, has Facebook had with these developers since their initial identification?
- What criteria does Facebook use to assess the “risk associated with” operating in the PRC and Russia?
2) For developers identified as located within the People’s Republic of China and Russia, please provide a complete list of the types of information to which these developers had access, as well as the timeframes associated with such access.
3) Does Facebook have complete records on how often developers from high-risk jurisdictions accessed its APIs and the ways in which they accessed the data?
4) Please provide an estimate of the number of discrete Facebook users in the United States whose data was shared with a developer located in each country identified as a “high-risk jurisdiction” (broken down by country).
5) The internal document indicates that Facebook would establish a framework to identify “developers and applications determined to be potentially riskier[.]”
- How did Facebook establish this rubric?
- How many developers and apps based in China and Russia have reached this threshold? How many developers and apps in other high-risk jurisdictions have met this threshold?
- What were the specific characteristics of these developers that led to this determination?
- Did Facebook identify any developers as too risky to operate safely? If so, which ones?
6) The internal document refers to Facebook’s public commitment to “conduct a full audit of any application with suspicious activity.”
- How does Facebook characterize “suspicious activity” and how many apps triggered this full audit process?
7) Does Facebook have any indication that any developer access has enabled coordinated inauthentic activity, targeted activity, or any other malicious behavior by foreign governments?
8) Does Facebook have any indication that developer access allowed malvertising or other fraudulent activity by foreign actors, as revealed in public reports?
When asked for a response to lawmakers’ concerns, Meta spokesman Andy Stone did not respond to specific questions, including whether the company will ever release the app audit; or whether it will commit to informing users whose information was compromised as a result of its developer platform’s features (so presumably that’s a ‘no’ and a ‘no’), opting instead to send this short statement:
These documents are an artifact of a different product at a different time. Many years ago, we made substantial changes to our platform, closing developer access to key types of data on Facebook while reviewing and approving all apps that request access to sensitive information.