Industry Unbound: Friendly Academics And Performing Accountability

Excerpted from Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power by Ari Ezra Waldman.

Industry Unbound: Friendly Academics

The information industry also launders its arguments through seemingly independent academic research. The best example of this is in the field of AI and automated decision-making systems. AI is an umbrella term often imprecisely used to refer to a set of technologies “best understood as a set of techniques aimed at approximating some aspect of human or animal cognition using machines.” For AI systems to work as promised (which they rarely do), their developers require enormous tranches of data for training and improving accuracy. AI policy, therefore, is inextricably linked to privacy: the less regulation around the collection and use of personal data, the easier it will be to design, market, and deploy AI; the fewer guardrails around the use of AI, the more companies will extract our data. Scholars in law and the social sciences are interrogating AI and its use in social policy decisions. They have highlighted problems of bias, lack of accountability, structural injustice, and invasions of privacy. Other researchers, however, have become apologists for industry’s AI-related data grabs.

The MIT Media Lab, for example, not only cozied up to Silicon Valley billionaires to build potential funding streams, but also allegedly aligned some of its policy recommendations on data collection and AI with Big Tech’s antiregulatory posture. In the AI space, the information industry prefers a light-touch (or no-touch) approach to tweaking AI’s bias problem; it has co-opted the notion of “AI ethics” as a means of replacing public governance with its own internal structures. Industry also invests millions to create a narrative about bias, data privacy, and the role of government. It funded MIT Media Lab research that was eventually cited in reports published by Microsoft, Alphabet, IBM, Facebook, and Amazon calling for self-regulation. In response to industry pressure, Media Lab executives allegedly “water[ed] down” AI law and policy recommendations to the California legislature that were at odds with the research done by its own experts on the ground.

Amazon paid scholars at Antonin Scalia Law School at George Mason University to push its favored discourses. After the antitrust scholar Lina Khan wrote an article targeting Amazon as anticompetitive and calling for reinvigorating antitrust law to take on the internet giant, Amazon paid pro-corporate scholars to write an article lauding Amazon’s monopolistic practices as good for consumers. It’s hard to know how often this happens. But even when scholars acknowledge their corporate funding streams in footnotes, the discursive damage remains.

There is more to this story than a few grants, even if those grants add up to several million dollars. Many scholars who receive Google, Facebook, or other Big Tech funds insist that the money doesn’t influence their writing. And for many, that is true. Merely receiving money from a target of research does not necessarily prove bias. But we also have evidence that the nonprofits that receive those funds make personnel and substantive decisions that accord with the views of their biggest tech donors. In 2017, for example, the entire competition team at a Google-funded think tank was fired after its leader pushed for more rigorous antitrust enforcement against Google.

The information industry is also far more subtle as it seeks influence over academic discourse about privacy, law, and technology. For instance, company representatives will often only speak with independent researchers about already public documents and reports, and then only on “deep background,” thus limiting what academics can learn. This happened to me during research for another project, where a Google attorney and several colleagues were only willing to discuss Google Play’s published guidelines, and only without attribution. I ended the call. Five academic colleagues experienced the same tactic, including one researching Facebook content moderation who was told, “only on the condition that you can’t quote anything we say.” This tactic may sound counterproductive to building friendly discourses in academia; not talking is a bad inculcative strategy. According to a former member of Google’s policy shop, the deep background strategy “gives us an opportunity to share our side of the story without pinning a quote on one person because quotes can always be misleading and edited to make us look bad.”

Several companies incentivize friendly coverage by dangling research funds in front of junior scholars. Young researchers and PhD students are particularly susceptible to corporate influence because of the need to pay for their fellowships, the pressure to publish to get tenure, the possibility that access will translate into undiscovered research, and the small stipends they receive from their institutions. The Facebook Fellowship, for example, includes full tuition, a $42,000 annual stipend, paid visits to Facebook, and all-expenses-paid participation in the annual Fellowship summit. Undoubtedly, merely participating in this program does not mean that these junior scholars are unduly influenced by Facebook. But much of the Fellows’ already published research fits neatly within privacy-as-control’s logic of power. Among thirty-six Facebook Fellows in 2020, six classify their work as related in some way to privacy. Along with several co-authors, these Fellows have published academic articles arguing that the tool Mark Zuckerberg plans to use to create a “privacy-focused” Facebook – end-to-end encryption – is the “best way to protect” privacy on social media. In another article, researchers used studies to buttress the feasibility of privacy self-governance and argued that users of messaging services are willing to trade security for convenience. Other Fellows researching AI and fairness issues have published articles proposing changes that companies can make themselves, perpetuating, even inadvertently, the self-regulatory ethos. I am not suggesting that all of these promising young scholars are part of a cabal of researchers plotting to spread pro-Facebook messages throughout academia. Switch it around: Facebook is choosing to provide ample support for research that perpetuates privacy-as-control’s logic and using its soft power to create friendly relationships with promising future academics.

To my knowledge, and based on the experience of several academics in law, surveillance studies, and sociology, Facebook and Google often, though not always, require preapproval of all quotes and related text before an academic publishes scholarship based on interviews with company representatives, a grant from the company, or officially sanctioned field work. This is a particularly pernicious tactic. One colleague I interviewed for this project has saved email conversations with Facebook in which company representatives required articles to be embargoed and precleared before submission. I did not get that far with Facebook, which refused to participate in this research. Another colleague reported that some Facebook and Google grants they received had “no strings attached, it was money for a conference or a project or whatever, but they just gave it,” but “research money always came with questions and invitations and, ultimately, they wanted to know what my conclusions would be. I gave back the remaining funds much to [my university’s] chagrin.”

Corporate-friendly academic research makes its way back into the information industry, inculcating the rank and file along the way, because corporate researchers, public policy teams, and other lawyers and privacy professionals are encouraged to cite and refer to it in their work. Several privacy professionals and lawyers outside Silicon Valley showed me internal documents focusing both on the benefits of privacy notice, choice, and control and on how to improve them, and they all rely on research from Facebook Fellows, Google-funded research projects, and encryption-centered work at Microsoft Research. Of course, independent academics are cited as well. But, often, those academics are cited for background; their proregulatory proposals are ignored. For example, Facebook cited work from Lorrie Cranor, Aleecia McDonald, Helen Nissenbaum, and me, among others, in its public report on improving notice and transparency. But when it came time to think about solutions, Facebook touted its own, as if to suggest that the problem can be solved without the law getting involved.

Performing Accountability

Another common tactic I observed during my interviews is what Julie Cohen has called performing accountability, or doing something “designed to express a generic commitment to accountability without ... meaningful scrutiny of the underlying process.” Magicians use the word “misdirection” to describe a related phenomenon: while we’re distracted by some of the information industry’s cynical trappings of accountability, tech companies are busy undermining our privacy.

A paradigmatic example of this strategy comes from the related context of content moderation. Content moderation refers to how platforms regulate the material available on their sites. Sarah Roberts has called content moderation “a powerful mechanism of control” that has “grown up alongside, and in service to” private companies in the information industry. Platforms moderate content in order to achieve optimal engagement. Platforms develop their own rules for what is and what is not allowed to be posted and they apply those rules using both humans and algorithms.

Content moderation at Facebook has garnered outsized attention in the media, among policy makers, and among academics. Many high-profile content moderation controversies at Facebook have caused such a crisis of confidence that the company announced, with significant fanfare, that it was creating an oversight board to hear appeals of moderation decisions from the front lines. The idea is that an independent board would routinize, rationalize, and publicize content moderation decisions, generating trust and confidence.

Per a search on LexisNexis, 2,210 newspaper and magazine articles were written about the Oversight Board between November 2018, when the board was announced, and May 18, 2020. Over 565 days, that’s an average of nearly four articles per day! Facebook pitched many of these stories to leading outlets like the New York Times and the Wall Street Journal, as well as to general-interest blogs and tech-focused news outlets. During this time, 189 law review and journal articles at least mentioning the board were published. And that doesn’t even include the many hundreds that were being written, under submission, or soon to be published as this book went to press. A May 18, 2020, Google search for any content with the exact phrase “Facebook Oversight Board” yielded 236,000 results; millions more contain all of those terms. Facebook has pushed out several reports and issued press releases dutifully picked up by news outlets and commentators. The company’s representatives have given countless talks at universities and in public and private fora, from Princeton University to the Aspen Institute.

This onslaught was evidently intentional, but the attention paid to the board is disproportionate to its power. The board says it will focus on “the most challenging content issues for Facebook, including ... hate speech, harassment, and protecting people’s privacy.” But, as the media scholar Siva Vaidhyanathan notes, “only in the narrowest and most trivial of ways does this board have any such power. The new Facebook review board will have no influence over anything that really matters in the world.” A quick look at the board’s Charter proves he’s correct.

The board can only hear appeals for content that has been removed, not the misleading and harmful content that remains. And it can make binding decisions only about the fate of specific pieces of content, on a case-by-case basis. It is supposed to make its decisions within ninety days of a filed appeal, but can expedite certain decisions within thirty days. Its decisions have zero precedential value. It has no binding impact on policy, even on policies about content moderation! It cannot change the way Facebook is run or materially change how Facebook makes decisions about content. It won’t be able to address the fact that Facebook’s failure to remove misleading “deep fake” videos or “fake news” can sway an election. It can’t do much about the rampant hate speech and human rights violations that persist. It has absolutely no voice in Facebook’s continued misuse of user data. And it will play no role in corralling the misinformation and harmful conspiracies rampant throughout the platform. The last two points are rather ironic: the board’s announcement came in reaction to a New York Times report about how Facebook publicly lied, deflected blame, and tried to cover up both its failure to recognize and police Russia’s use of the platform to interfere in the 2016 US presidential election and its flagrant misuse of user data in the Cambridge Analytica scandal.

Why, then, has the board received so much media and scholarly attention? The strategy is intentional, meant to distract us from everything Facebook isn’t doing. Joan Donovan, the research director of Harvard’s Shorenstein Center and an expert on media manipulation, called the board a distraction from “what really needs to happen, which is to design technology that doesn’t allow for the expansive amplification of disinformation and health misinformation.” Facebook isn’t changing its business. It isn’t taking down fake videos or limiting the lies from right-wing politicians. Nor is it changing the very financial model that makes it in the company’s interest to extract our data and leave up conspiracy theories, fake news, deep fakes, and other sensationalized content. Dipayan Ghosh, a fellow at Harvard and former Facebook executive, wrote that the board is “a commercial thing of convenience for the company both in its name and its function; it gives the impression that the board will provide true oversight by graduating the responsibility of determining what should constitute hate speech to an external party with public credibility, allowing the company to skate over the threat of a more rigorous regulatory policy that might emerge from relatively aggressive legislatures that might wish to target the firm’s business model itself.” The board’s impotence, then, is real, but so is its discursive effect. Facebook engaged in an aggressive strategy involving earned media and friendly academics to make a lot of noise about something that will barely be a blip on the screen of Facebook’s vast problems. Its full-court press is nothing short of a concerted strategy to redirect us from its other failures.

The same strategy has been deployed to inculcate the values of privacy-as-control as obvious, common sense, and normatively good. Privacy self-regulation, notice-and-consent, and privacy policies are “commercial things of convenience,” as well: privacy policies give the impression that the information industry is doing something to protect our data, and small, marginal changes after privacy scandals facilitate an escape from greater regulatory oversight. This misdirection has been so successful at inculcating a perception of accountability among members of the public that many people think a company with a privacy policy is promising to keep our data private! Whenever I questioned the privacy-invasive designs of several companies’ products, the usual response was to redirect my attention to the company’s cybersecurity work. We saw this in Chapter 1, where both privacy professionals and software engineers touted their encryption and state-of-the-art security techniques as proof that they cared about privacy. The privacy lawyers I interviewed highlighted their work improving transparency. In several instances, these lawyers would note that “no one is perfect” and that “we’re doing better” by showing how they had redesigned their notices to be more readable. These are, at their core, misdirections. Improving notice and strengthening cybersecurity are not bad ideas. But they don’t speak to design. Nor do they serve any purpose outside the privacy-as-control framework. Asking us to focus on notice not only perpetuates its legitimacy as a privacy practice, but also keeps the rank and file focused on performances of privacy rather than pulling back the curtain on the industry’s legal and discursive crusades against it.