In January, academic-turned-regulator Lorrie Cranor gave a presentation and provided the closing remarks at PrivacyCon, a Federal Trade Commission event intended to “inform policymaking with research,” as she put it. Cranor, the FTC’s chief technologist, neglected to mention that over half of the researchers who presented that day had received financial support from Google — hardly a neutral figure in the debate over privacy. Cranor herself got an “unrestricted gift” of roughly $350,000 from the company, according to her CV.

Virtually none of these ties were disclosed, so Google’s entanglements at PrivacyCon were not just extensive, they were also invisible. The internet powerhouse is keenly interested in influencing a lot of government activity, including antitrust regulation, telecommunications policy, copyright enforcement, online security, and trade pacts, and to advance that goal, has thrown around a lot of money in the nation’s capital. Ties to academia let Google attempt to sway power less directly, by giving money to university and graduate researchers whose work remains largely within academic circles — until it gains the audience of federal policymakers, as at PrivacyCon.

Some research at the event supported Google’s positions. An MIT economist who took Google money, for example, questioned whether the government needed to intervene to further regulate privacy when corporations sometimes have market incentives to protect it themselves. Geoffrey Manne, a former Microsoft employee and the executive director of a Portland-based legal think tank that relies on funding from Google, presented a paper saying that “we need to give some thought to self-help and reputation and competition as solutions” to privacy concerns “before [regulators start] to intervene.” (Manne did not return a request for comment.) Other research presented at PrivacyCon reached conclusions the company would likely dispute.

The problem with Google’s hidden links to the event is not that they should place researchers under automatic suspicion; rather, it’s that the motives of corporate academic benefactors should always be open to scrutiny. Without prominent disclosure of corporate money in academia, it becomes hard for the consumers of research to raise important questions about its origins and framing.

Google declined to comment on the record for this article.

How Tech Money Flows to Privacy Scholars

Google’s ties to PrivacyCon are extensive enough to warrant interrogation; as a case study in how pervasive and well-concealed this type of influence has become, the event is hard to beat.

Authors of a whopping 13 of the 19 papers presented at the conference, and 23 of the 41 speakers, have financial ties to Google. Only two papers disclosed an ongoing or past financial connection to the company.

Other tech companies are also financially linked to speakers at the event. At least two presenters took money from Microsoft, while three others are affiliated with a university center funded by Amazon, Facebook, Google, Microsoft, and Twitter.

But Google’s corporate adversaries are helping to shine a spotlight on what their fellow travelers describe as Google’s particularly deep ties to academia. Those ties are a major focus of a new report from an entity called the Google Transparency Project, part of a charitable nonprofit known as the Campaign for Accountability. The Campaign for Accountability, in turn, receives major, undisclosed funding from Google nemesis and business software company Oracle, as well as from the Bill and Melinda Gates Foundation, which was set up by the co-founder and longtime CEO of Google rival Microsoft (the nonprofit says its funding sources have no bearing on its work to expose funding sources). The Intercept, meanwhile, operates with funding from eBay founder Pierre Omidyar. In other words, tech money even pervades the research into everything tech money pervades. But even accepting that, the report does highlight the extent to which Silicon Valley is widening its influence at the intersection of academia and government.

Take MIT professor Catherine Tucker, who in one PrivacyCon paper argued against proposed government regulations requiring genetic testing services to obtain a type of written permission known as “informed consent” from patients. Tucker added that such a requirement would deter patients from using the testing services and specifically cited one such service, 23andMe, a firm that Google has invested in repeatedly, most recently in October, and whose CEO is the ex-wife of Google co-founder Sergey Brin. Tucker did not disclose in the paper that she has received over $150,000 in grants from Google since 2009, plus another $49,000 from the Net Institute, a think tank funded in part by Google. Contacted by email, Tucker answered that she discloses “nearly two pages of grants from, and consulting work for, a variety of companies and other organizations, on my CV.”

Google has been appreciative of Tucker’s conference work. In a series of emails between Google and George Mason University law professor James Cooper about a 2012 policy conference, first reported by Salon, a Google representative went so far as to personally recommend Tucker, a marketing professor, as someone to invite.

Cooper did not return multiple requests for comment on this story.

Reached for comment via email, Cranor replied that she lists “the funder(s) in the acknowledgments of the specific papers that their grant funded,” and that there “have also been press releases about most of the Google funding I received, so everything has been fully disclosed in multiple places.” Cranor added that “all of these grants are made to Carnegie Mellon University for use in my research,” and “I did not receive any of this money personally.” But it is surely worth noting that one of the press releases Cranor references says that “each funded project receives an individual Google sponsor to help develop the research direction and facilitate collaboration between Google and the research team.” Cranor did not reply when asked what role “an individual Google sponsor” has played in her research.

Nick Feamster, a Princeton professor, presented at PrivacyCon on internet-connected household objects and did not disclose that he’s received over $1.5 million in research support from Google. Over email, Feamster told The Intercept that any notion of a conflict “doesn’t even make any sense given the nature of the content we presented,” which included descriptions of security shortcomings in the Nest smart thermostat, owned by Google. “If they were really trying to exert the ‘influence’ that the [report] is trying to suggest, do you think they would have influenced us to do work that actually calls them out on bad privacy practices?”

Many other PrivacyCon speakers, like Omer Tene, an affiliate scholar at Stanford’s Center for Internet and Society, don’t seem ever to have received money from Google; rather, a department or organization they work for is funded in part by Google. On the CIS website, this is made plain:

We are fortunate to enjoy the support of individual and organizational donors, including generous support from Google, Inc. Like all donors to CIS, Google has agreed to provide funds as unrestricted gifts, for which there is no contractual agreement and no promised products, results, or deliverables. To avoid any conflict of interest, CIS avoids litigation if it involves Google. CIS does not accept corporate funding for its network neutrality-related work.

The CIS website also cites Microsoft as a funding source, along with the National Internet Alliance, a telecom lobbying group.

“Neither Google nor any of the other supporters has influenced my work,” Tene told me, referring to his long bibliography on personal data and online privacy.

But support at the institutional level may still influence individual behavior. Cooper, the George Mason professor who sought Google’s advice on a privacy conference in the emails described above, works as the director of the program on economics and privacy at the university’s Law and Economics Center, which has received at least $750,000 from Google, as well as additional funds from Amazon, AT&T, and Chinese internet giant Tencent. A 2015 report in Salon detailed close ties between Google and Cooper, including emails indicating that Google was shopping an op-ed written by Cooper to newspapers, and other messages in which Cooper asks Google for help crafting the content of a “symposium on dynamic competition and mergers”:

Cooper also wrote pro-Google academic papers, including this one for the George Mason Law Review entitled “Privacy and Antitrust: Underpants Gnomes, the First Amendment, and Subjectivity,” where he argues that privacy should not be included in any antitrust analysis. Cooper does not disclose Google’s funding of the [Law and Economics Center] in the article. Other pro-Google articles by Cooper, like this one from Main Justice, do include disclosure.

Cooper presented at this year’s PrivacyCon and did not disclose his relationship with Google.

Among the PrivacyCon presenters who have benefited from non-Google generosity: Carnegie Mellon University’s Alessandro Acquisti and Columbia University’s Roxana Geambasu received $60,000 and $215,000 in Microsoft money, respectively, on top of financial ties to Google. Both co-authored and presented papers on the topic of targeted advertising. Acquisti’s papers, which did not disclose his funding sources, concluded that such marketing was not necessarily to the detriment of users. Geambasu (to her credit) produced data that contradicted Google’s claims about how targeting works and disclosed her financial relationship with the company. She also noted to The Intercept that “all my funding sources are listed in my resume,” located on her website.

The University of California, Berkeley’s International Computer Science Institute, which had an affiliated researcher presenting at PrivacyCon, counts for its survival not just on Google but also on Microsoft, Comcast, Cisco, Intel, and Samsung. Two PrivacyCon submissions came out of Columbia’s Data Science Institute, which relies on Yahoo and Cisco. The Center for Democracy and Technology — which employs one PrivacyCon presenter and co-organized a privacy conference with Princeton in May — is made possible not just by Google but also by an alphabet of startup and legacy tech money, according to IRS disclosures: Adobe, Airbnb, Amazon, AOL, Apple, all the way down to Twitter and Yahoo. Corporate gifts often keep entire academic units functioning: the CMU CyLab, affiliated with four PrivacyCon presenters, is supported by Facebook, Symantec, and LG, among others.

Narrower Disclosure Standards in Academia

Contacted by The Intercept, academics who took money from tech companies and then spoke at PrivacyCon without disclosure provided responses ranging from flat denials to lengthy rationales. Some of the academics argued that just because their institution or organization keeps the lights on with Silicon Valley money doesn’t mean they’re beholden to, or even aware of, these benefactors. But it’s harder to imagine, say, an environmental studies department getting away with being awash in Exxon money, or a cancer researcher bankrolled by Philip Morris. Like radon or noise pollution, invisible biases are something people simultaneously overstate and fail to take seriously: anyone with whom we disagree must be biased, while we’re loath to admit the possibility of our own.

Serge Egelman, the director of usable security and privacy at Berkeley’s International Computer Science Institute, argued that this is hardly an issue unique to Google:

I am a Google grant recipient, as are literally thousands of other researchers in computer science. Every year, like many other companies who have an interest in advancing basic research (e.g., Cisco, IBM, Comcast, Microsoft, Intel, etc.), Google posts grant solicitations. Grants are made as unrestricted gifts, meaning that Google has no control over how the money is used, and certainly cannot impose any restrictions over what is published or presented. These are grants made to institutions, and not individuals; this has no bearing on my personal income, but means that I can (partially) support a graduate student for a year. Corporate philanthropy is currently filling a gap created by dwindling government support for basic research (though only partially).

He also added that no matter who’s paying the bills, his research is independent and strongly in the public interest:

My own talk was on how Android apps gather sensitive data against users’ wishes, and the ways that the platform could be improved to better support users’ privacy preferences. All of my work in the privacy space is on protecting consumers’ privacy interests and this is the first time anyone has accused me of doing otherwise.

The list of people who spoke at PrivacyCon are some of the most active researchers in the privacy space. They come from the top universities in computer science, which is why it’s no surprise that their institutions have received research funding from many different sources, Google included. The question that you should be asking is, was the research that was presented in the public interest? I think the answer is a resounding yes.

Acquisti, the PrivacyCon presenter from CMU, is a professor affiliated with the university’s CyLab privacy think tank and shared a $400,000 Google gift in 2010 with FTC chief technologist Cranor and fellow CMU professor Norman Sadeh before submitting and presenting two PrivacyCon papers sans disclosure, plus one presentation that included a disclosure. When the gift was given, the New York Times observed that “it is presumably in Google’s interest to promote the development of privacy-handling tools that forestall federal regulation.” Over email, Acquisti argued that disclosure is required only for the specific funding behind the work being published or presented; money received for other projects, in his view, need not be mentioned: “It would be highly misleading and incorrect for an author to list in the acknowledgements of an article a funding source that did NOT in fact fund the research activities conducted for and presented in that article.” In fact, Acquisti said, “It would be nearly fraudulent”:

It would be like claiming that funds from a certain source were used to cover a study (e.g. pay for a lab experiment) while they were not; or it would be like claiming that a research proposal was submitted to (and approved by) some grant committee at some agency/institution, whereas in fact that institution never even knew or heard about that research. … This is such a basic tenet in academia.

This line of reasoning came up again and again as I spoke to privacy-oriented researchers and academics: that papers should not mention funding directed to the researcher for other projects, even when such disclosure could bear on a conflict of interest, and that, for better or for worse, this narrow standard of disclosure is simply the way things are done. And besides, it’s not as if researchers who enjoy cash from Google are necessarily handing favors back, right?

According to Paul Ohm, a professor of law and a director at Georgetown University’s Center on Privacy and Technology, that’s missing the point: The danger of corporate money isn’t just clear-cut corruption, but the subconscious calculus academics might make about their research topics and conclusions, and the invisible influence funding can exert. Ohm said he continually worries about “the corrupting influence of corporate money in scholarship” among his peers.

“I think privacy law is so poorly defined,” Ohm told The Intercept, “and we have so few clear rules for the road, that people who practice in privacy law rely on academics more than they do in most areas of the law, because of that it really has become a corporate strategy to interact with academics a lot.”

It’s exactly this threat that a disclosure is meant to counter — not an admission of wrongdoing, but a warning that the work in question may have been compromised in some way, however slight. A disclosure isn’t a stop sign so much as a sign urging caution. That’s why Ohm thinks it’s wise for PrivacyCon (and the endless stream of other academic conferences) to err on the side of too much disclosure; he goes so far as to say organizers should consider donor diversity in a “code of conduct.”

“Let’s try to make sure we have at least one voice on every panel that didn’t take money,” Ohm said. “Are we getting voices that have never received money from a company like Google?”

And ultimately, why not disclose? Egelman, from UC Berkeley’s International Computer Science Institute, told me he thought extra disclosures wouldn’t be a good way for “researchers [to] use valuable conference time.” Ohm disagrees: “I don’t think it’s difficult at the beginning of your talk to say, ‘I took funding in the broader research project of which this is a part.’”

In other words: Have you taken money from Google? Are you presenting to a room filled with regulators on a topic about which you cannot speak without the existence of Google at least looming overhead? It would serve your audience — and you — to spend 10 seconds on a disclosure. “Disclosure would have been great,” Ohm said of PrivacyCon. “Recent disclosure would have been great. Disclosure of related funding would have been great.” Apparently, only two of the researchers agreed.