In January, academic-turned-regulator Lorrie Cranor gave a presentation and provided the closing remarks at PrivacyCon, a Federal Trade Commission event intended to “inform policymaking with research,” as she put it. Cranor, the FTC’s chief technologist, neglected to mention that over half of the researchers who presented that day had received financial support from Google — hardly a neutral figure in the debate over privacy. Cranor herself got an “unrestricted gift” of roughly $350,000 from the company, according to her CV.
Virtually none of these ties were disclosed, which means Google’s entanglements at PrivacyCon were not just extensive but also invisible. The internet powerhouse is keenly interested in influencing a wide swath of government activity, including antitrust regulation, telecommunications policy, copyright enforcement, online security, and trade pacts, and it has thrown around a lot of money in the nation’s capital to advance that goal. Ties to academia let Google attempt to sway power less directly, by giving money to university and graduate researchers whose work remains largely within academic circles — until it gains the audience of federal policymakers, as at PrivacyCon.
Some research at the event supported Google’s positions. An MIT economist who took Google money, for example, questioned whether the government needed to intervene to further regulate privacy when corporations are sometimes incentivized to do so themselves. Geoffrey Manne, the executive director of a Portland-based legal think tank that relies on funding from Google (and a former Microsoft employee), presented a paper saying that “we need to give some thought to self-help and reputation and competition as solutions” to privacy concerns “before [regulators start] to intervene.” (Manne did not return a request for comment.) Other research presented at PrivacyCon led to conclusions the company would likely dispute.
The problem with Google’s hidden links to the event is not that they should place the researchers under automatic suspicion, but that the motives of corporate academic benefactors always warrant scrutiny. Without prominent disclosure of corporate money in academia, it becomes hard for the consumers of research to ask important questions about its origins and framing.
Google declined to comment on the record for this article.
Google’s ties to PrivacyCon are extensive enough to warrant interrogation. As a case study in how pervasive and well-concealed this type of influence has become, the event is hard to beat.
Authors of a whopping 13 of the 19 papers presented at the conference, and 23 of the 41 speakers, have financial ties to Google. Only two papers disclosed an ongoing or past financial connection to the company.
Other tech companies are also financially linked to speakers at the event. At least two presenters took money from Microsoft, while three others are affiliated with a university center funded by Amazon, Facebook, Google, Microsoft, and Twitter.
“Are we getting voices that have never received money from a company like Google?” — Paul Ohm, Georgetown
Take MIT professor Catherine Tucker, who in one PrivacyCon paper argued against proposed government regulations requiring genetic testing services to obtain a type of written permission known as “informed consent” from patients. Tucker added that such a requirement would deter patients from using the testing services and specifically cited one such service, 23andMe, a firm that Google has invested in repeatedly, most recently in October, and whose CEO is the ex-wife of Google co-founder Sergey Brin. Tucker did not disclose in the paper that she has received over $150,000 in grants from Google since 2009, plus another $49,000 from the Net Institute, a think tank funded in part by Google. Contacted by email, Tucker answered that she discloses “nearly two pages of grants from, and consulting work for, a variety of companies and other organizations, on my CV.”
Google has been appreciative of Tucker’s conference work. In a series of emails between Google and George Mason University law professor James Cooper about a 2012 policy conference, first reported by Salon, a Google representative went so far as to personally recommend the marketing professor as someone to invite.
Cooper did not return multiple requests for comment on this story. Reached for comment via email, Cranor replied that she lists “the funder(s) in the acknowledgments of the specific papers that their grant funded,” and that there “have also been press releases about most of the Google funding I received, so everything has been fully disclosed in multiple places.” Cranor added that “all of these grants are made to Carnegie Mellon University for use in my research,” and “I did not receive any of this money personally.” But it is surely worth noting that one of the press releases Cranor references says that “each funded project receives an individual Google sponsor to help develop the research direction and facilitate collaboration between Google and the research team.” Cranor did not reply when asked what role “an individual Google sponsor” has played in her research.
Nick Feamster, a Princeton professor, presented at PrivacyCon on internet-connected household objects and did not disclose that he’s received over $1.5 million in research support from Google. Over email, Feamster told The Intercept that any notion of a conflict “doesn’t even make any sense given the nature of the content we presented,” which included descriptions of security shortcomings in the Nest smart thermostat, owned by Google. “If they were really trying to exert the ‘influence’ that the [report] is trying to suggest, do you think they would have influenced us to do work that actually calls them out on bad privacy practices?”
Many other PrivacyCon speakers, like Omer Tene, an affiliate scholar at Stanford’s Center for Internet and Society, don’t seem ever to have received money from Google; rather, a department or organization they work for is funded in part by Google. On the CIS website, this is made plain:
We are fortunate to enjoy the support of individual and organizational donors, including generous support from Google, Inc. Like all donors to CIS, Google has agreed to provide funds as unrestricted gifts, for which there is no contractual agreement and no promised products, results, or deliverables. To avoid any conflict of interest, CIS avoids litigation if it involves Google. CIS does not accept corporate funding for its network neutrality-related work.
The CIS website also cites Microsoft as a funding source, along with the National Internet Alliance, a telecom lobbying group.
“Neither Google nor any of the other supporters has influenced my work,” Tene told me, referring to his long bibliography on personal data and online privacy.
But support at the institutional level may still influence individual behavior. Cooper, the George Mason staffer who sought Google’s advice on a privacy conference, works as the director of the program on economics and privacy at the university’s Law and Economics Center, which has received at least $750,000 from Google, as well as additional funds from Amazon, AT&T, and Chinese internet giant Tencent. A 2015 report in Salon detailed close ties between Google and Cooper, including emails indicating that Google was shopping an op-ed written by Cooper to newspapers, and other messages in which Cooper asks Google for help crafting the content of a “symposium on dynamic competition and mergers”:
Cooper also wrote pro-Google academic papers, including this one for the George Mason Law Review entitled “Privacy and Antitrust: Underpants Gnomes, the First Amendment, and Subjectivity,” where he argues that privacy should not be included in any antitrust analysis. Cooper does not disclose Google’s funding of the [Law and Economics Center] in the article. Other pro-Google articles by Cooper, like this one from Main Justice, do include disclosure.
Cooper presented at this year’s PrivacyCon and did not disclose his relationship with Google. Cooper did not return a request for comment.
Among the PrivacyCon presenters who have benefited from non-Google generosity: Carnegie Mellon University’s Alessandro Acquisti and Columbia University’s Roxana Geambasu received $60,000 and $215,000 in Microsoft money, respectively, on top of financial ties to Google. Both co-authored and presented papers on the topic of targeted advertising. Acquisti’s papers, which did not disclose his funding sources, concluded that such marketing was not necessarily to the detriment of users. Geambasu (to her credit) produced data that contradicted Google’s claims about how targeting works and disclosed her financial relationship with the company. She also noted to The Intercept that “all my funding sources are listed in my resume,” located on her website.
The University of California, Berkeley’s International Computer Science Institute, which had an affiliated researcher presenting at PrivacyCon, counts for its survival not just on Google but also on Microsoft, Comcast, Cisco, Intel, and Samsung. Two PrivacyCon submissions came out of Columbia’s Data Science Institute, which relies on Yahoo and Cisco. The Center for Democracy and Technology, which employs one PrivacyCon presenter and co-organized a privacy conference with Princeton in May, is made possible not just by Google but also by an alphabet of startup and legacy tech money, according to IRS disclosures: Adobe, Airbnb, Amazon, AOL, Apple, all the way down to Twitter and Yahoo. Corporate gifts often keep entire academic units functioning. The CMU CyLab, affiliated with four PrivacyCon presenters, is supported by Facebook, Symantec, and LG, among others.
Contacted by The Intercept, academics who took money from tech companies and then spoke at PrivacyCon without disclosure provided responses ranging from flat denials to lengthy rationales. Some argued that just because their institution or organization keeps the lights on with Silicon Valley money doesn’t mean they’re beholden to, or even aware of, these benefactors. But it’s hard to imagine, say, an environmental studies department getting away with floating on Exxon money, or a cancer researcher bankrolled by Philip Morris. Like radon or noise pollution, invisible biases are something people simultaneously overstate and fail to take seriously enough: anyone with whom we disagree must be biased, yet we’re loath to admit the possibility of our own bias.
Serge Egelman, the director of usable security and privacy at Berkeley’s International Computer Science Institute, argued that this is hardly an issue unique to Google:
I am a Google grant recipient, as are literally thousands of other researchers in computer science. Every year, like many other companies who have an interest in advancing basic research (e.g., Cisco, IBM, Comcast, Microsoft, Intel, etc.), Google posts grant solicitations. Grants are made as unrestricted gifts, meaning that Google has no control over how the money is used, and certainly cannot impose any restrictions over what is published or presented. These are grants made to institutions, and not individuals; this has no bearing on my personal income, but means that I can (partially) support a graduate student for a year. Corporate philanthropy is currently filling a gap created by dwindling government support for basic research (though only partially).
He also added that no matter who’s paying the bills, his research is independent and strongly in the public interest:
My own talk was on how Android apps gather sensitive data against users’ wishes, and the ways that the platform could be improved to better support users’ privacy preferences. All of my work in the privacy space is on protecting consumers’ privacy interests and this is the first time anyone has accused me of doing otherwise.
The list of people who spoke at PrivacyCon are some of the most active researchers in the privacy space. They come from the top universities in computer science, which is why it’s no surprise that their institutions have received research funding from many different sources, Google included. The question that you should be asking is, was the research that was presented in the public interest? I think the answer is a resounding yes.
Acquisti, the PrivacyCon presenter from CMU, is a professor affiliated with the university’s CyLab privacy think tank; he shared a $400,000 Google gift in 2010 with FTC technologist Cranor and fellow CMU professor Norman Sadeh before submitting and presenting two PrivacyCon papers sans disclosure, plus one presentation that did include a disclosure. When the gift was given, the New York Times observed that “it is presumably in Google’s interest to promote the development of privacy-handling tools that forestall federal regulation.” Over email, Acquisti argued that disclosure is necessary only for the specific funding behind the body of work being published or presented. That is, once you’ve given the talk or published the paper, your obligation to mention its financing source ends: “It would be highly misleading and incorrect for an author to list in the acknowledgements of an article a funding source that did NOT in fact fund the research activities conducted for and presented in that article.” In fact, Acquisti said, “It would be nearly fraudulent”:
It would be like claiming that funds from a certain source were used to cover a study (e.g. pay for a lab experiment) while they were not; or it would be like claiming that a research proposal was submitted to (and approved by) some grant committee at some agency/institution, whereas in fact that institution never even knew or heard about that research. … This is such a basic tenet in academia.
This line of reasoning came up again and again as I spoke to privacy-oriented researchers and academics — that papers actually should not mention funding directed to the researcher for other projects, even when such disclosure could bear on a conflict of interest, and that, for better or for worse, this deeply narrow standard of disclosure is just the way it is. And besides, it’s not as if researchers who enjoy cash from Google are necessarily handing favors back, right?
According to Paul Ohm, a professor of law and director at Georgetown University’s Center on Privacy and Technology, that’s missing the point: The danger of corporate money isn’t just clear-cut corruption, but the subconscious calculus academics make about their research topics and conclusions, and the invisible influence that funding might exert. Ohm said he continually worries about “the corrupting influence of corporate money in scholarship” among his peers.
“I think privacy law is so poorly defined,” Ohm told The Intercept, “and we have so few clear rules for the road, that people who practice in privacy law rely on academics more than they do in most areas of the law, because of that it really has become a corporate strategy to interact with academics a lot.”
It’s exactly this threat that a disclosure is meant to counter: not an admission of wrongdoing, but a warning that the work in question may have been compromised in some way, however slight. A disclosure isn’t a stop sign so much as a sign urging caution. That’s why Ohm thinks it’s wise for PrivacyCon (and the endless stream of other academic conferences) to err on the side of too much disclosure; he goes as far as to say organizers should consider donor-source diversity in a “code of conduct.”
“Let’s try to make sure we have at least one voice on every panel that didn’t take money,” Ohm said. “Are we getting voices that have never received money from a company like Google?” And ultimately, why not disclose? Egelman, from UC Berkeley’s International Computer Science Institute, told me he thought extra disclosures wouldn’t be a good way for “researchers [to] use valuable conference time.” Ohm disagrees: “I don’t think it’s difficult at the beginning of your talk to say, ‘I took funding in the broader research project of which this [is] a part.’” In other words: Have you taken money from Google? Are you presenting to a room filled with regulators on a topic about which you cannot speak without the existence of Google at least looming overhead? It would serve your audience — and you — to spend 10 seconds on a disclosure. “Disclosure would have been great,” Ohm said of PrivacyCon. “Recent disclosure would have been great. Disclosure of related funding would have been great.” Apparently, only two researchers agreed.
It’s a very important and informative story, Mr Biddle. I remember reading some information at the Project a couple of weeks ago. I cannot expect to remember everything that interests me, so thank you for reminding me.
What is the matter with these people who think that they are above corruption or influence, or even bad ideas? Google is one of the scariest companies around.
Most unfortunate is that the people who write the laws and the people who influence them are mostly short-term thinkers.
When the breaking point comes, I do not know.
Ralph Nader was on Democracy Now! this morning. He’s having an event next week in DC – what I got from part of the interview is that he is essentially telling us that the commons are ours and we have to take them. Whatever it is that the well-educated and well-connected have confiscated or normalized isn’t the end-all and be-all.
OT – How about that Dakota Access injunction?
Is it time to campaign the masses to stop using Google?
PrivacyCon
Well that name about sums it up doesn’t it.
I would like to see the same kind of disclosure from journalists regarding Google and the Upskirt Economy.
For example, this might be a start for the Intercept:
This website–
– Allows third party javascript and cookies from the following companies: Google, Facebook, Twitter, etc. Third party computer code and cookies are both used to track you across the internet and to create very large data profiles containing your surfing, spending, email, etc. habits. We receive benefits from allowing these third parties to run this computer code and create cookies on your computer from our domain.
– Uses analytics software and personal information provided for free by Google, Facebook, etc. Analytics is our window into your data profile. The Intercept uses information derived from tracking your online behavior and information collected in your data profile. (I would like to see a dollar value assigned to all free services provided)
– Uses free marketing services provided by Facebook, Google etc. for promotion and marketing. ($$$ value)
– Purchases ($$$ value) worth of services from Facebook, Google etc.
– Total percentage of traffic that comes from Facebook, Google etc. broken down by company. (This is a very important data point. You can kind of call it the Freedom Factor)
That is just a very basic start but gives you the idea about the kinds of disclosure that should be going on.
Any idiot can practice ethics. But an ethicist is, by definition, a professional, which is to say, someone who receives money from a stakeholder in the question being considered. This is why ethics is a synonym of profit, though it might sometimes connote a slightly longer-term average window.
Unless people decide that they don’t respect professionals more than amateurs, nor people who make a good living at something over bums and dilettantes, I imagine the ethics will continue to be whatever gets the biggest players the most money.
There has been a lot of discussion in the high tech industry that we are rapidly approaching the point when Artificial Intelligence renders humanity obsolete. What to do with washed up humanity is a difficult ethical question. I, for one, am glad that Google is using professional ethicists to resolve it. Do you simply discard it like a worn out pair of jeans? Or do you put it out to pasture like a retired race horse? In the end, the question is a moot one, however, since it will ultimately be answered by the Artificial Intelligences themselves.
But I’ve been keeping my own score card on humanity’s progress, and the interim results aren’t looking too good.
The only synonym of ethics even remotely related to profit I could find is value.
Perhaps you are confusing Antonyms with Synonyms?
You mean synonyms like war and peace?
I’ve not read it. *When I retire I’m going to read both war and peace.
Edward Snowden is a hero!
#esiah
At academic conferences, at least in the real sciences, speakers routinely put up a beginning or end slide acknowledging their sources of funding and thank them publicly. I can’t imagine that those such as Egelman who want to prevent this will do their academic credentials any good by being exposed in your pages.
Keep up the good work of publishing the names and affiliations of those who pretend to be academics but hide their prejudicial funding sources from peer review.
This quote is entirely taken out of context. The full quote was:
“All of my grant support is plainly listed on my CV, which can be found on the first hit when one searches for my name. When presenting at conferences, it is not customary to name the sponsoring agencies, because the authors are expected (and often required) to place that information in the corresponding publication’s acknowledgments section; nor do researchers use valuable conference time—conference speakers have 10-15 minutes to summarize a 10-15 page paper—to preface their talks with a list of sponsors who they have ever received support from. That is, I do not mention NSF, AFRL, or the Center for Long Term Cybersecurity, as part of any talks either, again, because all of the funding I have ever received is plainly listed on my CV. Once the work that Google funded is published, I will of course acknowledge them in that publication.”
If I am required to preface every talk not just with the funders of the research being presented (even those are explicitly listed in that research), but also with a list of funders who have ever sponsored unrelated research, where does that end? Should I also spend several minutes listing the funders of colleagues? Anyone who has ever given to my institution? Should I be required to name individual alumni?
I misread the title of this article as “Government Piracy Conference”. It seemed strange, because even in relatively young countries like the United States, governments have had centuries to perfect their art. A piracy conference shouldn’t be necessary, except as a paid vacation.
Privacy is a more complicated matter. People should have enough privacy that their data will be sufficiently valuable so that corporations such as Google can make good profits by selling it. But you don’t want so much privacy that it ends up killing Google’s business model altogether. So a happy medium is best, with your personal data belonging to a reasonably exclusive club of high tech companies.
The question of transparency, and whether academics should reveal that Google is paying them to alter their conclusions, is also interesting. Transparency is nice in principle, but it also is a violation of Google’s privacy. Why is it anybody else’s business if Google pays for research results? No one, not even Google, should be forced to reveal embarrassing facts that cause others to question their integrity. That is why the Fifth Amendment provides protection against self incrimination. Google, as a corporation, is also a person, and therefore entitled to privacy rights under the Fifth Amendment.
Then again, Google will probably have more privacy that the average citizen, because they don’t have Google to spy on them.
The “average citizen” in our stressed-out society may become Google’s greatest concern. . .
http://www.latimes.com/local/lanow/la-me-ln-google-attacks-charges-20160706-snap-story.html
The only solution is more comprehensive and invasive mass surveillance to identify and neutralize such paranoid individuals before they go off the deep end. . . but this, in turn, will create more such individuals, necessitating even more aggressive pre-crime efforts by the organs of state security . . . this will not end well.
I can see how you would make that mistake.
*I suspect one day the world will look back on this time as the golden age of piracy on the high seas of the Internet. They will make movies about the Fuzzy bears, the Gucci 2s and that swashbuckler of the cinema cap’n “Snowden”.
The problem if anyone can be honest is that we have a very dishonest government, and political system which pins one group against the other – just watch the news. A house divided will fall.
America was started to escape oppression by the King, notably heavy taxation. We are back where it all started and the Democrats represent the King. The Democrats represent money and Wall St (always have).
I hope Julian Assange delivers on his promise and shows the world American’s love to elect criminals.
Google and Microsoft CEO’s are Pakis. Huma Abedin is Paki. San Bernadino shooters were Pakis. Khirz Kahn is Paki. I am wondering what’s happening to us that we have so many Pakis in key positions. Whatever we may say of the Chinese, at least they have not managed to occupy key positions.
General Defective is a honky jerk who can’t even get his ethnic slurs right.
Sundar Pichai was born in Tamil Nadu, nearly as far from Pakistan as you can get on the subcontinent. If you had half a brain, you’d know from his name that his family was almost certainly Hindu.
Satya Nadella was born in Hyderabad (the one in India). Nowhere near Pakistan. Since his first name is the Sanskrit word for “truth,” it would be a pretty good guess that his family is ethnically Hindu, as well.
Huma Abedin is an American, born in Michigan. Her father was born in New Delhi. Her mother was born in a part of India that was later part of Pakistan, but there was no Pakistan at the time.
Whatever else we may say of the Euro-Americans, far too many of them are ignorant, xenophobic cretins.
Without going into too much specifics they are still Paki type folks, and it is our mistake allowing them to occupy all the key positions in the government and sensitive industries. Your love for Pakis shows overflowing kindness and unfathomable idiocy, and you are quite entitled to the public exhibition of all your qualities.
Where are the people? Show me the peoples …
https://www.youtube.com/watch?v=FDOoSOjMG0U
I liked the part where one academic you contacted told you the question you should be asking in place of the far more discomfiting one you were asking. That part was really good.
It should be called the Panopticon Conference.
Military drone conferences have less hypocrisy on display.
Eminent French daily LeMonde.fr publishes some gems concerning the ethics of technology, such as today’s:
http://www.lemonde.fr/idees/article/2016/09/15/sur-internet-l-invisible-propagande-des-algorithmes_4998063_3232.html
“On the Internet the invisible propaganda of algorithms”–too frank (no pun intended) to expect from corporate English-language media, however.
Fascinating article:
However, not understanding French, I used Google Translate to render the text into crude English, which I then rather heavily edited to produce the above paragraph. . . leaving me with a sense of internal conflict. Google is bad for tracking everyone; Google is good for allowing one to read articles published in French, Arabic, Russian, etc. A use at your own risk situation?
Well, some tricks for using Google include clearing all cookies and browsing history and using the “verbatim” search option, and running NoScript on your web browser to avoid the Google tracking script (although this may eliminate many webpage functionalities, it is quite an education in third-party advertising, facebook and google tracking, etc.). Some disagree. . .
http://www.makeuseof.com/tag/adblock-noscript-ghostery-trifecta-evil-opinion/
So, Cthulhu approves the use of NoScript?
Google Translate has helped me understand an email chain written _in German_, in order to diagnose a problem in an integrated circuit–remarkable, given that I am illiterate _in German_. From some of its shoddier translated phrasings, I suggested improvements based upon my prior knowledge, which immediately propagated themselves throughout the email chain.
In French, however, I can read with just a dictionary’s help, and concur with your Google Translation above. I hesitated to _recommend_ using Google Translate to render an article critical of Google, meanwhile expecting a curious reader to do so. To paraphrase a friend’s assessment of the merits of marriage over bachelorhood: GT is better than “not GT”, for me, whatever its problems.
Thanks for educating me regarding NoScript. My feeble defense thus far has been clearing my browser history frequently, feeding GT tiny bits lacking context, and _weighing all evidence_. Given the modern Internet’s propensity for confirmation bias, I prefer the surprising and unexpected.
Wikipedia article on “Claude Shannon”:
“The book, co-authored with Warren Weaver, The Mathematical Theory of Communication, reprints Shannon’s 1948 article and Weaver’s popularization of it, which is accessible to the non-specialist. Warren Weaver pointed out that the word information in communication theory is not related to what you do say, but to what you could say. That is, information is a measure of one’s freedom of choice when one selects a message.”
“If they were really trying to exert the ‘influence’ that the [report] is trying to suggest, do you think they would have influenced us to do work that actually calls them out on bad privacy practices?”
They are a business. They spend money when they think it will benefit the business. The conceivable benefits are:
1) Good PR. Funding people advocating for privacy helps offset Google’s reputation for being a sneaky, rights-eroding data thief.
2) It will influence the payee. Even if it only leads to the softening of an adjective or two, it is still influence. The implied threat of withdrawn patronage will do that. The effect is also likely to be cumulative, as the “capture” process is subtle and gradual.
“That munching sound you hear is capitalism eating everything.”
Having been around this area since before the web, I can say that a huge amount of security and privacy work has been in finding flaws in other people’s work. This is not fundamentally antagonistic to that work, but rather the opposite: finding flaws is a way of improving its quality. It’s only temporarily antagonistic.
What would be actually antagonistic to this work is to question why companies need to have personal information in the first place. There has been plenty of work, much called peer-to-peer, on how to design services and appliances that don’t need a “benevolent” dictator and provider of mainframe services that necessitate storing personal information.
Great report, but my word, what a terrible header graphic. It caused physical discomfort in my eyes. I doubt it would trigger an epileptic, but it seems kinda borderline.
It’s a truly awful animated GIF to leave in a stationary position on the page.
It may not trigger any seizures (although it might), but I’d be willing to bet that it wouldn’t take long to trigger nausea in lots of folks.
Lose it, TI.
I second Doug Salzmann and Dan.
Thank you for noting.
great report – dovetails into some recently revealed stuff about “targeted advertising”, cover story? Maybe google has a contract to supply the NSA?
http://www.theregister.co.uk/2016/09/14/google_location_location_location/
Google’s become an obsessive stalker and you can’t get a restraining order.
She is only demanding explanations because there is no chance of any of the guilty parties going to jail. If, by chance, someone is thrown in prison you can bet it will be a low level employee who is being offered up as a scapegoat.
It is surprising to me that there should even be an issue about disclosure of sponsorship. I guess that’s because I am accustomed to reading journals like Physical Review and Reviews of Modern Physics, wherein articles are virtually always concluded with an Acknowledgements section citing the source of funding and thanking contributors not included in the list of authors. It is easy enough to do, and there is no reason why everyone should not do it.
Companies like Google should be congratulated for their sponsorship of research, even if it is self-serving. But that does not detract from the need for transparency on the part of authors whose work they sponsored.
That’s a high percentage, comparable to pharmaceutical involvement in academic research programs and to what you might see at a conference on pain management (opiate boosters) or psychiatric treatment (antidepressant boosters).
This is a huge problem across American academic institutions; the corporate-academic intertwining dates back to the Bayh-Dole Act of 1980 and its many expansions, which allowed universities to exclusively license taxpayer-financed research results to corporations; computer tech and pharmaceuticals have been in bed with academic researchers ever since.
Notably, climate science didn’t have this problem for years because it was tied into weather forecast development, which the funding entities wanted to be accurate. It wasn’t until climate science forecasts of future warming became prominent in the 1980s that the fossil fuel lobby started pouring millions into academia and recruiting dishonest denialists like Richard Lindzen, Willie Soon, etc. to try to block the adoption of limits on fossil fuel use. Hence today’s ratio: 99 out of 100 climate scientists are not fossil-fuel-financed denialists.
In contrast, with computer tech and pharmaceuticals, the paid-off industry consultants have been in academia right from the beginning, which I bet accounts for that 23 out of 41 ratio of speakers.
P.S. This is interesting, and also a good argument for using NoScript with your browser:
https://www.inverse.com/article/10253-the-ftc-s-internet-privacycon-points-out-the-elephant-in-the-room
We’ll let your anti-pain med prejudices slide for the moment. ;^)
The problem is profiteering and the elaborately-developed form of profiteering that runs the world — capitalism. It’s not only incompatible with fairness, equality and democracy, it’s incompatible with the survival of civilization.
NoScript is an excellent idea. Also, try Self-Destructing Cookies.
As Ha-Joon Chang notes, capitalism is “the worst economic system – except for all the others.”
23 Things They Didn’t Tell You About Capitalism:
https://www.goodreads.com/author/quotes/95227.Ha_Joon_Chang
My post was really about the need for academics to be independent of both state and corporate interests; when American academics get in bed with private interests, be it Google or Purdue Pharma or Exxon, they distort science to serve the corporate agenda, even if that corporate agenda is disastrous for the country.
However, this isn’t just a capitalist problem: the same is true for state interests; look at Lysenko in the Soviet Union, who distorted agricultural science in the name of communist ideology (and to gain Stalin’s approval); the result was decades of agricultural disaster for the Soviet Union.
P.S. For a very good video that helps explain how Lysenko in the communist Soviet Union mirrors the behavior of corporate-financed academics in the United States, see:
http://www.bbc.co.uk/programmes/b00bw51j
The Soviet Union was an example of state capitalism, not socialism or communism, the labels notwithstanding.
And note that I referred to capitalism as a form of profiteering, which (along with population and unrestrained technological development) will almost certainly destroy civilization as we know it. And likely worse.
You’re confusing capitalism with greed. They aren’t the same.
Regardless of the socio-economic label–capitalism, communism, socialism, et cetera–hyper-ambitious people always want more than they need, will use, or can even count. They’re comfortable with taking it–perhaps even prefer to take it–at others’ expense. They wield enormous control, including over the flow of information, so they can and do create and reinforce the values, beliefs and “zeitgeists” that are most beneficial to themselves.
That’s why it’s disconcerting that almost unimaginably powerful tech or tech-related corporations like Microsoft, Google, and Amazon have taken ownership of research into tech/privacy issues. We look to scientific research for objective information, but obviously, we are misguided.
For what it’s worth, I’ve spent a lifetime studying not only research, but also research methods and researchers themselves. I think that most people would be disturbed to learn how much “scientific research” is corrupted by the researchers’ personal interests: funding, salaries, political beliefs, et cetera. Very little of the most important research–in this case, about privacy and technology–is truly objective. Even if the methods are valid and reliable, the results and conclusions aren’t used beneficently.
Naturally, neither the researchers nor their funders–whether they’re corporations, non-profit/charitable organizations, or governments–want this to become common knowledge.