Trust Lab was founded by a team of well-credentialed Big Tech alumni who came together in 2021 with a mission: Make online content moderation more transparent, accountable, and trustworthy. A year later, the company announced a “strategic partnership” with the CIA’s venture capital firm.
Trust Lab’s basic pitch is simple: Globe-spanning internet platforms like Facebook and YouTube so thoroughly and consistently botch their content moderation efforts that decisions about what speech to delete ought to be turned over to completely independent outside firms — firms like Trust Lab. In a June 2021 blog post, Trust Lab co-founder Tom Siegel described content moderation as “the Big Problem that Big Tech cannot solve.” The contention that Trust Lab can solve the unsolvable appears to have caught the attention of In-Q-Tel, a venture capital firm tasked with securing technology for the CIA’s thorniest challenges, not those of the global internet.
“I’m suspicious of startups pitching the status quo as innovation.”
The quiet October 29 announcement of the partnership is light on details, stating that Trust Lab and In-Q-Tel — which invests in and collaborates with firms it believes will advance the mission of the CIA — will work on “a long-term project that will help identify harmful content and actors in order to safeguard the internet.” Key terms like “harmful” and “safeguard” are unexplained, but the press release goes on to say that the company will work toward “pinpointing many types of online harmful content, including toxicity and misinformation.”
Though Trust Lab’s stated mission is sympathetic and grounded in reality — online content moderation is genuinely broken — it’s difficult to imagine how aligning the startup with the CIA is compatible with Siegel’s goal of bringing greater transparency and integrity to internet governance. What would it mean, for instance, to incubate counter-misinformation technology for an agency with a vast history of perpetuating misinformation? Placing the company within the CIA’s tech pipeline also raises questions about Trust Lab’s view of who or what might be “harmful” online, a nebulous concept that will no doubt mean something very different to the U.S. intelligence community than it means elsewhere in the internet-using world.
No matter how provocative an In-Q-Tel deal may be, much of what Trust Lab is peddling sounds similar to what the likes of Facebook and YouTube already attempt in-house: deploying a mix of human and unspecified “machine learning” capabilities to detect and counter whatever is determined to be “harmful” content.
“I’m suspicious of startups pitching the status quo as innovation,” Ángel Díaz, a law professor at the University of Southern California and scholar of content moderation, wrote in a message to The Intercept. “There is little separating Trust Lab’s vision of content moderation from the tech giants’. They both want to expand use of automation, better transparency reports, and expanded partnerships with the government.”
How precisely Trust Lab will address the CIA’s needs is unclear. Neither In-Q-Tel nor the company responded to multiple requests for comment. They have not explained what sort of “harmful actors” Trust Lab might help the intelligence community “prevent” from spreading online content, as the October press release said.
Though details about what exactly Trust Lab sells or how its software product works are scant, the company appears to be in the business of social media analytics, algorithmically monitoring social media platforms on behalf of clients and alerting them to the proliferation of hot-button buzzwords. In a Bloomberg profile of Trust Lab, Siegel, who previously ran content moderation policy at Google, suggested that a federal internet safety agency would be preferable to Big Tech’s current approach to moderation, which consists mostly of opaque algorithms and thousands of outsourced contractors poring over posts and timelines. In his blog post, Siegel urges greater democratic oversight of online content: “Governments in the free world have side-stepped their responsibility to keep their citizens safe online.”
Even if Siegel’s vision of something like an Environmental Protection Agency for the web remains a pipe dream, Trust Lab’s murky partnership with In-Q-Tel suggests a step toward greater governmental oversight of online speech, albeit very much not in the democratic vein outlined in his blog post. “Our technology platform will allow IQT’s partners to see, on a single dashboard, malicious content that might go viral and gain prominence around the world,” Siegel is quoted as stating in the October press release, which omitted any information about the financial terms of the partnership.
Unlike typical venture capital firms, In-Q-Tel’s “partners” are the CIA and the broader U.S. intelligence community — entities not historically known for exemplifying Trust Lab’s corporate tenets of transparency, democratization, and truthfulness. Although In-Q-Tel is structured as an independent 501(c)(3) nonprofit, its sole, explicit mission is to advance the interests and increase the capabilities of the CIA and fellow spy agencies.
Former CIA Director George Tenet, who spearheaded the creation of In-Q-Tel in 1999, described the CIA’s direct relationship with In-Q-Tel in plain terms: “CIA identifies pressing problems, and In-Q-Tel provides the technology to address them.” An official history of In-Q-Tel published on the CIA website says, “In-Q-Tel’s mission is to foster the development of new and emerging information technologies and pursue research and development (R&D) that produce solutions to some of the most difficult IT problems facing the CIA.”
Siegel has previously written that internet speech policy must be a “global priority,” but an In-Q-Tel partnership suggests some fealty to Western priorities, said Díaz — a fealty that could fail to take account of how these moderation policies affect billions of people in the non-Western world.
“Partnerships with Western governments perpetuate a racialized vision of which communities pose a threat and which are simply exercising their freedom of speech,” said Díaz. “Trust Lab’s mission statement, which purports to differentiate between ‘free world governments’ and ‘oppressive’ ones, is a worrying preview of what we can expect. What happens when a ‘free’ government treats discussion of anti-Black racism as foreign misinformation, or when social justice activists are labeled as ‘racially motivated violent extremists’?”