After years of backlash over controversial government work, Google technology will be used to aid the Trump administration’s efforts to fortify the U.S.-Mexico border, according to documents related to a federal contract.
In August, Customs and Border Protection accepted a proposal to use Google Cloud technology to facilitate the use of artificial intelligence deployed by the CBP Innovation Team, known as INVNT. Among other projects, INVNT is working on technologies for a new “virtual” wall along the southern border that combines surveillance towers and drones, blanketing an area with sensors to detect unauthorized entry into the country.
In 2018, Google faced internal turmoil over a contract with the Pentagon to deploy AI-enhanced drone image recognition; the capability sparked employee concern that Google was becoming embroiled in work that could be used for lethal purposes and raise other human rights issues. In response to the controversy, Google ended its involvement with the initiative, known as Project Maven, and established a new set of AI principles to govern future government contracts.
The employees also protested the company’s deceptive claims about the project and attempts to shroud the military work in secrecy. Google’s involvement with Project Maven had been concealed through a third-party contractor known as ECS Federal.
Contracting documents indicate that CBP’s new work with Google is being done through a third-party federal contracting firm, Virginia-based Thundercat Technology. Thundercat is a reseller that bills itself as a premier information technology provider for federal contracts.
The contract was obtained through a FOIA request filed by Tech Inquiry, a new research group that explores technology and corporate power founded by Jack Poulson, a former research scientist at Google who left the company over ethical concerns.
Not only is Google becoming involved in implementing the Trump administration’s border policy, but the contract also brings the company into the orbit of one of President Donald Trump’s biggest boosters among tech executives.
Documents show that Google’s technology for CBP will be used in conjunction with work done by Anduril Industries, a controversial defense technology startup founded by Palmer Luckey. The brash 28-year-old executive — also the founder of Oculus VR, acquired by Facebook for over $2 billion in 2014 — is an open supporter of and fundraiser for hard-line conservative politics; he has been one of the most vocal critics of Google’s decision to drop its military contract. Anduril operates sentry towers along the U.S.-Mexico border that are used by CBP for surveillance and apprehension of people entering the country, streamlining the process of putting migrants in DHS custody.
CBP’s Autonomous Surveillance Towers program calls for automated surveillance operations “24 hours per day, 365 days per year” to help the agency “identify items of interest, such as people or vehicles.” The program has been touted as a “true force multiplier for CBP, enabling Border Patrol agents to remain focused on their interdiction mission rather than operating surveillance systems.”
It’s unclear how exactly CBP plans to use Google Cloud in conjunction with Anduril or for any of the “mission needs” alluded to in the contract document. Google spokesperson Jane Khodos declined to comment on the contract. CBP, Anduril, and Thundercat Technology did not return requests for comment.
However, Google does advertise powerful cloud-based image recognition technology through its Vision AI product, which can rapidly detect and categorize people and objects in an image or video file — an obvious boon for a government agency planning to string human-spotting surveillance towers across a vast border region.
According to a “statement of work” document outlining INVNT’s use of Google, “Google Cloud Platform (GCP) will be utilized for doing innovation projects for C1’s INVNT team like next generation IoT, NLP (Natural Language Processing), Language Translation and Andril [sic] image camera and any other future looking project for CBP. The GCP has unique product features which will help to execute on the mission needs.” (CBP confirmed that “Andril” is a misspelling of Anduril.)
The document lists several such “unique product features” offered through Google Cloud, namely the company’s powerful machine-learning and artificial intelligence capabilities. Using Google’s “AI Platform” would allow CBP to leverage the company’s immense computer processing power to train an algorithm on a given set of data so that it can make educated inferences and predictions about similar data in the future.
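The train-then-infer workflow described above can be illustrated with a deliberately simple toy classifier. This is a rough sketch of the general pattern, not the systems CBP or Google actually run; all names, features, and data below are invented for illustration.

```python
# Toy illustration of the train-then-infer pattern: fit a model on
# labeled examples, then predict labels for new, unseen data.
# Every name and data point here is hypothetical.

def train_centroids(examples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is nearest to the input."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], features))

# "Training" phase: (feature vector, label) pairs.
training = [
    ([1.0, 1.0], "vehicle"),
    ([1.2, 0.8], "vehicle"),
    ([5.0, 5.0], "person"),
    ([4.8, 5.2], "person"),
]
model = train_centroids(training)

# "Inference" phase: classify a new observation near the "person" cluster.
print(predict(model, [5.1, 4.9]))  # prints "person"
```

The point of offloading this to a cloud platform is scale: with millions of training examples and image-sized feature vectors, the fitting step demands the kind of processing power few agencies own outright.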
Google’s Natural Language product uses the company’s machine learning resources “to reveal the structure and meaning of text … [and] extract information about people, places, and events,” according to company marketing materials, a technology that can be paired with Google’s speech-to-text transcription software “to extract insights from audio conversations.”
Although it presents no physical obstacle, Anduril’s “virtual wall” system works by rapidly identifying anyone approaching or attempting to cross the border (or any other perimeter) and relaying their exact location to border authorities on the ground, offering a relatively cheap, technocratic, and less politically fraught means of thwarting would-be migrants.
Proponents of a virtual wall have long argued that such a solution would be a cost-effective way to increase border security. The last major effort, known as SBInet, was awarded to Boeing during the George W. Bush administration, and resulted in multibillion-dollar cost overruns and technical failures. In recent years, both leading Democrats and Republicans in Congress have favored a renewed look at technological solutions as an alternative to a physical barrier along the border.
Anduril surveillance offerings consist of its “Ghost” line of autonomous helicopter drones operated in conjunction with Anduril “Sentry Towers,” which bundle cameras, radar antennae, lasers, and other sophisticated sensors atop an 80-foot pole. Surveillance imagery from both the camera-toting drones and sensor towers is ingested into “Lattice,” Anduril’s artificial intelligence software platform, where the system automatically flags suspicious objects in the vicinity, like cars or people.
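The automatic flagging step can be pictured as a filter over detector output: keep only detections of certain classes above a confidence threshold and surface their locations to operators. The sketch below is a guess at the general shape of such logic, not Lattice’s actual code; the class names, threshold, and data structure are invented.

```python
# Hypothetical sketch of automated flagging over detector output:
# alert only on selected object classes above a confidence cutoff.
# Class names, threshold, and record format are illustrative inventions.

FLAGGED_CLASSES = {"person", "car"}
MIN_CONFIDENCE = 0.8

def flag_detections(detections):
    """Return the detections worth alerting on."""
    return [d for d in detections
            if d["label"] in FLAGGED_CLASSES
            and d["confidence"] >= MIN_CONFIDENCE]

# A simulated frame's worth of detections from towers and drones.
feed = [
    {"label": "person", "confidence": 0.93, "location": (31.33, -110.94)},
    {"label": "animal", "confidence": 0.97, "location": (31.34, -110.95)},
    {"label": "car",    "confidence": 0.55, "location": (31.35, -110.96)},
]

for alert in flag_detections(feed):
    print(alert["label"], alert["location"])  # prints: person (31.33, -110.94)
```

Only the high-confidence “person” detection survives the filter; the animal is the wrong class and the low-confidence car falls below the cutoff.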
INVNT’s collaboration with Anduril is described in a 2019 presentation by Chris Pietrzak, deputy director of CBP’s Innovation Team, which listed “Anduril towers” among the technologies being tested by the division that “will enable CBP operators to execute the mission more safely and effectively.”
And a 2018 Wired profile of Anduril noted that one sentry tower test site alone “helped agents catch 55 people and seize 982 pounds of marijuana” in a 10-week span, though “for 39 of those individuals, drugs were not involved, suggesting they were just looking for a better life.” The version of Lattice shown off for Wired’s Steven Levy appeared to already implement some AI-based object recognition similar to what Google provides through the Cloud AI system cited in the CBP contract.
The documents do not spell out how, exactly, Google’s object recognition tech would interact with Anduril’s technology. But Google has excelled in the increasingly competitive artificial intelligence field; creating a computer system from scratch capable of quickly and accurately interpreting complex image data without human intervention requires an immense investment of time, money, and computer power to “train” a given algorithm on vast volumes of instructional data.
“We see these smaller companies who don’t have their own computational resources licensing them from those who do, whether it be Anduril with Google or Palantir with Amazon,” Meredith Whittaker, a former Google AI researcher who previously helped organize employee protests against Project Maven and went on to co-found NYU’s AI Now Institute, told The Intercept.
“This cannot be viewed as a neutral business relationship. Big Tech is providing core infrastructure for racist and harmful border regimes,” Whittaker added. “Without these infrastructures, Palantir and Anduril couldn’t operate as they do now, and thus neither could ICE or CBP. It’s extremely important that we track these enabling relationships, and push back against the large players enabling the rise of fascist technology, whether or not this tech is explicitly branded ‘Google.’”
Anduril is something of an outlier in the American tech sector, as it loudly and proudly courts controversial contracts that other larger, more established companies have shied away from. The company also recruited heavily from Palantir, another tech company with both controversial anti-immigration government contracts and ambitions of being the next Raytheon. Both Palantir and Anduril share a mutual investor in Peter Thiel, a venture capitalist with an overtly nationalist agenda and a cozy relationship with the Trump White House. Thiel has donated over $2 million to the Free Forever PAC, a political action group whose self-professed mission includes, per its website, working to “elect candidates who will fight to secure our border [and] create an America First immigration policy.”
Luckey has repeatedly excoriated Google for abandoning the Pentagon, a decision he has argued was driven by “a fringe inside of their own company” that risks empowering foreign adversaries in the race to adopt superior AI military capabilities. In comments last year, he dismissed any concern that the U.S. government could abuse advanced technology and criticized Google employees who signed a letter protesting the company’s involvement in Project Maven over ethical and moral concerns.
“You have Chinese nationals working in the Google London office signing this letter, of course they don’t mind if the United States has good military technology,” said Luckey, speaking at the University of California, Irvine. “Of course they don’t mind if China has better technology. They’re Chinese.”
As The Intercept previously reported, as Luckey publicly campaigned against Google’s withdrawal from Project Maven, his company quietly secured a contract for the very same initiative.
Anduril’s advanced line of battlefield drones and surveillance towers — along with its eagerness to take defense contracts now viewed as too toxic to touch by rival firms — has earned it lucrative contracts with the Marine Corps and Air Force, in addition to its Homeland Security work. In a 2019 interview with Bloomberg, Anduril chair Trae Stephens, also a partner at Thiel’s venture capital firm, dismissed the concerns of American engineers who complain. “They said, ‘We didn’t sign up to develop weapons,’” Stephens said, explaining, “That’s literally the opposite of Anduril. We will tell candidates when they walk in the door, ‘You are signing up to build weapons.’”
Palmer Luckey has not only campaigned for more Silicon Valley integration with the military and security state, he has pushed hard to influence the political system. The Anduril founder, records show, has personally donated at least $1.7 million to Republican candidates this cycle. On Sunday, he hosted President Donald Trump at his home in Orange County, Calif., for a high-dollar fundraiser, along with former U.S. ambassador to Germany Richard Grenell, Kimberly Guilfoyle, and other Trump campaign luminaries.
Anduril’s lobbyists in Congress also pressed lawmakers to include increased funding for the CBP Autonomous Surveillance Tower program in the DHS budget this year, a request that was approved and signed into law. In July, around the time the program funding was secured, the Washington Post reported that the Trump administration deemed Anduril’s virtual wall system a “program of record,” a “technology so essential it will be a dedicated item in the homeland security budget,” reportedly worth “several hundred million dollars.”
The autonomous tower project awarded to Anduril and funded through CBP is reportedly worth $250 million. Records show that $35 million for the project was disbursed in September by the Air and Marine division, which also operates drones.
Anduril’s approach contrasts sharply with Google’s. In 2018, Google tried to quell concerns over how its increasingly powerful AI business could be literally weaponized by publishing a list of “AI Principles” with the imprimatur of CEO Sundar Pichai.
“We recognize that such powerful technology raises equally powerful questions about its use,” wrote Pichai, adding that the new principles “are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.” Chief among the new principles were directives to “Be socially beneficial,” “Avoid creating or reinforcing unfair bias,” and a mandate to “continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.”
The principles include a somewhat vague list of “AI applications we will not pursue,” such as “Technologies that cause or are likely to cause overall harm,” “weapons,” “surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”
It’s difficult to square these commitments to peaceful, nonsurveillance AI humanitarianism with a contract that places Google’s AI power behind both a military surveillance contractor and a government agency internationally condemned for human rights violations. Indeed, in 2019, over 1,000 Google employees signed a petition demanding that the company abstain from providing its cloud services to U.S. immigration and border patrol authorities, arguing that “by any interpretation, CBP and ICE are in grave violation of international human rights law.”
“This is a beautiful lesson in just how insufficient this kind of corporate self-governance really is,” Whittaker told The Intercept. “Yes, they’re subject to these AI principles, but what does subject to a principle mean? What does it mean when you have an ethics review process that’s almost entirely non-transparent to workers, let alone the public? Who’s actually making these decisions? And what does it mean that these principles allow collaboration with an agency currently engaged in human rights abuses, including forced sterilization?”
“This reporting shows that Google is comfortable with Anduril and CBP surveilling migrants through their Cloud AI, despite their AI Principles claims to not causing harm or violating human rights,” said Poulson, the founder of Tech Inquiry.
“Their clear strategy is to enjoy the high profit margin of cloud services while avoiding any accountability for the impacts,” he added.