Eyal Weizman, an Israeli-born British architect who uses visual analysis to investigate war crimes and other forms of state violence, was barred from traveling to the United States this week for an exhibition of his work after being identified as a security risk by an algorithm used by the Department of Homeland Security.
Weizman, a professor at Goldsmiths, University of London, where his human rights research agency Forensic Architecture is based, frequently travels to the U.S. to lecture and exhibit work. Last year, Forensic Architecture was selected to take part in the Whitney Biennial and produced an investigation — in collaboration with Intercept co-founder Laura Poitras — of how a Whitney board member profited from the manufacture of tear gas used against civilians in a dozen countries, including at the U.S.-Mexico border.
But last week, as he prepared to travel to Miami for the opening of a major survey exhibition of Forensic Architecture’s work, “True to Scale,” at the Museum of Art and Design at Miami Dade College, Weizman was informed by email that he had been removed from a visa-waiver scheme and would not be permitted to board his flight.
“The revocation notice stated no reason, and the situation gave me no opportunity to appeal or to arrange for an alternative visa,” Weizman explained in a statement read at the Miami opening on Wednesday night.
“It was also a family trip. My wife, professor Ines Weizman, who was scheduled to give talks in the U.S. herself, and our two children traveled a day before I was supposed to go,” the architect added. “They were stopped at JFK airport in New York, where Ines was separated from our children and interrogated by immigration officials for two and a half hours before being allowed entry.”
When he visited the U.S. Embassy in London to find out what happened, Weizman said in an interview, an officer told him that his authorization to travel had been revoked because “the algorithm” had identified an unspecified security threat associated with him.
As my colleague Sam Biddle reported in 2018, Homeland Security paid an automated machine learning firm, DataRobot, $200,000 that year to test “predictive models to enhance identification of high-risk passengers” at foreign airports, with the aim of developing a software algorithm capable of making real-time predictions in less than one second about who is a potential terrorist.
Weizman told The Intercept that the officer at the U.S. Embassy in London who denied him a visa said that he had no idea what triggered his rejection by the algorithm. “I don’t see what the system flagged up, but you need to help me figure it out,” the officer said. He then asked Weizman to supply the embassy with additional information, including 15 years of his travel history and “the names of anyone in my network whom I believed might have triggered the algorithm.”
Weizman refused to provide that information, because his work investigating state-sponsored human rights violations “means being in contact with vulnerable communities, activists and experts, and being entrusted with sensitive information.”
As Weizman pointed out in our phone conversation, the very nature of the type of open-source investigation he helped to pioneer — the painstaking work of collaborating online to verify and evaluate testimony and visual evidence of human rights abuses shared by witnesses — requires researchers “to create very varied and very diverse sets of networks,” putting them in contact with sources in places like Syria, Gaza, or Pakistan that a computer might be trained to view with suspicion. And, to a computer looking for suspect patterns of behavior, the interactions of a person posing a genuine security threat might be indistinguishable from the activities of an open-source investigator or a journalist.
“If there is an associative algorithm, it looks for relations between people and things: travel, patterns of communication,” Weizman said. “If it is associative, our open-source research will always be vulnerable to this sort of policing, where it’s not what you do, it’s the pattern of what you do that gets policed.”
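The dynamic Weizman describes can be illustrated with a toy sketch. The code below is entirely hypothetical — the names, graph structure, and flagging rule are invented for illustration and bear no relation to any actual DHS system — but it shows why association-based screening cannot distinguish an open-source researcher from a genuine suspect when their contact patterns look the same:

```python
# Hypothetical illustration of guilt-by-association flagging.
# This is NOT any real screening system; names and structure are invented.
from collections import deque

def flag_by_association(contacts, watchlist, max_hops=2):
    """Flag everyone within `max_hops` of a watchlisted node in an
    undirected contact graph given as an adjacency dict."""
    flagged = set()
    for start in watchlist:
        seen = {start}
        queue = deque([(start, 0)])
        while queue:  # breadth-first search outward from the watchlisted node
            person, hops = queue.popleft()
            flagged.add(person)
            if hops == max_hops:
                continue
            for neighbor in contacts.get(person, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append((neighbor, hops + 1))
    return flagged

# A researcher and a genuine suspect both message the same source in a
# conflict zone: their positions in the graph are identical.
contacts = {
    "researcher": ["source_in_syria"],
    "suspect": ["source_in_syria"],
    "source_in_syria": ["researcher", "suspect", "militant"],
    "militant": ["source_in_syria"],
}
flags = flag_by_association(contacts, watchlist={"militant"})
print(sorted(flags))  # the researcher and the suspect are both flagged
```

In this toy model, "it's not what you do, it's the pattern of what you do": the algorithm sees only edges in the graph, so the researcher's legitimate contact with a source is indistinguishable from the suspect's.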
A perfect example, Weizman said, was the kind of research his team did on the chemical attack in the Syrian town of Douma in 2018, which led to sophisticated architectural modeling of the site of the attack. (That work was also used by the New York Times in its visual investigation of the attack, which concluded that it had been carried out by the Syrian government.)
“We go online, we look at the effects of chlorine,” Weizman said. “After we look at that, we go to certain channels on YouTube that are operated by people on the ground, close to the resistance, who are uploading video. Sometimes we are DMing with people in Syria. We don’t necessarily go to Syria, but we are in touch with refugees and activists.”
Weizman discussed his approach to the investigation of the attack in Douma in an Intercept video report last year.
For another project, a new investigation of a 2015 shipwreck off the coast of Greece that took the lives of at least 43 migrants, mainly Syrian refugees, Weizman said his team contacted witnesses to the tragedy without ever asking those people if anyone else on the boat, or in their extended networks of friends, relations, or acquaintances back in the war zone, might have had militant links. “Who wants to ask when you are in touch with survivors of a shipwreck in the Aegean, who was on the boat?” Weizman said.
“This much we know: We are being electronically monitored for a set of connections — the network of associations, people, places, calls, and transactions — that make up our lives,” Weizman said in his statement to the opening in Miami he was unable to attend. “These networks are the lifeline of any investigative work.”
The lack of information about why exactly the computer software had barred Weizman from traveling to the U.S. led to speculation that the algorithm might have been gamed by false information provided to American authorities by his critics.
Another prominent open-source researcher, Eliot Higgins of Bellingcat, suggested in an email on Thursday that Weizman could also have been the victim of anonymous complaints to U.S. authorities from a cohort of online activists angered by his work implicating forces loyal to Syrian President Bashar al-Assad in chemical attacks. “Forensic Architecture certainly got their attention between their Douma and Khan Sheikhoun work, so they might have wanted to cause him problems,” Higgins speculated. “It’s rather like swatting.” Swatting is a form of harassment associated with online communities in which false reports to law enforcement are used to target victims.
As Higgins noted, the technique has been deployed previously by pro-Assad Twitter trolls and bloggers against reporters and activists who have documented war crimes by Syrian and Russian government forces.
In January, Oz Katerji, a British journalist who covered the war in Syria, reported that he was contacted by the counterterrorism unit of London’s Metropolitan Police force after allegations about him were phoned in to a confidential hotline. “They told me they believe the reports against me are baseless and malicious in intent, and there is no case against me,” Katerji wrote on Twitter. “They did however confirm to me that this false report was a result of the vindictive Islamophobic online trolling campaign against me for my reporting on Syria.”
Katerji also posted a screenshot showing that Vanessa Beeley, a pro-Assad blogger boosted by Russian state television, had encouraged her followers to report him and a list of other journalists and news outlets critical of Assad — including the BBC, Channel 4 News, and The Guardian — to the authorities for supposedly violating the U.K. Terrorism Act.
Katerji added that “one of the allegations against me was that I am a supporter of the White Helmets, a Syrian humanitarian medical organization partially funded by the U.K. Foreign and Commonwealth Office.”
As Muhammad Idrees Ahmad, a lecturer in digital journalism at the University of Stirling in Scotland, pointed out in an interview, there is evidence that similar efforts have been successful in the U.S. as well.
In 2016, Raed Al Saleh, the head of Syria Civil Defense — the Western-backed rescue organization better known as the White Helmets, which searches for survivors of bombings in rebel-held parts of Syria — was denied entry into the United States when he was due to accept an award in Washington. Video recorded by the rescue group was, for much of the conflict in Syria, one of the only sources of information about the impact of Syrian and Russian airstrikes on rebel-held territory. That led to an orchestrated campaign, featured heavily on Russian state channels, to discredit the group by claiming that it was an arm of Islamist rebel groups.
After Saleh was barred from the U.S., another pro-Assad blogger boasted on Twitter that he had reported him and the group giving him the award to Homeland Security “and organized others to do so as well.”
Despite his exclusion from the U.S., Weizman explained that, in conjunction with the exhibition of Forensic Architecture investigations in Miami, his group plans to train local activists to apply its techniques to investigate “human rights violations in the Homestead detention center in Florida … where migrant children have been held in what activists describe as ‘regimented, austere, and inhumane conditions.'”
The ban on Weizman’s travel was denounced on Thursday by Margaret Huang, the executive director of Amnesty International USA. “Stopping Eyal Weizman from entering the United States does a grave disservice to human rights documentation efforts,” Huang said in a statement. “It would be ludicrous to suggest that Eyal Weizman poses a security threat, and it’s an embarrassment for the U.S. to bar him.”
“Invoking the results of an algorithm cannot disguise the spurious nature of this visa decision, and, in fact, it heightens our concerns about how the decision was taken,” Huang added. “This is ideological exclusion via algorithm, a troubling indicator of the bias and irrationality of the high-tech security state.”