Perhaps because it lies at the perfect nexus of genuinely-very-complicated and impossibly-confounded-by-marketing-buzzword-speak, the term “AI” has become a catchall for anything algorithmic and sufficiently technologically impressive. AI, which is supposed to stand for “artificial intelligence,” now spans applications from cameras to the military to medicine.
One thing we can be sure about AI — because we are told it so often, and at an ever-higher pitch — is that whatever it actually is, the national interest demands more of it. And we need it now, or else China will beat us there, and we certainly wouldn’t want that, would we? What is “there,” exactly? What does it look like, how would it work, and how would it change our society? Irrelevant! The race is on, and if America doesn’t start taking AI seriously, we’re going to find ourselves the losers in an ever-widening Dystopia Gap.
A piece on Politico this week by Luiza Ch. Savage and Nancy Scola exemplifies the mix of maximum alarm and minimum meaning that’s become so typical in our national (and nationalist) discussion around artificial intelligence. “Is America ceding the future of AI to China?” the article asks.
We’re meant to take this possibility as not only very real but as an unquestionably bad thing. One only needs to tell the public that the country risks “ceding” control of something — literally anything — to the great foreign unknown for our national eyes to grow wide.
“The last time a rival power tried to out-innovate the U.S. and marshaled a whole-of-government approach to doing it, the Soviet Union startled Americans by deploying the first man-made satellite into orbit,” the article says. “The Sputnik surprise in 1957 shook American confidence, galvanized its government and set off a space race culminating with the creation of NASA and the moon landing 50 years ago this month.”
Our new national dread, the article continues, is “whether another Sputnik moment is around the corner” — in the form of an AI breakthrough from the keyboards of Red China instead of Palo Alto.
Forget that Sputnik was not actually a “surprise” for the powers that be, or that Sputnik itself was basically a beeping aluminum beach ball — “barely more than a radio transmitter with batteries,” the magazine Air & Space once said. There’s a bigger problem here: Framing the Cold War as a battle of innovators conveniently avoids mentioning that the chief innovation in question wasn’t Sputnik or the Space Shuttle or any peacetime venture, but the creation of an arsenal for instant global nuclear holocaust at the press of a button.
Sure, yes, it’s doubtful we could have “marshaled a whole-of-government approach” to space travel without having first “marshaled a whole-of-government approach” to rocket-borne atomic genocide, but to highlight the eventual accomplishments of NASA without acknowledging that it entailed a very close dance with a worldwide apocalypse is ahistoric and absurd. To use this comparison to goad us into another nationalist tech race with a global military power is outright dangerous — if only because the victory remains completely undefined. How would we “beat” China, exactly? Beat them at what, exactly? Which specific problems do we hope to use AI to fix? At a point in history when cities are beginning to scrutinize and outright ban “AI” technologies like facial recognition, are we sure the fixes aren’t even worse than the problems? Nationalists caught in an arms race have no time to answer questions like these or any others; they’ve got a race to win!
All anyone can manage to do is bark that we need more, more, more AI, more investments, more R&D, more collaborations, more ventures, more breakthroughs, simply more AI. Maybe we’ll worry about what we needed all of this for in the first place once we’ve beaten China there. Or maybe an algorithm will explain it to us, along with the locations of all our family members and a corresponding score that quantifies their social utility and biometric trustworthiness.
The Politico piece is full of worried voices cautioning that we can’t let Americans fall behind in the global invasive-surveillance race, completely unable to explain why this would be a bad thing. “The city of Tianjin alone plans to spend $16 billion on AI — and the U.S. government investment still totals several billion and counting,” despairs Elsa Kania of the technology and national security program at the Center for a New American Security. “That’s still lower by an order of magnitude.” Amy Webb, a New York University business school professor, told Politico, “We are being outspent. We are being out-researched. We are being outpaced. We are being out-staffed.”
Of course, it’s not just these researchers, nor is it just Politico: The necessity of absolute American dominance in an extremely unpredictable, deeply hazardous, and altogether hard to comprehend field has made the great leap from think-tank anxiety nightmare to political talking point. At the first Democratic presidential debate, South Bend, Indiana, Mayor Pete Buttigieg sounded the alarm:
China is investing so they could soon be able to run circles around us in artificial intelligence, and this president is fixated on the relationship as if all that mattered was the balance on dishwashers. We have a moment when their authoritarian model is an alternative to ours because ours looks so chaotic because of internal divisions. The biggest thing we have to do is invest in our own domestic competitiveness.
In the same breath as he states this technology is being used to bolster authoritarianism abroad, Buttigieg urges a renewed national investment in that very technology at home.
So, too, have the likes of Facebook and Google used the threat of Chinese competition in the digital-panopticon sector as a bulwark against government regulation, warning that if limits were placed on what these companies build and how they use it, Chinese engineers would get there first. Similarly, virtual reality prodigy-turned-defense contractor Palmer Luckey, whose surveillance firm Anduril leans heavily on machine learning, earlier this year bemoaned American tech companies’ slight unwillingness to commit the full force of their AI engineering talent to the U.S. military in the wake of Google’s Project Maven controversy.
Just this week, Luckey put down the nationalist dog whistle and explicitly called for an American AI program modeled on the nuclear arms race: “If we had not been the leader, we would not have dictated the rules,” the 26-year-old told CNBC.
Anduril investor and fellow Trump backer Peter Thiel echoed Luckey’s sentiments in recent public remarks, going so far as to claim that Google’s AI work had already been compromised by Chinese spies. For some, the militarism of “beating China at AI” is implied with a wink and a nod; for others, it’s the entire game.
Rarely does anyone explain exactly why we should ever want to beat China in this particular field, one that’s helped the government there build incredibly powerful systems of social control, civil liberty annihilation, and minority oppression — areas where the U.S. is still competitive, sure, but perhaps falling behind. A February report by Bloomberg notes that in Tianjin — where Elsa Kania worries we’re being outspent on AI by an “order of magnitude” — it “will soon be hard to go anywhere … without being watched.” Second place sounds more than fine.
Even moderate voices find themselves hopelessly caught in the pro-AI fervor, the rush to develop this technology for its own sake. New America’s Justin Sherman has written numerous articles about why framing AI development as an “arms race” is wrongheaded — but only because it leaves out all the other potentially frightening and draconian gifts a nationalist AI sprint could produce. “Competing AI development in the United States and China needs to be reframed from the AI arms race rhetoric, but that doesn’t mean AI development itself doesn’t matter,” Sherman wrote in March. “In fact, the opposite is true.”
Sherman highlights a couple of nonweapon AI applications we ought not to leave to the Chinese, like the potential to use self-teaching software to detect cancer — though he provides only a glancing admission that “many legal and ethical issues plague AI in healthcare (e.g., data privacy, AI bias).” It’s hard to square the belligerent drumbeat of AI nationalism with a calm, composed approach to making sure these technologies are only developed and deployed within a rigorous ethical framework, after all. Moving fast and breaking things is the American way.
Speed is the real threat here, and speed is exactly what’s demanded every time a Buttigieg or Sandberg warns we’re falling behind. Self-improving software that detects, categorizes, and predicts far better and faster than any humans ever could is an inherently fraught, socially perilous technology. It demands careful consideration, even if that means glacial “innovation.”
Given the deceptive, reckless, and at times downright vampiric way the likes of Facebook and Google already behave, who could possibly think that the “many legal and ethical issues” Sherman worries about could be properly addressed in the middle of a race? Are we really ready to grapple with Amazon once it’s been handed the mantle of Sputnik and Apollo 11?
Careful consideration demands a slower pace — and a slower pace means, yes, potentially losing a race to the bottom against a national adversary that clearly has no qualms making the bottom as technologically impressive as possible. Rather than clamoring for a dead sprint toward some sort of national AI supremacy, defined however and by whomever, our time might be better spent worrying in earnest about what lies at the finish line.