The US has unveiled an unusual new approach to saber-rattling with China: emotion predictions.
According to Reuters, military commanders in the Pacific have developed a tool to forecast how the Chinese government will react to American actions in the region.
A US defense official said the software calculates “strategic friction,” but provided little detail about how it works.
We tried to dig up further information — but unearthed more questions than answers.
How would the tool work?
The tool reportedly analyzes data since early 2020 and assesses the impact of historical actions. It then predicts which future activities will upset China, up to four months in advance.
It’s hard to envision how such a system would work in practice. How is the training data validated? What information does it use to reach a decision? How is it built into the workflow of the military? Could such a system really provide accurate predictions?
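Reuters gives no technical detail, so any guess at the internals is speculation. As a thought experiment, though, even the simplest version of such a system might score a proposed action against the severity of past Chinese reactions to similar actions. The sketch below is purely illustrative: the action categories, severity numbers, and threshold are invented assumptions, not anything attributed to the Pentagon's software.

```python
# Purely illustrative toy model of a "strategic friction" score.
# All data and thresholds here are invented for demonstration only.

# Hypothetical historical log: (action_type, observed reaction severity, 0-1)
HISTORY = [
    ("arms_sale", 0.8),
    ("arms_sale", 0.7),
    ("congressional_visit", 0.9),
    ("joint_exercise", 0.5),
    ("warship_transit", 0.95),
    ("trade_talks", 0.1),
]

def friction_score(action_type: str) -> float:
    """Average observed reaction severity for past actions of this type."""
    severities = [s for a, s in HISTORY if a == action_type]
    if not severities:
        return 0.5  # no history for this action: fall back to an uninformative prior
    return sum(severities) / len(severities)

def predict_reaction(action_type: str, threshold: float = 0.6) -> str:
    """Classify a proposed action as likely to provoke a strong or muted reaction."""
    score = friction_score(action_type)
    return "likely strong reaction" if score >= threshold else "likely muted reaction"

print(predict_reaction("arms_sale"))    # -> likely strong reaction
print(predict_reaction("trade_talks"))  # -> likely muted reaction
```

Even this toy version exposes the core weakness the questions above point at: the model can only echo patterns already present in its historical log, which is exactly why unprecedented moves would blindside it.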
We asked Peter Lee, Professor of Applied Ethics and the Director of Security and Risk Research at the University of Portsmouth, for his thoughts.
Professor Lee, whose research focuses on AI-enabled weapon systems, was dubious about the tool’s potential:
“I’m cautiously skeptical about the extent to which AI can be introduced to weapon systems in an ethical and legal way. But I’m much more skeptical about the ability of AI to predict major political responses from China’s political leaders.”
Lee points to the example of China’s actions on the last day of COP26. Few people predicted that the nation would insist on rewording the final statement from an agreement to “phase out” the use of coal-burning power plants to just “phase down” those plants instead.
“At a stroke, it allowed itself to pretty much do anything with coal for 50 years,” he said. “The political maneuver worked because it was unpredictable and timed perfectly to give the whole conference no choice but to accede to the demand immediately or lose the final agreed statement entirely.”
Trusting some Magic Eight Ball to envision such moves seems optimistic at best, particularly if it’s only analyzing data from 2020 until today.
What would it be used for?
The software reportedly predicts responses to actions including arms sales, congressional visits to Taiwan, and US-backed military activity.
The official quoted by Reuters said demand for the tool was sparked by incidents such as China’s response to the US and Canada sending a warship through the Taiwan Strait in October.
The US surely didn’t need a “tool” to realize that this action would antagonize Beijing. Couldn’t its throng of human analysts have predicted this response?
Deputy Secretary of Defense Kathleen Hicks suggested the tool would play a supplementary role.
“With the spectrum of conflict and the challenge sets spanning down into the grey zone, what you see is the need to be looking at a far broader set of indicators, weaving that together and then understanding the threat interaction,” she said, per Reuters.
I hope the US isn’t delegating its understanding of other nations to a machine — or letting AI determine military action. At best (or worst), I expect it would inform decisions taken by military bodies or civil leadership.
Why has the US announced its existence?
Tensions between the US and China are mounting. The nations are increasingly at odds over Taiwan, trade, technology, and governance. US Secretary of State Antony J. Blinken has described the relationship as “the biggest geopolitical test of the 21st century.”
The unveiling of this tool adds another element to the discord. When military powers divulge a new system, it’s typically because they want their rivals to know it exists.
According to Reuters, the Pentagon wants to move budget dollars toward a military that can deter China and Russia. The promotion of the predictive tool could form part of this strategy.
However, we can only speculate about the motivations behind disclosing the software. The US may hope to intimidate China with the tech, add a veneer of data-driven “objectivity” to its decisions, or shape a deception campaign. It would certainly provide a new approach to plausible deniability.
Whatever the intentions, the next time the US incurs the wrath of China, questions will be asked. Did US officials ignore the software? Did it make inaccurate predictions? Did they knowingly choose the antagonistic action?
What could possibly go wrong?