Syrian surprise: How AI might factor in future surprise attacks
Supporters of the Syrian opposition celebrate in Paris following the rebel takeover of Damascus on Sunday. File Photo by Mohammed Badra/EPA-EFE
Syrian President Bashar al-Assad was overthrown Sunday after more than 24 years in power. Assad, 59, fled to Russia as Syrian rebels seized control of Damascus, ending the five-decade rule established by his father, Hafez al-Assad.
Both the Syrian opposition’s offensive against Damascus and the near-lightning speed with which the Assad regime collapsed were surprises in the extreme. Despite nearly 1,000 U.S. troops in eastern Syria and presumably border-to-border surveillance of the country by outside powers, including Russia and Iran, no one detected the offensive in advance or anticipated its outcome. How?
The history of surprise attacks includes many instances in which the instigators succeeded in at least initially fooling and gaining the upper hand on the target. On June 22, 1941, Adolf Hitler and Nazi Germany invaded the Soviet Union despite the Molotov-Ribbentrop Pact. The attack so stunned Soviet leader Josef Stalin that he was reportedly near-catatonic for days.
Nearly six months later, the Imperial Japanese Navy launched its infamous sneak attack on Pearl Harbor early in the morning of Dec. 7, 1941. The attack targeted the U.S. Navy’s battleship fleet, sinking or badly damaging seven battleships; only the USS Tennessee escaped with relatively minor damage.
A litany of other surprise attacks supports a record of initial success. The Arab-Israeli wars all began with surprise attacks: in 1948, 1956, 1967, 1973 and, most recently, on Oct. 7, 2023. Only the “Six-Day War” initiated by Israel in 1967 was successful in the long term, with Israel retaining Syria’s Golan Heights and Jordan’s West Bank.
In 1982, Argentina surprised Britain by invading and occupying the Falkland Islands and South Georgia, deep in the South Atlantic. Britain responded by launching Operation Corporate, which routed the Argentine army after a long and testing 8,000-mile transit from home waters. To a much lesser degree, the U.S. invasion of Grenada the next year came as a surprise to British Prime Minister Margaret Thatcher.
More examples include North Korea, goaded on by Stalin, attacking and capturing the South Korean capital of Seoul in June 1950 with no warning. Moreover, China’s intervention that November caught the United States and General Douglas MacArthur completely unaware.
The Soviet Union marched into Hungary in 1956 and Czechoslovakia in 1968, surprising the global community. While Russia’s invasion of Ukraine in 2022 was far from a surprise, its annexation of Crimea in 2014 came as a shock.
One conclusion might be that predicting or anticipating a surprise attack has proven impossible. And while all of these attacks succeeded in exploiting surprise, surprise hardly assured overall victory, as Nazi Germany and Imperial Japan learned. Still, what, if anything, might be done to improve the ability to predict or anticipate surprise?
The U.S. military has long relied on “war games” to anticipate how battles might unfold. Today, war games are no longer confined to the military: the private sector has made full use of these techniques, also called “red teaming,” in planning and analyzing options for prevention and response.
What might profoundly change the ability of war games to analyze, evaluate, compare and predict attacks is artificial intelligence (AI). AI has two potentially revolutionary advantages. First, along with quantum computing, AI might simultaneously model orders of magnitude more scenarios than can be examined today. This alone expands the universe for evaluation. And AI can prioritize these scenarios according to their probabilities.
Second, and even more dramatic, AI produces answers and solutions that human intelligence alone cannot. Simply put, AI operates along lines different from, and often contrary to, how humans think and reason. Humans still do not fully understand how AI arrives at its results and does its “thinking.” While AI has a potential dark side, as long as humans retain ultimate decision-making authority, this need not be an unacceptable risk.
Of course, without sufficient data and information, AI cannot be assumed to function as a crystal ball or tarot deck. Separating reliable data and good information from massive amounts of random noise is a non-trivial challenge. And another potential problem looms: groupthink.
Today, many members of Congress view China as the greatest threat to U.S. security. If AI derived a set of contradictory conclusions about the Chinese threat, would groupthink ignore or reject those findings? The same problem applied to the George W. Bush administration’s 2003 invasion of Iraq to destroy weapons of mass destruction (WMD) that ultimately did not exist. Had AI established that Iraq no longer possessed WMD, would that have overcome the groupthink of the time?
Trillions of dollars are being spent on AI. The Department of Defense has a long relationship with AI and machine learning, having set up its first AI-specific office in 2018. Whether the DoD or the private sector takes charge, someone must lead the effort to use AI in thinking about the future.
Harlan Ullman is UPI’s Arnaud de Borchgrave Distinguished Columnist, a senior advisor at Washington’s Atlantic Council, the prime author of “shock and awe” and author of “The Fifth Horseman and the New MAD: How Massive Attacks of Disruption Became the Looming Existential Danger to a Divided Nation and the World at Large.” Follow him @harlankullman. The views and opinions expressed in this commentary are solely those of the author.