AI Should Assist, Not Replace, U.S. Diplomats

The work of diplomats on the ground in foreign countries can be made easier with artificial intelligence. But the diplomats themselves cannot be replaced.

BY MAHVASH SIDDIQUI


The Mosul Dam in Iraq, the fourth-largest dam in the Middle East, circa 2017.
U.S. Army

When the Islamic State of Iraq and Syria (ISIS) seized Iraq’s Mosul Dam in 2014, the consequences could have been catastrophic. Analysts warned that a structural collapse might unleash a flood wave capable of claiming millions of lives downstream. At that moment, as the sole environment, science, technology, and health (ESTH) officer in country, and with no U.S. Army Corps of Engineers hydrologists on the ground, I became the de facto water-security adviser to the senior U.S. commander in Iraq.

Interpreting piezometer readings and translating technical risk into operational guidance is not typically in a diplomat’s job description. Yet that was my reality. I relied on long-unused physics and calculus training to brief the commander daily on water manipulation and structural risk. A misread signal would have carried enormous consequences.

In those moments, I often wished for an AI system capable of rapidly processing complex engineering data and converting it into usable assessments. Such a tool would not have replaced human judgment but could have reduced uncertainty, allowing decision-makers to focus on strategy and safety. AI tools designed to translate specialized scientific data into operational language could serve as force multipliers in crisis environments where expertise is scarce and time is critical.

AI as a Predictive Tool

In 2013, before ISIS dominated international headlines, I saw firsthand how fragmented signals can obscure emerging threats. Based on conversations with Iraqi counterparts, local contacts, and regional reporting, our team in Baghdad relayed unclassified warnings to the National Security Council about an influx of foreign fighters entering Iraq. Local Iraqi and Syrian news outlets were documenting the trend, but international coverage was largely absent.

The prevailing assumption in Washington, D.C., was that Iraq was stabilizing, reinforcing perceptions of safety that helped justify a reduced external presence. The warning signs existed, but they were dispersed across local sources and languages.

AI systems capable of aggregating and translating local reporting at scale could have synthesized those indicators into a clearer early warning picture. Predictive analytics might not have prevented the crisis, but it could have accelerated awareness and sharpened policy attention by anticipating destabilization before it became visible on the ground. The lesson remains relevant: The challenge is rarely a lack of information—it is the inability to assemble it quickly enough.


I saw the operational value of such tools while serving in the Iran Threat Directorate at the Global Engagement Center in early 2021. Our team used AI-supported analysis to map disinformation networks targeting Afghanistan and identified an 800 percent surge in coordinated narratives amplified by Iranian, Russian, and Chinese actors. The scale and speed of the activity would have been nearly impossible to quantify manually, but AI allowed us to measure how malign influence campaigns were shaping public perception in real time.

Yet technology alone does not guarantee action; institutional resistance prevented the operational response we proposed. Tools are only as effective as the institutions prepared to act on what they reveal.

AI is particularly well suited to computational tasks and large-scale pattern recognition. It can track disinformation flows, identify early indicators of instability, and automate repetitive administrative processes. Drafting templates, managing cable formats, and processing standardized reporting are logical areas for efficiency gains. Reducing procedural burdens would allow officers to invest more time in analysis, negotiation, and relationship-building—the work that defines diplomacy.

There is, however, a boundary that must remain firm.

The Human Element


The author (second from right) meeting with university students in London to discuss the possible impacts of Brexit on science and technology research, March 2016.
Courtesy of Mahvash Siddiqui

AI cannot replace the eyes and ears of diplomats on the ground. It cannot replicate trust built through years of engagement or interpret the emotional dynamics of a negotiation. While serving in the pre-Brexit U.K., I met hundreds of people across professions, regions, and social classes. Many average Britons expressed frustration that European Union (EU) labor migration was straining public services and increasing job competition—concerns that were rarely reflected in the London press and yet would later help drive the vote for Brexit.

Those face-to-face conversations revealed a political undercurrent that data alone could not have captured, and we used this information to caution State leadership to prepare for Brexit-related economic and political ripple effects on our trans-Atlantic relationship. Diplomacy depends on presence, curiosity, and empathy. Algorithms cannot walk into a pub, listen to a room, or detect social tension before it appears in polling data.

The national security implications of AI adoption are also significant. As AI platforms increasingly rely on private sector infrastructure, governments must confront difficult questions about data stewardship. Diplomatic reporting and analytic frameworks represent decades of institutional memory. Even unclassified systems contain sensitive patterns that, if aggregated or compromised, could expose vulnerabilities or distort policy. Concentrating diplomatic knowledge in proprietary private sector platforms creates dependencies that may not align with long-term American public interests.

U.S. government personnel operate under rigorous vetting and constitutional obligations to serve America first. Private firms such as Palantir and others, regardless of technical sophistication, are accountable primarily to shareholders. Their incentives and partnerships—sometimes with foreign actors—are not synonymous with national security priorities. A breach, acquisition, or shift in corporate direction could have consequences far beyond routine contractor risk. As AI systems become embedded in diplomatic workflows, the risks associated with external control of core infrastructure grow accordingly.

There is also a cognitive dimension. Overreliance on automated systems can dull analytical instincts and cerebral acuity. Good diplomats question assumptions, synthesize ambiguity, and exercise judgment under pressure. AI should sharpen those skills, not replace them. Systems that handle computation and data management should elevate human reasoning rather than encourage passive acceptance of machine outputs.


Next Steps

The policy implication of such concerns is not to reject AI but to carefully shape its role. Thoughtful adoption can extend the reach of diplomats, accelerate analysis, and reduce administrative friction. Careless use risks centralizing sensitive knowledge, weakening institutional memory, and encouraging misplaced confidence in automated conclusions. Strategic judgment and diplomatic engagement must remain human responsibilities, with technology supporting statecraft, not redefining it.

U.S. diplomacy has always relied on officers willing to operate at the edge of their expertise. AI can serve as a technical partner in moments of crisis, augmenting our ability to respond to complex threats. But it cannot replace the relational foundation of diplomacy or the ethical accountability carried by public servants.

In an era of rapid technological change, preserving the human core of foreign policy is not nostalgia. It is a security imperative. Our diplomats remain the nation’s interpreters of a complex world. AI can help us work faster and smarter, but it cannot see, feel, or understand on America’s behalf. Ensuring that it remains an assistant—not a substitute—is essential to the resilience and credibility of U.S. diplomacy.

Mahvash Siddiqui has served for more than 20 years as a Foreign Service officer in Germany, the United Kingdom, Iraq, Qatar, and India. Her roles have ranged from public diplomacy officer to acting consul general to alternate permanent representative to the International Maritime Satellites Organization and the International Maritime Organization. The views expressed in this article are those of the author and not of the U.S. government.


