The Foreign Service Journal, May-June 2026

FOCUS: AI IN DIPLOMACY

At the Intersection of AI and Foreign Policy
A Q&A with Kelsey D. Atherton

The FSJ interviewed a military technology journalist and expert on foreign policy to learn about the risks and rewards of AI in our industry. Kelsey D. Atherton is the chief editor at the Center for International Policy, where he commissions, edits, and publishes a journal on progressive foreign policy. Previously, he worked for more than a decade as a military tech journalist, writing for outlets such as Popular Science, Slate, and The New York Times. His grandfather, Ambassador Alfred LeRoy Atherton Jr., was a career Foreign Service officer.

Foreign Service Journal: What risks related to AI are we not seeing sufficiently? What are the consequences if we fail to account for those risks?

Kelsey D. Atherton: Automation makes what it automates opaque, and then invisible. When the product works as intended, from summarizing an email to navigating a vehicle down a city street, we don’t notice whether there are mistakes in the process. But when accidents happen or errors emerge, the opacity of their creation, and the lack of awareness of the error on the part of the human employing the automated process, carry outsized risk. One of the more novel threats is unexplained emergent behavior, where an automated process not only fails but fails differently than a human might be expected to fail. Safeguards, safe evaluation of responses under adversarial conditions via red-teaming, and tools for forensic investigation are all needed to manage, reduce, and mitigate the odds of both expected and unanticipated error. Without those doctrines and practices of accountability, we can expect AI-led automation to fail in novel ways and to catch us by surprise, without adequate preparation or guidance on how to proceed.

FSJ: What is the Pentagon doing on AI that is novel and innovative? Where have they failed? What lessons could the State Department learn from our colleagues in uniform?

KDA: Before I left the military technology beat full time in November 2023, the most compelling use of AI was profoundly boring: automated predictive maintenance and assessment, like using robots and data collection to predict where ships are rusting and need extra help. One of the flashier ideas being explored is synthetic training data, using AI to generate and iterate novel battlefield scenarios, ones that are important for a machine to recognize yet likely to be lacking, or certainly lacking at high fidelity, in real training data. I think the biggest lesson from the Pentagon is to look at where off-the-shelf AI is already capable (e.g., data extraction, summary, coding) while staying within the bounds of what a human can review, to ensure that the process is right. This is the boring work of logistics and personnel management, making sure that systems are sustained and repaired before they break, which is equally important for everything from aircraft carriers to embassies. I would also add a major word of caution, especially about experimenting with AI just to say you’ve done it. It remains to be seen if and when the Pentagon’s full embrace of AI tools leads to
