The Foreign Service Journal, May-June 2026

error that reflects known errors in the civilian version of the tools, such as racial bias in word association, or underemphasizing information based on where in a document it's presented, but I would say that's a "when," not an "if."

FSJ: How might AI change the job of defense attachés, political officers, and arms-control diplomats overseas?

KDA: AI might lead defense attachés, political officers, or arms-control diplomats to believe they have more, and more accurate, information in hand than they actually do. The authoritative tone of AI reinforces this. While enterprise versions of AI products promise fewer errors, the genuine limits of large language models (LLMs) mean that much of the text analysis AI does is simple word association rather than processing and summarizing text the way a diplomat would.

With generative AI, especially as the fidelity of images improves, it is becoming easier to produce convincing fakes of everything from satellite footage to videos and photos. People working with sensitive information must learn to identify "tells" in AI imagery or other data and should become familiar with ways to verify whether images are genuine versus generated or modified.

There is also a danger in relying on LLMs as a translation and summarization tool. With automated communication tools in abundance, there is a real danger of meaning being lost in translation, and an added danger of the tool being corrupted to intentionally produce miscommunication.

FSJ: One of the most important things we do as American diplomats is speak on behalf of the U.S. government overseas. Our authenticity is what gives us our credibility. However, it's increasingly easy for anyone (diplomats and government officials included) to use AI to churn out content nonstop. Will such AI-generated content overwhelm audiences and lead to a loss of authentic voices? What implications could that have for governments and authorities who need to cut through the noise to reach audiences on topics involving safety and security?

KDA: AI slop, and the fondness for AI-slop imagery as the in-house style of the Trump administration specifically, risks drowning out real information and authentic human experience. While AI slop will likely always be some part of the information ecosystem now, the best way to communicate is still in person. For events, it's helpful to create and store your own recordings as a check against AI distortions. And generally, when it comes to meeting people where they are, "touch grass diplomacy"—getting out in the real world and meeting your interlocutors face-to-face—can be a breath of fresh air.

FSJ: A plethora of AI services and platforms is out there, and it can be hard to keep track of which AI platform, account, or service is needed for a specific task. How are the best private companies mitigating this AI traffic jam?

KDA: I think the best way to manage competing services is to incorporate them with in-house IT and have program managers track and check in with staff on whether the AI tools are delivering the capability promised, or whether they're just another box that needs checking and interferes with existing processes. A good starting point would be to document how processes are currently done before implementing an AI tool, and then check in three to six months after adoption to see what has changed, if anything.

FSJ: What's the deal with Claude?
Can you explain the significance of the disagreement between the Pentagon and Claude's creator, Anthropic?

KDA: As best I understand it, Claude is the name of the reclusive AI firm Anthropic's AI tool, a sort of high-end cousin to ChatGPT or their gutter relative Grok. All three are built on large language models and neural networks, where iterative training and inference based on word association play out in a functional "black box" until the program spits out results that effectively match patterns, often to the point of impressing people as though they are interacting with a sentient being. (In this instance, I'd argue the human users are failing the Turing Test of artificial intelligence more than the AI tools are passing it.)

As for Anthropic, Claude, and the Pentagon: Claude is aimed at enterprise users, businesses, and bureaucracies, including the Pentagon. As noted in a February 27 response to Secretary Pete Hegseth's designation of Anthropic as a "supply chain risk," the impasse was reached after Anthropic requested that its model not be used for "the mass domestic surveillance of Americans and fully autonomous weapons." Notably, other lawful uses were permitted by the company. As The Wall Street Journal reported on February 28, Central Command "uses the tool for intelligence assessments, target identification and simulating

[Pull quote: "We can expect AI-led automation to fail in new and novel ways, and be caught by surprise, without adequate preparation or guidance on how to proceed."]
