THE FOREIGN SERVICE JOURNAL | MAY-JUNE 2026

entirely. That's where the real security risk lies—not in sanctioned experimentation within FAN, but in work happening on personal devices and consumer accounts because official tools don't meet practitioners' needs.

The State Department's new self-service AI sandboxes managed by the Center for Analytics—Funhouse and Proving Ground—point in the right direction. They enable diplomatic practitioners to build solutions that stay close to the mission, while the enterprise provides guardrails for stability and reuse. These opportunities have been circulated in department notices and internal cables over the past year, including instructions on how to gain access. The choice among different platforms for experimentation is a real benefit for practitioners as well.

The strategic question isn't whether to centralize—it's what to centralize. Invest in platforms that enable practitioner innovation within secure boundaries. Invest in governance and accelerated education. When a team builds something that works, celebrate it, harden it, and make it available departmentwide. State's Center for Analytics is already doing this: its awards and recognition programs surface promising practitioner innovations and give them visibility. That's enterprise complementing edge innovation, not competing with it.

Where to Start

Find your people.

The practitioners I spoke with didn't learn AI from training courses; they learned from each other. Find someone in your bureau who's already experimenting—not necessarily the loudest AI enthusiast, but someone whose work has gotten noticeably better. Ask what's working and watch how they use it. The knowledge that matters here is tacit; it transfers through observation and conversation, not documentation. The Center for Analytics is building support for this via networks of AI champions embedded in bureaus, pairing local credibility with enterprise support.

Write down how you actually work.
Start a document that captures how you like to write, what you care about, and which workflows you rely on (e.g., the briefing structure you always use, the three questions you ask before every meeting, how you approach issue research). This is the tacit knowledge that makes you effective, and it's exactly what AI needs to work for you rather than generically. When you give an AI tool that context, it stops producing generic output and starts producing output shaped by your judgment. I keep multiple documents: key workflows, key facts, and my own work preferences. Your vault of documents can and will evolve with you, getting better each time. Parts of it also become something you can share with colleagues when they ask how you do what you do.

Build something, then share it.

That document is a start, but the real momentum comes from building. Pick one repetitive task (e.g., meeting prep, talking points, cable summaries) and make something that helps. One task, one tool, one win. The confidence compounds. When it works, share it with a colleague. When they improve it, share that version further. Staff are stretched thin, and learning a new tool feels like a risk when you're not sure it'll pay off. A working example from a trusted peer changes that calculus.

How Will AI Change Diplomacy?

Today, it isn't a matter of whether AI will change diplomacy—it already has. The real question is whether the people closest to the work will shape these changes.

The structural constraints are real but addressable. Data quality remains the hardest problem—AI tools are only as good as the information they can draw on, and practitioners still build workarounds for systems that don't talk to each other. Shared data platforms (e.g., contact directories, cable archives, lessons-learned repositories) would make every AI interaction more useful. Without them, we're asking people to use sophisticated tools with one hand tied behind their back.
One of the surest ways to improve data quality is for people to actually use the systems connected to it. Data gets better when people see value in keeping it up to date as a shared resource. Security concerns, while legitimate, are already addressed: The three platforms discussed in this article—FAN, Funhouse, and Proving Ground—all meet the State Department's existing security standards.

The more difficult question is what happens with the time AI gives back. The common critique is fair: In most bureaucracies, efficiency gains don't liberate people—they generate more tasking. Time saved on cable drafting becomes time spent on additional reporting requirements. The productivity dividend gets captured by the institution, not the practitioner.

But that outcome is a choice, not an inevitability. What is fundamentally at stake with AI adoption is whether freed-up time flows toward the core work of diplomacy—the relationship building, the strategic thinking, the judgment calls no algorithm can replicate—or simply feeds the machine's appetite for more output. The issue is not whether AI will be adopted, but how, and by whom the gains are used. That, too, is what the bureaucratic incentive issue is really about: who benefits, and toward what end.
RkJQdWJsaXNoZXIy ODIyMDU=