Leading from the Edge: How Diplomats Are Actually Using AI

Both individual diplomats and teams at State are building artificial intelligence into their workflows with tools they already have at hand.

BY PAUL KRUCHOSKI

Last fall, 10 teams of diplomats gathered for a six-hour hackathon. Their challenge: build AI-powered tools to solve real training problems. By day’s end, they had produced 10 deployable solutions, including custom chatbots for negotiation training, a historical diplomacy tool, and an interactive briefing companion for officers heading to post.

This shows something important about AI adoption in U.S. diplomacy right now: The most consequential innovation is happening at the edge of the State Department, closest to the work. None of these innovations required developers. All of them run on tools the department already has.

When I wrote about artificial intelligence for The Foreign Service Journal in June 2024, I argued that the problem wasn’t technology but culture: We risked fragmented systems and limited utility if we didn’t address how we share data and collaborate across organizational boundaries. Two years later, that prediction has proved partially right. The cultural barriers remain, but practitioners have found ways around them anyway.

The Ground Truth

Across the department, diplomats are building AI into their workflows with tools they already have. Instructional designers use Google NotebookLM to create onboarding materials that let new officers interact with procedural guidance conversationally rather than hunting through static documents. One strategic communications team runs a full AI-augmented workflow: brainstorming campaign concepts with large language models (LLMs), generating visual materials, distributing through established channels, and then measuring impact with media analysis tools that track sentiment and reach in near-real time.

Beyond the teams, individual practitioners are building personal AI workflows that combine tools for research synthesis, translation, and meeting preparation. One officer described using three AI systems to analyze reactions to a major policy speech, synthesize cable traffic, and draft strategic recommendations: several days of work compressed into an afternoon. When gains like that recur across an organization, the productivity implications scale quickly. Even the modest 5.4 percent time savings reported by AI users in a Federal Reserve Bank of St. Louis study translates to meaningful capacity (roughly two hours of a 40-hour week) that could address backlogs, reduce overtime demands, or be turned outward toward meaningful diplomatic influence.

Moreover, the tools enabling these gains aren’t custom builds with seven-figure price tags; they’re capabilities the State Department already has. One of the best investments is Gemini (Google’s AI assistant) within the Foreign Affairs Network’s Google Workspace environment, FAN.gov, which offers communication, collaboration, and productivity applications. The costs are modest: An undiscounted Google Workspace Enterprise Plus license runs $36 per month. Compare that to a fully loaded overseas Foreign Service position exceeding $350 per hour, or even a domestic Civil Service role at a fraction of that. At those rates, a single hour saved per month repays the license cost nearly tenfold; saving a few hours every month would make widespread adoption remarkably cost efficient.

I’m seeing similar dynamics at the Department of Defense. The highest-impact interventions aren’t new multiyear, multimillion-dollar custom applications; they’re prompt engineering, workflow integration, and helping practitioners develop judgment about when and how to use what’s already available.

Leading from the Edge

How do we help people use what they have? The examples above suggest an answer—and it’s not the one the department usually reaches for.

A typical enterprise software approach emphasizes replication: Deploy standardized platforms across the entire organization, ensure fidelity to design, measure outcomes consistently. The other approach emphasizes adaptation: Innovation grows when practitioners remix ideas and tools for local contexts, building solutions tailored to problems they understand intimately.

In fact, both have their place. Replication works for standardized processes. Adaptation works better when success depends on tacit knowledge (i.e., local context, relationships, and workflows that can’t be specified in a platform design). Today, AI adoption in diplomacy at the State Department is mostly the second kind. Knowing how to use AI well—crafting effective prompts, recognizing when output needs verification, integrating tools into established workflows—develops through participation and peer learning, not platform deployment. The tools that get adopted are the ones that fit how people work.

The examples I cite in this article are spreading. Officers headed to multiple embassies are already using the briefing companion. A new StateChat-based AI assistant for writing Civil Service position descriptions started in one office and is being adapted by several others. This is “spreading,” not “scaling”: Unlike a scaled application, where it is simple to monitor the number of users, a spreading practice is harder to track. And, as you would expect of a nascent practice, these tools are not yet used for high-stakes work.

This points to a different role for department leadership: not controlling where innovation happens but creating conditions where it can happen safely at the edge. Govern the boundaries, resource the capability, learn from what works, document its use, and help it spread.

Platforms vs. Products

There’s a reasonable counterargument here. Without enterprise coordination, you get fragmentation and security risks. Better to invest in centralized AI tools—purpose-built solutions with proper security review, consistent interfaces, and clear accountability.

But this conflates two different things. Centralized tools are finished products: Headquarters controls what they do and how they work. Centralized platforms are shared environments, established by headquarters, where practitioners can build, test, and share their own solutions within appropriate guardrails. Platforms beat products here. The hackathon teams built 10 deployable solutions in hours, not because Google built the perfect diplomatic AI tool, but because the platform let them build what they needed.

Centralized tools struggle with adoption because they can’t anticipate every workflow. And when the enterprise is slow to provide options, practitioners don’t wait. They find commercial tools on their own, outside State’s environment entirely. That’s where the real security risk lies—not in sanctioned experimentation within FAN, but in work happening on personal devices and consumer accounts because official tools don’t meet practitioners’ needs.

The State Department’s new self-service AI sandboxes managed by the Center for Analytics, Funhouse and Proving Ground, point in the right direction. They enable diplomatic practitioners to build solutions that stay close to the mission, while the enterprise provides guardrails for stability and reuse. The department has circulated these opportunities, including instructions for gaining access, in notices and internal cables over the past year. And having a choice of platforms for experimentation is itself a real benefit for practitioners.

The strategic question isn’t whether to centralize; it’s what to centralize. Invest in platforms that enable practitioner innovation within secure boundaries. Invest in governance and accelerated education. When a team builds something that works, celebrate it, harden it, and make it available department-wide. State’s Center for Analytics is already doing this: Its awards and recognition programs surface promising practitioner innovations and give them visibility. That’s enterprise complementing edge innovation, not competing with it.

Where to Start

Find your people. The practitioners I spoke with didn’t learn AI from training courses; they learned from each other. Find someone in your bureau who’s already experimenting, not necessarily the loudest AI enthusiast, but someone whose work has gotten noticeably better. Ask what’s working and watch how they use it. The knowledge that matters here is tacit; it transfers through observation and conversation, not documentation. The Center for Analytics is formalizing this through networks of AI champions embedded in bureaus, pairing local credibility with enterprise backing.

Write down how you actually work. Start a document that captures how you like to write, what you care about, and what workflows you rely on (e.g., the briefing structure you always use, the three questions you ask before every meeting, how you approach issue research). This is the tacit knowledge that makes you effective, and it’s exactly what AI needs to work for you rather than generically. When you give an AI tool that context, it stops producing generic output and starts producing output shaped by your judgment. I keep multiple documents: key workflows, key facts, and my own work preferences. This vault of documents can and will evolve with you, improving with every revision. Parts of it also become something you can share with colleagues when they ask how you do what you do.

Build something, then share it. That document is a start—but the real momentum comes from building. Pick one repetitive task (e.g., meeting prep, talking points, cable summaries) and make something that helps. One task, one tool, one win. The confidence compounds. When it works, share it with a colleague. When they improve it, share that version further. Staff are stretched thin, and learning a new tool feels like a risk when you’re not sure it’ll pay off. A working example from a trusted peer changes that calculus.

How Will AI Change Diplomacy?

Today, it isn’t a matter of whether AI will change diplomacy—it already has. The real question is whether the people closest to the work will shape these changes.

The structural constraints are real but addressable. Data quality remains the hardest problem—AI tools are only as good as the information they can draw on, and practitioners still build workarounds for systems that don’t talk to each other. Shared data platforms (e.g., contact directories, cable archives, lessons learned repositories) would make every AI interaction more useful. Without them, we’re asking people to use sophisticated tools with one hand tied behind their back. One of the surest ways to improve data quality is for people to actually use the systems connected to it. Data gets better when people see value in keeping it up to date as a shared resource.

Security concerns, while legitimate, are already addressed: The three platforms discussed in this article—FAN, Funhouse, and Proving Ground—all meet the State Department’s existing security standards.

The more difficult question is what happens with the time AI gives back. The common critique is fair: In most bureaucracies, efficiency gains don’t liberate people—they generate more tasking. Time saved on cable drafting becomes time spent on additional reporting requirements. The productivity dividend gets captured by the institution, not the practitioner.

But that outcome is a choice, not an inevitability. What is fundamentally at stake with AI adoption is whether freed-up time flows toward the core work of diplomacy (the relationship-building, the strategic thinking, the judgment calls that no algorithm can replicate) or simply feeds the machine’s appetite for more output. The question is not whether AI will be adopted, but how, and who puts the gains to use. That is what the bureaucratic incentive problem is really about: who benefits, and toward what end.

Paul Kruchoski is a director at Guidehouse, where he helps organizations modernize operations and leverage emerging technologies. A former member of the Senior Executive Service, he served for 16 years at the State Department. His roles included director of the Office of Policy, Planning, and Resources for Public Diplomacy (R/PPR), director of the Public Diplomacy Research and Evaluation Unit, and deputy director of the Bureau of Educational and Cultural Affairs’ Collaboratory Innovation Unit. He is a term member of the Council on Foreign Relations and a recipient of the State Department’s Sean Smith Award for Innovation in the Use of Technology.
