The Foreign Service Journal, May-June 2026

can shrink several days of analytical work into an afternoon, the productivity implications scale quickly. Even the modest 5.4 percent time savings reported by AI users in a St. Louis Federal Reserve Bank study translates into meaningful capacity—capacity that could address backlogs, reduce overtime demands, or be directed outward toward meaningful diplomatic influence.

Moreover, the tools enabling these gains aren’t custom builds with seven-figure price tags—they’re capabilities the State Department already has. One of the best investments is Gemini (Google’s AI assistant) within the Foreign Affairs Network’s Google Workspace environment, FAN.gov, which offers communication, collaboration, and productivity applications. The costs are modest: An undiscounted Google Workspace Enterprise Plus license runs $36 per month. Compare that to a fully loaded overseas Foreign Service position exceeding $350 per hour—or even a domestic Civil Service role at a fraction of that. If these tools save even a few hours each month, widespread adoption would be remarkably cost efficient.

I’m seeing similar dynamics at the Department of Defense. The highest-impact interventions aren’t new multiyear, multimillion-dollar custom applications—they’re prompt engineering, workflow integration, and helping practitioners develop judgment about when and how to use what’s already available.

Leading from the Edge

How do we help people use what they have? The examples above suggest an answer—and it’s not the one the department usually reaches for. A typical enterprise software approach emphasizes replication: Deploy standardized platforms across the entire organization, ensure fidelity to design, and measure outcomes consistently. The other approach emphasizes adaptation: Innovation grows when practitioners remix ideas and tools for local contexts, building solutions tailored to problems they understand intimately. In fact, both have their place.
Replication works for standardized processes. Adaptation works better when success depends on tacit knowledge (i.e., local context, relationships, and workflows that can’t be specified in a platform design). Today, AI adoption in diplomacy at the State Department is mostly the second kind. Knowing how to use AI well—crafting effective prompts, recognizing when output needs verification, integrating tools into established workflows—develops through participation and peer learning, not platform deployment. The tools that get adopted are the ones that fit how people work.

The examples I cite in this article are being used ever more widely. Staff heading out to multiple embassies are already using the briefing companion. A new StateChat-based AI assistant for writing Civil Service position descriptions started in one office and is being adapted by several others—this is “spreading,” not “scaling.” Progress is harder to track for spreading approaches than for scaled applications, where monitoring the number of users is simple. And, as one would expect in a nascent practice, these tools are not yet used for high-stakes work.

This points to a different role for department leadership: not controlling where innovation happens but creating conditions where it can happen safely at the edge. Govern the boundaries, resource the capability, learn from what works, document its use, and help it spread.

Platforms vs. Products

There’s a reasonable counterargument here: Without enterprise coordination, you get fragmentation and security risks. Better to invest in centralized AI tools—purpose-built solutions with proper security review, consistent interfaces, and clear accountability. But this conflates two different things. Centralized tools are finished products: Headquarters controls what they do and how they work.
Centralized platforms are shared environments established by headquarters where practitioners can build, test, and share their own solutions within appropriate guardrails. Platforms beat products here. The hackathon teams built 10 deployable solutions in hours—not because Google built the perfect diplomatic AI tool, but because the platform let them build what they needed. Centralized tools struggle with adoption because they can’t anticipate every workflow. And when the enterprise is slow to provide options, practitioners don’t wait: They find commercial tools on their own, outside State’s environment. The State Department’s new self-service AI sandboxes managed by the Center for Analytics—Funhouse and Proving Ground—point in the right direction.
