US(AI)D: What Launching AI Tools for Communicators Taught Us

USAID tested the use of AI tools with a team of communications specialists around the world. The pilot’s originators present the takeaways.

BY VICTORIA MITCHELL AVDIU AND MICHELLE (STALEY) SWAIN

When we launched ChatGPT Team access for USAID’s development, outreach, and communications (DOC) specialists in September 2024, we were attempting something we had never tried before, and it felt ripe with possibility. This was an unprecedented opportunity to test the use of artificial intelligence tools across a wide swathe of people performing the same job functions. We didn’t have any specific outcomes in mind; we just hoped it might make these specialists’ lives a bit easier.

At the time, at least 190 DOC specialists were in more than 100 countries. Unlike a State Department public diplomacy officer, who explains to foreign audiences how American history, values, and traditions shape U.S. foreign policy, the DOC specialist focuses on programmatic impact. A State PD officer might explain why the U.S. prioritizes global health, while a DOC specialist would show how a specific clinic funded by USAID changed a community. More than half of these specialists were locally employed staff members; they were often organized in small teams of three or fewer; and they had extremely limited budgets.

AI seemed promising for specific tasks: drafting, brainstorming, editing, and exploring ideas using nonsensitive content, searching through lengthy documents to summarize content and discover insights, compiling talking points, and drafting social media posts. We hoped it could save valuable time for more creative assignments.

Planned to run for one full year, from October 2024 to September 2025, the pilot was cut short after just four months with the abrupt dismantling of USAID and the pending loss of almost all positions at the agency. Though this prevented us from evaluating long-term effects of the project, we did come away with some useful observations and lessons learned.

What We Built

The Legislative and Public Affairs DOC Team at USAID addressed the agency’s most complex communications challenges by training and connecting DOC specialists worldwide, advising missions on communications structure, championing priorities with Washington stakeholders, and protecting USAID’s brand equity. In September 2024, in response to requests for support and tools to assist with their heavy workload, we used remaining, expiring funds to acquire a limited number of yearlong ChatGPT Team subscriptions to share with the global DOC network on a pilot basis.

We notified the entire network, shared preliminary ideas for how ChatGPT could help, and asked those interested to fill out a simple online form. We provided clear rules prohibiting use of the tool with any sensitive material or content not cleared for public consumption. Users had to agree to the rules of use before receiving a seat. A couple of months after the 82 seats were assigned, we scheduled a community of practice call to share what we had learned.

The senior DOC specialist with USAID/Benin, who had demonstrated command of the tool and a strategic eye for implementation, led the discussion. She described how she had begun by uploading USAID’s style guide, her country’s Country Development Cooperation Strategy (a non-SBU version), and USAID’s branding and marking manual into her workspace. She then “taught the system” about the role of the DOC specialist. Finally, she shared examples of how the tool had helped her review partner-generated content submissions, synthesize talking points, simplify complex language into easy-to-understand phrases, and more.

Others shared how AI had helped them refine language to fit within character counts for social media platforms. The tool proved useful for brainstorming ideas or understanding options when facing an unfamiliar topic or task. It provided rough translations that could then be verified by human translators. Media monitoring, editing, and text refinement were other common applications. Participants saw possibilities for minimizing time-consuming tasks and creating an AI personal assistant.

A Cautious Approach

But at the same time, some practitioners were vocal about their ethical and environmental concerns, a tension that ran throughout the pilot. Overall, they were cautious in their use of AI chatbots in late 2024. The tools felt too new to trust fully, and comfort typically builds only with sustained use. Some of the caution can also be attributed to the fact that most participants had not been trained in how to use these tools or what their capabilities were.

Moreover, these tools are trained primarily on English-language internet text, which means they inherit the assumptions, cultural frameworks, and blind spots of the communities most represented in that data, which are predominantly Western, English-speaking, and relatively affluent. For foreign affairs professionals working across diverse cultural contexts, this is a significant limitation.

In addition, environmental concerns surfaced frequently. USAID supported projects focused on protecting the environment and on sustainable development, so we pondered how staff should handle the ethical dilemma of participating in a system that profoundly impacts fresh water and energy supplies. On the one hand, USAID missions had limited resources that AI could directly, positively impact. If staff could spend less time on routine administrative tasks that consume disproportionate time relative to their impact on mission goals, they would have more time for partnership and positive change in the communities they served. On the other hand, reports on AI’s negative environmental impact were, and continue to be, stark.

Over the course of the pilot, there were also a couple of instances in which teams created AI-generated images that did not align with branding and marking guidelines. They were ultimately allowed to use those images, but for internal materials only. We suspect that constrained budgets might have pushed teams toward greater use of AI-generated imagery over time; editing photography is time-consuming, after all. But some emerging AI editing software includes the morally gray capability of changing the gender or race of individuals in photos. Because the DOC role existed, in part, to connect with local audiences, leaning on such capabilities seemed unlikely.

AI does not exist without ethical considerations. DOC specialists actively debated this during our community of practice discussion. We agreed on cautious, responsible usage. If AI could directly increase efficiency so you could spend time elsewhere, it was worth the exercise. It was wise, however, not to overdepend on or use the system indiscriminately.

The Bigger Picture

Where do human judgment and nuance matter most? Only a human, and particularly one who has grown up in the country, attended school there, and speaks the language, can truly understand the cultural nuances and political implications of certain communications. This is why the DOC specialists were so critical to communicating USAID’s and the U.S. government’s impact to local audiences.

Based on our experience, AI can be helpful for brainstorming ideas, refining and editing text, keeping documents within word count, drafting content, producing rough translations, and conducting media monitoring. AI can do much more than rewrite an email; it is fundamentally shifting how strategic communications work is done. Communicators used to spend weeks on landscape analysis that AI can conduct in seconds. We even tested some AI tools as a public speaking coach for rehearsing important speeches.

But if you are advising an embassy colleague who’s considering using an AI chatbot for communications work, we would recommend several guardrails. Do not input any sensitive or personally identifiable material unless you are using an internal system developed for more sensitive information. Ensure that data-sharing settings are disabled so the model is not trained on your material.

We developed a simple framework: low-stakes, repetitive, time-consuming tasks can be delegated to AI, with a review by human eyes of course. Complex tasks where you will be refining the output are suited for partnering with AI. High-stakes decisions, sensitive communications, or anything with reputational risk should remain human only.

We also worry about the impact on entry-level professionals. In communications, much of the learning happens through the unglamorous work of drafting and redrafting press releases, formatting talking points, and compiling media lists. These tasks teach newcomers how to think through the process, not just produce the product. If that work is delegated to AI, junior professionals may arrive at senior roles without the foundational understanding of what goes into the outputs they are overseeing. And while AI does open possibilities for professionals at all levels to focus on more creative, strategic work, we suspect the more likely outcome is that communications teams will simply be expected to do more with less, paradoxically increasing pressure rather than relieving it.

One Piece of Advice

If we could give just one piece of advice to Foreign Service communicators about AI, it would be this: Keep people in the driver’s seat. AI tools will never replace the value of judgment, accountability, and human relationships.

Learn how to use these tools, keep up to date on changes, and experiment when you have downtime. The worst time to learn what AI can and cannot do is when facing a crisis or tight deadline. By experimenting today with routine drafts, background research, or internal documents, you will develop the judgment to know when to trust it, when to double-check it, and when to set it aside entirely. That instinct will serve you well when the stakes are high.

Our program ended before we could see its full impact. The lessons we learned, however, remain relevant. AI can meaningfully support Foreign Service communications work, but only when deployed with clear-eyed awareness of its limitations. The DOC specialists who participated in our pilot understood this instinctively. They brought healthy skepticism, ethical concerns, and a deep appreciation for local knowledge that no language model can replicate.

As AI adoption accelerates across the Foreign Service, human judgment remains the most valuable tool of all.

Victoria Mitchell Avdiu is a writer, executive coach, and retired USAID Foreign Service officer. During her 16-year diplomatic career, she served in eight countries, including as USAID country director for Belarus. Her writing has appeared in The Washington Post. She currently lives in Dijon, France.

 

Michelle (Staley) Swain is a consulting director at Reingold, a Foreign Service family member, and a former institutional support contractor with USAID, specializing in strategic communications. Michelle and her FS husband are currently enjoying their first overseas posting, in Mumbai.

 
