Proving Public Diplomacy Programs Work

Speaking Out

BY JAMES RIDER

Last year, the Advisory Commission on Public Diplomacy, a bipartisan committee established in 1948 to assess and appraise the United States’ PD activities, released a report, “Data-Driven Public Diplomacy: Progress Toward Measuring the Impact of Public Diplomacy and International Broadcasting Activities.” Like many similar reports over the years, the ACPD study is generally optimistic about the success of the State Department’s public diplomacy programs. It further assumes that recent advances in data collection and analytics will help us better demonstrate that success by proving the programs’ impact.

At the same time, the report takes a hard look at the current state of public diplomacy evaluation, making it clear that “progress toward” measuring the impact of public diplomacy is not the same thing as actually being able to measure it.

The uncomfortable truth that this report and others like it highlight is that after more than 70 years of institutionalized public diplomacy activities, we still can’t empirically verify the impact of most of our programs.

A consequence of this failing was highlighted by the State Department in its 2013 inspection of the Bureau of International Information Programs. The Office of the Inspector General’s findings raised serious questions about the lack of an overall public diplomacy strategy at the department: “The absence of a departmentwide PD strategy tying resources to priorities directly affects IIP’s work. Fundamental questions remain unresolved [emphasis added]. What is the proper balance between engaging young people and marginalized groups versus elites and opinion leaders? Which programs and delivery mechanisms work best with which audiences? What proportion of PD resources should support policy goals, and what proportion should go to providing the context of American society and values? How much should PD products be tailored for regions and individual countries, and how much should be directed to a global audience?”

These questions are relevant for everyone involved in public diplomacy work, not just IIP. I believe that the main reason we are still left with so many “unresolved fundamental questions” about the nature of our work is our continued inability to measure the impact of our programs. It is impossible to accurately allocate resources to priorities when you don’t actually know what works.

But why haven’t we been able to measure our impact? A review of recent studies suggests some answers.

We Do Not Value Evaluation

One reason has to do with the longstanding deficiencies of public diplomacy measurement and evaluation regimens. An astonishing fact highlighted in the advisory commission’s report is that in 2013 the Bureau of Educational and Cultural Affairs (ECA, the PD bureau that manages our best-known educational and exchange programs) allocated only 0.25 percent of its budget for program evaluation. The percentage allocated by other PD bureaus and offices was not much higher.

For comparison, the report notes that the industry average for evaluation spending is 5 percent. The University of Southern California’s “Resource Guide to Public Diplomacy Evaluation” says that evaluation experts recommend that “8-10 percent of the budget of any program should be invested in evaluation,” and that the Gates Foundation spends “a reported 15 percent on performance measurement.”

While the commission’s report does stress that PD leadership has started to pay more attention to measurement and evaluation, Vice President Joe Biden’s oft-quoted admonition to federal agencies—“Don’t tell me what you value; show me your budget, and I’ll tell you what you value”—seems appropriate here. By budgetary metrics, we do not value evaluation.

Moreover, the evaluations we do carry out often lack rigor. The report notes that many public diplomacy evaluations tend to focus more on outputs than outcomes, exaggerate results, and seem to be designed less as a tool for improving or discontinuing certain programs and more as an exercise in “placating Congress.”

In Search of the Holy Grail

Another reason we haven’t been able to satisfactorily measure public diplomacy’s impact is that doing so is extremely difficult, if not impossible. In fact, many public diplomacy scholars refer to evidence of the impact of PD as “the holy grail” of their profession.

The evaluation guide mentioned above lists many of the problems that make PD programs so difficult to measure. Here are two of the most intractable factors:

■ PD work involves intangibles. Documenting verifiable changes in awareness, perceptions and attitudes requires an investment of considerable time, effort and skill. Doing so over a long period of time amplifies the challenge considerably.

■ Results may not be directly attributable to PD intervention. It is often difficult to draw a straight line of causation between a PD program and its desired result. Time, external events and other actors complicate the cause-effect equation.

Related to these problems is the difficulty of establishing appropriate program objectives in the first place. While the PD training department at the Foreign Service Institute has done a great job teaching officers how to design “SMART” (Specific, Measurable, Attainable, Relevant and Time-Bound) objectives, most objectives that meet those stringent criteria are measurable in terms of output (people trained, people reached, number of participants) rather than the impact we are ultimately looking for (understanding acquired, minds changed, etc.). Output is merely what you did. Impact is what you achieved.

It is no doubt because of these challenges that many PD officers traditionally do not value measurement and evaluation. Why spend time and resources on an evaluation whose results will be, at best, indeterminate?

Grander Objectives, Larger Target Audiences

Another major challenge in assessing the impact of public diplomacy programs is that we have increasingly set grander and more ambitious goals for our foreign policy in general, and our PD programs specifically.

Over the years, PD work has become about much more than just increasing understanding of the United States and its values. Many PD programs are about trying to instill our values in other societies, remaking other cultures in our image. Reflecting this change in scope, today’s PD programs are increasingly in line with integrated country strategies (ICS). Practitioners try to “move the needle” on common ICS objectives like strengthening democratic norms and institutions, encouraging entrepreneurship and economic reform, and empowering girls and women.

Take, for example, the Young Southeast Asian Leaders Initiative and the Young African Leaders Initiative. Both programs target tens of thousands of 18- to 35-year-olds across large regions, with the objective of creating young leaders (through leadership training and professional development), then empowering them to bring about fundamental changes in their societies (through grants and other funding).


As a result of their participation in these programs, these youth are expected to start businesses, advance women’s rights, bring about democratic reforms, create initiatives to protect the environment and implement many other noble social projects in their home countries.

Compare these grand objectives with the relatively modest aims of one of our longest-running public diplomacy programs, the International Visitor Program (now the International Visitor Leadership Program). Created in the 1940s, the IVLP has the objective of “increasing mutual understanding” among a relatively narrow target audience of “up-and-coming leaders and elites” through a one-time guided tour of the United States.

Ironically, as public diplomacy programs have become more strategically focused, they’ve also become harder to manage and evaluate. Measuring an “increase in understanding” among a small defined group of elites and tracking them into the future is difficult, but not impossible. But evaluating and attributing the impact of new businesses, democratic reform efforts and the empowerment of women brought about by U.S. government-funded leadership training and skills-building courses is a far more daunting task.

Measuring the impact of public diplomacy programs will become more and more difficult as we shift resources away from educating a manageable target group of elites about the United States (propaganda) toward trying to instill democratic values and empower broad swaths of civil society to reform their countries (development).

Art, Science or Religion?

In light of the problems we have had in proving the impact of our PD programs, a logical question arises: How do we justify continuing to implement and expand programs without sufficient evidence of their effect?

I’ve posed this question to many of my PD colleagues over the past few years. The most common response is that public diplomacy is an art, not a science. As long as your programs are strategically focused, they assure me, you shouldn’t worry too much about measuring their impact. After all, any PD officer worth their salt knows “in their gut,” from site visits and anecdotal evidence, whether a program is working or not.

While I have been known to say similar things myself in the past, I now find that claim unsatisfying (to be satisfied by anecdotal evidence alone is to be self-satisfied). By continuing, year after year, to evangelize about the greatness of democracy, proselytize on behalf of multiculturalism and preach the importance of equality without significant proof that we are in fact having any real impact, we make ourselves vulnerable to the charge that we do so largely on the basis of faith. One might argue we are closer to practicing a religion than to implementing an effective foreign policy program.

Even though it’s true that many government programs, domestic and foreign, continue to be funded despite their inability to live up to the congressional requirement that federal agencies be “accountable for achieving program results,” we should not be complacent. Our inability to prove the effectiveness of our programs should bother us, because it impedes our ability to make intelligent decisions about our funding priorities.

For example, when the State Department proposed cuts to the Fulbright Europe program in the Fiscal Year 2015 budget to increase funding for newer initiatives, there was a large outcry from Fulbright alumni, some of whom published opinion pieces and started an online campaign (www.savefulbright.org) arguing that cutting the program would have dramatic negative consequences for our foreign policy. Many of those arguments relied on rhetoric riddled with fallacious reasoning (e.g., appeals to history, anecdotal evidence, slippery-slope arguments, begging the question).

That’s not to say that the Fulbright Program hasn’t had great impact; it could very well be our most effective public diplomacy effort. But without evidence to help us weigh the cost-effectiveness of one program compared to another, we won’t ever have a way to adequately and dispassionately adjudicate budget disputes. Rhetoric will continue to rule the day.

What Is to Be Done?

Is there a way we can move from our current “faith-based” public diplomacy model toward a more evidence-based model? Possibly, but it will necessitate a shift in the way we think about our work. Here are a few recommendations, some of which echo those made in the advisory commission’s report and others like it.

1. Increase evaluations. As many have argued, we need to dramatically increase resources for independent evaluations, and we need to approach that process with more seriousness and honesty than we have in the past. We need to get away from the idea that by aggressively evaluating our programs, we are somehow fashioning our own noose. And we need to be prepared to discontinue programs that do not show evidence of impact. While some PD programs may be difficult to measure, that’s no excuse for not trying.

2. Reduce the number of PD programs. There are so many programs, initiatives and exchanges run by so many different State Department offices that PD officers spend their time in a frantic scramble, trying to keep up and execute as many as possible. The proliferation of programs has tended to result in quantity being preferred to quality, with very little time left for evaluation and measurement. As PD scholar Bruce Gregory has argued, PD officers need to learn how to “prioritize ruthlessly” and “say no” to programs that fall outside strategic goals. Only by reducing our focus will we ever have the time and ability to measure and evaluate the impact of our interventions.

3. Focus mainly on mid-level elites. Focusing limited resources on up-and-coming mid-level elites remains the most cost-effective and target-efficient form of PD programming. It is cost-effective because resources go toward cultivating those with greater potential impact in their societies; and it is target-efficient because future leaders are easier to identify at the mid-level than as youth. Most important, programs targeting a defined cohort of mid-level elites are easier to track and evaluate than those that do not. Our relationship with mid-level elites continues as they move up the ladder to become senior elites, giving us ample opportunity to continually measure and evaluate the impact of our investment. We should rethink programs targeting the very young and other non-elite groups, as they are almost always “drop-in-the-bucket” gambles or photo ops.

4. Stop “fill-in-the-blank” diplomacy. Too often in public diplomacy, “innovative” is just a buzzword meaning little more than “new.” It seems that every week brings the proposal of a new genre of PD: fashion diplomacy and flash-mob diplomacy are just two recent examples. Most of these are novelties, not well-thought-out program proposals based on thorough analysis and planning. A truly innovative program would be one designed to measure its own impact. We already have enough variety of programs to last a lifetime. Let us focus our efforts on measuring and evaluating our current projects before we chase new butterflies.

Will we ever find the holy grail of measurable PD impact? Perhaps not. But we must not let our inability to measure impact enable an “anything-goes” approach. With greater rigor and investment in evaluation, we can go a long way toward becoming a more evidence-based discipline (in every sense of that word).

And, who knows? Maybe the evidence we gather will reveal that we’ve been even more effective than we thought.

James Rider is a mid-level public diplomacy-coned Foreign Service officer who is currently the political-economic section chief in Libreville. He previously served in Caracas and Tel Aviv. In 2013, he won AFSA’s W. Averell Harriman Award, recognizing constructive dissent by an entry-level Foreign Service officer.
