Foreign Assistance Lessons from Afghanistan: How to Balance Accountability and Learning

An insider from the two-decade-long U.S. mission in Afghanistan offers a refreshing look at the challenge of effective foreign assistance oversight.

BY DAVID H. YOUNG


Afghan National Army (ANA) cadets practice drills on the parade grounds at the Afghan National Defense University, where future ANA officers were trained, in Kabul, on May 7, 2013.
U.S. Department of Defense

Despite having a reputation as bean counters and thorns in the sides of agencies, government oversight organizations are integral to ensuring U.S. foreign assistance is effective. The organization I work for, the Special Inspector General for Afghanistan Reconstruction (SIGAR), reports directly to Congress and has oversight of all U.S. assistance to Afghanistan—from any agency providing it—totaling $148 billion since 2002. Our audits, inspections, and investigations have cumulatively saved the taxpayer $3.97 billion since our founding in 2008.

Yet in important ways, oversight offices can sometimes introduce new challenges to the delivery of U.S. foreign assistance. Traditional oversight work by Congress, SIGAR, and similar government organizations tends to reinforce a zero-sum battle between holding U.S. agencies accountable on the one hand, and helping them learn from their mistakes on the other. As the deputy director of SIGAR’s Lessons Learned Program, I spend most of my time at this strange crossroads in U.S. foreign assistance.

It seems intuitive that accountability for mistakes would naturally lend itself to learning from those mistakes. After all, upon being raked over the coals for failure, which U.S. official or agency wouldn’t want to avoid that frustration in the future? It turns out, however, that the way oversight is conducted can have surprising effects on the incentives of senior U.S. officials and on what precisely they learn from the oversight.

It may help State and USAID diplomatic professionals to see the bigger picture of this problem by first looking at it as “outsiders.” Consider, therefore, an example from the Department of Defense (DoD).

Building the Afghan Army and Police

In early 2011, nine years after the U.S. government started rebuilding the Afghan Army, none of its 134 battalions could operate independently, and only 32 percent were deemed “effective with advisers.” The rest still needed significant handholding or worse, and the Afghan police were in almost identical shape.

In March of that year, at a hearing held by the House Armed Services Committee, Representative Robert Andrews (D-N.J.) told General David Petraeus, then commander of U.S. forces in Afghanistan: “I notice that on both the police and army readiness measures, none of the units are at the ‘green’ or ‘independent’ level yet. … What do you think is going to happen to that pace in both the police and the army, let’s say, in the next six-month window? What can we expect?”

This is what traditional congressional oversight generally looks like: the overseer requests and reviews government documents, asks senior U.S. officials to testify, engages them on comparisons between a policy’s objectives and its results, and reminds U.S. officials that their work is being scrutinized. The hope is that U.S. officials will thoughtfully absorb this feedback and use it to improve U.S. foreign assistance. Yet what often happens is closer to a perversion of that feedback loop.

Three months after this hearing, the U.S. “surge” of troops ended. The pressure to transition responsibility for security to Afghan forces was immense. So, to plausibly demonstrate progress toward that goal, in August 2011, the U.S. military changed the Afghan forces’ highest capability category from “independent” to “independent with advisers.” Lowering the bar in this way allowed 12 Afghan Army battalions to be recategorized into the top tier by February 2012, artificially quickening the “pace” of improvement that Rep. Andrews and many other U.S. policymakers were concerned about. The gimmick was used for the Afghan police as well. In a report to Congress in April 2012, DoD wrote: “The number of [Afghan police] units rated ‘Independent with Advisers’ increased from 0 in August 2011 to 39 in January 2012.”

A conventional oversight approach by Congress or an organization like SIGAR would correctly draw attention to this subterfuge and argue that DoD was simply hiding failure rather than addressing it. Yet the prevalence of this practice at State, USAID, and DoD across America’s 20 years in Afghanistan is symptomatic of something far worse and more entrenched than U.S. officials occasionally trying to make themselves look good.

Experts at Distortion


Hospital Corpsman 1st Class Craig Gold (at left)—assigned to the Border Mentoring Team of 3rd Battalion, 1st Marine Regiment, Regimental Combat Team 7—instructs an Afghan soldier on proper weapons handling at the border patrol compound in Shamshad in Helmand Province on May 15, 2010.
U.S. Navy

For two decades, the American people heard claims that we were on the right track in Afghanistan, that with a little more time, the Afghan government and its institutions could become self-sustaining and allow a U.S. withdrawal. It was mostly nonsense. DoD knew Afghan security forces were not on track—a third of them had to be replaced every year. State and USAID likewise knew corruption and poor capacity in the Afghan ministries were prohibitive—they kept most U.S. funds away from Afghan government coffers to prevent Afghan officials from stealing or misallocating those funds.

So why did these officials give such rosy assessments to Congress and the American people year after year? The answer is simple: The U.S. government’s foreign assistance machinery structurally motivates senior officials to distort, embellish, and spin—even if it means significantly hurting the quality of that foreign assistance or enabling failures to continue.

SIGAR’s Lessons Learned Program has conducted more than 1,200 interviews with government officials and contractors who worked in or on Afghanistan, leaving us with a detailed composite of how this incentive system works. Senior U.S. officials were often in a very difficult position. Sticking with the DoD example, legislators and administration officials constantly told them, in effect: “We’ve been at this for years, and Afghan forces are not improving fast enough. Let’s see some progress.”

Building a military from scratch takes many decades, and constant turnover in U.S. staff at all levels meant that people who were not around when poor decisions were originally made were still expected to answer for those decisions years later. Under scrutiny, these U.S. officials transferred the pressure coming from above downward onto their staffs until it eventually reached the trainers and mentors working side by side with Afghan security forces.

Yet those trainers were also perpetually new to the job and inherited problems beyond their power to fix. Still, they heard the message loud and clear—demonstrate progress or else. Unable to accelerate the improvement of Afghan forces, they did what was in their power and changed how that improvement was measured to create the impression of greater progress.

Between 2010 and 2014, when the pressure to transition authority to Afghan forces was greatest, DoD cycled through seven different systems for evaluating the capabilities of Afghan troops and police. After the seventh, DoD opted to start classifying the assessments. Several of these iterations allowed senior U.S. officials to take credit for gains that existed almost entirely on paper, perpetuating the illusion that sufficient progress could be made to permit withdrawal in the near future. In SIGAR’s experience, many U.S. staff working in Afghanistan were genuinely devoted to building Afghan institutions. Still, these DoD officials—like those at State and USAID—became experts at distortion, in part because they were held accountable for the wrong things.

Progress vs. Learning

Oversight of foreign assistance often centers on the question, “What progress have you made?” This is certainly an important question, but learning and improving is far harder when U.S. officials constantly feel the pressure to demonstrate tangible progress on timelines that are often absurdly short. Foreign assistance frequently fails and, even when successful, takes considerable time to yield results. Pressure to show progress in such conditions creates a very simple path of least resistance—game the system. Even a leader with integrity who demands fast progress from their staff may unintentionally pass down the message that the appearance of progress is more important than progress itself.

So rather than learning why those Afghan battalions were not becoming independent and speaking openly about the constraints that had to be addressed for that independence to be achieved, DoD as an institution seemed to learn that avoiding criticism was more important than exploring meaningful ways to improve.

Indeed, avoiding the appearance of failure often seems to be the North Star for large bureaucracies operating under traditional oversight. The daily expectation of progress makes real learning far harder because staff are too preoccupied with finding quick victories, even if they are fleeting or, worse, pyrrhic. Over time, the effort hollows out into a house of cards and collapses, just as the U.S.-supported Afghan government did.

What is Congress or an oversight office to do—not ask about progress? Not exactly. Rather than asking, “What progress have you made?” perhaps the overriding question guiding oversight of foreign assistance should be: “What have you learned?” In different ways and in varying degrees, DoD, State, and USAID are slowly coming around to the idea that they need clear evidence demonstrating that any given strategy or program is likely to work. That low bar is quite an improvement over their work in Afghanistan, where SIGAR’s lessons learned reports describe in detail many U.S. government strategies and programs based on dubious or false assumptions, untested theories of change, and mere hope.

When Congress and SIGAR criticized U.S. agencies for problems in Afghanistan, those agencies seldom raced to collect the evidence necessary to improve; instead, they opted to find creative ways of avoiding the appearance of failure. This is certainly an accountability problem, but one that deserves a different approach. Particularly with foreign assistance, U.S. agencies should be held accountable first and foremost for their failure to learn, not their failure to succeed at any given moment.

Moreover, the answer to the question “What have you learned?” will likely lead Congress or an oversight organization to the same information as asking more directly about progress, but through a much healthier pathway—one that incentivizes U.S. officials to base their decisions on evidence and convince overseers of the merits of doing so.


General John F. Campbell, commander of Resolute Support Mission and United States Forces–Afghanistan, testifies during a Senate Armed Services Committee hearing on the ongoing situation in Afghanistan on Oct. 6, 2015.
UPI / Alamy Stock Photo

A New Model for Assessment

What would this look like in practice? Members of Congress and oversight organizations would direct more scrutiny toward the systems the agencies have in place for collecting and analyzing data about their efforts, what their evidence tells the agencies about progress when measured against their goals, and what the agencies plan to do differently given this evolving body of evidence.

Indirectly, Congress and the oversight organizations would still receive ample information about progress and performance, but in a way that cultivates a culture of self-scrutiny at the agencies and a tolerance among oversight professionals for a learning curve on immensely complex issues. Indeed, if failures lead to demonstrable learning and verifiable improvement, it will become less controversial for a senior U.S. official to admit to facing significant challenges.

Under this model, senior officials would still be held accountable for foreign assistance failures, but the basis for that accountability would instead revolve around, for example, asking the wrong questions, conducting sloppy data collection and management, lacking sufficient or qualified personnel to translate that data into actionable evidence, and failing to do anything meaningful with evidence about what works and what doesn’t.

Too many decisions at State and other agencies are based on the judgment of senior officials, many of whom have significant personal experience but often rely on that experience as a sacred oracle. When a strategy or program fails, if these officials can credibly tell Congress, “We simply followed the available evidence,” it helps create a more reliable North Star, brings attention back to the institution rather than a specific official, and forces the more important question, “Why didn’t the evidence lead to success?”

What the Agencies Can Do

It may be tempting for agencies to think that responsibility for kickstarting reforms such as these rests entirely with Congress and oversight organizations, but the agencies themselves have more power than they think. As agencies have the most to gain from this shift, State, USAID, and others should lean into it.

First, they should find opportunities big and small to demonstrate how learning has improved their work. This will require investing more in research capacity—from data collection all the way to formulating courses of action—or more thoughtfully leveraging research already completed or underway.

Second, agencies should take the time to craft and tell compelling stories. This is not a call for anecdotes about beneficiaries but rather stories about positive and negative trends, related both to the substance of what agencies are learning and the challenges of the learning process itself. For example, Congress and oversight offices need to understand how difficult it is to define and measure “success” in foreign assistance.

This knowledge is not intuitive for the uninitiated. The agencies tend to think that sharing how hard a task is opens them up to criticism for being bad at that task, but doing so is likely to change the nature of oversight over time, as long as the bad news is followed by “and here’s what we’re doing differently as a result.” Only the agencies can describe the challenges and importance of their work, but that story will not tell itself.

And third, agency leadership—certainly including all political appointees and their senior staff—should move away from the traditional oversight mindset in their own management practices. This mentality is not simply inherited from Congress and oversight organizations. The traditional oversight mindset is traditional for a reason: it pervades most thinking about how to motivate a workforce and achieve results. Instead, at every meeting, supervisors at agencies involved in foreign assistance should be asking their staff, “What are we learning?” and then deploying the staff and resources necessary to get better at learning, which will in turn improve their performance.

Initially, the incentive to distort will still seep into discussions about evidence as this shift takes root, but over time officials conceiving and implementing foreign assistance will see that building a culture of evidence will actually help them succeed, get promoted, and build credibility with Congress and the American people.

In Congress, oversight organizations, and in the agencies themselves, our notion of accountability in foreign assistance needs a reset, where mistakes are no longer prohibited but repeating them is.

David H. Young is deputy director of the Lessons Learned Program at the Office of the Special Inspector General for Afghanistan Reconstruction (SIGAR). He was the lead researcher for SIGAR’s reports on U.S. stabilization efforts, U.S. support to elections, the agency’s lessons-learned compendium report “What We Need to Learn,” and the congressionally mandated report, “Why the Afghan Security Forces Collapsed.” He has extensive field experience in six conflict/post-conflict environments: Afghanistan, the Sahel, Israel/Palestine, the Balkans, the Caucasus, and Northern Ireland. In the past, he has worked as an adviser to the U.S. Department of Defense, the World Bank, the U.S. Institute of Peace, Adam Smith International, and Interpeace.

The views expressed in this article do not necessarily reflect those of the Office of the Special Inspector General for Afghanistan Reconstruction.

 

