The Foreign Service Journal, June 2009

Speaking Out
EERs: The Forgotten Front in the War for Talent
By Jonathan Fritz

During her confirmation hearing, Secretary of State Hillary Clinton declared that making sure the department is functioning at its best is absolutely essential to America's success. While there are myriad challenges to rebuilding a strong and effective State Department, I would like to address one in particular: the dysfunctional performance evaluation system for Foreign Service officers.

The Employee Evaluation Reports we spend so much time writing every year fail to give promotion panels a useful means for comparing officers to their peers. Raters and reviewers are not required to rank their subordinate officers, and almost never do. That leaves panel members almost wholly dependent upon the EER narratives, most of which describe the rated officers as diplomatic wunderkinds. And when everyone is advertised as a superstar, it is hard to differentiate between real achievers and mediocre performers. The result is promotions that are far more random than they should be.

This problem is not new. In fact, it was one of the five key weaknesses identified in the "War for Talent" study that McKinsey & Company conducted for State in 1999. The report found that "the department fails to differentiate people sufficiently based on performance. It does not offer fast enough advancement for the best and brightest, nor does it move aside enough of the weaker performers."

Disappointingly, when the department asked McKinsey to update its study in 2005, the company found that "the area in which the department has made the least change is in performance evaluation, whose processes still work largely as they did in 1999." Senior State officials at the time said that the amount of effort required to fix the problem exceeded the benefits of doing so, revealing a disappointing apathy toward talent management.

Since then, the only noticeable change has been to expand the use of a new EER form (DS-5055) that requires rated officers to write a greater portion of their own evaluations. I, for one, am quite happy to have more space to sing my own praises, but I don't see how this injects objectivity or rigor into the process.

Where All the Officers Are Above Average

For those unfamiliar with the EER process, here's how it works. Most Foreign Service officers devote the first half of every May to drafting annual evaluations. Individual officers, their immediate supervisors (raters), and their raters' bosses (reviewers) spend hours filling three pages with dense, single-spaced text detailing the rated employee's numerous contributions to the salvation of the republic over the past year.

There is also a single line devoted to "General Appraisal" that asks the rater, "Was performance satisfactory or better?" Except in very rare cases, the "yes" box is automatically checked. In addition, raters and reviewers almost always include a recommendation to promote the rated officer immediately. The percentage of officers receiving such recommendations far exceeds the number of promotions available in a given year: in 2008, for example, only about 15 percent of FS-2 economic officers made the cut. The huge gap between the number recommended for promotion and the small minority who will actually be promoted renders most of our EERs close to useless.

But with no requirement to rank subordinates against their peers, and the sure knowledge that everyone else is engaging in the same kind of grade inflation, no one has an incentive to disadvantage his or her own subordinates by writing candid evaluations. That leaves promotion panelists, who
