24 JULY-AUGUST 2025 | THE FOREIGN SERVICE JOURNAL

assessment while avoiding “hurt feelings” for leaders who still struggle with direct constructive feedback.

Structural Elements of the New System

While the department could copy and paste existing precepts into this proposed system, we have an opportunity to go one step further and refine our focus on an employee’s demonstrated characteristics and impact. If framed properly, these criteria would be equally relevant to officers of all ranks, further eliminating the wasted effort of narrating the nuanced context of every position in the department.

Here (see box) is one vision of a framework that evaluates across 12 elements of four core precepts. While these precepts represent just one view of evaluation criteria, our current EER’s failure to capture some of these characteristics indicates major gaps in how we think about what makes a quality leader and what employees need to succeed at higher levels.

Implementation Mechanics

A simplified rater scoring process. Accepting that the vast majority of employees perform their jobs well, scores between 1 and 3 do not require justification. Raters need to write justification text only for adverse scores or values above 3. Instead of wasting effort building context and recounting a year’s worth of accomplishments in narrative form, these short text blocks substantiate why the employee’s performance falls outside the expected range (see Figure 1).

Reviewer’s verification and ranking. The reviewer section focuses on validating the rater’s assessment, serving as an accountability check against grade inflation conducted by a more experienced leader with a broader perspective on organizational norms. The reviewer also directly ranks the employee’s performance against department peers.
A weighted formula is also applied here to correct raw rankings so they mirror the expected workforce distribution, with most employees in the mid-range and very few “unsatisfactory” and “eminently qualified” outliers (see Figure 2).

Accounting for small sample sizes. A combination of statistical adjustments, procedural safeguards, and training would be used to compensate for raters with limited rating histories. These tools are critical during the initial implementation phase, when all raters and reviewers lack a historical record. Statistical tools like “shrinkage estimation” or Bayesian modeling can bias sparse early estimates toward an expected mean until a rater builds a sufficient track record.

A Possible Framework

1. Mission Accomplishment
   Performance
   Substantive and technical expertise
   Initiative
2. Leadership
   Leading others
   Developing others
   Effectiveness under stress
3. Management
   Organizing projects and managing tasks
   Accountability and integrity
   Resource optimization
   Intellect and wisdom
4. Communication
   Decision-making ability
   Judgment

Figure 2: Sample Reviewer Section

An example of a reviewer section. The reviewer ranks the employee among all peers in the department by selecting the most appropriate description block. The “tree diagram” visually depicts the workforce distribution for each ranking option, nudging reviewers to assign more employees to the lower (yet highly qualified) blocks. The reviewer also assesses the employee’s suitability for promotion. (Adapted from two military branch forms.)

G. REVIEWING OFFICER ASSESSMENT

COMPARATIVE ASSESSMENT
Provide a comparative assessment by selecting the appropriate group that represents the officer’s performance and potential within the department. In selecting the comparison, consider all officers of this rank whose professional abilities are known to you personally.
DESCRIPTION
o The eminently qualified officer
o One of the few exceptionally qualified officers
o One of the many highly qualified professionals who form the majority of this rank
o A qualified officer
o Unsatisfactory

PROMOTION RECOMMENDATION
o Significant Problems   o Progressing   o Promotable   o Must Promote   o Promote Early

REVIEWING OFFICER COMMENTS: Amplify your comparative assessment mark; evaluate potential for continued professional development and promotion.
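The weighting step described above, which corrects raw reviewer rankings so they mirror an expected workforce distribution, could be sketched as follows. This is only an illustration: the function name and the target percentages are assumptions for the sake of example, not the department’s actual formula, and the category labels are taken from the sample reviewer section.

```python
# Illustrative sketch: force raw reviewer rankings into a target
# distribution, with most employees in the mid-range and very few
# outliers at either end. The shares below are assumed, not official.
TARGET_SHARES = [
    ("Unsatisfactory", 0.02),
    ("A qualified officer", 0.18),
    ("One of the many highly qualified professionals", 0.60),
    ("One of the few exceptionally qualified officers", 0.15),
    ("The eminently qualified officer", 0.05),
]

def curve_rankings(raw_scores):
    """Assign each employee a category so group sizes match TARGET_SHARES.

    raw_scores: list of raw reviewer scores (higher = better).
    Returns a list of category labels aligned with raw_scores.
    """
    order = sorted(range(len(raw_scores)), key=lambda i: raw_scores[i])
    labels = [None] * len(raw_scores)
    cutoff, cum = 0, 0.0
    for name, share in TARGET_SHARES:
        cum += share
        upper = round(cum * len(raw_scores))
        for idx in order[cutoff:upper]:  # next slice of the sorted order
            labels[idx] = name
        cutoff = upper
    # Rounding can leave a few unassigned at the top; give them the top block.
    for idx in order[cutoff:]:
        labels[idx] = TARGET_SHARES[-1][0]
    return labels
```

However the exact percentages are chosen, the effect is the same: the curve, not the reviewer’s generosity, determines how many employees can occupy each block.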
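The “shrinkage estimation” mentioned under small sample sizes can also be sketched briefly. In this minimal example, a rater’s observed average is blended with a department-wide mean, weighted by how many ratings the rater has actually given; the function name, the population mean, and the prior-strength constant are all illustrative assumptions, not parameters of any real department system.

```python
# Illustrative sketch of shrinkage estimation for raters with short
# histories: a rater's average is pulled toward the department-wide
# mean, and the pull weakens as the rater accumulates ratings.
def shrink_rater_mean(rater_scores, population_mean=2.0, prior_strength=10):
    """Blend a rater's observed average with the department-wide mean.

    prior_strength acts like a number of "pseudo-ratings" pinned at
    population_mean, so sparse histories stay near the expected value.
    """
    n = len(rater_scores)
    if n == 0:
        return population_mean
    observed_mean = sum(rater_scores) / n
    return (prior_strength * population_mean + n * observed_mean) / (prior_strength + n)

# A new rater's two generous scores are pulled strongly toward the mean:
new_rater = shrink_rater_mean([4, 4])      # ≈ 2.33
# A veteran's long record largely speaks for itself:
veteran = shrink_rater_mean([4] * 100)     # ≈ 3.82
```

This is the simplest form of the idea; a full Bayesian model would also account for score variance, but the intuition is the same: until a rater establishes a track record, their scores carry less independent weight.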