BY UBAH KHASIMUDDIN
Speaking Out is the Journal’s opinion forum, a place for lively discussion of issues affecting the U.S. Foreign Service and American diplomacy. The views expressed are those of the author; their publication here does not imply endorsement by the American Foreign Service Association. Responses are welcome; send them to firstname.lastname@example.org.
As an office management specialist (OMS), I was required to participate in the pilot iMatch program in the summer of 2021. iMatch is an algorithm-based program that matches bidder and bureau preferences. It is modeled on the National Resident Matching Program, which places American medical students in residency programs. iMatch is touted as more objective and fairer than existing bidding systems and is being evaluated for possible adoption as the new bidding tool for State. When the 2021 bid season opened, OMSs were instructed to follow our regular bidding procedures but also register for iMatch at the same time.
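For readers unfamiliar with how NRMP-style matching works under the hood: programs in that family are built on the deferred acceptance (Gale-Shapley) algorithm, in which one side proposes down its rank list and the other side tentatively holds its best offer so far. The sketch below is a minimal bidder-proposing version with one position per post. The function name, data shapes, and sample rankings are my own illustrations; the department has not published iMatch's actual implementation, so treat this only as a picture of the general technique.

```python
def match(bidder_prefs, post_prefs):
    """Bidder-proposing deferred acceptance (Gale-Shapley) sketch.

    bidder_prefs: {bidder: [posts in rank order]}
    post_prefs:   {post: [bidders in rank order]}
    Each post fills one position. Returns {bidder: post};
    unmatched bidders are simply absent from the result.
    """
    # Precompute each post's ranking as bidder -> index (lower = preferred).
    rank = {p: {b: i for i, b in enumerate(prefs)}
            for p, prefs in post_prefs.items()}
    next_choice = {b: 0 for b in bidder_prefs}  # next post each bidder will try
    held = {}                                   # post -> bidder tentatively held
    free = list(bidder_prefs)                   # bidders still proposing
    while free:
        b = free.pop()
        if next_choice[b] >= len(bidder_prefs[b]):
            continue  # bidder has exhausted their list: stays unmatched
        p = bidder_prefs[b][next_choice[b]]
        next_choice[b] += 1
        if b not in rank.get(p, {}):
            free.append(b)            # post did not rank this bidder at all
        elif p not in held:
            held[p] = b               # position open: hold the offer
        elif rank[p][b] < rank[p][held[p]]:
            free.append(held[p])      # post prefers b: bump the current holder
            held[p] = b
        else:
            free.append(b)            # post keeps its current holder
    return {b: p for p, b in held.items()}


# Illustrative run with made-up bidders and posts:
pairs = match(
    {"A": ["Paris", "Lagos"], "B": ["Paris"]},
    {"Paris": ["B", "A"], "Lagos": ["A"]},
)
# Paris prefers B, so A's proposal is refused and A lands at Lagos.
```

One property worth noting: the proposing side gets its best achievable outcome, which is consistent with the claim later in this piece that iMatch favors the bidder over the post.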
Traditionally, bidding for an OMS job works exactly like bidding for the other Foreign Service specialties. The specialist goes into Talent Map, the user-friendly administrative arm of the bidding operation, where job descriptions, points of contact and the number of bidders for each position are logged. He or she uses the Community Lobby Center (CLC) to gather recommendations from supervisors, co-workers and subordinates; posts and bureaus can view these recommendations when selecting candidates. In addition, the bidder still has to send email letters of interest to potential posts and do phone or video interviews.
Typically, each open OMS position draws roughly five to seven bids, ranging from a single bid at the low end to 35 for very select nonhardship posts at the high end. OMS bidding is not as politically loaded with preferential treatment as generalist bidding; the kind of senior management and bureau interference commonly spoken of in connection with generalist bidding is not seen at any equivalent level by OMSs.
While iMatch contained some of the same information as Talent Map, it served no specialized purpose other than ranking one's choices. In addition, iMatch was not easily usable in controlled access areas, because you had to scan a QR code with your mobile phone to register and retrieve a six-digit authenticator code from that same phone every time you wanted to log on to the system.
Further, you needed job position numbers from Talent Map to make your rank list in iMatch, because iMatch didn’t have them readily available. Also, perhaps because this was a pilot program, no one at post could assist if there were any technical issues.
Finally, the two systems (Talent Map and iMatch) had different deadlines for taskers, which added another layer of work just to keep track of due dates and made the exercise that much more onerous. For example, bidding season opened on Sept. 1, but OMSs couldn't get into iMatch until Sept. 20.
Because the OMS job is very front-facing (i.e., heavy on customer service), there should be an emphasis on personality and informal references when making hiring decisions. Yet iMatch attempts to stamp these factors out. Although this might make sense on grounds of fairness for generalists, where subject matter expertise should be more valuable than who you know, it leaves OMSs at a disadvantage.
Many hiring managers don’t know our job so they can’t really gauge our effectiveness beyond standard department criteria. Further, I’m not sure how beneficial this stock yardstick is for the receiving office; one can learn how to do a travel voucher in E2 (the web-based travel management system), but a difficult temperament is hard to accommodate once enmeshed.
In the end, I am not sure how valuable iMatch was, despite its claims of trying to make the playing field fairer. If the point was to end the lobbying and jockeying element of bidding, iMatch failed. It was more than evident that posts and bidders quickly figured out how the game was played, and behind-the-scenes machinations continued.
Prior to interviewing, some posts asked where they stood on the applicant's ranking; in other instances, posts eliminated at-grade, qualified candidates without explanation. From the outside looking in, there appeared, in some instances, to be interference to guarantee certain positions to specific OMSs.
Because iMatch favors the bidder over the post, you have to be 100 percent sure of your number-one pick and lock it in a week before matching, even before early handshakes. In my case, some things changed in that week before match day, but my rank list was already locked in, and I could do nothing about it. In traditional bidding this would not have been an issue: I could have gotten handshakes from both posts (in different bureaus) and then made up my mind.
The iMatch system works in your favor if you and the post both register each other as number one. This necessitates finding out where you are on that post’s list, which some posts don’t want to tell you. iMatch forces the bidder to reveal all their cards and pick a number one very early on; but if, as happened to me, your family’s situation changes as you are interviewing with posts, what looked like number one at the start of September may not be that by late October.
Also, with this pilot program, some of the posts didn’t fill out the ranking list correctly, and people who should have been matched were not, or were incorrectly matched. This can hardly make anyone happy with the end result.
To make iMatch more palatable, I would suggest the following:
I would be hesitant to recommend the iMatch program in its current form to the broader Foreign Service. As with so many things at the State Department, the fundamental complaint about the bidding process is its lack of transparency; and, sadly, I am not convinced iMatch solves that problem.
Ultimately, the question I have is this: What was the problem iMatch was supposed to solve? What was the improvement expected of iMatch over the old bidding method? Only when the objective is clearly articulated can we determine if this change improves bidding, makes it worse, or does nothing at all.
Meanwhile, I suggest iMatch remain grounded.