The “lemon” problem
Economist George Akerlof called this problem “the lemon problem” in his seminal 1970 paper (he went on to win a Nobel for his work in this area), illustrating it with the example of the used car market. In his telling, there is a fundamental information asymmetry between sellers and buyers of used cars – the seller knows more about the real value of the car than the buyer does. Mixed in with the high-quality used cars on the market are duds, or “lemons”. Buyers are aware of this, but are unable to tell the two apart. As a result, they end up paying the same price for every used car – in effect, overpaying if they end up with a dud, but by the same token, getting a high-quality car at a steal.
The result of this information gap can be costly for the market as a whole: sellers of high-quality used cars lack the incentive to participate in the market. Buyers, faced with uncertainty, are similarly reluctant to join in.
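To make the unraveling concrete, here is a toy numerical sketch of Akerlof’s mechanism (the car values below are invented for illustration, not data): buyers offer the average value of the cars they believe are for sale, owners of above-average cars withdraw, and the average offer falls until only the lemons are left.

```python
# Toy sketch of Akerlof's adverse-selection spiral (illustrative numbers only).

# Each seller knows their car's true value; buyers only know the overall mix.
car_values = [10_000, 9_000, 8_000, 3_000, 2_000]  # good cars plus "lemons"

def market_round(values):
    """One round of trading: buyers offer the average value of the cars they
    think are on the market; sellers whose cars are worth more withdraw."""
    offer = sum(values) / len(values)
    remaining = [v for v in values if v <= offer]
    return offer, remaining

values = car_values
while True:
    offer, remaining = market_round(values)
    print(f"buyers offer {offer:,.0f}; {len(remaining)} of {len(values)} sellers stay")
    if remaining == values:  # no one else withdraws: the market has settled
        break
    values = remaining
```

With these numbers, the first offer (the average of all five cars) already drives out the three high-quality sellers, and the process repeats until only the cheapest lemon trades – exactly the participation collapse described above.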
We see this dynamic play out to some extent in the market for short-term talent.[1] While there are many consultants and self-anointed experts in education, there is wide variation in the quality of their work product and practically no reliable information to help organizations evaluate expertise before hiring someone for a project. Moreover, in the absence of a systematic and unbiased mechanism to validate and showcase expertise, individuals who bring specialist skills in an area often find it challenging to signal their superior grasp of the subject.
Why it matters
For an organization, working with a “lemon” can mean time and money invested in an initiative without commensurate results. At a systemic level, one implication is that, over time, market participation by experts as well as by organizations in need of expertise is lower than it could be. Another implication is that, in the absence of directly relevant information about expertise, buyers and sellers end up relying on proxies to separate the wheat from the chaff. References from people we know, the pedigree of the institutions the expert attended, the salience of the organizations she has worked with, the number of Twitter followers she has – all these and more become rules of thumb that we use to gauge whether someone has the chops to deliver on a project requirement.[2]
These proxies may be useful, but they are incomplete. As Rick Hess points out in his thought-provoking essay in EdWeek, they can “lead to our investing great authority in this or that expert for a season” or, extrapolating from expertise in one area, investing an individual with “presumed expertise across a broad range of issues”. The result is often an underwhelming work product, followed closely by skepticism about the benefit of expertise in general.
Another challenge with these proxies is that they often play into and reinforce biases, creating barriers for those who, through quirks of fate or their own idiosyncratic decisions, are not “in the network”. A preference for working with someone who went to a certain college or has TFA or KIPP on her resume isn’t wrong per se. However, it can prevent an organization from working with an expert who might be a better fit for the task at hand, but whose resume lacks the words it’s scanning for. It can also – unfairly – require her to vault over a higher bar to get access to the same professional opportunities.
The value of unbiased information
So what’s the solution? An organization seeking expert talent should ideally put in the legwork to vet the expertise of a consultant before signing them on. It should ideally look beyond the “old boys’ network” when sourcing talent. And, just as importantly, the hiring manager should ideally check her own biases, since the mere act of being aware of our subconscious preferences makes it more likely that we will evaluate the options in front of us fairly.
I say “ideally” because doing some of this requires time and effort, which is always a constraint in the sector. We therefore also need systemic solutions. Regulation helps, but it’s a clunky and heavy-handed answer. Better information gathering and sharing, facilitated by technology, is a far more elegant option, since it balances out the information asymmetry between buyers and sellers. At Catalyst:ED, we are incredibly excited by the power of the data we are collecting on the expertise of individuals through our upfront vetting process as well as mid-project feedback and post-project evaluations. These “reputational tools” can help create more transparent and effective markets by giving talent a credible way to communicate their expertise, and giving buyers more information to find someone who is the right fit for their needs.[3]
The information we gather doesn’t just allow us to differentiate between levels of skill, but also between types of skill. Our reference check process, for instance, gives us great insight into the skill-sets of experts, often surfacing nuances that the experts themselves may not be aware of. A couple of months ago, two expert development professionals applied to our network on the same day. Both had been recommended by people whose judgment we trusted. Both brought solid, directly relevant experience and spoke knowledgeably and passionately about their expertise during the interviews. The reference checks revealed interesting differences, though: one’s references extolled his ability to work independently and turn out very high-quality grant proposals almost single-handedly on superhuman timelines, while the other’s references spoke glowingly about her ability to orchestrate a team effort to produce outstanding work products. Two different skill-sets, each best set up for success in a completely different situation.
How do we see this panning out? Here’s what I believe will happen if we do this well: More organizations will look for experts who are vetted and who bring a specific skill-set and mind-set as opposed to a “general purpose” expert. More projects in the sector will go off well, thanks to more informed and better matches. Pricing will show greater dispersion and will be a better reflection of the level of expertise that someone brings. And more organizations and talent will participate in the market for expertise.
[1] While this may also be a problem for the talent market as a whole, it is especially so for short-term talent, since the consultant doesn’t have a lot of time before he or she has to start delivering results and the option to “develop and train” the person doesn’t usually exist.
[2] Not surprisingly, we find through our data on consultant pricing that hourly rates tend to cluster based on seniority rather than expertise.
[3] A working paper by the Mercatus Center makes this point a lot more eruditely than I do.