Sunday, August 12, 2012

Myths of Measurement: Talent Measures Have No Side Effects

Medication and medical procedures have side effects. Side effects can be harmless (too much niacin can cause blushing) or serious (statins can cause muscle damage). These medications work within the complex chemistry of the human body, and how a person will experience a side effect is unpredictable.

Talent measures are not that different. They work within complex human social systems, and they alter these systems—often in unintended ways. In other words, they have side effects. The side effects most often affect motivation.

There are many ways to increase organizational performance. Many are effective, a few are ineffective, and a few can actually damage the organization. Measurement can fall into this last category. Because measurement can have negative outcomes, it is worth knowing how these results come about and how they can be managed.

When We Measure, We Expect Results

Talent measurement often has the goal of understanding, motivating, or directing workforce performance.  This idea is summarized in the phrase what gets measured gets managed, which is sometimes attributed to Peter Drucker. This phrase is universally framed in a positive light—just measure and results will improve.  

The challenge is that this catchy phrase doesn’t consider the complexity of human performance in organizations. We often measure to get a reaction, but we expect the workforce to react in predictable ways. In reality, measurement can have the intended, positive effect—increased motivation, aligned effort, and focus—or it can have unintended negative effects that result in counterproductive changes. 

Every Measure Motivates

It’s important to remember that every measure has the potential to motivate someone, somewhere. When you review marketing data to decide which product should receive more investment, someone is motivated to receive the investment. 

Consider balanced scorecards. Kaplan and Norton developed the scorecard as a strategic learning and steering tool. The scorecard is presented as a network of hypotheses that reflect strategy, referred to as a strategy map. The scorecard’s measures are intended to reveal whether the strategy and operations are working. Essentially, the measures test the validity of executive hypotheses and assumptions. 

As with many measurement systems, the intent of scorecards, at least initially, was to make decisions. Because each scorecard measure reflects directly on a different function within the organization, however, every measure will also motivate someone, somewhere in the organizational structure. In fact, a balanced scorecard is a motivational system. 

Regardless of incentives, balanced scorecard measures are scrutinized at the highest levels of an organization. Part of the motivation associated with scorecards is simply that people don’t want to look bad.

Some organizations have gone so far as to link scorecard measures to individual executives. Simply reporting the measure makes the executive responsible. Other organizations have formalized this motivation system by developing cascading incentive systems that link the strategy of the organization to departments down and across the organization.

I’ve helped organizations use scorecards this way, and generally I see it as positive. It’s better to be deliberate in setting up motivation systems, because allowing random motivation systems to emerge can be damaging.  

Too often, organizations proceed as if measures intended for decision making won’t have a motivational effect. In other cases, organizations create measures with the intent to motivate, but assume that the measure will have only the intended effect. It’s important to remember that, regardless of intent, any measure will have motivational properties.

Strange Motivations

There are endless examples of the unintended motivational effects of measurement, also known as perverse incentives. One well-known example is called the “Cobra Effect.” In British-controlled India, the government’s reward for every dead cobra—a reward intended to reduce the number of deadly snakes—resulted, of course, in people breeding cobras for income. A similar situation occurred in Hanoi under French colonial rule. In this case, the government paid a bounty for each rat pelt. Again, a program intended to exterminate rats instead led to rat farming.

But don’t think we’ve gotten smarter. IBM faced a similar problem when it decided to pay its programmers by the line. The programmers responded, predictably, by increasing the number of lines they wrote in each program. Instead of producing more programs, faster—the intended effect—the programmers simply wrote more complicated, and less elegant, code.
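The lines-of-code story is easy to reproduce in miniature. Here’s a sketch (the two functions and the naive line-counting rule are my own illustration, not IBM’s actual metric) of how a pay-per-line metric scores two functionally identical programs:

```python
# Two functionally identical programs, as a pay-per-line metric sees them.
TERSE = """
def sum_of_squares(xs):
    return sum(x * x for x in xs)
"""

PADDED = """
def sum_of_squares(xs):
    total = 0
    for x in xs:
        square = x * x
        total = total + square
    result = total
    return result
"""

def lines_of_code(source):
    # The naive "productivity" metric: count non-blank source lines.
    return len([ln for ln in source.splitlines() if ln.strip()])

# Both versions compute the same answer...
scope_a, scope_b = {}, {}
exec(TERSE, scope_a)
exec(PADDED, scope_b)
assert scope_a["sum_of_squares"]([1, 2, 3]) == scope_b["sum_of_squares"]([1, 2, 3]) == 14

# ...but the metric pays more than three times as much for the padded one.
print(lines_of_code(TERSE), lines_of_code(PADDED))  # prints: 2 7
```

The metric measures effort expended on typing, not value delivered, which is exactly the gap the programmers exploited.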

In some colleges, professors are rewarded for high scores on student evaluations. More often than not, this leads to easy courses and inflated grades, rather than improved accountability. Academic researchers are rewarded for a large number of publications. While the intent is to improve research productivity, the result is often incremental papers and little innovation. 

In the K–12 world, teachers are being rewarded for increased student test scores. Rather than improving education and teacher effectiveness, the effects have been teaching to the test and an emphasis on short-term learning.

Motivation and Measurement Are Personal and Social

The key to understanding these strange motivations is remembering that each employee, manager, and executive experiences his or her own context. Employees make meaning out of measures and measurement data, and they come to their own conclusions based on personal comparisons, context, and connotations.

Personal conclusions and insights can be difficult to predict. All sorts of dysfunctional behavior can be observed in complex organizations. There are always challenges: What matters to one individual is often not what matters to the organization. It’s foolhardy to proceed under the assumption that all the people involved—measure developers, the executive team, employees—have the same worldview.

Many talent professionals assume that formal incentives, such as pay and advancement, are the primary motivators for employees. But other motivation systems have huge effects in organizations. These shadow incentives exert a powerful force on individuals. Personal relationships and social structures matter in organizations. This is the important conclusion of the Hawthorne studies, which showed that employee behavior, and organizational productivity, are strongly related to social context.

A Shadow Incentive System

About a decade ago, a large telecommunications company started a program that encouraged repair technicians to develop their troubleshooting skills, in part by pursuing an associate’s degree in telecommunications technology. The company assumed that highly skilled technicians would be better at fixing problems, and that their increased skill would reduce the number of repeat service calls.

Despite the multi-million dollar price of the training initiative, repair technicians never repaired more than three phones a day. The company’s management, understandably baffled, hired a team of researchers to look into the problem.

The problem turned out to be a shadow incentive system. An anthropologist joined the technicians in their trucks, watched their interactions, and found that the technicians had simply established a norm: three phone repairs per day. Anyone who worked faster was punished and shunned—serious disincentives. One technician who broke the rule had a tool dropped on his head by another worker on a pole above him.

This was the social element—team norms were strictly enforced. There was also a financial incentive. By reducing the number of repairs, the technicians were able to nearly double their income with overtime. The formal compensation and reward system simply didn’t matter. The shadow incentive system was much stronger. Ultimately, the program was discontinued.

What’s also interesting is that some workers weren’t consciously aware of the enforced repair limit. It took an anthropologist—an outsider—to see what was really happening. People often aren’t aware of the basis for their actions.

Understanding and building motivation systems requires insight on an individual and organizational level.

The Side Effect of Surveying Engagement

Many organizations now measure employee engagement with surveys, share the results, and hope the information will encourage managers and teams to improve. This feedback process is potentially positive and powerful.  

But conducting a survey—asking employees what’s wrong and how to make things better—can raise expectations. If the organization fails to make improvements based on survey feedback, the result can be the opposite of what was intended: lower morale. In these situations, survey results do little more than give dissatisfied employees something else to complain about.

In addition, managers and employees react to surveys and measurement according to their idiosyncratic worldviews. While one manager may work to improve engagement and expect her team to respond honestly, another manager may simply ask the team to rate survey questions higher, as a personal favor.

To manage the unintended side effects of surveys, we need to be aware of the expectations, strange motivations, and personal connotations that will inevitably come into play at different levels of the organization. 

If we’re aware of these different contexts, and the survey is framed in a forward-looking agenda, the results can help management focus employees on the positive aspects of improving the organization.

In practice, it’s best to formally assign executives responsibility for the measures. Assigning accountability is going to happen anyway. Formalizing this effect increases transparency and openness across the organization.

Must We Measure Everything?

One of the challenges in measuring employees is that good measures are hard to find. If we’re going to evaluate a proofreader’s work, for example, the only way to measure the quality of the work would be to ask another proofreader. And who is going to evaluate the proofreader’s evaluation? Obviously, we can’t have a perfect measure of everything.

The lesson here is, don’t measure for the sake of measurement. A bad measure can create a bigger problem than not measuring at all.

If you can’t find a good measure, it might be worth looking for another way to monitor and motivate performance. In the case of the proofreader, you could consider surveying customers, who will have a sense of the quality of work. 

It’s worth asking the question: Do you really need a measure for this, or are you measuring because that’s what people do? 

Reducing Negative Side Effects: Managing Measures

As we have seen, measurement can lead to misalignment and malfunction in an organization or, for that matter, a country. Perhaps we should think about this differently: What gets measured needs to be managed. To be successful, we need to manage both the measure and its meaning.

If we can be deliberate in setting up motivation systems, being aware of the possibility of perverse incentives, it’s less likely that random motivations or shadow incentive systems will undermine the organization.

For measures to have their intended effect, it’s necessary to manage the meaning and the context. As always, communication is the key to successfully using measures.

I’ll write more about building shared meaning with measures in the next post.

Thursday, July 19, 2012

Myths of Measurement: Talent Measures Are Unaffected by Context

Physical measurement is barely affected by context: an inch is always an inch, and a cabinet always has the same dimensions, whether you are building it, improving it, or removing it.

This is not true of talent measures; talent measures are extremely sensitive to context. The same measure will yield different results in different contexts—whether you are selecting, developing, or laying off employees. It’s not a good idea to assume that you can use one talent measure for different purposes.

To understand the prevalent myth that talent measures are unaffected by context, we need to understand that measurement is just a method of conveying information: that is, measurement is a language. While the mathematical language of measurement is more precise than spoken language, meaning will vary with context.

Consider a competency rating. If the measure is used to set compensation, many will see the measure only as a gateway to pay. If the organization uses the measure for two different purposes—compensation and developmental coaching—the coaching context will be contaminated by the context of pay. When it comes to competency ratings, employees often pay more attention to the context than to the measure itself.

Context is often more important than the measure. Let’s look at a few examples.

The Context of Performance Management

Most organizations have an annual performance appraisal. In most cases, an organization will review the number of ratings at each point on the scale (the distribution). Given obvious variability in performance, we would expect a normal distribution—a few employees would receive high ratings, and a few low ratings, but the great majority would cluster around the middle of the scale. In most organizations, however, nearly all the employees are clustered at the top of the scale, and only a few fall near the bottom. The distribution is skewed.

Over the years, skewed performance ratings have caused consternation, difficult conversations, and organizational chaos. Executives have looked at the distribution of performance ratings and thought:
  • We sure have a great workforce—everybody is doing well!
  • This measure is obviously biased—I know our workforce is not that great.
It should come as no surprise that the ratings are skewed, considering the context. Because the ratings may affect compensation, bosses tend to give higher ratings. The social context, not the measurement process, is causing the skew. Nevertheless, many organizations look for a better measure to provide more differentiation between employees, or to increase the number of low-rated employees. 

No matter how many times you change the performance appraisal measure, you’re unlikely to get a different distribution. The context stays the same, and as a result, the distribution is likely to stay the same. The employee/boss relationship will lead to a preponderance of positive ratings, and changing the measurement system will never solve the problem of skewed performance appraisals.
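This dynamic can be sketched with a toy simulation. The numbers below are assumptions chosen purely for illustration (a roughly bell-shaped latent performance on a 1–5 scale, and an 80% chance that a boss bumps a rating up one point rather than risk a difficult conversation); they are not data from any real appraisal system:

```python
import random

random.seed(0)

def true_performance():
    # Latent performance on a 1-5 scale, roughly bell-shaped.
    return min(5, max(1, round(random.gauss(3, 0.8))))

def lenient_rating(perf):
    # Assumed social-context effect: most ratings get bumped up one point.
    return min(5, perf + (1 if random.random() < 0.8 else 0))

perf = [true_performance() for _ in range(10_000)]
rated = [lenient_rating(p) for p in perf]

def distribution(scores):
    return {k: scores.count(k) for k in range(1, 6)}

print("latent:", distribution(perf))   # clustered around 3
print("rated: ", distribution(rated))  # skewed toward 4 and 5
```

Note that nothing about the rating scale changed between the two distributions; only the social behavior of the raters moved the scores, which is why swapping in a new scale leaves the skew intact.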

Solutions such as forced ranking, which I have discussed elsewhere as inappropriate, are simply masked attempts to develop a better measure. They may change the distribution, but they suffer from other problems such as spurious differences.

In any organization, the solution to skewed ratings in performance appraisal won’t be a better measure. Certainly, it is easy to change the measure, and there are many tweaks to try: different rating scales, a different number of points on the scale, different dimensions to rate. If you really want to change the distribution, however, develop better management discipline and use the existing measurement system. This requires discipline and difficult conversations between bosses and the employees who work for them.

The Context of Employee Engagement

There is currently a small revolution happening in employers’ views of employee engagement. Starting with Marcus Buckingham’s research linking engagement survey results to positive outcomes such as productivity, customer satisfaction, and employee retention, employers have rediscovered employee surveys. Many executives worry about an unengaged workforce and the impact on their business, and many employers are surveying their workforce for the first time.

Some organizations have even linked incentives to engagement measures—for example, by increasing or decreasing a manager’s compensation based on the engagement scores in his or her area. This practice seems justified, since we can find relationships between engagement and outcomes such as profitability and retention; surely, the reasoning goes, anything that might increase engagement should be tried.

As with performance appraisals, however, adding financial incentives will fundamentally change the context of the measure. Employees have told me, in confidence, that their manager asked them to respond to the survey positively, regardless of how they were feeling. One of the most disengaging things a manager can do is to ask an employee to misrepresent herself. The effect will be an extreme form of contamination of the measure. While actual engagement will decrease, the measure will show an increase. This is a form of cheating.

Another Name for the Myth

Psychologists who work with performance measures have developed a term for how measures are changed by context: When a measure of performance is affected by non-performance factors, they refer to it as criterion contamination. Because the performance variable will be contaminated by the context, researchers are warned not to use performance appraisal results when conducting research. If they do, the research will not yield meaningful results.

The various social and motivational forces that affect performance appraisals are one example of context. There are many other examples of contextual influence: organizational culture, business processes, personal beliefs, discipline, and so forth. These contexts affect every type of talent measure.

Organizations often forget about criterion contamination and try to use a single measure for different purposes. If a measure has been linked to incentive pay, for example, it’s not possible to use the same measure to study the relationship between employee engagement and customer satisfaction. The measure has been contaminated by the compensation context, and the context is always more powerful than the measure.

In this case, the solution to criterion contamination is to get a new measure. It’s certainly inconvenient to develop additional measures, especially when a perfectly good measure already exists.

The Same Context May Be More Different Than You Think

As the engagement example above shows, just as the meaning of measures varies according to the organizational context, it also varies according to individual context. This is another aspect of the myth of unaffected measures: there is often an assumption that the measure means the same thing to you and me.

As discussed in previous posts, this is the connotative, or subjective, meaning of the measure. Although in a denotative sense the measure will have exactly the same meaning at any level of an organization, within that organization, the measures will mean radically different things to different groups and different individuals.

In the engagement-gaming example above, the context of the measure varies among the parties:
  • Executive management is concerned with engagement and its impact on the business in terms of productivity, customer satisfaction, or employee retention 
  • Supervisors have incentive compensation and are concerned with how the scores will affect their pay 
  • Employees feel pressure to respond positively, but may have insights to share—once again the system is preventing them from having a voice.
Recognizing the existence, and the effect, of connotative meanings presents one of the biggest challenges in talent measurement. If we pay attention to the connotative meanings—that is, the individual and group contexts surrounding a measure—we can communicate to create shared meaning. In a culture of open communication, there is a significant opportunity to get more value from measures.

To Use Measures Well, Remember the Myth

Organizations need employees who are engaged in achieving organizational goals. This idea goes by many names, such as ownership culture and results orientation. Measures are often used to encourage engagement, with the intent of building a shared worldview and an understanding of the organization.  Performance appraisals and scorecards help keep everyone on the same page. Or do they?

It’s important to remember how easily measures are changed—some would say corrupted—by context. Misuse a measure once and employees will remember it for a long time. Use a performance appraisal for laying off employees, and this will change the context in the future. It’s easy for a measure to pick up new connotations.

Human resource departments and leaders have an opportunity to manage the meaning of talent measures at all levels of an organization. One way to do this is to watch for this myth in action. Remembering that every talent measure is affected by context can lead to a more discerning use of measurement, better communication, and, ultimately, more positive outcomes.

Of course, I’m not the first to point out this challenge. In 1975, Donald T. Campbell observed a methodological phenomenon that some refer to as Campbell’s Law: 

The more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

In the nearly 40 years since Campbell’s observation, talent measures and performance metrics have proliferated in organizations. The proliferation is accelerating as measurement becomes inexpensive and accessible. In my experience, however, few consider this dark side of measurement.

In the next blog post, I’ll consider how talent, which Campbell refers to as “the underlying social process,” is affected by measurement.

Thursday, July 12, 2012

Myths of Measurement: Perfect Employee Measures Exist

The full contribution of an employee can’t be measured with one metric. Over the last decade, I’ve asked thousands of HR professionals—including some very sophisticated ones—this question:

Can you give me an example of a measure that represents the total value an employee adds to the organization? 

No one has ever been able to do this. Sometimes they suggest a measure from a different company or a department that they don’t understand well. We want to think that somewhere, if we look hard enough, we’ll find the perfect measure of employee performance—it’s just that our own department or company is structured in such a way that we can’t do it.

Admittedly, this is a trick question. Everyone knows perfection doesn’t exist.

We want to think that our performance measurement system is really good—after all, we use it to make all sorts of decisions, including pay. On reflection, however, it’s clear that any measure of employee performance is deficient in that it doesn’t capture the entire value that the employee contributes. Any measure is contaminated by things beyond the employee’s control. Finally, any measure of an employee’s contribution is imprecise in that it isn’t really capable of making precise distinctions, either between people or between different aspects of a person. There is always an element of error, statistically speaking.

A good example is sales performance. Total sales may seem like a perfect measure until you consider the many factors that influence it. Any measure of total sales will be deficient because it doesn’t represent the goodwill associated with honestly representing a product. “Total sales” is short-term; goodwill affects future sales. Every measure is deficient in that it doesn’t tell the whole story.

As a measure, “total sales” is also contaminated by factors beyond the sales representative’s control—factors such as marketing, competition, and the quality or value of the product. Any seasoned sales representative will tell you that the territory will affect a salesperson’s ability to sell. In addition, all measures are imprecise. Consider two salespeople: both had a great year, and their annual sales were within a few thousand dollars of each other. Can we really differentiate between the skill, effort, or success of the two? No.
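A toy simulation makes the imprecision point concrete. All the numbers here are assumptions for illustration: two reps with identical skill whose totals differ only through factors outside their control. The question is how often random factors alone open a gap bigger than a few thousand dollars:

```python
import random

random.seed(1)

TRUE_SKILL = 1_000_000   # hypothetical "deserved" annual sales, same for both reps
NOISE_SD = 50_000        # hypothetical territory/timing/luck variation

def observed_total():
    # What the measure actually records: skill plus uncontrollable noise.
    return TRUE_SKILL + random.gauss(0, NOISE_SD)

# Simulate many pairs of equally skilled reps and measure the gap between them.
gaps = [abs(observed_total() - observed_total()) for _ in range(10_000)]
share = sum(g > 5_000 for g in gaps) / len(gaps)
print(f"{share:.0%} of equal-skill pairs differ by more than $5,000")
```

Under these assumptions, the vast majority of equally skilled pairs end up further apart than a few thousand dollars, so a gap of that size tells us nothing about skill, effort, or success.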

In practice, the myth of a perfect measure leads to misunderstandings about performance and capabilities. Here’s an example.

A call center manager was concerned about the employee taking the fewest calls. Clearly, this long-term employee was starting to slide. His performance had been in a slow decline.

The manager called the employee into his office. Being a sensitive boss, he asked the employee how things were going and if anything had changed in his life. Finally, he got down to business and confronted the employee about his performance, confident that no external factors were to blame.

The employee’s response was unexpected: “I think I’m totally on top of my game. All my peers appreciate me, and I’m resolving the toughest issues. My productivity is very high!”

After a bit more discussion, the situation became clear. Informally, other call center representatives were escalating the difficult calls to the “problem” employee. If the employee is helping out a team member, he’s obviously contributing both experience and capability. This contribution, however, wouldn’t show up in his productivity measurement data. But his productivity was clearly high, considering that he was handling the most difficult calls.

In this case, the measure was deficient because it represented only quantity, not the quality or difficulty of the task. The employee’s performance was much broader than what was measured. Meanwhile, the other employees’ measurement data was contaminated—their productivity measures were a representation of the more experienced employee’s work. I would also bet this measure is imprecise—that it’s affected by random events, like the call-center computer going down.

This example shows that there can be great value in unmeasured aspects of performance, and that measurement can’t tell the whole story. In the best-case scenario, this call center manager would learn that it’s not possible to rely entirely on a measure that represents only one aspect of employee contribution.

The Myth of Perfect Measurement in Action

Organizations often act as if their measures are nearly perfect. As a consultant, I’m often asked to develop measures or provide measure-based guidance on important questions, such as “What value did we get from training?” It’s important to set expectations carefully. Without a perfect measure, there will be no perfect answer.

Early in my career, I spent months, and considerable amounts of a client’s money, searching for a good measure of technician performance for a regional phone company. In one sense, we had very good measures of contribution: the number of repairs completed successfully and the number of phone installations (this was in the era of copper lines and touch-tone phones).  But applying these measures in real-world context, even using strong quasi-experimental designs, we ran into many complications. In the process, my client and I learned, or relearned, a lot:
  • Employees "game" measures
  • Teamwork matters 
  • Organizations are multilayered and complicated 
  • Employee performance is multidimensional
  • It’s difficult to isolate the contribution of an individual to the organization 
  • It’s impossible to perfectly quantify the effect of an intervention (such as training) on an individual’s contribution. 
In the end, we were able to estimate the value of an employee training program. On the other hand, we didn’t use by-the-book analysis, and we certainly didn’t calculate a concrete ROI. Because measures are so imperfect, the ROI analysis would have been based on unrealistic assumptions. We were, however, able to make an educated projection. In the end, that was all that was needed to make good decisions about the training program.

Wouldn’t it be great if there were a perfect measure of employee performance? You could do all sorts of analyses and run organizations in a perfectly rational manner. Unfortunately, the perfect measure doesn’t exist. In fact, I’m not sure you can ever calculate the ROI of a training program, because there is never a perfect measure of employee performance—and that’s what training is supposed to affect.

Beyond this sort of program evaluation and decision making, many talent decisions are based on imperfect measures. As in the example of sales performance above, even sales commissions are based on measures that are deficient, contaminated, and imprecise.

Consider More Subtle Outcomes

In a general sense, phone technicians have clear outcomes: new phones are installed, or repaired phones work correctly. Many jobs, in fact most, have much less clear outcomes.

Consider a teacher in our public schools. Is the outcome of successful teaching a grade on a test, success in life, or something else? Measuring the full contribution or value of such jobs is nearly impossible because the outcomes are so complicated. Even if we could define a clear outcome, its measure would be contaminated and deficient.

What Can We Know? 

Even though all measures are deficient, contaminated, and imprecise, they can still be useful tools to support decision making, if we use them wisely. What’s critical is that we never assume that measures are perfect.

If we make this mistake—if we accept measures as having any degree of perfection—we shut down authentic conversations and prevent the exchange of meaning between managers, employees, and different parts of an organization. In the case of the call center described earlier, for example, relying on a measure without communication, and without the value of human judgment, could have resulted in the organization losing one of its most valuable employees.

Paradoxically, if we accept the inherent weakness of measurement, we can use measures more effectively. When we understand that measurement is an inherently simple approach that will never adequately describe the complexity of employees, or untangle the nuances of employee performance, we can compensate.

One way we can compensate is by relying on multiple measures—while accepting that even this solution is imperfect. Using multiple measures provides the benefit of triangulation, and is often adequate for making decisions, providing feedback, and even supporting human resource decisions.

Ultimately, the weaknesses of measurement provide an opportunity for discussion and learning. Authentic and productive communication that acknowledges these imperfections enhances learning about an organization and its complexities.

Saturday, June 30, 2012

Mistrust of Talent Measurement

In many organizations, you’ll find fear and mistrust. You’ll also find that employees often focus these negative sentiments on the organization’s measurement system.

A measurement system won’t work without trust. Trust is the basis of functioning human communication, and as I’ve discussed in previous posts, measurement is best understood as a means of communication. In this blog post we’ll explore these issues, and look at how to increase trust in measurement.

I had a client with a world-class performance management system. The system included many best practices:
  • Individual goals cascaded from the strategy of the organization
  • Goal-setting meetings between employees and managers at the beginning of the year
  • Coaching throughout the year, with a formal documentation period
  • Final evaluation at the end of the year, with ratings of behavioral competencies as well as objective performance metrics.
The employees hated the evaluation system. Why? They didn’t trust it. This organization’s performance management system, like many, was a de-motivator!

When there is no trust in measurement, the results are often ugly. In pre-revolution feudal France, the physical measurement system was controlled by the aristocracy. Measurement was idiosyncratic, fragmented, and non-standardized; there were approximately 14,000 different units of physical measurement. Mistrust of measurement, or measurement-based decisions (such as those involved in commerce), was one of the factors exacerbating class tensions.

There are historians who report metric riots over measurement issues during this period. When the nobility surrendered their privileges after the storming of the Bastille, these privileges included giving up control of measures.

This may sound extreme, but is it? What if you couldn’t trust that a pound was a pound, and the person measuring your pound of pasta was defining the measurement?

Today, we assume that physical measurements are accurate and standardized—everything from a pound of pasta to land surveys. There are governmental bodies regulating physical measures. We rarely even think about the physical measurements that we use daily.

The same is not true of talent measures. Each organization has its own set of talent measures (for good reason), and these measures are typically controlled by management.

As in pre-revolutionary France, those who control the measures hold a significant source of power to use and, in some cases, abuse. Before the revolution, serfs often felt they were being cheated by the nobility. They probably were. After the revolution, there was a movement to base measurement on universal, naturally occurring objects. This is the origin of the metric system.

How does this relate to my client with the world-class performance management system? Employees hated the system because it was unpredictable and apparently arbitrary. It was unintentionally failing the fairness test in two ways. 

First, employees saw the performance appraisal procedure as unfair because it was constantly changing. In an attempt to improve the system, senior management was constantly tweaking it. Supervisors and employees weren’t sure how the system would ultimately appear on the intranet. Little was done to communicate how the system worked, or why it was changed. Given the complexities of the measures and the misunderstanding of the process, it’s understandable that employees started to wonder whether something nefarious was going on. Sometimes the tweaks made winners and losers. As a result, a near revolution was brewing in what Elton Mayo would have referred to as a social system. 

Second, employees saw the system as unfair because of the way rewards were distributed. The pool of bonus money was spread among the employees according to the ratings, often as the final ratings were calculated. This is not an unusual profit-sharing plan. Unfortunately, there were last-minute changes, sometimes for departments as a whole, that had unexpected effects on ratings and compensation.

Stability and transparency are needed to make a performance management system trusted and functional. This requires an acceptance of the imperfections of measurement, and much more communication. To a large degree, tweaking will not improve a talent measurement system—it will only serve to further distort the meaning of the measures. Often, tweaks occur because we believe in the myths of measurement:
  • My mental model of performance is correct
  • Measures are real
  • There is a perfect measure
  • People and the measurement system do not change in the process of measurement.

Opposition to Measurement

This brings up a second puzzle for a measurement guy like me. Why are so many employees, and unions, opposed to measurement?

Coming out of graduate school, I understood the formal side of measurement very well. I was surprised at the level of animosity toward measurement. I naively assumed that the honesty and accuracy associated with good measurement would enhance relationships, including employee and labor relations. I soon learned that in practice there is little appetite for measures, mostly because of how they have been, and are, used in organizations. 

I’ve found four major reasons for this opposition:
  • Employees being held accountable for factors beyond their control
  • Arbitrary use of power
  • Differentiation between employees
  • History of Taylorism.
These four reasons hamper effective talent measurement. They exert such a powerful force that new measurement initiatives start with a lack of trust. Often this lack of trust must be addressed and overcome before organizational learning and employee motivation can happen.

If you know of additional reasons, I’d love to learn about them.

Inappropriate Accountability

As I discussed in a previous post, measures are contaminated by factors beyond an individual employee’s control. For example, low-performing teammates may prevent an employee from reaching her full potential. This is, of course, a fact of life—we can’t control everything.  When employees are faced with a measurement system that ignores factors beyond their control, they can feel that their efforts are futile, and the measurement arbitrary. This tends to erode trust in the organization and management.

Arbitrary Use of Power

Managers have the power to change the measurement system; sometimes this power is used to withhold rewards or punish employees in some way. For example, if a salesperson has earned a large reward based on a previously defined incentive system, sometimes the system is reconfigured to avoid the large payout. Of course, this breaks trust—and it does happen. Unfair actions related to measurement are remembered for a long time.

In addition, management is often responsible for interpreting or framing measures. Setting unattainable goals, for example, can lead to employee dissatisfaction and decreased morale, as employees give up trying to improve their performance.

Differentiation between Employees

Measurement is often used to differentiate between employees. If you don’t trust the measurement system to make important distinctions related to employee performance, you won’t like differentiation. 

In the coming year, many U.S. school districts will differentiate between teachers using standardized tests. Some will think this is fair, but many will not. Methods have been developed, such as value-added scores, that attempt to statistically control for factors, such as race or wealth, that affect student growth. Despite this, many fundamentally mistrust the standardized tests on which the evaluation systems rest.
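
The value-added idea can be sketched as a simple regression: predict each student’s current score from prior score and background covariates, then credit a teacher with her students’ average residual. The data below are invented, and real value-added models are far more elaborate, but the mechanism is the same:

```python
import numpy as np

# Hypothetical district-wide data: prior-year score, a background
# covariate (here, a poverty indicator), and current-year score.
prior   = np.array([60, 70, 55, 80, 65, 75, 58, 72, 68, 62], dtype=float)
poverty = np.array([ 1,  0,  1,  0,  1,  0,  1,  0,  0,  1], dtype=float)
current = np.array([66, 78, 60, 88, 73, 82, 65, 79, 77, 70], dtype=float)
teacher = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

# Fit the district model: current ~ b0 + b1*prior + b2*poverty.
X = np.column_stack([np.ones_like(prior), prior, poverty])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)
residuals = current - X @ coef

# Teacher A's value-added estimate: her students' average residual,
# i.e., growth beyond what prior scores and background predict.
value_added_A = residuals[teacher == "A"].mean()
print(round(value_added_A, 2))
```

Even in this toy version, the estimate inherits every weakness of the underlying test and the covariates chosen, which is exactly why the mistrust persists.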

Differentiating between employees is also antithetical to unionist philosophy. Unions believe that differentiation reduces solidarity, and they tend to believe that everyone is the same. In fact, this mistrust of measurement, based on valuing both similarity and solidarity, leads to a question often asked by unionists: How can we tell who is better than another? If you can’t answer this question, the only basis for differentiating payment is seniority. While experience does increase knowledge, skill, and even wisdom, it’s not performance. It is, however, an unambiguous differentiator.

History of Taylorism

In a sense, Frederick Taylor’s use of time-and-motion studies to set performance goals destroyed trust in measurement. As I discussed in the last post, Taylor’s approach to measurement was used mechanistically, against employees, and not in partnership with them.

Interestingly, Taylor assumed that employees would embrace scientific management. Sometimes they did, but most often they didn’t. He was, of course, assuming that employees were motivated only by money and the possibility of higher wages based on increased work.  Of course, we now know that this assumption is wrong.

In an interesting turn of history, employee unions and strikes were ultimately the downfall of Taylorism. Strikes by public sector employees caused Congress to hold hearings. After five years, scientific management was essentially outlawed by limiting the use of incentive wages and stopwatches.

Building Trust in Talent Measures

So how do we build trust in talent measures, given that the way measurement has been used in the past creates negative preconceptions? I have three suggestions that will help create a climate of fairness and transparency:
  • Make measurement predictable. Build the best technical system you can, and accept that some people will manipulate the system. The solution to manipulation is rarely a better measure—most often, it’s better management and leadership, using the measures.
  • Communicate, communicate, communicate! Communication is creating shared meaning. An organization’s definition of the meaning of measures will not be the same as employees’ meanings, or experience. Both meanings need to be acknowledged and managed in a two-way give and take. Never assume that a measure means the same thing to everyone.
  • Build trust in measurement. Anything that could be interpreted as using the measures against the workforce or individuals should be avoided.  Apply measurements consistently across your organization. Hold everyone accountable to the same standards, and make those standards clear.  Remember, we want our employees to be thinking about the work and engaging with the organization.  We do not want them to be thinking, “Is this measurement system fair?”

Tuesday, June 19, 2012

Talent Measurement Schools of Thought

Here is a puzzle: In our day-to-day life we do not treat people as inanimate objects—but we try to measure them as if they are! We treat the people in front of us as living, breathing, reacting entities, but few consider the complexity and reactivity of human nature when developing or managing with measures. Why the inconsistency?

Two Schools of Thought: The Taylor and Mayo Dichotomy

To solve this puzzle, you have to go back to school—graduate school. As a graduate student, you’ll probably learn one of two different approaches to talent measurement. One school of thought is focused on the technical aspect of measurement, and the other on the human aspect. The challenge for measurement professionals is to master both schools of thought. The two are rarely reconciled, however. Professionals generally have expertise primarily in one approach.

The technical or engineering school will teach you how to calculate reliability and validity, and introduce you to different measurement methods. This school of thought dates back to Frederick Taylor, one of the first manufacturing engineers. Frederick Taylor is considered the father of scientific management, which emphasizes task analysis, efficiency studies, time-and-motion studies, and using compensation schemes for motivation. 

The human relations school has a different point of view: Employees are complicated, and don’t work mechanistically. If your graduate program emphasizes human relations, you’re likely to learn more about personality types or team-functioning measures that will facilitate interactions between people at work. You’ll be introduced to validity and reliability, but you’ll be taught very little about the technology and theory of measures. The human relations school of thought dates back to Elton Mayo, a psychologist.

The ghosts of Taylor and Mayo haunt today’s organizations. To this day, consultants, managers, and leaders adhere to one school or the other. Taylor adherents tend to advocate for measurement as a formal and rigid process. Mayo adherents focus more on group processes, interpersonal communication, and intrinsic motivation. 

Both Taylor and Mayo made essential contributions to the art of management and leadership. But it’s not an either/or choice. It often takes decades of experience to merge the two schools of thought into a practical working knowledge of measurement. Some never see the dichotomy and its implications.

I’m writing this blog post in the hope that we can accelerate the process of combining and ultimately uniting these two schools of measurement.

The Engineer: Frederick Winslow Taylor (1856 – 1915)

“In the past the man has been first; in the future the system must be first.”

Taylor grew up affluent and gifted in the second half of the 19th century, in an era of huge industrial change. He chose not to follow his father into the legal profession, although he was accepted into Harvard. Instead, he worked in industry, starting as a machinist and becoming a foreman, and went on to study engineering. 

As an engineer, he first improved manufacturing technology such as lathes and forging equipment. Early on, he noticed that these technical improvements demanded similar organizational innovations to be effective. As his ideas developed, he saw manufacturing as a larger system that could be improved by optimizing the various pieces to contribute to the larger system. Over the course of his career, he contributed his ideas to equipment (he had several important patents), business processes (such as accounting methods), and methods of managing employees.  

Taylor and Time-and-Motion Studies

As he looked at the larger manufacturing picture, Taylor was concerned that laborers were not working at full capacity. To fix this problem, he identified the optimum work-output level, and provided incentive pay for this level of output.  

Determining workers’ optimum output involved time-and-motion studies. Taylor divided the work into steps, each of which he timed separately. He then combined the time for each step into a total time for the job. By dividing the work day by the total job time, he arrived at an optimum production rate.

Workers were paid on a graduated scale. Low levels of output were paid very little, but as productivity approached the maximum, unit pay increased. Workers attaining the optimum production rate would be paid 60% more using Taylor’s methods. 
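
Taylor’s arithmetic is simple enough to sketch in a few lines. The step times and piece rates below are illustrative stand-ins, not Taylor’s actual figures; only the structure, a timed standard plus a differential rate, follows his method:

```python
# Illustrative figures only, not Taylor's actual data.
step_times_min = [2.0, 3.5, 1.5]         # timed steps for one unit of work
work_day_min = 480                       # an 8-hour day

unit_time = sum(step_times_min)          # 7.0 minutes per unit
optimum_rate = work_day_min / unit_time  # ~68.6 units per day

# Differential piece rate: a low rate below the standard, a premium at it.
low_rate, high_rate = 0.10, 0.16         # dollars per unit (illustrative)

def day_pay(units):
    """A day's wage under the two-tier differential piece rate."""
    rate = high_rate if units >= optimum_rate else low_rate
    return units * rate

print(round(optimum_rate, 1))              # 68.6
print(round(high_rate / low_rate - 1, 2))  # 0.6: the 60% premium at standard
```

A worker just under the standard earns the low rate on every unit, which is exactly the pressure toward maximum output that Taylor intended.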

While he became infamous for his time-and-motion studies, it’s important to recognize that, for Taylor, these studies were part of a larger system of managing employees. Taylor used worker productivity as a talent measure. He studied measures of productivity to make decisions, organize work, set production expectations, motivate employees, and identify employees to retain. In the best cases, Taylor’s scientific management methods could reduce costs and increase productivity by 50% to 100%.

Human reaction to measures and management methods didn’t factor into Taylor’s thinking. He was convinced that employees only work for money. Labor problems were simply an engineering challenge to be managed. Taylor paid lip service to selecting and developing talent—he mostly set output targets. Workers who were able to keep up the pace self-selected and developed their capability.

Taylor’s blind spot—the human factor—can be seen in many contemporary organizational improvement interventions, such as re-engineering, which has a success rate as low as 30%. Human readiness and acceptance of change is often a barrier to re-engineering success.   

Taylor’s approach was also inconsistent. Sometimes it worked, sometimes it led to significant problems. Employee reactions to Taylor’s interventions often led to work actions and strikes. Ultimately there was a congressional investigation. By the time of Taylor’s death at age 59, Congress had outlawed the use of stopwatches and bonus payments in the federal government. Scientific management was increasingly discredited.

The Humanist: George Elton Mayo
(1880 – 1949)

“So long as … business methods take no account of human nature … expect strikes and sabotage to be the ordinary.”

Mayo grew up in a distinguished Australian family. He began his studies in medicine and ended up studying psychology, focusing on social interactions at work. His most famous research work can be found in the Hawthorne studies, which demonstrated that employees are largely influenced by social factors, and that they react to being observed.

Mayo’s most important work coincided with the Great Depression. He believed that the industrial revolution had shattered strong social relationships in the workplace, and he found that workers acted according to sentiments and emotion. He felt that if managers treated workers with respect and tried to meet their needs, then both workers and management would benefit.  

Mayo’s research indicated that belonging to a group is a more powerful motivator than money. In his management philosophy, he saw attitudes, proper supervision, and informal social relationships as the key to productivity.

Some consider Mayo’s work to be a reaction to Taylorism. But Mayo was also concerned with output and productivity. Unlike Taylor, however, he was interested in the social and psychological interventions that increased productivity. These interventions are indeed helpful, and understanding the human factor is critical.  

Thanks to Mayo’s work, we recognize that, in organizations, informal social structures matter as much as formal structures, such as the chain of command. For example, a likeable senior engineer who dislikes a new manager could undermine the manager’s authority by making jokes at his expense during every meeting. In effect, the engineer becomes more influential than the manager—outside the hierarchy of the organizational chart.

Today, many organizational interventions emphasize team-building, and are based on the recognition that organizational culture is important, and managers have ongoing relationships with employees. By acknowledging the importance of the informal structure of an organization, factors such as relationships, informal leadership, and influence can be aligned with organizational needs and direction.

Mayo’s insights were synthesized into a school of thought referred to as human relations. The human relations school continues strong to this day, often in the form of leadership development, team building, or change initiatives. 

The insight missed by Mayo is that measurement—even Taylor’s productivity measurement—is essentially a social process. Measurement is simply a method of communication—a way to make meaning between groups.

Since Mayo, many people have failed to make this essential connection: We can extend Mayo’s insight into the importance of informal (social) structures into an understanding of the importance of the informal (connotative or personal) meanings of measures. As I have discussed before, the informal meanings of measures matter as much as, if not more than, their formal meanings. Like social structures, these connotative meanings can be managed—but only when their existence and importance are acknowledged.

If you’re running an organization and you follow Taylor, you may believe that compensation is the sole motivation for performance and advancement. If you follow Mayo, you may believe that love, fear, and other ineffable human factors are the primary motivators.

In the same way, the designers of a formal measurement system may believe that their measures will motivate by providing people with a positive opportunity to make more money (the denotative meaning). Instead, the designers may find that the connotative meanings provoke reactions that ultimately trump their intentions—reactions from outright rejection to gaming the system.

It’s odd that many of our measurement systems haven’t progressed beyond Taylor’s way of thinking. We have many tools to address the informal meanings of measures—tools we can draw on from interpersonal communications theory, management practices, and organizational learning.

Another danger of following Mayo’s approach is that it often pays too much attention to the informal and emergent social structures of an organization. While these informal structures are powerful influences on individual performance, it is possible to merge formal and informal organizational structures into a shared structure. This is where measurement can be incredibly effective, if it is used as a means of communication: It can create shared meaning that bridges the organizational and personal definitions of performance, motivation, and reward.

Finding a Middle Ground

What’s most unfortunate about the Mayo vs. Taylor bifurcation is that they were both right: The difference between the two schools of thought is ideological, not practical. In practice, we use both approaches. We need both engineers and social scientists (psychologists and sociologists) to run organizations efficiently.

If we attend one graduate school, we may learn to develop measures that are technically good, but we’ll have trouble assessing the human reaction to measurement. If we attend another school, we may learn to facilitate social interaction and meaning, but we won’t be trained to motivate, direct, or improve performance through measurement and feedback. Personally, I attended a more technical school, but my life and work experiences have led me to appreciate a balanced approach.

What Taylor missed was the importance of social structures in motivation, and the human factor in reaction to measurement. What Mayo missed was that measurement in itself is a social process, and measures have informal (social) meanings that can be managed.

Today, 80 years after Mayo’s Hawthorne studies, we should be able to merge the two schools. There is a wealth of possibilities for applying Taylor’s ideas in measuring individual productivity. At the same time, we’ve vastly increased our understanding of human relations—there’s a huge industry that’s evolved out of Mayo’s original insights.

Finally, in resolving the polarity of these two approaches, we need to acknowledge that measurement is communication, and that communication is shared meaning. By starting with a simple point—that people always react to measurement, and that the reaction is unpredictable—we can take the denotative and connotative meanings of measures, the formal and informal structures in organizations, and the two schools of thought, and synthesize them into an elegant, effective approach to talent measurement.  

Wednesday, June 13, 2012

Myths of Measurement: Do Measures Reflect Reality?

In the last blog post we discussed the mental models that inform our understanding of talent. Today’s post will examine how measures make mental models explicit and useful. This is true of talent management and other fields. 

I’d also like to discuss how easy it is to misunderstand talent measures as concrete entities. Just as there was a danger in reifying our mental models of talent, it’s easy to forget that measurement results are just a numerical representation of a model. The model is not “real,” and the measures, for all their predictive or descriptive strength, are just a representation of the model.

Mental and Mathematical Models

When measuring talent, we develop mathematical models to represent our mental models. Often we start with a conceptual model, which is a sketchy idea. An operational model, on the other hand, is precisely specified in mathematical language. Operational models often have good predictive or descriptive strength.

This is similar to an architect’s process. An architect starts a project by drawing a conceptual sketch, and refines the sketch into a scale plan. Sometimes it turns out that the original ideas don’t work. Sometimes the scale plan makes the concepts more workable. Scaling the concept mathematically makes it more predictive, more descriptive, and more useful.

Refining Measures

Operational measures and scales are strong tools, and often work well to summarize personality, results, potential, or competency. The numerical values of the scales can be compared and linked to other values such as compensation. They can also be tested.

As an architect may find that her concept won’t work in practice, we may find that a talent measure does not work as we conceptualized it. For example, if we compare measures of performance and personality to investigate our mental model that extroverts are better at sales, we may find that personality does not relate to performance as we expected.

Statistics can help us refine and strengthen our talent measures. If we find that an employee engagement survey is only weakly related to customer satisfaction, we can add survey questions to strengthen the relationship. Adding questions about the organizational climate, such as “my co-workers really care about the customer’s experience,” is likely to increase the correlation. Examining statistical correlations can help us develop a measure that’s quite important to the business.  
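
Here’s a quick sketch of that refinement step, using a hand-rolled Pearson correlation. The unit-level scores are invented for illustration, and the new climate item is deliberately constructed to track satisfaction:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical unit-level data: an engagement score, a new
# customer-focused climate item, and customer satisfaction.
engagement = [3.2, 3.8, 2.9, 4.1, 3.5, 3.0]
climate    = [3.5, 3.2, 3.0, 4.1, 3.1, 3.7]
csat       = [3.4, 3.3, 3.0, 4.0, 3.1, 3.6]

r_before = pearson(engagement, csat)
# Revised survey: average engagement with the new climate item.
composite = [(e + c) / 2 for e, c in zip(engagement, climate)]
r_after = pearson(composite, csat)
print(round(r_before, 2), round(r_after, 2))  # 0.55 0.89
```

The caution, of course, is that chasing a correlation this way can shade into building the measure around the outcome, so the revised survey needs to remain a credible measure of engagement in its own right.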

Personality assessments are among the most refined talent measures. Many personality instruments have been revised over the years—the state of the art, in some cases, is astounding. The Hogan Personality Inventory (HPI), which defines personality as social reputation, has now undergone 30 years of refinements. It was developed by correlating respondents’ answers to survey questions with friends’ and co-workers’ descriptions of the respondents (social reputation). Today, the 206 questions of the survey—questions such as “I would like to be a race-car driver”—allow surprisingly accurate assessment and precise differentiation between different aspects of personality. 

Many assessment participants feel that the HPI can read their minds, but the “wow” factor is simply produced by probabilistic relationships between survey questions and reputation. In a sense, it’s the magic of statistics—“any sufficiently advanced technology is indistinguishable from magic” (Arthur C. Clarke). However, participants’ feelings that the HPI personality instrument can see their true selves can easily lead to reification.

Of course, not all personality instruments are as well refined as the HPI, and it’s important to remember that even the HPI is probabilistic. These instruments are accurate most of the time, but not always. Their imperfections are easy to overlook precisely because they are “right” so often. Overlooking the imperfections, however, has dangers.

How Reification Happens

There is something about putting numbers on a model that makes the model seem real and unquestionable. But this presents a problem. When we can’t ask questions about our models, we can’t learn.  

For some reason, it’s easy to accept mathematical talent measurement results as the truth, and not look beyond the numbers. I have some theories about why this reification happens.
  • Some people aren’t as comfortable with numbers as they are with words. If it’s a lot of work for an individual to understand a chart or a report full of numbers, it’s likely that the person will only review the measures superficially. It’s also less likely that the person will ask questions. 
  •  The basis of talent measures isn’t always made clear. When providing HPI feedback, we don’t explain conceptually or computationally how the scales were developed or scored. In fact, the calculation methods are a secret known only to the Hogans. In one sense, it’s not important to know these details. But in another sense, not understanding how a measure works—or having no access to the mechanism behind the measures—could lead to reification. 
  • When talent measures are rigidly used for decision making (for example, compensation or selection), they are in a sense real. Certainly they control real outcomes.

Reification and the History of Intelligence Testing

The danger of measure reification is obvious in the long and often sad history of intelligence testing. In 1905, Alfred Binet proposed a method to measure intelligence in children. A careful scientist, he noted the method’s limitations:

This scale properly speaking does not permit the measure of … intelligence, because intellectual qualities … cannot be measured as linear surfaces are measured.

Binet intended to develop a tool to classify children needing attention. He tried not to reify the underlying capability.

Since then, intelligence has been reified and recast as a real and invariable human attribute—an attribute that describes a limit of human potential. The application of intelligence testing has limited access to immigration, schools, and jobs.  

When we reify a measure, we extend the measure beyond its original design. In this case, research indicates that intelligence does change. In addition, capabilities such as emotional intelligence are more important for some jobs. Making decisions based solely on employee intelligence is a mistake.  Intelligence quotient is not a real thing. It is a measure developed for a specific and narrow task: identifying children who need attention to succeed academically. Use in industry, and for immigration, came much later.

While many would argue with me, I assert that intelligence must be combined with other measures to be useful in business.

Reification and the Danger of Self-Fulfilling Prophecies

Reifying measures can lead to self-fulfilling prophecies. For example, designating an employee as “high potential” one year often means they will continue to be seen as high potential in future years, regardless of changes in performance. This is similar to calling a student “gifted.”

When a manager gives a low performance rating to an employee, there can be similar long-term consequences. People often conform to expectations. This is called the Pygmalion effect, which is well studied in schools. The Pygmalion effect also happens in organizations.

Reification and the Danger of Limited Thinking

Unquestioning acceptance of any representative model is a problem because it limits our ability to think broadly about a situation. We tend to think that a talent measure describes talent completely. If we do this, we fall into the trap of mistaking the map for the territory.

Early sea charts were representations of mariners’ mental models. They were crude but adequate for coastal navigation at the time. Today they seem wildly imaginative and mostly decorative. But partly as a result of the maps’ reification of these mental models, sailors stayed close to shore to avoid the monsters, whirlpools, and other dangers that became very real to them—including the danger of sailing over the edge of the world.  

Sometimes, we stay close to what is familiar. If we’re familiar with the idea of intelligence, we refer to someone as smart. If we’re familiar with descriptions of personality, we may refer to a person as an introvert.  But there is much more to a person than our mental models, and our measures, would suggest.

Recognizing the Limits of Measures Is the Key to Using Them Well

Ultimately, talent measures are just representations of mental models. The underlying talent is always much more complicated. Any representation, or model, is necessarily a simplification.

I am concerned that we take measures as better, and more, than they actually are. If we don’t consider the limits of the tools, the limits of the tools become our limits.

I don’t think we should look for more perfect measures of talent. I am certain they do not exist. For one thing, the available technology reflects our current understanding of talent. 

So, throwing out our current talent measures is probably not helpful. Instead, we can do better by increasing our understanding of the current measures. This is an evolutionary process, and probably a process that must be done in collaboration with others. How else can we examine our assumptions, and question both our measures and the underlying mental models on which they’re based? (I’ll be talking extensively about building shared meaning of measures in future blogs.)

If we’re to use our measures intelligently, we won’t expect them to be more perfect than they are—even if they’re mathematically correct 95% of the time. We’ll remember that measures are never true representations of reality: A measure can never contain the whole truth, the total complexity of a person, or an entire situation. And we won’t allow ourselves to be daunted by the “truth” of numerical measures, which leads us to accept them superficially. Instead, we can use measures as a starting point for thoughtful exploration and deeper communication. 

It’s important to remember that all measures represent someone’s theory. The theory may not be appropriate in the current context, and may not be measured well.

Thursday, June 7, 2012

Myths of Measurement: Is Performance a Real Thing?

When I was a child, a teacher was considered high performing if he had a quiet and orderly classroom. This is no longer true. As pedagogical theory has evolved, student engagement in learning has become more important than order and quiet. Now, if children are noisy but engaged, a teacher is performing well. Which model of teacher performance is correct?

In the last 40 years we’ve seen two different models of teacher performance. Our understanding of employee performance evolves. In fact, our understanding of talent, which is largely conceptual, is constantly evolving, and it varies from person to person. In teaching, discipline and order were once important; now engagement is paramount. This is not an obscure pedagogical point. It is the key to successfully using talent measures.

Mental Models of Talent 

The most important features of talent are invisible—features such as performance, potential, personality, and intelligence. As a result, we have mental models of talent. Our ideas of talent and performance change over time, as we saw in the example above. In a practical sense, everyone agrees that a desk is a desk, or a rock is a rock, but performance is not always performance.

Consider sales representative performance in the life insurance industry. The measure of sales productivity is commission, and commission is a percentage of the premium paid each year for a policy.  A veteran salesperson in this industry may not be selling new policies, but may be paid handsomely for policies she sold years ago, since the customer is still paying the premium.  This is a unique model of performance, since it includes results of behaviors from years past.  It is quite different from a typical model of sales performance, which is the amount of product sold in a month or quarter. 
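The renewal-commission arithmetic described above can be sketched in a few lines of Python. The 5% rate, the years, and the premiums here are purely illustrative assumptions, not actual industry figures:

```python
# Hypothetical sketch of renewal-based commission in life insurance:
# a rep earns a percentage of each premium still being paid, including
# premiums on policies sold years ago. Rates and figures are made up.

def annual_commission(policies, year, rate=0.05):
    """Commission for `year`: a percentage of every active premium
    from policies sold in that year or earlier."""
    return sum(
        p["premium"] * rate
        for p in policies
        if p["year_sold"] <= year and p.get("active", True)
    )

# A veteran rep who sells nothing new still earns from her old book.
book = [
    {"year_sold": 2005, "premium": 2400},
    {"year_sold": 2008, "premium": 1800},
]
print(annual_commission(book, 2012))  # commission from past sales alone
```

The point the sketch makes concrete is that this performance measure accumulates: the `year_sold <= year` test means results from behaviors years in the past still count toward today's number, unlike a monthly or quarterly sales figure.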

Clearly, performance is not simply performance. Apparently obvious measures of performance, such as sales, involve assumptions. Even the idea of sales performance is a model. In a business context, the mental model of talent or performance is built by management’s expectations.

In a general sense, models of talent are networks of theories and assumptions. It could be a theory about how people tend to react to the environment—this is personality.  It could be a theory about the organization’s business model and how employees contribute to the model—this is performance.  It could also be a theory about how people should relate to each other and themselves to support the organization—this is a competency model. These theories and models are all helpful tools for understanding and describing human capabilities and outcomes.

The Strengths of Mental Models

Models are helpful. Architects, boat builders and other craftspeople have used them for years. In a management context, we need mental models of talent to understand employees, to know how employees contribute to the larger operation, and to be able to predict how employees will react in a range of situations.  Without these powerful tools, we could not effectively manage our talent. 

Competency models work so well because they make these mental models explicit and transparent, and because they allow us to articulate the behaviors that are related to performance. Explicit competency models have radically changed how talent is managed. In the past, a manager might have said only that an employee needed to be a better team player. A competency model gives the manager an elaborate description of what it means to be a team player, and describes the behaviors in terms that can be communicated, measured, and emulated. 

When these behaviors are measured, competency models support better insights, more motivation, and obvious decisions. For example, the Danielson framework is a model of teacher performance that allows organizations to select, train, coach, and improve educator performance using a single set of expectations.

Competency models measure observable behaviors; personality assessments, which describe innate natural tendencies, offer another set of powerful tools. A coach who has a strong understanding of a personality system (for example, the MBTI or the HDI) can assess someone to gain insights and then coach using the framework.  A manager who has a strong mental model of personality is better able to see consistencies and predict how others will react to situations.

The Problems with Mental Models

Although models of personality, performance, and competency are powerful places to start, we often forget that any model is a simplification of a complex reality. A personality measure focuses only on a few aspects of an individual’s nature; a performance measure considers only one contribution to a business; and a competency assessment considers only a few human capabilities.

There is also a danger when we’re not aware that we’re using a model. Mental models, as defined in organizational system dynamics, are deeply held images of thinking and acting. Mental models are so basic to understanding the world that people are hardly conscious of them, and this leads to problems.

For example, if I’m talking with an employee about her performance, we may be talking about two different things. My employee may be focused on the quality of her writing and communication, while I’m focused on the number of billable hours. We’re talking about doing a great job—and we’re completely miscommunicating. Both of our models are necessary simplifications. One is necessary from a business standpoint, and the other from the standpoint of doing the work.

As this example shows, when our mental models are implicit—not apparent to either person—they limit our perceptions and prevent us from deliberately acting and communicating.

Implicit mental models of talent lead to miscommunication, narrowed focus, and misalignment.

Miscommunication. Good communication is based on shared meaning. Words like intelligence, personality, or performance mean different things to different organizational stakeholders. Unacknowledged mental models of these critical talent constructs lead to miscommunication.  We may be talking, but if our meanings are different, we are not communicating.

Narrowed focus. Mental models provide a framework for what we should pay attention to. The problem is that in looking for one aspect of personality, performance, or competency, we may miss another, equally important factor. 

For example, a personality model directs our attention to behaviors that suggest a personal tendency to react in a predictable way. We may miss other behaviors that would tell us something else about how the individual can contribute. I have colleagues who have such a strong understanding of the DISC personality system that they immediately notice that someone is largely Dominant, Influential, Steady, or Compliant. They are so good at classifying others with this system that they miss other aspects of those people’s personalities.

This is a shame, because there are many ways to look at personality.  The simplest model has four factors, but more complicated models exist, including the 16PF or the Caliper Profile.  A more complicated model allows for more refined insights.

Most aspects of talent are multidimensional. A person may be high in Dominance, but nearly as high in another dimension. Further, different aspects of personality may appear in different situations. My colleagues’ mental models may be limiting their view of others to something much simpler than those people actually are.

Misalignment. To be useful, talent models must align with organizational needs.  If we are unaware of our models, this may not be the case.

Think back to how life insurance sales representatives are paid. If you’re not familiar with the insurance industry, it seems odd. However, it is perfectly sensible to an insurance insider. One of the strengths of the industry is the stability that comes from customers paying premiums year after year, often for their entire lives, with a payout collected only when they die. Because of this, long-term relationships and accountability are important. In this sense, the model of sales representative pay is aligned with corporate strategy.

As organizations change, our mental models of talent must also change. Unlike a desk or a rock, talent can change and adapt. New insights and technologies can suggest better, different, or more detailed models of talent. Often, this is an opportunity for growth and development.

However, if we are unaware of our mental models, they are difficult to change. 

For example, salespeople in an organization moving to a team-based sales environment must be able to examine their assumptions, and must be aware of their own mental models, because the organization is changing the model. Performance is no longer individual. The change will affect the team, its management, and the support of the team. Team members will have to change how they view performance, the management will have to think differently, and the measurement and pay systems will have to change.

Thinking Differently

Teachers today face more competition for children’s minds. It may be that engagement is more important in an era of video games and 24/7 entertainment, so the new model of teacher performance is appropriate for today. However, the education system has a difficult task in getting veteran educators to think differently. For too long, assumptions about effective teaching were based on outdated thinking. Worse, we were not aware of our assumptions about teaching.

If we are unaware of our model of effective teaching, we will have a hard time discussing change, let alone changing. If we are not aware of the model we are using, we will not manage talent optimally.  

The bad news is that when we measure talent, we are always measuring a model. The process of developing measures to represent the model tends to make the model seem extremely concrete. The model is transformed into something more real than it actually is.

The good news is that we have become much more sophisticated in our thinking about talent models. The competency revolution made behavioral models apparent. Now we simply have to remember that underlying every measure is a mental model of talent.  Remembering this will help us question our assumptions, articulate our mental models, and test alignment with organizational direction.

In the next blog post we will consider reification of talent measures.