
Wednesday, June 13, 2012

Myths of Measurement: Do Measures Reflect Reality?


In the last blog post we discussed the mental models that inform our understanding of talent. Today’s post will examine how measures make mental models explicit and useful. This is as true in talent management as in other fields.

I’d also like to discuss how easy it is to misunderstand talent measures as concrete entities. Just as there was a danger in reifying our mental models of talent, it’s easy to forget that measurement results are just a numerical representation of a model. The model is not “real,” and the measures, for all their predictive or descriptive strength, are just a representation of the model.

Mental and Mathematical Models

When measuring talent, we develop mathematical models to represent our mental models. Often we start with a conceptual model—a rough, sketchy idea. An operational model, on the other hand, is precisely specified in mathematical language. Operational models often have good predictive or descriptive strength.
This is similar to an architect’s process. An architect starts a project by drawing a conceptual sketch, and refines the sketch into a scale plan. Sometimes it turns out that the original ideas don’t work. Sometimes the scale plan makes the concepts more workable. Scaling the concept mathematically makes it more predictive, more descriptive, and more useful.  

Refining Measures

Operational measures and scales are strong tools, and often work well to summarize personality, results, potential, or competency. The numerical values of the scales can be compared and linked to other values such as compensation. They can also be tested.

Just as an architect may find that her concept won’t work in practice, we may find that a talent measure does not work as we conceptualized it. For example, if we compare measures of performance and personality to investigate our mental model that extroverts are better at sales, we may find that personality does not relate to performance as we expected.

Statistics can help us refine and strengthen our talent measures. If we find that an employee engagement survey is only weakly related to customer satisfaction, we can add survey questions to strengthen the relationship. Adding questions about the organizational climate, such as “my co-workers really care about the customer’s experience,” is likely to increase the correlation. Examining statistical correlations can help us develop a measure that’s quite important to the business.  
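
To make the statistical refinement concrete, here is a minimal sketch in Python of the kind of correlation check described above. Everything in it is hypothetical—the synthetic data is deliberately constructed so that the added climate item carries the relationship with customer satisfaction—so it illustrates the technique, not any real survey result.

```python
# Hypothetical check: does adding a climate item strengthen the link
# between an engagement survey and customer satisfaction?
import numpy as np

rng = np.random.default_rng(seed=42)
n_teams = 50

# Synthetic team-level scores (roughly a 1-5 scale).
engagement = rng.normal(3.5, 0.5, n_teams)
# A climate item such as "my co-workers really care about the
# customer's experience", loosely related to engagement.
climate = 0.4 * engagement + rng.normal(2.0, 0.4, n_teams)
# Customer satisfaction, driven here mostly by climate (by construction).
csat = 0.6 * climate + rng.normal(1.5, 0.3, n_teams)

# Correlate the original measure, then the expanded measure, with CSAT.
r_alone = np.corrcoef(engagement, csat)[0, 1]
r_expanded = np.corrcoef(engagement + climate, csat)[0, 1]

print(f"engagement alone vs. CSAT:          r = {r_alone:.2f}")
print(f"engagement + climate item vs. CSAT: r = {r_expanded:.2f}")
```

In this toy setup, the expanded scale correlates more strongly with customer satisfaction—the pattern the survey designer is hoping to engineer.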

Personality assessments are among the most refined talent measures. Many personality instruments have been revised over the years—the state of the art, in some cases, is astounding. The Hogan Personality Inventory (HPI), which defines personality as social reputation, has now undergone 30 years of refinements. It was developed by correlating respondents’ answers to survey questions with friends’ and co-workers’ descriptions of the respondents (social reputation). Today, the 206 questions of the survey—questions such as “I would like to be a race-car driver”—allow surprisingly accurate assessment and precise differentiation between different aspects of personality. 

Many assessment participants feel that the HPI can read their minds, but the “wow” factor is simply produced by probabilistic relationships between survey questions and reputation. In a sense, it’s the magic of statistics—“any sufficiently advanced technology is indistinguishable from magic” (Arthur C. Clarke). However, participants’ feelings that the HPI personality instrument can see their true selves can easily lead to reification.

Of course, not all personality instruments are as well refined as the HPI, and it’s important to remember that even the HPI is probabilistic. These instruments are accurate nearly all the time, but not always. The imperfections are easy to overlook because the instruments are “right” so often, and on average. Overlooking the imperfections, however, has dangers.

How Reification Happens

There is something about putting numbers on a model that makes the model seem real and unquestionable. But this presents a problem. When we can’t ask questions about our models, we can’t learn.  

For some reason, it’s easy to accept mathematical talent measurement results as the truth, and not look beyond the numbers. I have some theories about why this reification happens.
  • Some people aren’t as comfortable with numbers as they are with words. If it’s a lot of work for an individual to understand a chart or a report full of numbers, it’s likely that the person will only review the measures superficially. It’s also less likely that the person will ask questions. 
  •  The basis of talent measures isn’t always made clear. When providing HPI feedback, we don’t explain conceptually or computationally how the scales were developed or scored. In fact, the calculation methods are a secret known only to the Hogans. In one sense, it’s not important to know these details. But in another sense, not understanding how a measure works—or having no access to the mechanism behind the measures—could lead to reification. 
  • When talent measures are rigidly used for decision making—for example, in compensation or selection—they are in a sense real. Certainly they control real outcomes.
 

Reification and the History of Intelligence Testing

The danger of measure reification is obvious in the long and often sad history of intelligence testing. In 1905, Alfred Binet proposed a method to measure intelligence in children. A careful scientist, he noted the method’s limitations:

This scale properly speaking does not permit the measure of … intelligence, because intellectual qualities … cannot be measured as linear surfaces are measured.

Binet intended to develop a tool to classify children needing attention. He tried not to reify the underlying capability.

Since then, intelligence has been reified and recast as a real and invariable human attribute—an attribute that describes a limit of human potential. The application of intelligence testing has limited access to immigration, schools, and jobs.  

When we reify a measure, we extend the measure beyond its original design. In this case, research indicates that intelligence does change. In addition, capabilities such as emotional intelligence are more important for some jobs. Making decisions based solely on employee intelligence is a mistake. Intelligence quotient is not a real thing. It is a measure developed for a specific and narrow task: identifying children who need attention to succeed academically. Its use in industry, and for immigration, came much later.

While many would argue with me, I assert that intelligence must be combined with other measures to be useful in business.

Reification and the Danger of Self-Fulfilling Prophecies

Reifying measures can lead to self-fulfilling prophecies. For example, designating an employee as “high potential” one year often means they will continue to be seen as high potential in future years, regardless of changes in performance. This is similar to calling a student “gifted.”
When a manager gives a low performance rating to an employee, there can be similar long-term consequences. People often conform to expectations. This is called the Pygmalion effect, which is well studied in schools. The Pygmalion effect also happens in organizations.

Reification and the Danger of Limited Thinking

Unquestioning acceptance of any representative model is a problem because it limits our ability to think broadly about a situation. We tend to think that a talent measure describes talent completely. If we do this, we fall into the trap of mistaking the map for the territory.

Early sea charts were representations of mariners’ mental models. They were crude but adequate for coastal navigation at the time. Today they seem wildly imaginative and mostly decorative. But partly as a result of the maps’ reification of these mental models, sailors stayed close to shore to avoid the monsters, whirlpools, and other dangers that became very real to them—including the danger of sailing over the edge of the world.  

Sometimes, we stay close to what is familiar. If we’re familiar with the idea of intelligence, we refer to someone as smart. If we’re familiar with descriptions of personality, we may refer to a person as an introvert.  But there is much more to a person than our mental models, and our measures, would suggest.

Recognizing the Limits of Measures Is the Key to Using Them Well

Ultimately, talent measures are just representations of mental models. The underlying talent is always much more complicated. Any representation, or model, is necessarily a simplification.
I am concerned that we take measures as better, and more, than they actually are. If we don’t consider the limits of the tools, the limits of the tools become our limits.

I don’t think we should look for more perfect measures of talent. I am certain they do not exist. For one thing, the available technology reflects our current understanding of talent. 

So, throwing out our current talent measures is probably not helpful. Instead, we can do better by increasing our understanding of the current measures. This is an evolutionary process, and probably a process that must be done in collaboration with others. How else can we examine our assumptions, and question both our measures and the underlying mental models on which they’re based? (I’ll be talking extensively about building shared meaning of measures in future blogs.)

If we’re to use our measures intelligently, we won’t expect them to be more perfect than they are—even if they’re mathematically correct 95% of the time. We’ll remember that measures are never true representations of reality: A measure can never contain the whole truth, the total complexity of a person, or an entire situation. And we won’t allow ourselves to be daunted by the “truth” of numerical measures, which leads us to accept them superficially. Instead, we can use measures as a starting point for thoughtful exploration and deeper communication. 

It’s important to remember that all measures represent someone’s theory. The theory may not be appropriate in the current context, and may not be measured well.

Tuesday, May 15, 2012

Measuring Invisible Talent


When we think about measurement, we usually think about measuring objects—their length or weight. The concrete nature of this sort of measurement is easy to understand and predictable: A meter is always a meter.

Talent measurement, in contrast, is not easy to understand, and it’s unpredictable. Measuring talent has its challenges, but it’s one of the keys to organizational learning and employee motivation.

When it comes to measuring talent, we look for consistent and concrete measures, just as we do in the physical world. We hope for honesty in the mathematical precision offered by measures and metrics.  We hope for less wiggle room and more candor. We hope measurement data provides less theory, better insights, and obvious decisions. 

Comparing physical and talent measurement helps us to see the value and potential of measurement. The value is high, but if we cling to the metaphor of physical measurement, we will grow frustrated. We may also miss one of the real strengths of talent measurement.

Talent measurement is different and it is complicated. Among the complications, it has a special attribute: measurement motivates. To access the benefits of talent measurement we need to consider how physical and talent measures differ. 

Talent Is Invisible

Talent measures aren’t concrete, like their physical counterparts. Many of the most important assets of today’s world are essentially invisible—think of wealth, power, relationships, personality, or intelligence.  Because the aspects of talent that we care about are invisible, it can be difficult to know what we are measuring.    

It’s not just that talent is invisible. We also need to remember that measures are just a representation of talent. This adds complications. We all know that someone’s height in inches isn’t the person. It’s easy, however, to confuse a measure of potential with the value of a high-potential employee. The measure of potential is a representation of an underlying capability, and the measure is accurate only in a probabilistic sense.   

Invisibility and representation are two reasons that talent measures tend to be less precise, and less consistent, than physical measures. Have two managers rate how well an employee completed a difficult task, and you’re likely to get two different answers. Ask two skilled carpenters to measure the length of a cabinet, and you’ll probably get two nearly identical answers, accurate to within one-sixteenth of an inch. 

Despite this, organizations often act as though their measures are nearly perfect. For example, some management consultants recommend that employees be ranked annually. Let’s be clear about what ranking actually means: Employees will be listed in order, from best to worst. To truly rank employees, there would need to be a distinction between the 10th and 11th best employees. Without a perfect measure, this is impossible. Since we don’t have measures that are up to this task, we would have to use other means to rank employees, such as intuition. 
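
To see why, consider a minimal simulation: hypothetical “true” performance scores are re-measured with random error, as any rating process would introduce, and we count how often the 10th and 11th positions are disturbed. All of the numbers are invented for illustration.

```python
# Hypothetical simulation: how stable is a rank ordering when the
# measure contains noise?
import numpy as np

rng = np.random.default_rng(seed=1)
n_employees = 20

# Invented "true" performance, sorted so employee 0 is genuinely best.
true_performance = np.sort(rng.normal(50, 10, n_employees))[::-1]

trials, disturbed = 1000, 0
for _ in range(trials):
    # Each observed rating = true performance + measurement error.
    observed = true_performance + rng.normal(0, 5, n_employees)
    order = np.argsort(-observed)  # employee indices, best to worst
    # Do the true 10th and 11th best hold their positions this cycle?
    if order[9] != 9 or order[10] != 10:
        disturbed += 1

print(f"10th/11th positions disturbed in {disturbed/trials:.0%} of cycles")
```

In this toy setup, even modest measurement error reshuffles the middle of the ranking in the large majority of rating cycles—exactly the distinction between the 10th and 11th best that a forced ranking pretends to make.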

Ranking employees by a physical measure sounds easier, but even this is complicated. Let’s say you’re at a family gathering and you’re taking a photo of your grandparents, siblings, nieces, nephews, and so forth. You’d ask people to organize themselves by height—shorter people in front—so that the camera can capture their faces. We often think about employees this way: We can just line them up according to some feature, such as performance. We’ll keep the best, or the tallest, and get rid of the rest.

Of course, it’s not that simple! Even physical measurement is imperfect. Imagine trying to get 1,000 employees to stand in order, by height. I can hear the questions already. Does big hair count? Should we take our shoes off? In the end, we’d probably need to ask ourselves, "what is employee height, anyway?"

If we take physical measurement to this logical conclusion, it provides a useful lesson: measurement is more complicated in practice than in theory. We may think we understand what leadership is. When it comes to measuring it, we need to get pretty specific in our meaning.  

People React to Being Measured

In general, physical measurement has few side effects. If you measure the length of a cabinet, you don’t affect the cabinet, and the cabinet isn’t likely to react. Talent, unlike inanimate objects, is affected in complicated ways by measurement. People react to measures.

In fairness, some physical objects are affected by measurement. When you check tire pressure, a small amount of air escapes, which changes the very pressure you set out to measure. This is a simple example of an observer effect, which has been well documented in physics. For example, a glass thermometer absorbs thermal energy when taking a measurement.

Observer effects on physical objects are generally unsurprising and small. Talent’s reaction to measurement is complicated and can be large. 

The effect can be positive. Measurement can lead to motivation, increased effort, and more focus.  Feedback and reasonable goals often lead to higher levels of performance, as we discussed in past blog posts.  

Practical experience however, shows that this isn’t always the case. Unfortunately, measurement can change talent in counterproductive ways, depending on the context. Measurement can de-motivate and distract. If measures are linked to very difficult goals, employees sometimes give up, or—even worse—lose faith in the organization and disengage. 

To make things worse, reactions to measurement can also motivate talent to corrupt or game the measures. Physical measurement never has this issue. Humans, however, have a major preoccupation with gaming measures. It’s so common that the famous methodologist Don Campbell, discussing program evaluation in 1975, described what has become known as Campbell's Law:
The more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.   

Given these complexities, perhaps talent management should steal the nickname “dismal science” from economics:
  • The underlying dimension you wish to understand is invisible.
  • Measuring talent is imprecise and probabilistic.
  • Humans react to measures, sometimes by changing themselves, and sometimes by changing the measure.
 

So, Do We Stop Measuring Talent?

It’s been said that since we can’t measure talent perfectly, we should simply give up and acknowledge defeat. For 30 years, some consultants have advocated getting rid of performance appraisals altogether. I’m amazed this impractical idea is still being considered.

If we don’t measure human performance, we lose a powerful motivational and learning tool. As with many aspects in life, we simply have to manage the dilemma and tension. 

Measurement is flawed, and we must use it anyway. We can’t hide our heads in the sand and hope these flaws will go away. The flaws are inherent in measuring, especially in measuring something as complicated as talent. If we step into the real world of human idiosyncrasies, measurement is a powerful tool that can help us improve organizational performance, ensure educational excellence, and motivate personal growth.

In the coming blog posts I will elaborate further on the myths of talent measurement, and on how we can think more clearly about measurement for organizational learning, motivation, and growth.

Charley Morrow

Tuesday, May 8, 2012

Motivating with Measures: Accountability, Incentives and the Dark Side


The benefits and risks of using measures for motivation are amplified when employees are made accountable or incentivized.

Measurement and Accountability

Measurement is at the heart of accountability. In the dictionary, accountability has a neutral meaning: an obligation or willingness to accept responsibility for one’s actions. This is the denotative, or literal, meaning. In a work setting, the denotative meaning of accountability is a goal that defines who will do what by when.

Accountability in this sense is the basis of management by objectives (MBO). While MBOs were popularized in the 1950s, they remain a central element of most organizations’ annual performance appraisals.

While some objectives are task-based, the best objectives are measurement-based. We have found that the most effective examples of accountability-based motivation use SMART goals—goals that are Specific, Measurable, Agreed upon, Realistic, and Time-bound. Describing expectations in terms of measures at the beginning of a project motivates performance. 
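
As a sketch of what “who will do what by when” can look like when made explicit, here is one possible way to represent a SMART goal as a data structure. The field names and values are purely illustrative, not a standard schema.

```python
# Illustrative structure for a SMART goal: Specific, Measurable,
# Agreed upon, Realistic, Time-bound.
from dataclasses import dataclass
from datetime import date

@dataclass
class SmartGoal:
    owner: str        # Specific: who
    objective: str    # Specific: what
    measure: str      # Measurable: which metric
    target: float     # Measurable/Agreed/Realistic: the expected level
    deadline: date    # Time-bound: by when

goal = SmartGoal(
    owner="A. Salesperson",                      # hypothetical employee
    objective="Grow renewals in the northeast",  # hypothetical objective
    measure="renewal rate (%)",
    target=85.0,
    deadline=date(2012, 12, 31),
)
print(f"{goal.owner}: {goal.objective} -> "
      f"{goal.target} {goal.measure} by {goal.deadline}")
```

Writing the goal down in this form forces the measure and the expectation to be stated before the work begins, which is where the motivational value lies.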

Accountability is a word often loaded with connotations. How would you feel if you were told in a business meeting that you will be held accountable? Queasy? The phrase suggests that you’re in trouble. This isn’t actually accountability—it’s scapegoating. This is a connotative meaning, and the connotations of accountability are negative. The fear of negative consequences can lead to all sorts of dysfunctional behavior.

If the meaning of the measures isn’t managed, then accountability is more likely to instill a culture of fear than it is to motivate employees and support the organization’s strategic goals. 

To motivate with measurement-based accountability, the meaning of the measure must be managed. Use measures to describe expectations before the employee works to achieve results.  Articulate both the formal, denotative meaning (how the measure works) and the connotative meaning (the implications for the employee).

  • If you’re building a measurement system, remember that the connotations are probably more important than the measures. Consider how the measures will be seen by employees. Develop a list of actions employees could take to influence the data, including both the actions the system intends to encourage and unintended ones. Adapt your system accordingly, encouraging the intended actions and discouraging the unintended ones.
  • If you’re managing employee accountability with measures, be sure to talk with employees about both the denotative and connotative meanings. It’s important to develop a shared vision: This is a true leadership communication task.  If there is clear agreement on the meaning of the measure as well as the level of performance expected, accountability can be positive. 

Accountability is simply responsibility. Measurement can help to build responsibility for results, and for the rewards or consequences of those results.

Incentives and Measurement

If goal setting and accountability work, why not add incentives to make them work even better? Why not juice the motivation system? Many of us have worked, or currently work, in an incentive system. Sales people work on commission. Managers get bonuses and stock options.

There is a whole industry of compensation consultants trying to create incentives that work. Since the industrial revolution, we’ve been trying to get incentives right—and some of us are starting to wonder if incentives are just wrong.  

Research summarized by Daniel Pink suggests that incentives lead to lower performance in completing tasks that are complex or involve creative thinking. I’m sure we’ll find that this relationship is true in many circumstances. 

I am also sure there are many circumstances in which incentives lead to improved performance, even in complex and creative tasks. As with many aspects of human performance, there are complexities.
Life isn’t one-size-fits-all; there are individual differences and nuances of context that influence how incentives affect performance. Long-term goals, which are difficult to study experimentally, may work better with incentives. Mr. Pink presents the world in black and white; I am confident there are many shades of gray.

There is a bigger problem with linking incentives to measures, however. Incentives, or consequences, have the tendency to put the focus exclusively on moving the needle—on affecting the data and the measure rather than addressing the underlying goal. Too much focus on the connotations of a measure (its consequences for the employee), as opposed to its denotative meaning (what it is intended to indicate), leads to gaming.



The Dilbert comic strip may seem ridiculous, but as is always the case in Scott Adams’ cartoons, absurdity reflects reality to an uncomfortable degree (many of his cartoons are based on real-life examples submitted by readers). Incentives can have unintended consequences, often encouraging employees to behave unethically. For example, if you were earning a subsistence wage as a packer for Green Giant, and the company announced that a bonus would be paid to every employee who could find and remove insect parts from packages of frozen peas, what would you do? Possibly what many of the employees did—bring insect parts from home to earn the incentive.

There are, of course, more troubling examples of the dark side of measurement-based motivation. In the sad story of system-wide cheating in Atlanta Public Schools, 178 employees, including both teachers and principals, are suspected of inflating scores on standardized tests to earn the significant rewards that come with rapid improvements in school performance. Outright swindles, such as Bernie Madoff’s, are all too common.

In sales departments there are more subtle examples of gaming incentive systems. Sales departments have been known to count all sales in the current quarter toward commissions—even though many of the sales are not actually closed. 

Conclusion

It’s dangerous to rely too much on measures for motivation: The more you emphasize measures, the more apt the measures are to cause dysfunctional, even unethical, behavior. If you need to use measures for accountability and incentives, be careful. Measures can’t replace management; they are a management tool. It is necessary to make sure that the measures are reasonable—not gamed—and that accountability is understood and positive.

Wednesday, April 25, 2012

Comparison, Context, and Connotation: Turning Data into Insight

How can we turn inert data into dynamic insights? We turn measurement into data, data into information, information into meaning, and meaning into insight. Each of these four steps is critical. If you are building talent measurement systems, or using talent measurement as part of your management responsibilities, you should know the three key Cs: comparison, context, and connotation.

Turning Data into Information: The Key of Comparison

Many organizations have generous amounts of data filling their databases, but few organizations make use of it. Data by itself is meaningless. And at this point, it’s clear that our processing skills haven’t kept pace with our ability to collect terabytes of it. 

Imagine looking at a column of numbers. What does it tell you? Nothing, I imagine.
Now imagine that you can look at the same column of numbers with other data points for comparison. You might compare each piece of data to:
  • Time (each data point is part of series across time)
  • Employees (each data point is one employee’s performance)
  • A norm (one number represents typical performance, and another number represents actual performance)
  • A goal (one number is the target, and the other number is the actual amount)
  • An implication (one number is an employee’s performance, the other number is an incentive payment associated with it).
Data is useless unless it is compared to something. Trends matter. 
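
A small sketch, with made-up numbers, shows the difference a comparator makes: the raw column tells you little, while a norm and a goal give each value information.

```python
# Hypothetical monthly figures: meaningless alone, informative in comparison.
monthly_sales = [102, 96, 110, 87, 125, 131]  # the raw column of numbers
norm, goal = 100, 120                          # comparators

for month, value in enumerate(monthly_sales, start=1):
    vs_norm = value - norm
    vs_goal = "met" if value >= goal else "missed"
    print(f"month {month}: {value} ({vs_norm:+d} vs. norm, goal {vs_goal})")
```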

The need for comparison isn’t always obvious, however. Occasionally, I’m asked to analyze an employee survey and there’s no good way to compare the data; often, the data was collected without considering how it would be turned into information. To turn data into information, you need a comparator: you have to compare across time, across subgroups, or at least across different questions.
To understand a survey, the comparator can be previous survey results, a goal, normative responses, or the results of measurement in similar organizations (the basis of benchmarking). In a pinch, you can compare survey questions to each other, but this provides limited information.

Turning Information into Meaning: The Key of Context

Information, however, is still not useful by itself. To have meaning, it needs to be placed in context and interpreted. It’s critical to ask why.

  • Why does the pattern of numbers vary with time?
  • Why do employees’ levels of performance vary? Are there differences in aptitude, skill, or motivation? Or are they working in different environments?
  • Why is our data above or below the norm?
  • Why are (or why aren’t) we achieving the goal?
  • Why am I receiving a smaller bonus than other employees?

In many situations, meaning is elusive because it requires a broad understanding of context. If two employees have very different performance results, are they working in the same context? Does one employee have more difficult tasks—a more involved project, a larger territory, more complex machinery to run? Only when you’ve determined that the context is comparable can you infer that different levels of skill or motivation underlie the difference in results.
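
A made-up example shows how much the reading of raw results depends on context:

```python
# Two hypothetical salespeople with very different territories.
results = {
    "Pat": {"sales": 1_200_000, "accounts": 400},
    "Sam": {"sales": 900_000, "accounts": 150},
}

for name, r in results.items():
    per_account = r["sales"] / r["accounts"]
    print(f"{name}: ${r['sales']:,} total, ${per_account:,.0f} per account")

# Pat "wins" on raw sales, but Sam produces far more per account.
# Only once the contexts are comparable can the gap be read as skill.
```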

Management is drowning in information. As we collect more and more data, comparison becomes easy. But putting information into context requires bridging different data sources, integration, and creative thinking. The strategic and operational environment matters: information is important only when it is relevant in that context.

Turning Meaning into Insight: The Key of Connotation

Given the process of translating data into meaning, it’s easy to see why there is miscommunication. Two people looking at the same data can have completely different interpretations—and they may not even know it. 

This may seem paradoxical at first; after all, measures are a precise way of communicating. We wouldn’t say a car “doesn’t need much gas.” Instead, we use mathematical precision to talk about miles per gallon. Imagine the reactions of shareholders and analysts if an executive talked about “a pretty good investment,” rather than discussing a percent return on equity.

Nevertheless, people with different perspectives or goals see measurement results very differently. As with other types of communication, measures have denotative and connotative meaning. Making a distinction between the two types of meaning is the key to understanding measures.

To illustrate the difference, let’s look at the term re-engineering. In literal (denotative) terms, re-engineering is a way to understand business processes and optimize them. In subjective (connotative) terms, there are implications of re-engineering—mass layoffs. I once used re-engineering as an example in a speech that I was giving at a company and received a strange and hostile response. It turns out that there was a history of using the term euphemistically! That speech never recovered—the mere use of the term destroyed any trust between the audience and me.

In measurement, the denotative meaning is often defined mathematically. It is the connotative meanings, however, that often matter to employees. In other words, the implications of the measures are more important than the measures themselves.  

It’s important to remember that the implications of measures are personal. How employees interpret measures, and how they react to performance appraisals, are influenced by their upbringing, their personalities, their motivations, and their worldviews. 

For example, if a salesperson is focused primarily on money as an indicator of success, he may consider performance measurement only in terms of the size of his bonus. It’s likely this will lead to misunderstandings with a company executive, who is looking at the measures to answer different questions: Does the salesperson need more training? Is the product good? Did the customers have good experiences?

In another common example, it may not be possible for an employee to receive measurement-based feedback constructively for any number of reasons. If the employee is perfectionistic or competitive, she may only be able to receive the feedback as criticism. Another employee may be so consumed with feeling miserable about failing to meet last year’s goal that he can’t engage in an authentic conversation about the future.

In my experience, many employees will try to avoid measurement because they distrust management, or are afraid of being targeted in a blame-oriented culture. This example of connotation is probably all too familiar to the readers of this blog.

Of course, personal perceptions and assumptions can be influenced. To build meaning, and ultimately insights, organizations must spend time decoding both the denotative and connotative meaning of measures. Unless both the denotations and connotations are addressed, there is little chance of communicating with the measures successfully, or gaining insight from the data.

Decoding measures is a dialogue. As a consultant, I need to discuss both the objective and subjective implications of measures with my clients. Managers need to do the same with their employees. It is only through dialogue that the measures’ contrasts and contexts, the meanings and implications, can be understood.

This is the path from data to shared insights. Of course, the path is littered with suggestions from many sources. But the salient point remains: To reach shared insights from measurement, organizations need to confirm that there is shared meaning. We can think about this as a four-step process:

There is no data without measurement, no information without data in comparison, no meaning without an understanding of information in context, and no insight without communicating shared meanings.

Is your organization taking steps to make sure that insights are gained from measurement?

In the next posts I’ll talk about motivation, organizational learning, and accountability. In the meantime, I welcome your thoughts.
Charley Morrow

Tuesday, April 10, 2012

Human Performance Measures: Start of a Series

I’ve been working with people measures for more than 25 years. Nearly every day, I see strong reactions to these common leadership tools.  Some embrace measurement as a tool for positive change, and others are nervous. Some question the measures, and others hide behind the authority of the data. 

These reactions to measurement and data fascinate me. They also hold the key to getting results from measurement systems. 

When measurement systems work well, people develop understanding, gain insight, become motivated, and set new directions. Just as often, however, measures simply do not work. In these cases, people ignore the measures or build elaborate defenses to dodge, manipulate, or diminish the data.

Over the next few months I’ll be writing about how systems and people respond to measures of human performance and how organizations can get beyond negative reactions. This is a topic I’ve been researching for years, and it may be my strongest and most nuanced area of understanding. 

I started my career focused on measurement systems. I took enough graduate courses in statistics and methodology to work as a psychometrician, and my dissertation combined the disciplines of psychology and economics. 

As I matured and worked in the real world of organizations, I started to see that the value of measurement can be found less in precision and mathematical finesse than in communication and learning. The most elegant performance management system is useless unless it is genuinely called on to help people communicate, learn, and adapt. 

In other words, measures need to be applied to produce data; data needs to be reviewed and interpreted to be useful; and useful information needs to be considered in context if people are to learn and improve. 

I can say with confidence that measures and data alone will not change organizations or behavior. There are too many psychological, organizational, and social factors that can prevent measures from translating into learning and improvement.

As a society, we spend huge sums of money on human performance measurement—and we start measurement early. All of us are familiar with the U.S. public education system, which now tests every student in the third through eighth grades annually. In a number of states, databases are being developed to link these test scores to schools, teachers, and student demographic information.

When we graduate from the public education system, we find that most large organizations rely on annual employee appraisal systems. A manager can spend a few months each year rating employees, summarizing the information, and providing feedback. 

Despite the intensity of the data-gathering, improvement is not obvious. Many are dissatisfied with the measurement systems. As a result, these measurement systems are often re-imagined and implemented with great hope and promise, only to fail. I don’t think much of this cycle of activity and investment. Don’t misunderstand: I’m a fan of measurement, because it’s critical to precise feedback and growth, and I’ve seen its transformative power. But I’m an advocate for thoughtful investment in measurement.

The public education system is still experimenting with measurement systems, and will be for years to come. Some corporations rethink their annual appraisal systems regularly. 

Technological and social trends suggest that performance measurement will only increase. Some argue that this investment is inappropriate. Addressing the merits of this societal investment isn’t my purpose here. My purpose is to make sure that individuals, organizations, and society get more value from the investments that are made.

I have workable tools and tips to make sure all of this data yields some return. Paradoxically, I won’t spend much time writing about measures. As I’ve said, it’s not as much about the measures as how they are used. I hope you will find the posts in the following weeks useful.