Showing posts with label competency model.

Wednesday, June 13, 2012

Myths of Measurement: Do Measures Reflect Reality?


In the last blog post we discussed the mental models that inform our understanding of talent. Today’s post will examine how measures make mental models explicit and useful. This is true of talent management and other fields. 

I’d also like to discuss how easy it is to misunderstand talent measures as concrete entities. Just as there was a danger in reifying our mental models of talent, it’s easy to forget that measurement results are just a numerical representation of a model. The model is not “real,” and the measures, for all their predictive or descriptive strength, are just a representation of the model.

Mental and Mathematical Models

When measuring talent, we develop mathematical models to represent our mental models. Often we start with a conceptual model, which is a rough, loosely specified idea. An operational model, on the other hand, is precisely specified in mathematical language. Operational models often have good predictive or descriptive strength.
This is similar to an architect’s process. An architect starts a project by drawing a conceptual sketch, and refines the sketch into a scale plan. Sometimes it turns out that the original ideas don’t work. Sometimes the scale plan makes the concepts more workable. Scaling the concept mathematically makes it more predictive, more descriptive, and more useful.  

Refining Measures

Operational measures and scales are strong tools, and often work well to summarize personality, results, potential, or competency. The numerical values of the scales can be compared and linked to other values such as compensation. They can also be tested.

Just as an architect may find that her concept won’t work in practice, we may find that a talent measure does not work as we conceptualized it. For example, if we compare measures of performance and personality to investigate our mental model that extroverts are better at sales, we may find that personality does not relate to performance as we expected.

Statistics can help us refine and strengthen our talent measures. If we find that an employee engagement survey is only weakly related to customer satisfaction, we can add survey questions to strengthen the relationship. Adding questions about the organizational climate, such as “my co-workers really care about the customer’s experience,” is likely to increase the correlation. Examining statistical correlations can help us develop a measure that’s quite important to the business.  
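As a rough illustration of the kind of check this involves, here is a minimal Python sketch; the survey items, unit-level scores, and customer-satisfaction values are all hypothetical, invented only to show the mechanics:

```python
# Minimal sketch: correlating hypothetical engagement-survey results with
# customer-satisfaction scores. All item names and numbers are invented.
from statistics import correlation  # available in Python 3.10+

# One value per work unit: average item score (1-5) and unit-level CSAT.
survey_items = {
    "overall_engagement":             [3.8, 4.1, 3.2, 4.5, 3.9, 2.8],
    "coworkers_care_about_customers": [4.0, 4.3, 3.0, 4.6, 4.1, 2.5],
    "i_have_the_tools_i_need":        [3.5, 3.9, 3.6, 4.0, 3.2, 3.4],
}
customer_satisfaction = [4.2, 4.4, 3.1, 4.7, 4.0, 2.9]

# How strongly does each survey item track customer satisfaction?
for item, scores in survey_items.items():
    r = correlation(scores, customer_satisfaction)
    print(f"{item}: r = {r:.2f}")
```

In practice an analysis like this would run across many units and survey cycles; the items that track the business outcome closely are the ones worth keeping or expanding, which is how the engagement measure gets strengthened.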

Personality assessments are among the most refined talent measures. Many personality instruments have been revised over the years—the state of the art, in some cases, is astounding. The Hogan Personality Inventory (HPI), which defines personality as social reputation, has now undergone 30 years of refinements. It was developed by correlating respondents’ answers to survey questions with friends’ and co-workers’ descriptions of the respondents (social reputation). Today, the 206 questions of the survey—questions such as “I would like to be a race-car driver”—allow surprisingly accurate assessment and precise differentiation between different aspects of personality. 

Many assessment participants feel that the HPI can read their minds, but the “wow” factor is simply produced by probabilistic relationships between survey questions and reputation. In a sense, it’s the magic of statistics—“any sufficiently advanced technology is indistinguishable from magic” (Arthur C. Clarke). However, participants’ feelings that the HPI personality instrument can see their true selves can easily lead to reification.

Of course, not all personality instruments are as well refined as the HPI, and it’s important to remember that even the HPI is probabilistic. These instruments are accurate nearly all the time, but not always. Imperfections are easy to overlook because the instruments are “right” so often and on average. Overlooking the imperfections, however, has dangers.

How Reification Happens

There is something about putting numbers on a model that makes the model seem real and unquestionable. But this presents a problem. When we can’t ask questions about our models, we can’t learn.  

For some reason, it’s easy to accept mathematical talent measurement results as the truth, and not look beyond the numbers. I have some theories about why this reification happens.
  • Some people aren’t as comfortable with numbers as they are with words. If it’s a lot of work for an individual to understand a chart or a report full of numbers, it’s likely that the person will only review the measures superficially. It’s also less likely that the person will ask questions. 
  •  The basis of talent measures isn’t always made clear. When providing HPI feedback, we don’t explain conceptually or computationally how the scales were developed or scored. In fact, the calculation methods are a secret known only to the Hogans. In one sense, it’s not important to know these details. But in another sense, not understanding how a measure works—or having no access to the mechanism behind the measures—could lead to reification. 
  •  When talent measures are used rigidly for decision making, for example in compensation or selection, they are in a sense real. Certainly they control real outcomes.  
 

Reification and the History of Intelligence Testing

The danger of measure reification is obvious in the long and often sad history of intelligence testing. In 1905, Alfred Binet proposed  a method to measure intelligence in children. A careful scientist, he noted the method’s limitations: 

This scale properly speaking does not permit the measure of … intelligence, because intellectual qualities … cannot be measured as linear surfaces are measured.

Binet intended to develop a tool to classify children needing attention. He was careful not to reify the underlying capability.

Since then, intelligence has been reified and recast as a real and invariable human attribute—an attribute that describes a limit of human potential. The application of intelligence testing has limited access to immigration, schools, and jobs.  

When we reify a measure, we extend the measure beyond its original design. In this case, research indicates that intelligence does change. In addition, capabilities such as emotional intelligence are more important for some jobs. Making decisions based solely on employee intelligence is a mistake.  Intelligence quotient is not a real thing. It is a measure developed for a specific and narrow task: identifying children who need attention to succeed academically. Use in industry, and for immigration, came much later.

While many would argue with me, I assert that intelligence must be combined with other measures to be useful in business.

Reification and the Danger of Self-Fulfilling Prophecies

Reifying measures can lead to self-fulfilling prophecies. For example, designating an employee as “high potential” one year often means they will continue to be seen as high potential in future years, regardless of changes in performance. This is similar to calling a student “gifted.”
When a manager gives a low performance rating to an employee, there can be similar long-term consequences. People often conform to expectations. This is called the Pygmalion effect, which is well studied in schools. The Pygmalion effect also happens in organizations.

Reification and the Danger of Limited Thinking

Unquestioning acceptance of any representative model is a problem because it limits our ability to think broadly about a situation. We tend to think that a talent measure describes talent completely. If we do this, we fall into the trap of mistaking the map for the territory.

Early sea charts were representations of mariners’ mental models. They were crude but adequate for coastal navigation at the time. Today they seem wildly imaginative and mostly decorative. But partly as a result of the maps’ reification of these mental models, sailors stayed close to shore to avoid the monsters, whirlpools, and other dangers that became very real to them—including the danger of sailing over the edge of the world.  

Sometimes, we stay close to what is familiar. If we’re familiar with the idea of intelligence, we refer to someone as smart. If we’re familiar with descriptions of personality, we may refer to a person as an introvert.  But there is much more to a person than our mental models, and our measures, would suggest.

Recognizing the Limits of Measures Is the Key to Using Them Well

Ultimately, talent measures are just representations of mental models. The underlying talent is always much more complicated. Any representation, or model, is necessarily a simplification.
I am concerned that we take measures as better, and more, than they actually are. If we don’t consider the limits of the tools, the limits of the tools become our limits.

I don’t think we should look for more perfect measures of talent. I am certain they do not exist. For one thing, the available technology reflects our current understanding of talent. 

So, throwing out our current talent measures is probably not helpful. Instead, we can do better by increasing our understanding of the current measures. This is an evolutionary process, and probably a process that must be done in collaboration with others. How else can we examine our assumptions, and question both our measures and the underlying mental models on which they’re based? (I’ll be talking extensively about building shared meaning of measures in future blog posts.)

If we’re to use our measures intelligently, we won’t expect them to be more perfect than they are—even if they’re mathematically correct 95% of the time. We’ll remember that measures are never true representations of reality: A measure can never contain the whole truth, the total complexity of a person, or an entire situation. And we won’t allow ourselves to be daunted by the “truth” of numerical measures, which leads us to accept them superficially. Instead, we can use measures as a starting point for thoughtful exploration and deeper communication. 

It’s important to remember that all measures represent someone’s theory. The theory may not be appropriate in the current context, and it may not be measured well.

Thursday, June 7, 2012

Myths of Measurement: Is Performance a Real Thing?


When I was a child, a teacher was considered high performing if he had a quiet and orderly classroom. This is no longer true. As pedagogical theory has evolved, student engagement in learning has become more important than order and quiet. Now, if children are noisy but engaged, a teacher is performing well. Which model of teacher performance is correct?  

In the last 40 years we’ve seen two different models of teacher performance. Our understanding of employee performance evolves; in fact, our understanding of talent, which is largely conceptual, is constantly evolving and varies from person to person. In teaching, discipline and order were once what mattered; now engagement is paramount. This is not an obscure pedagogical point. It is the key to successfully using talent measures.

Mental Models of Talent 

The most important features of talent are invisible—features such as performance, potential, personality, and intelligence. As a result, we have mental models of talent. Our ideas of talent and performance change over time, as we saw in the example above. In a practical sense, everyone agrees that a desk is a desk, or a rock is a rock, but performance is not always performance.

Consider sales representative performance in the life insurance industry. The measure of sales productivity is commission, and commission is a percentage of the premium paid each year for a policy.  A veteran salesperson in this industry may not be selling new policies, but may be paid handsomely for policies she sold years ago, since the customer is still paying the premium.  This is a unique model of performance, since it includes results of behaviors from years past.  It is quite different from a typical model of sales performance, which is the amount of product sold in a month or quarter. 
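To make the contrast concrete, here is a small Python sketch comparing the two models of sales performance; the commission rates and premium figures are hypothetical, chosen only to show how the two models can rank the same people differently:

```python
# Hypothetical illustration of two models of sales "performance".
# All rates and premium amounts are invented for the example.
NEW_POLICY_RATE = 0.50   # commission rate on first-year premium
RENEWAL_RATE = 0.05      # commission rate on each renewal year's premium

def commission_performance(new_premium: float, renewing_premium: float) -> float:
    """Insurance model: rewards policies sold years ago that still renew."""
    return new_premium * NEW_POLICY_RATE + renewing_premium * RENEWAL_RATE

def new_sales_performance(new_premium: float, renewing_premium: float) -> float:
    """Typical model: only product sold this period counts (renewals ignored)."""
    return new_premium

# A veteran rep with a large book of renewing business but few new sales,
# versus a newer rep selling aggressively with little renewing business.
reps = {
    "veteran": {"new_premium": 5_000, "renewing_premium": 400_000},
    "rookie":  {"new_premium": 40_000, "renewing_premium": 10_000},
}

for name, rep in reps.items():
    print(name,
          "commission:", commission_performance(**rep),
          "new sales:", new_sales_performance(**rep))
```

Under the commission model the veteran outperforms the rookie; under the new-sales model the ranking reverses. Which one counts as “performance” depends entirely on the mental model behind the measure.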

Clearly, performance is not simply performance. Apparently obvious measures of performance, such as sales, involve assumptions. Even the idea of sales performance is a model. In a business context, the mental model of talent or performance is built by management’s expectations.

In a general sense, models of talent are networks of theories and assumptions. A model could be a theory about how people tend to react to the environment—this is personality. It could be a theory about the organization’s business model and how employees contribute to the model—this is performance. It could also be a theory about how people should relate to each other and themselves to support the organization—this is a competency model. These theories and models are all helpful tools for understanding and describing human capabilities and outcomes.

The Strengths of Mental Models

Models are helpful. Architects, boat builders and other craftspeople have used them for years. In a management context, we need mental models of talent to understand employees, to know how employees contribute to the larger operation, and to be able to predict how employees will react in a range of situations.  Without these powerful tools, we could not effectively manage our talent. 

Competency models work so well because they make these mental models explicit and transparent, and because they allow us to articulate the behaviors that are related to performance. Explicit competency models have radically changed how talent is managed. In the past, a manager might have said only that an employee needed to be a better team player. A competency model gives the manager an elaborate description of what it means to be a team player, and describes the behaviors in terms that can be communicated, measured, and emulated. 

When these behaviors are measured, competency models support better insights, more motivation, and obvious decisions. For example, the Danielson framework is a model of teacher performance that allows organizations to select, train, coach, and improve educator performance using a single set of expectations.

Competency models measure observable behaviors; personality assessments, which describe innate natural tendencies, offer another set of powerful tools. A coach who has a strong understanding of a personality system (for example, the MBTI or the HDI) can assess someone to gain insights and then coach using the framework.  A manager who has a strong mental model of personality is better able to see consistencies and predict how others will react to situations.

The Problems with Mental Models

Although models of personality, performance, and competency are powerful places to start, we often forget that any model is a simplification of a complex reality. A personality measure focuses only on a few aspects of an individual’s nature; a performance measure considers only one contribution to a business; and a competency assessment considers only a few human capabilities.

 There is also a danger when we’re not aware that we’re using a model. Mental models, as defined in organizational system dynamics, are deeply held images of thinking and acting. Mental models are so basic to understanding the world that people are hardly conscious of them, and this leads to problems. 

For example, if I’m talking with an employee about her performance, we may be talking about two different things. My employee may be focused on the quality of her writing and communication, while I’m focused on the number of billable hours. We’re talking about doing a great job—and we’re completely miscommunicating. Both of our models are necessary simplifications. One is necessary from a business standpoint, and the other from the standpoint of doing the work.

As this example shows, when our mental models are implicit—not apparent to either person—they limit our perceptions and prevent us from deliberately acting and communicating.

Implicit mental models of talent lead to miscommunication, narrowed focus, and misalignment.

Miscommunication. Good communication is based on shared meaning. Words like intelligence, personality, or performance mean different things to different organizational stakeholders. Unacknowledged mental models of these critical talent constructs lead to miscommunication.  We may be talking, but if our meanings are different, we are not communicating.

Narrowed focus. Mental models provide a framework for what we should pay attention to. The problem is that in looking for one aspect of personality, performance, or competency, we may miss another, equally important factor. 

For example, a personality model directs our attention to behaviors that suggest a personal tendency to react in a predictable way. We may miss other behaviors that would tell us something else about how the individual can contribute. I have colleagues who have such a strong understanding of the DISC personality system that they immediately notice that someone is high in Dominance, Influence, Steadiness, or Compliance. They are so good at classifying others using this system that they miss other aspects of the person’s personality.

This is a shame, because there are many ways to look at personality. The simplest model has four factors, but more complicated models exist, such as the 16PF or the Caliper Profile. A more complicated model allows for more refined insights.

Most aspects of talent are multidimensional. A person may be high in Dominance, but nearly as high in another dimension. Further, different aspects of personality may appear in different situations. My colleagues’ mental models may be limiting their expectations of others to something much more simplistic than people actually are.

Misalignment. To be useful, talent models must align with organizational needs.  If we are unaware of our models, this may not be the case.

Think back to how life insurance sales representatives are paid. If you’re not familiar with the insurance industry, it seems odd. However, it is perfectly sensible to an insurance insider. One of the strengths of the industry is the stability associated with customers paying premiums year after year for their entire lives, and only collecting a payout when they die. Because of this, long-term relationships and accountability are important. In this sense, the model of sales representative pay is aligned with corporate strategy. 

As organizations change, our mental models of talent must also change. Unlike a desk or a rock, talent can change and adapt. New insights and technologies can suggest better, different, or more detailed models of talent. Often, this is an opportunity for growth and development.

However, if we are unaware of our mental models, they are difficult to change. 

For example, salespeople in an organization moving to a team-based sales environment must be able to examine their assumptions, and must be aware of their own mental models, because the organization is changing the model. Performance is no longer individual. The change will affect the team, its management, and the support of the team. Team members will have to change how they view performance, the management will have to think differently, and the measurement and pay systems will have to change.

Thinking Differently

Teachers today face more competition for children’s minds. It may be that engagement is more important in an era of video games and 24/7 entertainment, so the new model of teacher performance is appropriate for today. However, the education system has a difficult task in getting veteran educators to think differently. For too long, assumptions about effective teaching were based on outdated thinking. Worse, we were not aware of our assumptions about teaching.

If we are unaware of our model of effective teaching, we will have a hard time discussing change, let alone changing. If we are not aware of the model we are using, we will not manage talent optimally.  

The bad news is that when we measure talent, we are always measuring a model. The process of developing measures to represent a model tends to make the model seem extremely concrete. The model is transformed into something more real than it actually is.  

The good news is that we have become much more sophisticated in our thinking about talent models. The competency revolution made behavioral models apparent. Now we simply have to remember that underlying every measure is a mental model of talent.  Remembering this will help us question our assumptions, articulate our mental models, and test alignment with organizational direction.

In the next blog post we will consider reification of talent measures.

Wednesday, April 4, 2012

Transforming Work: Strategic HR and Competency Models

In recent weeks, I’ve blogged about competency models: why they matter, how they were invented, and how they have evolved.

In this final post of this series, I want to discuss one of the most useful types of competency models: enterprise competency models. These models have made HR much more capable of contributing to strategy, and they support a radical change in how we think of work and jobs. I’ll also explain why strategic HR models will become even more important in the coming years.

Enterprise Competency Models: Replacing Research with Theories

One of the most significant developments in the competency revolution was the emergence of enterprise competency models. These models outline behavioral or leadership styles that everyone in a company (or all company managers) should demonstrate. 

Enterprise competency models demonstrate a theory of the behaviors needed to implement the organization’s strategy. In other words, the competency model is a summary and vision of the company’s strategy, articulated in terms of how employees should relate and work.

The model ensures that employee and leader styles reflect organizational needs. For example, key competencies could be:
•    Collaboration, if integrated products are central to an organization’s success
•    Customer focus, if the organization is focused on sales
•    Innovation, in a fast-paced technology company.

As with other competency modeling methods, competencies are described by observable behavioral indicators, such as:
Asks questions to determine customer’s point of view before making decisions.
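As a sketch of what such a model might look like when made explicit in a system, here is a minimal Python representation; the competency names, rationales, and indicators are hypothetical examples, not a published model:

```python
# Sketch of an enterprise competency model as data. The competencies and
# behavioral indicators below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Competency:
    name: str
    rationale: str                    # the link back to strategy
    behavioral_indicators: list[str]  # observable, rateable behaviors

enterprise_model = [
    Competency(
        name="Customer focus",
        rationale="The organization competes on sales and service.",
        behavioral_indicators=[
            "Asks questions to determine customer's point of view before making decisions.",
            "Follows up after delivery to confirm the customer's problem was solved.",
        ],
    ),
    Competency(
        name="Collaboration",
        rationale="Integrated products require work across divisions.",
        behavioral_indicators=[
            "Shares information with other divisions without being asked.",
        ],
    ),
]
```

Because each entry is an observable behavior tied to a strategic rationale, the same definitions can later be rated, trained against, or rewarded.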

Enterprise competency models are an evolutionary step for managers. Management has always defined organizational structure and jobs. Competency models extend this responsibility into another aspect of the organization. This top-down approach is, however, quite different from the research-based models that “discovered” the underlying success factors for a job. It is also quite different from a detailed task or job analysis, which is considered the starting point for most legally defensible HR processes.

Of course, there are weaknesses. 

An important concern is that managers’ theories are difficult to test. If an organization is not successful, is the competency model to blame? How long does it take to implement strategy?  How can we improve a competency model if it can’t be validated?

These models can also over-generalize the importance of a competency. Consider decisiveness, a competency that appears in many models. It’s true that managers who postpone or avoid decisions are ineffective. It is also true, however, that managers who make decisions too quickly squelch innovation and creativity. As a result, overvaluing decisiveness often leads to sub-optimal decisions. 

Because there is no way to test the competency model, “pet theories” tend to appear in the models. The risk is that theory-based models can be wrong, and they are rarely tested.

To Transform Work, Tell Employees How to Behave

When HR/Talent departments were largely concerned with jobs and tasks, the function was bureaucratic and pigeonholed. The focus was on providing a stock of qualified people to complete required tasks.

Competency models allow HR and talent departments to manage employees’ general interpersonal and intrapersonal style by describing, rating, and even incenting specific behaviors.

This is revolutionary. Consider a typical competency: develops networks across divisions. This is really a tool for culture change; the behaviors associated with networking describe an expectation that employees exchange ideas and information with other divisions. Ultimately, adopting this competency would reduce the silo-ism that is a problem in many organizations.

Today, organizational leaders have a powerful way to describe how they expect people to relate to one another, and even how they relate to themselves. By articulating competency models, and linking HR/Talent systems to the described behaviors, organizations have a new set of tools for shaping how employees manage themselves, how they relate to each other, and how they relate to customers. 

A new era has begun. By describing, rating, and incenting behavioral performance, HR has the potential to evolve into a real business partner.

The End of Task Management

If a company manages its behavioral expectations using a competency model, and measures results using performance management (e.g., a Results-Oriented Work Environment, or ROWE), the change is considerable. Jobs, which are essentially task lists, become less important.

As jobs become less important, HR will be able to focus on business results and become a business partner. The idea of supplying human resources becomes less important, while the idea of talent management becomes critical.

Integrating HR Systems with Competencies

There are many ways to direct and encourage behavior. Too often, the methods a company uses to direct and encourage behaviors aren’t integrated. As a result, the organization ends up encouraging different, or even conflicting, behaviors.

An enterprise competency model provides a general theory of employee success that can be used for a variety of systems:
•    Staffing/selection
•    Succession planning
•    Compensation
•    Development/training
•    Performance management.

An enterprise competency model reflects strategy and links all the major HR systems.
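As a rough sketch of that linkage, the snippet below (with hypothetical competency names and indicators) shows how one model can feed both selection and performance management, so every system rates against the same expectations:

```python
# Sketch: one hypothetical competency model feeding several HR systems.
# Competency names and behavioral indicators are invented for illustration.
competency_model = {
    "Customer focus": [
        "Asks questions to determine customer's point of view before making decisions.",
    ],
    "Collaboration": [
        "Shares information with other divisions without being asked.",
    ],
}

def interview_guide(model):
    """Selection: turn each behavioral indicator into an interview prompt."""
    return [f"Describe a time when you demonstrated this behavior: {indicator}"
            for indicators in model.values()
            for indicator in indicators]

def review_form(model):
    """Performance management: the same indicators become 1-5 rating items."""
    return {indicator: None  # the manager fills in a 1-5 rating
            for indicators in model.values()
            for indicator in indicators}

print(interview_guide(competency_model)[0])
print(list(review_form(competency_model))[0])
```

The point is not the code itself but the single source of truth: staffing, development, and performance management all draw on the same behavioral definitions, so they pull employees in the same direction.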



The Future

Today, a typical large organization has seven separate databases related to human resources. This makes it difficult to create a single integrated environment for analytics, reporting, and decision-making. In the coming years, many organizations plan to integrate these databases into one Human Resources Information System (HRIS).

As companies integrate various HR applications, competencies will be at the center of the solution. While the HRIS platform is important, it is the content, and the decisions made with the information, that will be the critical components. In other words, the competency model that provides the architecture of the system will be the real key to business success.  

Strategic HR/Talent professionals should be prepared to align competencies with the strategy of the organization. If you want to have a unique and differentiated strategy, I would encourage you to consider a unique and differentiated enterprise competency model. If your organization is following the same strategies as others in your industry, it’s appropriate to buy an off-the-shelf competency model. Most organizations, however, will want a competency model that represents their unique strategy.

In Summary: Using Competency Models

The so-called “competency revolution” has come a long way in 40 years. McClelland and his protégés, who initially proposed the competency concept, are now almost historical figures. Their initial methods have been adapted to keep pace with changes in work and technology. Research-based models that uncovered unconscious competencies for a single job have been replaced with theories of personal success that span an entire enterprise.

In the process of this adaptation, many fine methods of competency modeling have been developed. I do find it interesting that few HR professionals consider the many approaches to competency modeling and the strengths and weaknesses of each.

This six-week review of competency models has emphasized that different competency development methods yield very different information, and that each is appropriate to a specific task. I expect that integrated HRIS platforms will force us to be more specific about differences in competency models. While I expect that enterprise competency models will become paramount, other methods will remain very useful.

Wednesday, March 21, 2012

Get the Most from Your Competency Models: Understand All Competency Models Are Not the Same


Since their inception 40 years ago, competency models have progressed through distinct stages, in sync with changes in organizations. As a result, there are many types of competency models, each appropriate for a specific task. But we treat them as if they are the same. 

To get the most value from this valuable tool, we need to understand and recognize the differences in the meaning of “competency.” In the last blog post I described the roots of competencies.  In this post I will describe how they have evolved, and the best use of the various competency modeling methods.

Proliferation of Competency Modeling Methods: Moving from Clarity To Confusion

Corporations have wanted competency models since about 1990. Since they can genuinely drive business results and shape culture, organizations were willing to pay for competency models. As a result, consultants got into the competency modeling business in a big way.  

Initially, competency models were only developed for high-leverage jobs such as sales executives and leaders. A $250,000 competency research project carried out over two months was a good investment because individuals in these jobs drive organizational results.  Further, interpersonal (e.g., teamwork) and intrapersonal (e.g., multi-tasking) savvy are very important in these sorts of jobs and the behavioral event interview (BEI) method was exceptionally good at capturing these capabilities. 

Competency models soon began to trickle down to other job-families. Different methods of developing competency models also proliferated. Consultants argued that their methods were unique and better. 

Many consultants used executive interviews to understand competencies.  Often, the interviews started with organizational strategy and then inquired about the human capability needed to achieve the strategy. These capabilities became the organization’s competency model.

Some used focus groups to quickly capture ideas about competencies from leaders or incumbents. Often the focus groups learned about competency models and then generated examples of behaviors that achieved exceptional results.  Synthesis of these behaviors led to a competency model.

Other methods were clearly cheaper. An organization could purchase a standard dictionary, or a card deck, of possible competencies. By thinking about the target job, it was just a matter of picking the right cards. Using this tool, a professional could build a model in an hour.

Competency models were also published and compared. Some noticed that leadership models from different organizations were quite similar. Would you go to the trouble of building a model if 80% of  competencies are the same in all organizations? 

Standardized competency models were developed. These models were built by integrating many competency models (research) or by using someone’s ideas of what it takes to be a successful employee or leader (theory). These standard models describe good management and leadership, but note how far we have moved from research that unlocked the unconscious secrets of high performance! Many of the competency models were simply theories described in behavioral terms.

Ultimately, confusion reigned.  Everyone was talking about competency models, but in fact they were talking about different things.  There was (and still is) no agreement on the meaning of “competency.” 

Some large organizations (e.g., AT&T) had hundreds of unrelated competency models built with different methods and with different underlying assumptions! Many organizations became overwhelmed.  Some went so far as to ban competency models, at least temporarily.

Where We Are Now:  Many Methods to Address Multiple Challenges

Was this a fad or something else? Three things have happened:
  • We learned a lot about the inter- and intra-personal capabilities required for key positions; many leadership competencies are better understood. For example, nearly everyone in business now talks about “emotional intelligence.” Daniel Goleman, who popularized the term, trained under McClelland at Harvard, and notes that this inter- and intra-personal intelligence was influenced by competency research.
  • The idea of a competency changed and became vague. Whereas a competency was once defined as “a pattern of thought or behavior that differentiated average from superior performance,” it now more generally means a behavioral performance expectation. Beyond that, there is little agreement about the definition of competency.
  • We went too far. We started to think that competencies are the only human capabilities that matter. This is clearly a mistake. Many professional jobs do not rely heavily on inter- and intra-personal capabilities. If you are hiring, developing, promoting, or rewarding an engineer, use skills or tasks! 
Competencies are clearly not a fad. After 40 years, I am confident that competencies are a key and useful tool for Talent Management. 

With the benefit of hindsight, we should have a more sophisticated understanding of competency models and what they can do for your organization’s performance. We should recognize that competency models built using different methods have different sweet spots. As a starting point, here is a summary of the various modeling methods and the situations in which each is best used. I welcome your thoughts and additions.

Charley Morrow

Monday, January 30, 2012

The Nerd Competency Model: What Can We Learn?



Spend time surfing the web and you will find this Venn diagram describing the overlapping capabilities of the “not-cool” kids.   

It is fun, but recently, I had a serious conversation about it.


A friend of mine, who has a doctorate in theoretical physics from Harvard, is burning out from teaching. Looking at the model, she said, “I’m not a nerd. I’m not obsessed enough—I don’t want to spend 90 hours a week perfecting technology.” However, she is smart; she enjoys doing complex math. She will probably change to a career that requires a lot of smarts but less obsession and fewer people skills. 


I was surprised this internet joke provided insights! But, upon reflection, we can learn a few things from this model: 

  1. Human performance is based on a mix of capabilities. Intelligence is never enough to be successful! Tech innovators like Mark Zuckerberg are smart, obsessed, and lack social grace. Change one of these capabilities and you don’t get the full package.
  2. Sometimes lower or even negative capabilities are important for success. Consider McClelland and Burnham’s seminal finding that the most successful leaders are concerned with power: relationships and influence. A corporate leader will only be successful if the concern for power is tempered with inhibition. Similarly, they found that leaders who are overly concerned with relationships make poor leaders.
  3. The competencies underlying performance are not always obvious. A nuanced understanding of competencies helps. If you want to develop leadership in general, you can develop a general competency model. If you want to develop specific types of leaders, you will be more successful if you fully research the model.