Measuring the effects of knowledge management


Why measure?

Measurement is undoubtedly the least developed aspect of knowledge management, which is not surprising given the difficulties in defining knowledge, let alone measuring it. In fact, some practitioners feel that measurement is premature at this stage, and that trying to measure knowledge before you fully understand how it is created, shared and used is likely to lead you to focus on the wrong things. Elaborate measurement systems, they say, cannot currently be justified because we simply do not yet know enough about the dynamics and impact of knowledge.

 

That being said, in practice, few organisations have the luxury of being allocated resources to implement something without being required to demonstrate its value. Without measurable success, enthusiasm and support for knowledge management are unlikely to continue. And without measurable success, you are unlikely to be able to tell what works and what doesn't, and therefore to make an informed judgement about what to continue doing and what to adjust.

 

What to measure? Common measurement approaches

There are a number of approaches that are increasingly being used to measure the value of, and progress in, knowledge and knowledge management in organisations. Some of the more common approaches are outlined here for the purposes of providing a general overview.

 

Measuring the impact of knowledge management on the organisation’s performance

Given that the whole point of knowledge management is to improve the performance of your organisation and to help it achieve its objectives, the best and most logical approach is to tie measurement of knowledge management into your organisation's overall performance measurement systems. This can be done either at an organisational level, or for individual projects and processes.

 

However, one limitation of this approach is that if knowledge management practices are made an integral part of work, you cannot be sure of the relative contribution of those knowledge management practices to the success of a project or process, versus other factors. In view of this, O’Dell and Grayson, in Chapter 12 of their book If only we knew what we knew: the transfer of internal knowledge and best practice (1998) recommend a two-pronged approach that seeks to measure both outcomes and activities.

 

Measuring outcomes focuses on the extent to which a project or a process achieves its stated objectives. The success of the project or process serves as a proxy measure for the success of the knowledge management practices embedded in it. In other words, knowledge management is seen as an integral tool for improving a project or process, rather than as a separate thing. For example, outcomes might be measured in terms of the reduced cost of a process, improved efficiency, the reduction in time taken to do it, the improved quality of delivery, etc.

 

Measuring activities then shifts the focus onto the specific knowledge management practices that were applied in the project or process. What were the specific knowledge management activities behind this practice and what was their effect? In measuring activities, you are looking specifically at things like how often users are accessing, contributing to, or using the knowledge resources and practices you have set up. Some of these measures will be quantitative (‘hard’) measures such as the number and frequency of hits or submissions to an intranet site per employee. However these measures only give part of the picture – they do not tell you why people are doing what they are doing. Hence to complete the picture, you will also need qualitative (‘soft’) measures by asking people about the attitudes and behaviours behind their activities.
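As an illustration, the 'hard' activity measures described above could be computed from raw usage logs. The following sketch assumes a hypothetical log format of (user, action) records; your intranet or document system will have its own log structure.

```python
def activity_measures(access_log, num_employees):
    """Compute simple 'hard' activity measures from a usage log.

    access_log: list of (user_id, action) tuples, where action is
    'view' or 'submit' (a hypothetical log format).
    """
    views = sum(1 for _, action in access_log if action == "view")
    submissions = sum(1 for _, action in access_log if action == "submit")
    active_users = len({user for user, _ in access_log})
    return {
        "views_per_employee": views / num_employees,
        "submissions_per_employee": submissions / num_employees,
        "participation_rate": active_users / num_employees,
    }

log = [("ann", "view"), ("ann", "submit"), ("bob", "view"), ("cal", "view")]
print(activity_measures(log, num_employees=10))
```

Figures like these answer 'how much' and 'how often'; as noted above, they say nothing about why people behave as they do, so they should always be read alongside qualitative survey or interview data.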

 

The balanced scorecard

An increasingly popular approach to measuring an organisation’s performance, and one that is being widely adopted in knowledge management, is the balanced scorecard. The advantage of this approach in knowledge management terms is that it directly links learning to process performance, which in turn is linked with overall organisational performance. Developed by Kaplan and Norton, the balanced scorecard focuses on linking an organisation’s strategy and objectives to measures from four key perspectives: financial, customers, internal processes, and learning and growth. In contrast to traditional accounting measures, the balanced scorecard shifts the focus from purely financial measures to include three key measures of intangible success factors. These roughly equate to the three components of intellectual capital – namely human capital (learning), structural capital (processes), and customer capital. The four perspectives can be framed as follows: 

  1. Financial: How do we look to our ‘shareholders’ (or governing bodies)?
  2. Customer: How do our patients see us? Are we meeting their needs and expectations?
  3. Internal processes: What do we need to do well in order to succeed? What are the critical processes that have the greatest impact on our patients and our financial objectives?
  4. Learning and growth: How can we develop our ability to learn and grow in order to meet our objectives in the above three areas? 

In this way, knowledge management – which corresponds to the learning and growth perspective – is measured as an integral yet distinct part of overall organisational performance.
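The four-perspective structure can be sketched as a small data structure that ties each perspective to its measures and targets. The perspective names follow Kaplan and Norton; the individual measures and figures below are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    target: float
    actual: float

    @property
    def on_track(self) -> bool:
        # Assumes 'higher is better'; for measures where lower values
        # are desirable (e.g. waiting times), invert the comparison.
        return self.actual >= self.target

@dataclass
class BalancedScorecard:
    # perspective name -> list of measures
    perspectives: dict = field(default_factory=dict)

    def add(self, perspective: str, measure: Measure) -> None:
        self.perspectives.setdefault(perspective, []).append(measure)

    def summary(self) -> dict:
        """Fraction of measures on track, per perspective."""
        return {p: sum(m.on_track for m in ms) / len(ms)
                for p, ms in self.perspectives.items()}

card = BalancedScorecard()
card.add("learning_and_growth",
         Measure("staff completing KM training (%)", target=80, actual=85))
card.add("internal_processes",
         Measure("referrals processed on time (%)", target=95, actual=90))
print(card.summary())
```

The point of the structure is the linkage: a learning-and-growth measure sits in the same scorecard, and is reviewed in the same cycle, as the process and financial measures it is meant to drive.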

 

The balanced scorecard approach can be applied to individual initiatives as well as to a whole organisation.

 

Return on investment (ROI)

Most initiatives that require resources will be expected to show a return on investment – what benefits did we get to justify the costs involved – and knowledge management is usually no exception. The problem is that both the costs and the benefits of knowledge management can be notoriously difficult to pin down. While the costs associated with an investment in information technology can be relatively straightforward to identify, other costs can be less so, such as for projects that involve an amalgam of resources from across the organisation, or those inherent in challenging an organisation's culture. On the benefits side, how do you measure things like increased knowledge sharing, faster learning or better decision-making?
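The arithmetic itself is simple; the difficulty, as noted above, lies in arriving at credible numbers for each line item. A minimal sketch, with entirely hypothetical figures:

```python
def roi(benefits, costs):
    """Return on investment as a fraction: (benefits - costs) / costs."""
    return (benefits - costs) / costs

# Hypothetical figures for a KM initiative. In practice each line item
# is an estimate that must be defended, which is where the real work is.
costs = 50_000 + 30_000   # e.g. intranet platform + staff time diverted
benefits = 120_000        # e.g. estimated value of time saved
print(f"ROI: {roi(benefits, costs):.0%}")  # → 50%
```

A positive figure here is only as persuasive as the estimates behind it, which is why the more elaborate approaches mentioned below attempt to put the valuation of knowledge assets on a firmer footing.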

 

A number of approaches have been developed for showing financial returns on knowledge assets.  Such approaches tend to be rather complex, and therefore are probably more appropriate to organisations that are reasonably advanced in their knowledge management efforts, rather than just starting out.

 

The knowledge management lifecycle

Some organisations measure the progress of their knowledge management activities in terms of their maturity – how far ‘down the line’ they are in implementing knowledge management practices and ways of working. The American Productivity and Quality Center has developed a framework known as Road Map to Knowledge Management Results: Stages of Implementation. The aim is to provide organisations with a map to guide them from getting started right through to ‘institutionalising’ knowledge management – embedding it in the organisation and making it an integral part of the way an organisation works. The map has five stages:

 

  1. Get started
  2. Develop a strategy
  3. Design and launch a knowledge management initiative
  4. Expand and support
  5. Institutionalise knowledge management 

There are measures associated with each stage.

 

Employee surveys

Given the importance of people in knowledge management, employee surveys can be a useful addition to your measurement toolbox. Surveys can be used to assess aspects of organisational culture and the extent to which people's opinions, attitudes and behaviours are, or are not, changing. Obviously such surveys measure people's subjective perceptions, and these may or may not reflect reality, but in many ways that can be their very benefit, as people's perceptions will determine their behaviours with respect to knowledge management. In order to be effective, it is vital that any such surveys are carried out by people with the required expertise, whether that is through in-house capabilities or by hiring external consultants.
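One common design is to repeat the same attitude questions before and after an initiative and compare the average scores. The sketch below assumes hypothetical Likert-scale (1-5) questions; real survey design and analysis, as the paragraph above stresses, needs appropriate expertise.

```python
from statistics import mean

def attitude_shift(before, after):
    """Change in mean Likert-scale (1-5) score per question between
    two survey rounds. Question wording here is purely illustrative.
    """
    return {q: round(mean(after[q]) - mean(before[q]), 2)
            for q in before}

before = {"I know where to find expertise": [2, 3, 2, 3],
          "Sharing knowledge is rewarded":  [2, 2, 3, 2]}
after  = {"I know where to find expertise": [4, 3, 4, 4],
          "Sharing knowledge is rewarded":  [3, 2, 3, 3]}
print(attitude_shift(before, after))
```

Even a simple comparison like this can show whether perceptions are moving in the intended direction, though it cannot by itself explain why.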

 

Measuring the value of knowledge assets

As well as measuring the progress and value of knowledge management initiatives, organisations are also developing ways to measure the value of their knowledge assets. The traditional balance sheet is increasingly being regarded as an incomplete measure of an organisation’s worth, as it does not place a value on intangible assets such as knowledge or intellectual capital. As already mentioned, intellectual capital is commonly regarded as having three components: human capital (the knowledge and skills of people), structural capital (the knowledge inherent in an organisation’s processes and systems), and customer capital (customer relationships). There are a number of key models for measuring the value of intellectual capital. Among the best-known are:

 

The Skandia Navigator and its associated Value Creation Model

Developed by Swedish financial services company Skandia, this approach uses the metaphor of a house whose roof represents an organisation’s financial assets and whose foundations represent innovation and renewal. The model includes a long list of measures, which are organised into five categories, namely: financial, customer, process, renewal and development, and human.

 

Sveiby’s Intangible Assets Monitor

Developed by knowledge management pioneer Karl Erik Sveiby, the monitor categorises intangible assets into human competence, internal structure and external structure, with further subdivisions into indicators of efficiency and utilisation, stability, and growth and renewal.

 

Intellectual Capital Services' IC-Index

Originally developed in Scandinavia and Australia by Johan and Göran Roos, the index identifies four categories of intellectual capital: relationship, human, infrastructure and innovation; it then looks at the relative importance of each, and also at the impact of changes in intellectual capital.

 

Philip M’Pherson’s Inclusive Value Methodology (IVM)

A model in which users create hierarchies of intangibles and assign value ratings to them according to their priorities; a computer model then combines these into an overall value rating and tests for areas of risk.

 

How to measure?

Melissie Clemmons Rumizen outlines the following steps in developing measures, in Chapters 19-22 of her book The complete idiot’s guide to knowledge management (2002):

 

Revisit your goals

Your starting point for measuring any knowledge management initiative will be the original goals of that initiative: what is it that you set out to achieve? Developing measures will often lead you to get clearer about how you define your goals in the first place; if your goals are not concrete and clear enough, then measuring your success or progress against them will be difficult. Hence ensure that your goals define clearly what constitutes success in measurable terms.

 

Know the audience for your measures

In defining success, you will often find that different people have different ideas about what constitutes success. Managers who approve the allocation of resources will want to know about the returns on their investment. Users of the knowledge management initiative will want to know how it has benefited them and whether their participation has been worthwhile. Other beneficiaries of the initiative, such as patients, will want to know how they have gained.

 

Define the measures

Define what exactly you are going to measure, and what measurement approach or approaches you intend to take. Ensure that your measures are:

  • Valid – they actually measure what they are intended to measure, rather than something else
  • Reliable – they give consistent results
  • Actionable – they give information that can be acted upon if necessary.

Decide what data will be collected and how it will be collected

This is a process of ‘putting the meat on the bones’ – spelling out the details: what data will be collected, who will collect it, how, when, where, etc?

 

Analysing and communicating the measures

When analysing and presenting the results, be sure to refer back to your original goals and your audience. Aim to present results in a way that answers their questions in a meaningful way, rather than simply presenting facts and figures.

 

Review your combination of measures

Monitor and evaluate how your measures are working. Developing measures is a process of trial and error – don’t necessarily expect to get it right first time. Similarly, remember that as objectives and situations change over time, so will your measures need to.

 

Additional pointers emphasised by other practitioners include:

  • Measuring for the sake of measuring is a waste of time – be sure that you are measuring for a specific purpose or purposes.
  • Be sure that some kind of action or decision will be taken as a result of your measures.
  • Don’t try to measure everything; instead, focus on what is important. Trying to measure too much not only requires a great deal of work, it also tends to dilute the important issues.
  • If your organisation already has a measurement system in place, then you can use those measures.
  • If your knowledge management initiatives work, then you might assume that this will show up in your organisation’s other performance measures. Of course there is no guarantee that existing measures are good ones so you might like to look into them, but there are two major advantages to ‘piggy-backing’ on existing measures: first, they are already accepted practice in the organisation, and second, they are most likely measuring things that are important to the organisation.