Wednesday, July 30, 2008

Multi-dimensional Customer Satisfaction - part 8

I will close this series on multi-dimensional customer satisfaction with some additional discussion of the combined satisfaction scores introduced in the previous blog in this series (part 7). The second half of the blog will close the series by answering the question “So what?” That question is offered to provide perspective on this series in light of what others appear to be doing in the field of customer satisfaction measurement.

The Underlying Assumptions of the Customer Contact Model

The calculations that were performed on the customer satisfaction measurements to get to the combined score have several important underlying assumptions. These assumptions include:

1. Linearity of scales. While it is known that satisfaction scales are not linear, practitioners assume linearity to simplify the calculations and make the results readily understandable. Furthermore, the cost of calibrating a non-linear scale is generally prohibitive compared to the cost of using an approximate “linearized” survey scale. This assumption applies to every satisfaction survey that uses a linear scale. As a quick self-check for non-linearity, answer the question “does it take the same amount of energy to move a customer’s perception from 9 to 10 on a 10-point scale as it does to move from 5 to 6 on the same scale?” The same logic suggests the same question for a 5-point scale.

2. Consistency of scales. When the same scale is used for members of different groups (engineers, operations managers, accounting personnel, etc.), the assumption is that each group will score satisfaction consistently with the other groups. Another way of saying this is that all groups would score a particular action the same. For example, would engineering, purchasing and operations personnel report the same level of satisfaction with product quality if a purchased product (that involved each group) failed during the warranty period? If the scale were consistent, each group would have an identical score for that aspect of satisfaction.

3. Repeatability of scales. When the same survey with the same scales is used over some period of time, the assumption is that the perception of the scoring does not change with time. Thus, it is assumed that a score of 9 in one period is identical to a score of 9 in some other period. While this seems to be a reasonable assumption, the reality is that customers have periods when they become more critical in their scoring and periods when they are less critical. In my experience, customers become less critical after a period of consistently reliable performance (during the useful life of the product) and then become more critical when the product no longer performs reliably. Similar changes in scoring also occur in the areas of sales, customer service and administration. The conclusion is that scales are not very repeatable. Once again, the assumption is made that the scales are repeatable to simplify the analysis and keep the costs low.

An important point that applies to all three assumptions is that even though the assumptions of linearity, consistency and repeatability may not be strictly valid, the survey results still have meaning and can be used with other management information to draw inferences about the customer population and to make strategic and tactical decisions.

When the scores for multiple customers are combined, these assumptions become critical because the errors associated with each assumption are present in each of the scores. Thus, when the scores from different individuals are combined, the errors in those scores also combine.

Because these errors are present in the combined satisfaction scores, the use of the combined scores presents slightly more risk (due to combined errors) than single satisfaction measures. Each time the combined scores are used, the results should be examined in the light of these assumptions.
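The point that errors combine can be made concrete with a small sketch in Python. It assumes the per-group sampling errors are independent, so their variances add under a weighted combination; the weights and standard errors shown are hypothetical illustration values, not figures from this series.

```python
import math

def combined_standard_error(weights, standard_errors):
    """Standard error of a weighted combination of independent scores.

    Assuming independent errors, the variances add:
    Var(sum of w_i * x_i) = sum of (w_i * se_i)^2.
    """
    return math.sqrt(sum((w * se) ** 2 for w, se in zip(weights, standard_errors)))

# Hypothetical weights for three groups and the standard error of each
# group's satisfaction score.
weights = [0.5, 0.3, 0.2]
errors = [0.4, 0.5, 0.6]
print(round(combined_standard_error(weights, errors), 3))
```

Note that while weighting shrinks each individual error's contribution, the combined score still carries a non-zero error inherited from every group, which is why combined scores warrant slightly more caution than single measures.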

“So What?” or Why Do I Need Multi-dimensional Customer Satisfaction?

It was important to write this series on multi-dimensional customer satisfaction because too many companies in the business-to-business world continue to view customer satisfaction as a single point measurement. I base this conclusion on my own personal experience. As I talk with service managers from different companies, I am amazed to find they are still using a single point measurement of customer satisfaction, the same as they did 15 years ago.

The same managers who use customer satisfaction measurements and measurement techniques that are 15 years old have the latest computers, printers, modems, scanners, etc. It is clear they see the need to update their computers and peripheral equipment, yet they do not see that there have also been improvements in the way customers can be evaluated.

Loyalty and retention have evolved from simple satisfaction measurements. We know that satisfaction is a passive state and that a satisfied customer is not necessarily a loyal customer. As J.D. Power has noted in its automobile surveys, car manufacturers are scoring better than 90% in satisfaction, and yet the loyalty of car buyers is less than 50% at best. It is clear there is a significant ingredient missing in the car purchasing model and, I believe, a similar ingredient is missing in the model for purchasing and using high-technology equipment. Since loyalty is an active state and requires energy from the customer, compared with the passive state of satisfaction, the missing ingredient has to do with the customer’s involvement with the company. To ignore this fact is to miss out on one of the easiest ways to improve customer retention.

Customer satisfaction is the first step to achieving customer retention, which I have shown mathematically in previous articles, and which leads to increased market share. Once you have taken the first step by measuring customer satisfaction, not to take the next steps of measuring customer loyalty and customer retention would be equivalent to building a house but stopping after the foundation had been completed. Satisfaction measurement is the pabulum of customer information. Although you will not die from a steady diet of pabulum, you will not grow very fast, and eventually the food will become very boring. As you use the satisfaction information, you will, hopefully, start to see the need for more information and perhaps more sophisticated analysis. (You should be seeking food that is more satisfying.) My experience indicates that it usually takes a management team about one year to really grasp the fundamentals of customer satisfaction and begin to see the implications of the survey results. A solid three- to five-year plan for increasing the level of sophistication should be a part of your survey plan.

Multi-dimensional customer satisfaction is the first step for those in the business-to-business world, where the buying and use of products and services requires interaction between two or more people in the customer organization and two or more people in the company selling the products and services. Now that the methodology is available to measure customer satisfaction in multiple dimensions, companies should begin the process of understanding their customers by taking the first step in the multi-dimensional world where they operate every day, rather than hoping that somehow a single measurement from the customer will accurately reflect the multiple ways in which the company is perceived.

If you have used any multi-dimensional measurement of customer satisfaction, I would like to hear from you. I would like to write a complete blog about companies that have taken this step so that I can report the pros and cons of implementation. Write to me at Pepperdine University. My e-mail address is wbleuel@pepperdine.edu

Monday, July 21, 2008

Multi-dimensional Customer Satisfaction - part 7

I closed the last blog with the calculation for the combined satisfaction scores from three measurements taken from three different people within the customer organization. This calculation yielded a single value to characterize customer satisfaction which, I believe, represents a research-based approach to the problem of reporting customer satisfaction when the interaction between the company and the customer has multiple interfaces. Since I have noted in previous blogs in this series the inadequacy of a single measurement of customer satisfaction for the complex customer (a customer that is characterized by two or more people interfacing with the company), I will omit repetition of the discussion in this blog. This blog will focus on the interpretation and use of the combined score.

The Longitudinal Perspective

The strategic value of the combined score lies in the longitudinal perspective. In other words, the view of the combined score over time provides the quantitative assessment of the direction of customer satisfaction from measurement period to measurement period. If the strategic direction of the company is to maintain customer satisfaction, the measurement over multiple time periods can be used to validate consistency. Alternatively, if the strategic direction of the company is to improve customer satisfaction, then the trend line of the measurement over multiple time periods should indicate a positive slope. Thus, the use of the combined score provides the numerical perspective for evaluation of strategy with respect to customer satisfaction.

The results from period-to-period measurement will not follow a smooth, straight line. Most likely, it will be erratic in its movement from period-to-period. I have discussed this effect in previous blogs and hence will not cover this erratic movement in detail at this time. However, the key is to realize that the results in any one period came from multiple samples, each of which contains sampling error. Thus, the likelihood of a smooth curve which contains no sampling errors is essentially zero!

The trend line constructed through the combined score for each period will provide a measure of the “fit” of the trend line to the scores. The term most frequently used to describe this “fit” is “r-square.” This statistical term indicates the percentage of the information contained within the combined scores that is explained by the trend line. Thus, r-square may take a value between 0.0 and 1.0, or it may be expressed as a percentage. A trend line with an r-square of 0.8 (80%) would be interpreted to mean that the trend line explains 80% of the information contained in the time series of combined scores. If every combined score fell exactly on the trend line, the r-square would be 100%. On the other hand, if every point were randomly located about the line, the r-square would be 0.0%. As long as the r-square is 60% or more, the trend line is considered a reasonable fit to the combined-score data.
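As a sketch of the trend-line arithmetic described above, the following Python fits a least-squares line through a series of period scores and reports its slope and r-square; the six period scores are hypothetical illustration values.

```python
def trend_r_square(scores):
    """Fit a least-squares trend line through period-indexed scores and
    return (slope, r_square)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    # Standard least-squares slope and intercept.
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # r-square = 1 - (residual variation / total variation).
    ss_tot = sum((y - mean_y) ** 2 for y in scores)
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, scores))
    r_square = 1 - ss_res / ss_tot
    return slope, r_square

# Hypothetical combined scores over six measurement periods.
periods = [8.1, 8.4, 8.2, 8.6, 8.5, 8.9]
slope, r2 = trend_r_square(periods)
print(f"slope={slope:.3f}, r-square={r2:.2f}")
```

With these illustrative scores the positive slope confirms an improving strategy, and the r-square lands above the 60% threshold mentioned above, so the trend line would be considered a reasonable fit despite the period-to-period scatter.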

Thus, the use of trend lines created from the combined scores in each measurement period provides the quantitative assessment of the customer satisfaction strategy. Trend lines can be created for overall satisfaction with the company, product performance, service performance, etc.

Tactical Use of the Combined Scores

While trends are very important for strategy evaluation, they are of little help with the operational tactics that take place in the marketplace every day. The people on the “front lines” are frequently required to respond to customers. The combined scores, along with the customer contact model, can be used to provide these people with background knowledge of the customers.

For example, the customer contact model is an excellent tool for training customer-contact personnel on where to focus their energy. The customer contact model uses time with the customer, information quantity and information quality as the three measures for valuing the customer contact. By directing sales, service and other customer-contact personnel (such as accounts receivable) to focus on these three measures, there is a reasonable chance that customer encounters will increase in value. (I must admit that this poses an interesting dilemma for me, since I have been writing about the need for building relationships with your customers throughout the last several series on loyalty and retention. The relationship with the customer still represents the best barrier to competition and the best long-term strategy for customer satisfaction. The point here is that the three customer contact measures must be included in any customer contact, even when relationship-building is the primary objective.)

Another tactical application of the multiple-dimensional measurement is to use the customer contact scores themselves as a way of assessing the effectiveness of individual customer-contact personnel. Average customer contact time should be relatively consistent for all sales contacts. This same logic should hold true for service contacts and contacts by accounts receivable personnel. Hence, a review of the customer contact measurement information can indicate customer-contact personnel who may be spending either excessive or insufficient time with customers. The manager or supervisor can use this information to coach the appropriate personnel who differ significantly from the “norm,” especially if other parameters of performance also indicate inadequate performance levels.

The same methodology described in the previous paragraph would apply to the area of quantity of information (the second parameter of the customer-contact model) for each of the customer-contact groups. The average scores for quantity of information should not vary greatly within a given group of customer-contact personnel. If the average score for sales contacts with engineering personnel is near 3.0, then sales personnel with much higher or lower scores may indicate potential problem areas. One could argue that a much higher score for quantity of information may be positive rather than negative. While a much higher score for quantity of information may indicate a strong relationship with the customer and hence more effective sales performance, it may also indicate a lack of understanding of the scoring system or a bias either intentional or unintentional.

Once again, the methodology applied to customer contact time can also be applied to the quality of information (the third parameter of the customer-contact model). The average scores for quality of information should also not vary greatly within a given group of customer-contact personnel. If the average score for sales contacts with engineering personnel is also near 3.0, then sales personnel with much higher or lower scores may also indicate potential problem areas. The same argument holds as noted in the previous paragraph: a much higher score may be a very positive sign. Likewise, it may also indicate a lack of understanding of the measurement or a bias created by the customer-contact personnel. Quality of information, much like quantity of information, is a very subjective measure and, as such, is difficult to monitor for accuracy.
I will close this blog with a few thoughts about bias and error in the customer-contact parameters, especially the parameters of quality and quantity of information. Since these parameters are best assessed by the customer-contact personnel, it is imperative that the measures be used as management tools for improvement of customer relationships and satisfaction. Should these parameters be used as a basis for performance reviews or incentives, a bias will undoubtedly appear. The two keys to successful use of the customer-contact information are to train your customer-contact personnel adequately on its use and value and to be sure to use the information only for managing the customer process.

In my next blog I will cover in more detail the use of the combined scores. It is important that the underlying assumptions be clear so that misapplication of the calculations does not occur. When these assumptions are clearly understood, the use and value of the computations also become obvious.

Thursday, July 17, 2008

Multi-dimensional Customer Satisfaction - part 6

As this series of blogs has progressed, I have discussed the differences in perception from different organizations within a customer company. A further discussion was given which showed that these differences could be viewed individually within a company or from a more macro perspective by combining satisfaction scores from similar organizations to indicate differences by type of organization. For example, there may be a consistent difference between the satisfaction level indicated from operations and the satisfaction level indicated from purchasing among all the companies in a customer base. Several ways were discussed for combining the scores from the different organizations. After considering the use of an arithmetic average or a weighted arithmetic average, the use of a customer contact model was discussed.

I have concluded that the most rational way to combine satisfaction scores from various organizations is to use the customer contact model. The logic used to conclude that the customer contact model was the best way to combine satisfaction scores is based on the validity of the research performed to validate the model and an intuitive sense that this seemed to be the “best” way of those examined. (I continue to review the literature for better methodology but have not yet discovered one). This blog will take the values of the customer contact model and discuss various ways of combining the three components of customer contact; namely, time, information quality and information quantity.

The Scenario

In my last blog there were three different customer contacts evaluated. The sales contact was made by a salesperson and was presumably with the engineering department. The service contact was made by a field service engineer and was presumably with the operations department, and the accounts receivable contact was made by a person from the accounting department and was presumably with the purchasing or accounting department. In each case the customer contact was evaluated using the scoring system developed in the blog. The scores are shown in the following table.

Contact        Time   Information quality   Information quantity
Engineering     60             3                      3
Operations      45             4                      2
Purchasing      10             5                      4
*Note - see preceding blog for a definition of each of the contact components.

The scores in the table can either represent average values from multiple contacts with each department or from values of a single contact each from sales, service and accounting. For example, the 60 minutes of time that sales spent with Engineering department personnel shown in the table above may be the average time spent for 5 different sales calls. Likewise, the scores of 3 for quality of information and 3 for quantity of information may be the average value for each of these two components from the same 5 sales calls. Since most companies have on-going contacts with their customers, it is most likely that the scores in the table will represent average values resulting from multiple contacts with the customers. It is important to note that since these values may be averages or even individual contact values, they will change with time.

Each of the customer contact component scores for time, quality of information, and quantity of information is the result of applying the scale for each component to each customer contact, scored by the employee making the contact. Thus, following each customer contact, the employee needs to score the contact on the three components. From the database of customer contact scores, the table shown above can then be created.
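The table-building step described above can be sketched as follows. The contact log entries are hypothetical, chosen so that the per-department averages reproduce the values in the table; in practice the log would come from your customer contact database.

```python
from collections import defaultdict

# Hypothetical log of individual contact scores, as recorded by the
# employee after each contact: (department, minutes,
# information quality 1-5, information quantity 1-5).
contact_log = [
    ("Engineering", 55, 3, 3),
    ("Engineering", 65, 3, 3),
    ("Operations", 40, 4, 2),
    ("Operations", 50, 4, 2),
    ("Purchasing", 10, 5, 4),
]

# Accumulate running totals per department: time, quality, quantity, count.
totals = defaultdict(lambda: [0, 0, 0, 0])
for dept, minutes, quality, quantity in contact_log:
    t = totals[dept]
    t[0] += minutes
    t[1] += quality
    t[2] += quantity
    t[3] += 1

# Average each component over the department's contacts.
averages = {dept: (t[0] / t[3], t[1] / t[3], t[2] / t[3])
            for dept, t in totals.items()}
print(averages)
```

Because the averages are recomputed from an ongoing log, the table's values will naturally change over time as new contacts are scored, which is exactly the behavior noted above.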

Step 1 - Combining the Three Components

The first step to combining the scores for the three departments (Engineering, Operations, and Purchasing) into a single score is to combine the component scores for each department. There are many ways to combine the three component scores of time, quality of information and quantity of information. The key is to develop a method which is both easy to use and which will dramatically differentiate high levels of satisfaction from lower levels of satisfaction. In other words, the method should highlight differences in satisfaction levels rather than diminish them. While there may be a “best” method, choose the method that works best for you.

I suggest using the method of multiplication. This method simply multiplies the three scores together to get the combined score. The score for sales in their contact with Engineering would be 60 x 3 x 3 or 540. Similarly, the score for the service contact would be 45 x 4 x 2 or 360 and the score for the contact with Purchasing would be 10 x 5 x 4 or 200. Thus, the three customer contacts would yield contact scores of 540 for the sales contact, 360 for the service contact and 200 for the accounts receivable contact.

Notice that this method puts a much greater emphasis on sales and service when compared to the accounts receivable organization. This difference tracks my own personal experience, which is why I favor this method of combining scores. Another simple method is to add the scores for each organization rather than multiply them together. If this method had been used, the scores would have been 66, 51 and 19 respectively for sales, service and accounts receivable.

It is clear that both of these methods are sensitive to the values of the numbers used in the scales. In particular, time has a much broader scale than the other two components and hence can drive the results. If this is satisfactory, then you can proceed to the next step. If you have a further concern that the time measure should not have that much influence, the time scale can be adjusted to a Likert scale in the same manner as the other two components. Thus, the time component would have the same 1 to 5 scale as quality of information and quantity of information. Using this method of equal scales for each component, all three components would have equal weight.
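Step 1 can be sketched in Python as follows. Note the arithmetic: multiplying Purchasing's components gives 10 x 5 x 4 = 200. The optional time_to_likert rescaling implements the equal-weight variant described above; its bin edges are an assumption for illustration only.

```python
def multiplicative_score(minutes, quality, quantity):
    """Combine the three contact components by multiplication."""
    return minutes * quality * quantity

def additive_score(minutes, quality, quantity):
    """Combine the three contact components by addition."""
    return minutes + quality + quantity

def time_to_likert(minutes):
    """Optionally rescale contact minutes onto the same 1-5 scale as the
    other two components (these bin edges are an assumption)."""
    for likert, upper in enumerate((5, 15, 30, 60), start=1):
        if minutes <= upper:
            return likert
    return 5

# Component scores from the table: (time, information quality, information quantity).
contacts = {
    "Engineering": (60, 3, 3),   # sales contact
    "Operations": (45, 4, 2),    # service contact
    "Purchasing": (10, 5, 4),    # accounts receivable contact
}

for dept, scores in contacts.items():
    print(dept, multiplicative_score(*scores), additive_score(*scores))
```

Running this shows how strongly the raw time scale drives both methods, which is precisely the sensitivity discussed above; substituting time_to_likert(minutes) for the raw minutes gives all three components equal weight.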

Step 2 - Combining the Three Department Measures

Now that each department has a single score, the next step is to develop weights. The weights will be used to determine the combined satisfaction level from the three separate satisfaction measurements taken (Engineering, Operations, and Purchasing). The method of choice for me is a weighted average. Each satisfaction score will be weighted according to the proportion of customer contact value computed in the previous step. To demonstrate this calculation, use the customer contact scores noted in the previous paragraph for each person within the customer company; namely 540 for the sales contact in Engineering, 360 for the service contact in Operations and 200 for the accounts receivable contact in Purchasing. The combined score for the three contacts is 1,100 (540 + 360 + 200). From these numbers the weight given to the customer contact in Engineering is 540/1,100 or 0.491. Similarly, the weights for Operations and Purchasing are 0.327 and 0.182, respectively.

Step 3 - Computing the Customer Satisfaction Score

With the weights calculated in the previous step, the satisfaction calculation can be made. In an earlier blog in this series, I noted the satisfaction levels for Engineering, Operations and Purchasing as 9.2, 8.3 and 8.8, respectively. When the weights noted above are applied to these individual satisfaction scores, the combined score is calculated as
Satisfaction = (0.491 x 9.2) + (0.327 x 8.3) + (0.182 x 8.8) = 8.83
Thus, the satisfaction measured from three different perspectives within the customer organization can be combined to yield a composite score for the customer.
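Steps 2 and 3 can be sketched together as follows, using the Step 1 contact values (with Purchasing's product taken as 10 x 5 x 4 = 200) and the satisfaction scores 9.2, 8.3 and 8.8 noted earlier in the series; with these inputs the composite score works out to about 8.83.

```python
# Step 1 contact values (multiplicative method; Purchasing = 10 x 5 x 4 = 200).
contact_values = {"Engineering": 540, "Operations": 360, "Purchasing": 200}
# Satisfaction scores noted earlier in the series.
satisfaction = {"Engineering": 9.2, "Operations": 8.3, "Purchasing": 8.8}

# Step 2: each department's weight is its share of the total contact value.
total = sum(contact_values.values())
weights = {dept: value / total for dept, value in contact_values.items()}

# Step 3: the composite score is the weighted average of the satisfaction scores.
combined = sum(weights[dept] * satisfaction[dept] for dept in satisfaction)
print(round(combined, 2))
```

Because the weights are proportions of a total, they always sum to 1.0, so the composite score is guaranteed to lie between the lowest and highest of the individual satisfaction scores.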
Next blog, I will address how to interpret this composite measurement of satisfaction and how to use it.

Wednesday, July 9, 2008

Multi-dimensional Customer Satisfaction - Part 5

In the last blog, the question asked was how to combine multiple satisfaction scores from a single customer. The naive approach of taking the arithmetic average was examined and then discarded. The next approach was to use a weighted average based on company strategy and executive experience. The blog closed with an introduction to a customer contact model as a methodology for establishing weights for each of the individual satisfaction measurements.

The value of a customer contact model

In order to manage the customer relationship, understanding customer contact is of critical importance. The simplistic answer is to contact the customer frequently and make each contact of sufficient duration to have meaning. That answer, however, provides insufficient information to adequately measure and monitor the performance of those employees who contact the customer. Do I really want the accounts receivable department frequently calling the customer and spending time discussing the balance due? Do I want the service organization (on-site or customer support) constantly contacting the customer and spending time with them? How frequently should the salesperson see the account, and how involved should the meeting be?

Service encounters can be classified as high, medium and low contact encounters and each type of encounter can be appraised in terms of quality so that the customer contacts (service delivery) can be designed and managed to provide both effectiveness and efficiency. Equally as important is the knowledge and understanding of each type of customer contact so that its value can be assessed relative to other customer contacts.

The customer contact model is a way of measuring customer contacts so that values can be ascribed to each and with that knowledge each type of customer contact can be designed and managed to best meet customer needs while simultaneously maximizing the use of resources.

Some history of customer contact models

The late 1970s and early 1980s saw the first mention of customer contact models. In their book on Operations Management (1977), Chase and Aquilano noted that “the main feature that sets a service system apart from a manufacturing system is the extent to which the customer must be in direct contact.” Chase went on to introduce the phrase “customer contact” in 1978 and in 1981 provided the first operational definition: “the time in the system relative to the total time of service creation.” Thus, an on-site service call has a component of customer contact and a component of repair. According to Chase, the customer contact time would be only that part of the service call dedicated to communication with the customer. Some other notable contributions are:
1. G. Jones in the Academy of Management Journal (1987) performed an elaborate study of customer/firm interactions and described them using the dimensions of specificity, infrequency and duration to determine structural differences between service firms.
2. Victor and Blackburn in the Academy of Management Review (1987) looked at interdependence as a method of studying interpersonal relationships between two individuals. They defined interdependence as “the extent to which a unit’s outcomes are controlled directly by or are contingent upon the actions of another unit.”
3. Daft and Lengel in Research in Organizational Behavior (1984) introduced the concept of information richness as a way of evaluating the value of resource exchanged in a service encounter.
4. K. E. Weick in Administrative Science Quarterly (1976) introduced the concept of coupling in the study of business systems. Loose coupling occurs when the elements or parties affect each other suddenly, occasionally, negligibly and eventually. Tight coupling occurs when the parties affect each other continuously, constantly, significantly, and immediately.

The customer contact model of today

When each of these research topics noted above is integrated, along with other research, a customer contact model can be visualized. Kellogg and Chase in Management Science (1995) have done just that and identified three components; namely, time, information richness, and intimacy. They believe that these components provide the basis of customer contact from which a measurement system can be implemented. When a scale is created for each of the three components, the resulting measures should consistently (and hopefully reliably) reflect a value of customer contact.

One of the more satisfying aspects for the use of these three components in a customer contact model is they appear to be relatively independent of each other. In building mathematical models for business, independence of the components adds real value to the model. In essence, when independence exists, there is no correlation between the components which then allows the components to be combined mathematically with little concern for interaction between them. One of the concerns in customer satisfaction surveys is how much correlation exists between different satisfaction measures on the survey.

A large amount of interaction may preclude the possibility of detecting which of the satisfaction questions exerts the major influence on overall satisfaction.

In order to establish a scale for each of the three components of customer contact, the components themselves must be defined.

Time - This term is unambiguous because it refers to the clock time associated with the customer contact. A better phrase would be communication time because the time measured reflects only the time during which customer communication is involved. In the past, this might have been defined as administrative time (when paperwork time was recorded separately).
Information richness - This term is perceptual much like satisfaction. Like satisfaction, information richness reflects a perceived amount of information being exchanged between the customer and the company representative. It would be reasonable for sales and service personnel to have very rich information exchanges. On the other hand, dispatchers and receptionists would offer very poor information exchanges. For this reason, a Likert scale can be developed in the same manner as a satisfaction scale. The values will be ordinal and have the same properties as satisfaction values. A 5-point, fully anchored Likert scale could be defined as:
1 - No information exchanged
2 - Little information exchanged
3 - Some information exchanged
4 - Much information exchanged
5 - Complete information exchanged
Intimacy - This term is also perceptual in the same manner as information richness. For this term, the quality of the information exchanged rather than the quantity of the information exchanged is used as the scale. Thus, a fully anchored Likert scale could be defined as:
1 - Poor quality of information exchanged
2 - Low quality of information exchanged
3 - Moderate quality of information exchanged
4 - Good quality of information exchanged
5 - Excellent quality of information exchanged
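
For readers who want to operationalize these scales, here is a minimal sketch that records the anchor labels and validates a single contact evaluation; the function name score_contact and the record layout are purely illustrative, not part of the published model.

```python
# Anchor labels for the two perceptual 1-5 Likert scales defined above.
INFORMATION_QUANTITY = {
    1: "No information exchanged",
    2: "Little information exchanged",
    3: "Some information exchanged",
    4: "Much information exchanged",
    5: "Complete information exchanged",
}

INFORMATION_QUALITY = {
    1: "Poor quality of information exchanged",
    2: "Low quality of information exchanged",
    3: "Moderate quality of information exchanged",
    4: "Good quality of information exchanged",
    5: "Excellent quality of information exchanged",
}

def score_contact(minutes, quality, quantity):
    """Validate and package one contact evaluation as a record."""
    if quality not in INFORMATION_QUALITY or quantity not in INFORMATION_QUANTITY:
        raise ValueError("quality and quantity must be on the 1-5 scale")
    return {"time": minutes, "quality": quality, "quantity": quantity}

# Example: a 45-minute service contact with good quality but little
# information exchanged.
print(score_contact(45, 4, 2))
```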

Now that there is a scale for each of the three components, each type of customer contact can be evaluated for each of the three components.

Putting the Theory to Work

A simple example of how to apply this theory is to consider the case of only three types of customer contact; namely, sales, service and accounts receivable. The following discussion uses average customer contact evaluations for each type of customer contact. While averages are used to describe the application of the theory, best-case or worst-case situations can also be used.

A reasonable scenario is for an average sales call to last approximately 60 minutes with some information exchanged and moderate information quality. The three components of customer contact (time, information quantity and information quality) would have the scores of 60, 3 and 3 respectively. An average service call might last 45 minutes with little information exchange but good quality information. The service contact would have the three component scores of 45, 2 and 4 respectively. The accounts receivable contact may only last an average of 10 minutes but would have much information exchanged with excellent quality (you hope). Thus, the accounts receivable contact would have the three component scores of 10, 4 and 5 respectively.

The next question is what to do with these numbers and how they can be applied to the customer satisfaction scores obtained from the surveys of each of these customer contacts. This topic will be addressed in the next article.

Tuesday, July 1, 2008

Multi-dimensional Customer Satisfaction - part 4

In the last three blogs, customer satisfaction has been examined as a multi-dimensional measurement. A business customer typically has many points of contact when communicating with another business. Since a business customer has many contacts, customer satisfaction cannot accurately be described by a measurement from a single individual within the customer company. The previous blogs analyzed measurements from several functional departments within the customer organization to show how differences can occur and how they can be detected. While there is more to explore within this fertile area of differences (such as looking for the key satisfiers in each area), the question must also be asked whether the individual measurements can be combined to provide an overall measurement for the customer. That is the topic for this blog: examining ways to combine individual scores when multiple measurements have been taken within a single customer organization.

The “I don’t know what else to do” Approach

I believe it necessary to address the approach that the uninitiated will take so that there is a base position from which to compare other methodologies. Two key theorems in statistics seem to be the backbone of the "I don't know what else to do" approach. The first is the law of large numbers. Paraphrased, it says that as the number of samples (repetitions of an experiment) increases, the difference between the observed results and the theoretical results becomes smaller and smaller. This does not apply here: there are not a large number of samples for a given company, and the measurements are not similar since they represent different organizations within the customer company. Throw out the law of large numbers as an argument for combining satisfaction scores.

The second theorem is the central limit theorem. I usually refer to this theorem as the "Big Kahuna" of statistics since it is the base on which much of inferential statistics is built. It provides the very strong theoretical foundation for the use of sampling to predict the characteristics of a population. Again, a paraphrase of the central limit theorem says that the means of samples are themselves normally distributed no matter what the shape of the population. For example, if fifty samples are taken from a population and each sample has 30 observations, when the average of each sample is calculated and then plotted, the distribution of the sample averages will follow the bell-shaped curve, known in statistics as the normal distribution. This will be true whether the samples were taken from a bi-modal distribution (one with two peaks), a flat distribution (one with no peaks) or any other shape. Once again this very strong theorem does not apply, since it requires repeated samples drawn from the same population, while each satisfaction measurement comes from a different organization within the customer company. So throw out the central limit theorem as an argument for combining scores.
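The fifty-samples-of-30 example can be simulated directly. This is a hedged sketch using Python's standard library: the bimodal population (two peaks, near 2 and 8) is my own illustrative choice, not from the original text.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def bimodal_draw():
    # Draw from a two-peaked population: half the mass near 2, half near 8.
    # The population mean is therefore 5, even though few draws land near 5.
    return random.gauss(2, 0.5) if random.random() < 0.5 else random.gauss(8, 0.5)

# Fifty samples of 30 observations each, as in the example above.
sample_means = [
    statistics.mean(bimodal_draw() for _ in range(30)) for _ in range(50)
]

# Individual draws cluster near 2 or 8, but the sample means pile up
# tightly around the population mean of 5, as the theorem predicts.
print(statistics.mean(sample_means))
```

Plotting `sample_means` as a histogram would show the familiar bell shape, despite the two-peaked population underneath.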

In spite of the preceding discussion, there will be those who say "take the individual scores and combine them by taking the average." While this average will give a measurement, it is not at all clear what that measurement means. Although not quite analogous, it has about as much meaning as measuring body temperature behind the ear (between the ear and the scalp): a temperature will be measured, but there will be no clear understanding of how it relates to the health of the body. A similar conclusion can be drawn from combining the multiple measurements by simply taking the arithmetic average. The obvious flaw is that each measure is given equal weight. Since it is highly unlikely (virtually impossible) for each measure to have an identical impact on overall satisfaction, the arithmetic average may be dangerous: it gives the same weight to the least and most important measures, so a high satisfaction score from the least important will counter a low satisfaction score from the most important. The conclusion might then be that all is well with a specific customer even though the most important measurement indicates very low satisfaction.

Adding the “seat-of-the-pants” Correction Factor

As soon as it is obvious that the arithmetic average gives equal weight to each measurement, the simple answer is to change the weights. If there are three measurements for a specific customer, the arithmetic average weights each score by 1/3, so that the total score is 1/3 of the first score plus 1/3 of the second plus 1/3 of the third. The key is that the sum of the weights must equal one. A seat-of-the-pants solution, therefore, is to change the weights on each measure from equal values to values more representative of the business relationship. The weights may reflect the perceived influence of each person surveyed on the business relationship. For example, if the management organization has the most influence and the other two organizations (purchasing and operations) are about equal, one could weight the measurement from management at one half and the other two at one quarter each.
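The weighted combination from the example can be sketched as follows. The weights (1/2, 1/4, 1/4) come from the text; the satisfaction scores themselves are hypothetical.

```python
# Hypothetical satisfaction scores (10-point scale) for one customer company.
scores = {"management": 8, "purchasing": 6, "operations": 4}

# Weights per the example: management 1/2, the other two 1/4 each.
weights = {"management": 0.5, "purchasing": 0.25, "operations": 0.25}

# The key constraint from the text: the weights must sum to one.
assert abs(sum(weights.values()) - 1.0) < 1e-9

combined = sum(scores[g] * weights[g] for g in scores)
print(combined)  # 8*0.5 + 6*0.25 + 4*0.25 = 6.5
```

Changing the weights (while keeping their sum at one) shifts the combined score toward whichever organization management believes matters most.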

As long as the sum of the weights equals one, any combination of weights is possible. The only question is: what is the correct combination? Clearly there are an infinite number of combinations, and most likely a single "best" combination that describes the overall satisfaction of the customer company. Without additional knowledge, the weights given to each measurement might best be selected to reflect the company strategy. For example, if the company is a high technology company that sells to the operations organization of the customer, the weight placed on the measurement from operations should be greatest. If, on the other hand, the company sells a commodity product and differentiates itself through its service organization, either the purchasing or operations organization of the customer might receive the greatest weight.

In each of the examples noted in the previous paragraph, the weights used are somewhat arbitrarily chosen. While the weights might be “reasonable,” the likelihood they are the best combination of weights is remote. When the weight of one half is given to management, it could be that a weight of 4/10 or 6/10 might better reflect the customer. Since the weights are chosen based on strategy or perception of the customer, they represent management’s best assessment of the customer based on knowledge of the customer and the current business strategy. The fact is that many times the experience of the company executives is an excellent source for assessing the weights. While the weights they give may appear arbitrary, they come from years of experience in the industry and knowledge of the customers and hence, should not be taken lightly. Thus, when the “seat-of-the-pants” approach uses the experience of the company executives, it is probably the best estimate available.

Getting Weights from a Customer Contact Model

Most companies that I have worked with look at customer contact in a qualitative way; the only quantitative measure of customer contact is time with the customer. To put more accurate weights on the different measures of customer satisfaction from a customer company, the customer contact behind each measure should itself be assessed. Recent research indicates customer contact has at least three dimensions: time with the customer, the richness of the information transferred, and the intimacy of the contact. I will examine the current research on customer contact in the next blog in order to provide sufficient detail to understand how to apply the contact information to the weights for the multi-dimensional measurement of customer satisfaction.
 
