Articles from The Business Forum Journal

ORGANIZED CHANGE
By David G. Chaudron

Master of all you Survey:

Planning and Analyzing Employee Surveys

Planning the employee survey

One of the major reasons organizations don’t get the “bang for the buck” from surveys is that they don’t plan them well. Planning not only reduces stress when you analyze the survey, it also helps define assumptions and expectations about what you want to achieve. The following are good “rules of thumb” for planning your survey, followed by a few more suggestions to help you analyze that mound of paper in front of you when the returns are in.

Keep the data anonymous, but communicate the actions.

Organizations often keep survey information anonymous and confidential to increase the accuracy of the data received. This rule of thumb is usually a good idea, but it can also have drawbacks. Among these is the uncertainty of what to do with survey comments that allege illegal actions or violations of company procedures: acting on such comments may violate the confidentiality of the respondents. Additionally, confidentiality can lead to inaction by those who need change the most, as the following story illustrates.

We had conducted an employee survey for an aerospace client, who had decided that accusations concerning individual behavior would be noted but not acted upon. This was done to ensure the survey would not become a witch hunt, but would instead focus on organization-wide issues. Unfortunately, the written comments collected included accusations that a married manager had gotten a single woman pregnant and rewarded her with a promotion. These accusations, if true, were a violation of company policy and normally would be investigated. However, because of the confidentiality restriction, the information was not directly acted upon.

As you can see, investigating specific accusations can be a problem. The organization has a choice of ignoring the problem, or trying to find out more in focus groups: a randomly selected group of people can be asked whether certain allegations are true, and what additional information they might have. These sessions need to be conducted with the utmost confidentiality, by a person of good reputation who works for no one in the group. For additional information, see Edwards et al. (1996).

Don't look for what you already see.

Many organizations believe they understand their problems, and call in consultants to work out the details. This is a self-fulfilling prophecy: if an organization investigates only subject “X,” it will only get back information on subject X, and may overlook other issues of major concern, as the following example shows.

An organization changed its telephone system, and hired a consultant to determine its training needs. After talking with the users of the new equipment, he realized that ignorance was not causing the organization's telecommunication problems. Instead, it was a management and cultural problem.

Organizations can get around this problem somewhat by using a broad-spectrum survey at the beginning of their effort, and asking specific narrow questions later. Other ways around this problem are discussed in the next section.

Use multiple survey methods.

Using multiple techniques to ask about the same kind of information is a hallmark of good information gathering. Any surveying technique has its weaknesses. For example, numerical surveys (where survey items are rated on a scale of one to five) are easy to score. However, the specific wording of a question may not exactly apply, and may miss the heart of the matter. In addition, numerical surveys, especially those that ask a narrow set of questions, cover only a limited set of topics. An organization may miss discovering important issues simply because it didn't ask.

On the other hand, open-ended questionnaires have less of this problem, because the questions are less precise and so draw richer information from the survey taker. Unfortunately, the more open-ended the questionnaire, the harder it is to score. Whoever summarizes written comments injects their own opinions into the rating process, something that does not happen with numerical surveys.

Focus groups are potentially the richest source of information, in part because the focus-group leader can ask clarifying questions. However, precisely because verbal information is so rich, it is harder to summarize and classify than written surveys. In addition, employees in focus groups and individual interviews lose anonymity.

My recommendation is to use not one approach, but all of them if possible. Using one method just doesn't cover all bases. Focus groups and individual interviews are useful at the very beginning of the survey effort to find broad areas of concern. Open-ended survey questions and numerical surveys can pinpoint specific issues, and allow employees to express their concerns anonymously. Use focus groups again to get feedback on specific issues or recommendations.

Such information nowadays doesn’t have to be gathered via paper and pencil. Programs are available that let employees take the survey at their own computers, whether as a standalone program or, if they have Internet access, via the World Wide Web. Our experience has shown these methods produce both more responses and more reliable results.

Decide how to analyze data before you gather it.

One manager of a manufacturing organization developed a preliminary survey to assess the effects of the "de-layering" of his department. He sent it to other managers, wanting their feedback about the questions he had developed. Instead of feedback, he received over 50 filled-out surveys! Because of this unexpected response, he had not yet decided what graphs, charts and analysis he needed, and it took a staff assistant many long hours to change the data into a workable form.

Whenever you create a survey, decide how to analyze, chart and graph the data before employees complete it. This avoids the bias that creeps in when there is no set procedure for analysis, and reduces last-minute panic when the data comes flooding in. If, after developing the survey, you are uncertain about the analysis, give the preliminary survey to a sample of people who are similar to your employees. Use this sample to fine-tune the questions, decide how to analyze the data, and change the questions to make analysis easier.

Decide on your sampling plan and how to "break out" the data.

Many organizations survey their employees, usually once a year. Two problems arise from this practice. First, because the organization surveys only once, it can't distinguish between flukes and trends; only by surveying multiple times a year, using a sample of employees, can an organization distinguish between special, one-time events and ongoing concerns. Second, employees can behave differently just before survey time. This "Hawthorne effect," where employees temporarily change their behavior based on expectations, can mask underlying problems. A reverse Hawthorne effect can also occur, where employees worsen their behavior and exaggerate their responses on the survey.

When deciding on a sampling plan, decide how to break out (stratify) the data before distributing the survey. Common breakouts include how staff employees feel compared to line employees, how each department answered the survey, or how male respondents compared to female ones. These breakouts can help pinpoint employee groups concerned about an issue. However, survey authors and analysts often make the mistake of using multiple "t" tests to determine whether more than two group means are statistically different from one another. Because these comparisons are not independent, the overall level of significance cannot be easily determined or interpreted; see Hays (1973) for a more detailed discussion. More appropriate statistics that avoid this problem are multiple comparisons (Kirk, 1968), discriminant analysis (Klecka, 1980) or logistic regression (Hintze, 1995).
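
To make the point concrete, here is a minimal Python sketch (not the article's own procedure; the departments and 1-5 ratings are invented) of the kind of corrected comparison Kirk (1968) describes: one overall test, followed by Tukey's multiple comparisons rather than repeated t tests.

```python
# Sketch: comparing mean survey scores across more than two departments.
# Tukey's HSD controls the overall error rate in a way repeated t tests do not.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "department": ["Engineering"] * 5 + ["Finance"] * 5 + ["QA"] * 5,
    "score":      [4, 3, 4, 5, 4,      2, 3, 2, 3, 2,    3, 4, 3, 3, 4],
})

# One overall test first: are ANY department means different?
groups = [g["score"].values for _, g in df.groupby("department")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# If so, Tukey's HSD tells you WHICH pairs differ, while holding the
# familywise error rate at 5% -- unlike running three separate t tests.
print(pairwise_tukeyhsd(df["score"], df["department"], alpha=0.05))
```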

Because these breakouts are easy to do with today's computers, organizations are tempted to create graphs and charts for their own sake. But the greater the number of breakouts, the more employees must be surveyed at any given time; otherwise, samples can be so small that the survey data are unreliable. As with any sampling method, the smaller the sample of employees, the greater the uncertainty that the sample's statistics will match population parameters. One can reduce this uncertainty by increasing sample size and using more reliable and varied methods of measurement, but probably at the cost of a more time-consuming survey. For a further discussion of sample sizes and methods, see Kalton (1983). As with all sampling plans, survey analysts should evaluate their sampling in light of survey results, and change it accordingly.
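
The sample-size arithmetic can be sketched directly. The snippet below is illustrative only; it uses the standard formula for estimating a proportion, with a finite population correction since each breakout group has a known headcount.

```python
# Sketch: required sample size per breakout group for a proportion,
# using n0 = z^2 * p(1-p) / e^2 plus a finite population correction.
import math

def sample_size(population, margin_of_error=0.05, confidence_z=1.96, p=0.5):
    n0 = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction: a 200-person department needs
    # proportionally fewer responses than an infinite population would.
    return math.ceil(n0 / (1 + (n0 - 1) / population))

for dept_size in (50, 200, 1000):
    print(dept_size, "->", sample_size(dept_size))
# Prints roughly 45, 132, 278: smaller breakout groups need a larger
# FRACTION surveyed, so slicing the data many ways multiplies the
# total number of responses required.
```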

Involve employees, especially powerful ones, in the survey effort.

Organizations can survey their employees, accurately assess their needs, and still meet with resistance to change. One way to lessen this problem is to involve formally and informally powerful employees in the group that develops or selects the survey, distributes and analyzes the results, develops recommendations, and implements solutions. Such employees can include management, union officials, and elected representatives of departments or job classifications. These employees act as spokespersons for the groups they represent, communicate events to these groups, and provide vital information to the survey process. One group included a vice president, a director, a manager, two engineers, two supervisors, two from administrative support, and two inspectors, each representing employees in a job classification.

Never survey without acting.

Management can survey employees to assess working conditions out of curiosity, or to relieve their own anxieties that everything is "all right."  However, surveys raise expectations among those who take them, and among those they tell. When expectations of change go unfulfilled, employees can become more demoralized than they were before the survey.

Management might ask, "What if we survey our employees, and can't [or won't] do anything about their problems?"  Such feelings are frequent where distrust between management and the rest of the employees is high, or where historically they have not gotten along. On one hand, such statements can be an excuse for inaction; on the other, they raise a fair point.

Management must decide what actions are possible and what are not, even before the survey authors create the survey or gather the data. When employees raise concerns, management needs to communicate that it understands those concerns. If management cannot immediately solve these issues, employees must know this. At a minimum, management must communicate the survey data and its response; preferably, management should answer concerns and act on them.

Integrate the survey process into the normal business planning cycle.

One way to influence an organization is to become part of its planning cycle - its goals, objectives, and budgets.  Employee involvement efforts can achieve this by scheduling survey events so recommendations are ready the month before budget planning sessions. To accomplish this, schedule backwards. For example, if budgets are due in June, present survey recommendations in May and develop them in April. Analyze the survey data in March, and distribute the survey (assuming a "one shot" survey) in February.  Determine the survey ground rules in January, and form the survey group in December. Scheduled this way, surveys deliver the maximum "punch" possible.

Without such planning, management can respond to recommendations from surveys and employee suggestion systems  with "That's nice, and sounds like a good idea. Where is the money to pay for it?" 

Create clear, specific actions from the survey data.

“We must communicate more” and “We must change people’s attitudes” are often the recommendations that come from surveys. Unfortunately, these platitudes do little to fix the problems that survey responses communicate. Listed below are some common concerns raised by employees, and a brief summary of what might be done about each:

Employee concern: fairness of promotions
Possible solutions: change selection and promotion procedures, and who the decision-makers are

Employee concern: fairness of pay system
Possible solutions: gain sharing; flexible benefits plan

Employee concern: performance reviews
Possible solutions: reward groups instead of individuals; change the rating process

Employee concern: career development
Possible solutions: create career ladders; clarify job descriptions; create mentoring systems; pay for knowledge

Employee concern: communication
Possible solutions: bulletin boards; all-hands meetings; company videos; e-mail; focus groups

Employee concern: empowerment
Possible solutions: delegate specific authority and decisions to employees

Employee concern: inter-group warfare, between-department communication
Possible solutions: inter-group teambuilding; restructure by product or customer instead of functionally

Employee concern: management style
Possible solutions: 360° feedback; management training

Clearly communicate the survey process, recommendations & actions.

Communication is a crucial ingredient in every phase of the survey process. Organizations must inform employees about survey planning, data collection, and implementation plans. Without this communication, employees who would otherwise support the survey become confused, frustrated, and eventually apathetic. Loss of this critical mass of support may eventually doom whatever changes the company implements. Someone once said, “Whenever change takes place, a third are for it, a third are against it, and a third don’t care. My job is to keep the third who don’t like it away from the other two thirds!”

Use surveys with good reliability and validity.

Validity is how well a survey measures what it is supposed to measure. In practice, this means measuring each survey topic with several questions, and in several ways: at least three questions, preferably five, on each survey topic, plus similar questions during interviews and focus groups. Review the survey's validity by comparing it to existing methods of gathering information, to minimize missing or unclear questions.

Reliability is how consistent the survey is over time, and how consistent survey items are with each other. If a survey is unreliable, survey statistics will move up and down without employee opinions really changing; what looks like a significant change over time may be due to the unreliability of the survey methods used.

If you create or change a survey, determine its reliability on groups similar to your employees. Even if you don't change the survey, check what reliability and validity studies have been done, and test the survey on a sample of your employees anyway. It is worse than useless for your organization to hand out a survey and receive information of unknown worth.

Developing the survey and analyzing the results

The first part of this article discussed how to plan and implement employee surveys, and how to integrate them into organizational change. This part focuses on how to develop the survey itself, and how to make it a useful, reliable measurement tool of organizational change.

Developing items

It is generally best to start not with individual items to include in the survey, but with broad categories (subscales) of questions. Then generate at least three questions per category; three questions are the minimum, and five are preferable, for consistency and reliability.

For example, let’s say that the survey authors decide to measure “the effectiveness of a supervisor’s listening skills.”  Most survey authors would simply ask one question, such as “How would you rate your supervisor’s listening skills?”

This is the equivalent of a one-legged horse: it looks funny, and doesn’t stand on its own. Instead, write three or four questions in the “effectiveness of a supervisor’s listening skills” category, such as:

  1. How would you rate your supervisor’s listening skills?

  2. How comfortable I feel about telling my supervisor about ideas for doing my job better.

  3. How often my supervisor listens to and acts on what I say.

  4. My supervisor’s understanding of my point of view.

Repeat this exercise for every category to be measured.
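
One way to keep this discipline is to store the item bank by category and check the item-count rule mechanically. A minimal Python sketch, with hypothetical category names and wording:

```python
# Sketch: an item bank organized by category (subscale), with a check
# that every category has the three-to-five items recommended above.
item_bank = {
    "supervisor_listening": [
        "How would you rate your supervisor's listening skills?",
        "How comfortable I feel telling my supervisor about ideas for doing my job better.",
        "How often my supervisor listens to and acts on what I say.",
        "My supervisor's understanding of my point of view.",
    ],
    "management_responsiveness": [
        "How quickly management responds to employee suggestions.",
        "How often management explains the reasons behind decisions.",
        "Management's willingness to change a decision when given new information.",
    ],
}

for category, items in item_bank.items():
    assert 3 <= len(items) <= 5, f"{category}: write 3-5 items, has {len(items)}"
```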

Format the survey and develop instructions

Survey formats should be as clear and simple as possible, and should make clear to the respondent how to answer each question. Reduce as much as possible the chance of “crossover” errors, where employees mean to answer one question but accidentally answer another. For surveys with numbers to circle, boxes to check, and so on, make sure that each question either 1) has an ellipsis (…) or underscore line running from the end of the question to the numbers to circle, or 2) uses formatting (bold text, italics, different type sizes, etc.) that clearly shows which question goes with which answer.

One thing definitely not to do, especially the first time you use a survey, is to list the survey items by group, so that, for example, questions 1 through 5 all refer to a supervisor’s listening skills and questions 6 through 10 all refer to management’s responsiveness to change. Do not put “headlines” on the survey telling everyone how you have “lumped” the survey items together. This defeats the purpose of factor analysis, as described below, and increases the “halo effect,” the tendency of employees to answer questions the same way.

Develop survey scales

This has nothing to do with rust, alligators, or how much weight you’ve gained since the holidays. Instead, it’s deciding how to ask employees to react to questions. Many people use “agree-disagree” scales, in which people answer questions like:

  1. I like ice cream

     - strongly agree
     - agree
     - neutral
     - disagree
     - strongly disagree

  2. I hate ice cream

     - strongly agree
     - agree
     - neutral
     - disagree
     - strongly disagree

Unfortunately, this kind of scale has a lot of problems. First, studies have shown that these scales suffer from “response set bias,” the tendency of employees to agree with both a statement and its exact opposite, as in the example above. Second, these kinds of statements are very hard to analyze. If I strongly disagree with the statement “I like ice cream,” what does that mean? It could mean that I hate ice cream, or it could mean that I don’t just like it, I love it to death. There is no way of telling which of these the employee means.

Instead, use frequency, intensity, duration or need for change/need for improvement. Specifically, these scales would be something like this:

Frequency:  My supervisor gives me feedback on my performance

  1. never

  2. once or twice

  3. sometimes

  4. often

Intensity:       My supervisor listens to what I say

  1. not at all

  2. to a little extent

  3. to some extent

  4. to a great extent

Duration:       My supervisor keeps eye contact during my performance review

  1. at no time

  2. to a little extent

  3. for some of the time

  4. for much of the time

Need for improvement:  How promotions are handled in my department

  1. needs no improvement

  2. needs a little improvement

  3. needs some improvement

  4. needs much improvement

Send out a sample and correct any problems.

After you’ve developed the initial draft of the survey, try it out on a sample of people who are similar to the ones who will ultimately take it. This pilot satisfies several objectives: 1) it provides feedback on the clarity of the questions; 2) it lets you practice the “pitch” to survey takers; 3) it produces the statistics (see factor analysis, below) that tell you how reliable your survey is and how to group your questions into categories; and 4) it lets you rehearse the step-by-step logistics of disseminating the survey, collecting it, and entering the data into the computer.

Collect your data.

This is not as simple as it seems. To maximize the rate of return, you must carefully encourage as many people as fit the sampling plan to answer the survey. Though many organizations hope for return rates of 80-90%, it is unrealistic to believe this will happen on its own; just wishing won’t get you any returned surveys. We have achieved return rates of 97% by 1) making the survey part of a well-organized, well-publicized change effort; 2) having senior management encourage employees to answer the survey; and 3) requiring employees to attend meetings where they have the choice of answering the survey or turning in a blank one. Without all of these factors, expect at best a 30-40% return rate.

Factor analyze the results, group items into categories and test their reliability.

Factor analysis is a technique most survey authors are not aware of, but it is a critical and necessary part of survey design. Factor analysis groups items into categories in a way that maximizes the reliability and “sturdiness” of the survey.

The first thing factor analysis does is define how many groups or categories of items to have. No matter how much experience authors may have with developing surveys, how they “lump” items into categories often bears little relation to the results of factor analysis. The grouping you perform yourself is based on how you, as the survey author, perceive the relationships between survey questions. That is a good method for developing survey questions, but not for developing reliable categories.

What factor analysis does is 1) define how many statistically sound categories exist, and 2) group survey questions into those categories based on the inter-correlations between questions, computed from how all survey respondents answered your questionnaire.

This statistical procedure is available in a number of statistics programs, such as SPSS, SAS, NCSS and others. I strongly suggest that you gain a good understanding of how factor analysis works before you have to do it on a short timeline.
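
For readers who prefer open-source tools, the sketch below does the same job in Python with scikit-learn. The response matrix, the choice of two factors, and the resulting assignments are all illustrative, not the article's procedure; in practice you would also compare several factor counts before settling on one.

```python
# Sketch: grouping survey items into categories by factor analysis.
# `responses` is a hypothetical (respondents x items) matrix of 1-5 answers.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 300, 10
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=2)   # try two categories
fa.fit(responses)
loadings = fa.components_             # shape: (factors, items)

# Assign each item to the factor it loads on most strongly. Items that
# cluster together form an empirically derived category -- which may
# not match how the author "lumped" them.
for item in range(n_items):
    factor = int(np.argmax(np.abs(loadings[:, item])))
    print(f"item {item + 1} -> factor {factor + 1} "
          f"(loading {loadings[factor, item]:+.2f})")
```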

After factor-analyzing the survey, test its reliability. Reliability is a measure of how consistently employees answer questions. There are two basic measures of reliability: internal consistency and test-retest. Internal consistency (measured by coefficient alpha) measures how well the individual questions within each category measure the same thing. Test-retest reliability measures the consistency of survey answers over time. Both are important, but usually coefficient alpha is the only one used, because measuring test-retest reliability requires giving the same survey to the same people again, usually a couple of weeks later, and most survey authors don’t want to take the time or effort. However, if you are measuring organizational change over time, it is a good idea to know how much variation is due to organizational change, and how much is due to the fuzziness of the questions.
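
Coefficient alpha is simple enough to compute directly. A minimal sketch, with made-up answers to one four-question category:

```python
# Sketch: coefficient (Cronbach's) alpha for one category of items.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x questions) array for ONE category."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Four "listening skills" questions from five respondents (invented data).
answers = [[4, 4, 5, 4],
           [2, 3, 2, 2],
           [5, 4, 4, 5],
           [3, 3, 3, 2],
           [4, 5, 4, 4]]
print(f"alpha = {cronbach_alpha(answers):.2f}")  # 0.7+ is a commonly cited threshold
```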

Analyze and graph the data

Now comes the really fun part: analyzing the data. The two most common mistakes are 1) not deciding how you want the graphs to look before you analyze the data, and 2) using survey norms inappropriately.

Imagine yourself with an immense pile of printouts and no idea of how to analyze the data. It is not a fun feeling, believe me; many a would-be survey analyst has been caught in this trap. The easiest way around it is to decide how to graph and categorize the data before the ocean of information drowns you.

The easiest way of doing this is to draw a few graphs of how the data might look. Develop a few scenarios with this fake information and ask yourself a few questions. “If the data looked this way, what would that mean?” “If the data looks that way, what would that mean?”
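
A hypothetical mock-up of such a "fake data" graph, using Python's matplotlib; the category names and means are placeholders to be replaced by your own analysis plan:

```python
# Sketch: mock up the report graphs with invented numbers BEFORE the
# real returns arrive, so the analysis plan drives the charts.
import matplotlib.pyplot as plt

categories = ["Listening", "Responsiveness", "Promotions", "Pay"]
fake_line  = [3.8, 2.9, 2.4, 3.1]   # pretend line-employee means (1-5)
fake_staff = [4.1, 3.6, 3.0, 3.3]   # pretend staff means

x = range(len(categories))
plt.bar([i - 0.2 for i in x], fake_line, width=0.4, label="Line")
plt.bar([i + 0.2 for i in x], fake_staff, width=0.4, label="Staff")
plt.xticks(list(x), categories)
plt.ylim(1, 5)
plt.ylabel("Mean rating (1-5)")
plt.title("Mock-up: if the data looked this way, what would it mean?")
plt.legend()
plt.show()
```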

Using survey norms inappropriately is another problem. Survey norms are averages of how other people have answered the survey. Established survey companies often have these norms, and when you get reports back from them, they will describe how your results stack up against these averages.

There are two problems with this: 1) which norms to use, and 2) how to interpret the numbers. To use norms properly, they must come from an employee population very, very close to yours: the same industry, the same geographic location, the same job types and the same size of company. Very, very few norms, if any, exist broken down this finely. To avoid comparing their apples with your oranges, use your own company as your reference point instead of over-generalized norms. Do this by taking a baseline survey of your employees before (or just at the beginning of) your organizational change effort. Then re-survey a representative, statistically valid sample of them frequently over time. Compare these later results with your baseline, without the problems associated with using someone else’s norms.
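
A minimal sketch of this baseline comparison, with invented ratings (a plain two-sample t test; if you run it across many breakouts, the multiple-comparison caveat from the planning section applies):

```python
# Sketch: use your own baseline as the reference point instead of
# someone else's norms. Scores below are hypothetical 1-5 ratings.
from scipy import stats

baseline  = [3.1, 2.8, 3.0, 2.9, 3.3, 2.7, 3.2, 3.0, 2.9, 3.1]
follow_up = [3.6, 3.4, 3.8, 3.2, 3.5, 3.7, 3.3, 3.6, 3.4, 3.9]

t_stat, p_value = stats.ttest_ind(follow_up, baseline)
change = sum(follow_up) / len(follow_up) - sum(baseline) / len(baseline)
print(f"change = {change:+.2f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
# Remember the reliability caveat above: apparent movement smaller than
# the survey's own noise is not evidence of real organizational change.
```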

If you do decide to use over-generalized norms, beware of a common abuse. Let’s say that according to the norms, a particular supervisor is in the 20th percentile of listening skills - that is, compared to the norms, 80% scored higher than she did. It is tempting to conclude automatically that this supervisor has problems with listening skills. This is a bad conclusion drawn from faulty data, because you are not comparing this supervisor to others in the same industry, geographic location, size of company and so on.

Forget all this stuff. I’ll just buy a survey or use what we have. 

If you find a survey you like, you can skip all this survey development work. The job then becomes making sure that the purchased survey has followed the steps above. In some cases it has, but often many of these steps are either unknown to or ignored by the developers of commercial surveys.

Ask them how, and whether, they use norms; what kinds of reliability measures they use, and what those measures tell them. Ask whether a factor analysis has been done, and what the results were. If they don’t understand the questions, they probably don’t know their stuff.

Another option is to customize a survey you’ve already bought. Guess what: customizing an existing survey still requires all the steps above. Even changing the sequence of questions can have a significant effect on reliability.

All this may seem like too much to you. If it does, then let me ask you a question: what is the consequence of a wrong organizational decision? If it is severe, you have little choice but to base organizational changes on the best information possible. If the consequences of what you are doing are small, why go through the tremendous effort of surveying your employees at all?

References

Edwards, Jack; Thomas, Maria; Rosenfeld, Paul; and Booth-Kewley, Stephanie (1996). How to Conduct Organizational Surveys. Sage Publications.

Hays, William (1973). Statistics for the Social Sciences, 2nd edition. New York: Holt, Rinehart and Winston, pages 478-479.

Hintze, Jerry (1995). Number Cruncher Statistical System User's Guide. Kaysville, UT: NCSS, pages 1149-1158.

Kalton, Graham (1983). Introduction to Survey Sampling. Sage Series in Quantitative Applications in the Social Sciences, number 07-035. Newbury Park, CA: Sage Publications.

Kirk, Roger (1968). Experimental Design: Procedures for the Behavioral Sciences. Belmont, CA: Brooks/Cole, pages 69-98.

Klecka, William (1980). Discriminant Analysis. Sage Series in Quantitative Applications in the Social Sciences, number 07-019. Newbury Park, CA: Sage Publications.


About the Author:

Dr. David Chaudron is a Fellow of The Business Forum Association. He is the managing partner of Organized Change™ Consultancy, and brings over 20 years of experience assisting firms in their efforts to improve effectiveness, quality, and employee involvement. His work has included practical designs for major change efforts, strategic planning, re-engineering, survey development, team building, Total Quality Management, one-on-one coaching, and employee selection systems.

David has worked with manufacturing, financial services, banking, electronics, and petrochemical firms, as well as government and international organizations. His experience includes: developing and managing implementation strategies for major organizations; assessing organizational climate, group climate and management style as a prelude to a Business Process Re-engineering (BPR) initiative; designing and managing the processes to implement a BPR initiative; designing, developing, and delivering materials for training Total Quality Management (TQM) advisors; conducting team building and cross-national teambuilding sessions with middle and upper management using the problem-solving model; coaching senior management on management style and interpersonal relations with subordinates; developing processes to assess company progress toward the Malcolm Baldrige Award; developing and enhancing processes for selection and recruitment; and conducting job analyses to define career paths aligned with the company vision.

Dr. Chaudron has published many articles on teams, Business Process Reengineering, employee surveys, Total Quality Management, and organization change. He also is a speaker on an internationally televised videoconference seen by over 35,000 people in over 16 countries.

David's academic achievements include:  Ph.D., Industrial/Organizational Psychology, United States International University.  M.S., Industrial/Organizational Psychology, California State University, Long Beach.  B.A., Psychology, University of Arizona.  Advanced facilitator training, American Productivity and Quality Center


Previous articles by David Chaudron:

A Tale of Three Villages: Implementing Organizational Change


Visit the Author's Web Site:  http://www.organizedchange.com
