First Monday

Web-based Surveys: Changing the Survey Process by Holly Gunn

Abstract
Web-based surveys are having a profound influence on the survey process. Unlike other types of surveys, Web-based surveys depend significantly on Web page design skills and computer programming expertise. Survey respondents face new and different challenges in completing a Web-based survey. This paper examines the different types of Web-based surveys, the advantages and challenges of using Web-based surveys, the design of Web-based surveys, and the issues of validity, error, and non-response in this type of survey. The author also discusses the importance of auxiliary languages (graphic, symbolic, and numeric languages) in Web surveys, and concludes with the unique aspects of Web-based surveys.

Contents

Introduction
Types of Web-based surveys
Advantages of Web-based surveys
Concerns with Web-based Surveys
Validity of Web-based Surveys
Design of Web-based Surveys
The Language of Survey Questionnaires
Web-based Survey Respondents
Uniqueness of Web-based Surveys
Conclusion

 


 

++++++++++

Introduction

Web-based surveys are having a profound influence on survey methodology. "The Internet has truly democratized the survey-taking process" [ 1]. Survey professionals and large organizations are no longer the only people conducting surveys on the Web (Couper, 2000). Software capable of producing survey forms is available to the general public at an affordable cost, enabling anyone with a Web site to conduct a survey with little difficulty. As a result, the range and the quality of Web-based surveys vary considerably.

The skills required to produce a Web-based survey are different from those required to construct other types of surveys. Web survey design focuses more on programming ability and Web page design than on traditional survey methodology (Couper, 2001). Because of the technology involved in developing Web surveys, leadership has come from people with a background in technology, not from survey methodology professionals (Shannon et al., 2002). "In fact, the use of Web surveys seems to have caught the survey methodology community somewhat by surprise" [ 2].

Problems associated with Web page design and computer programming can play a significant role in Web-based surveys, and the computer code of the questionnaire can be a source of error. Couper (2001) explained how inaccuracies in computer programming, which produced text boxes of different sizes, affected survey results in a University of Michigan survey. The importance of code in Web survey design is also demonstrated by Baron and Siepmann (1999), who included over ten pages of HTML code and JavaScript in their article on using Web-based surveys in research and teaching. Their examples of code could produce these various effects in surveys: questionnaires with frames; answer columns side by side; different versions of the questionnaire for various respondents; randomized question order; error checking; removal of character codes from text responses; and process tracing and timing.
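
Baron and Siepmann's own listings are not reproduced here, but a minimal sketch of one technique they mention, randomizing question order with JavaScript, might look like the following. The form, the question wording, and the /cgi-bin/collect submission URL are illustrative assumptions, not their code:

  <form name="survey" method="post" action="/cgi-bin/collect">
    <div id="questions">
      <p>How often do you use the library? <input type="text" name="q1" size="3"></p>
      <p>How often do you use the Web? <input type="text" name="q2" size="3"></p>
      <p>How often do you e-mail staff? <input type="text" name="q3" size="3"></p>
    </div>
    <input type="submit" value="Submit">
  </form>
  <script type="text/javascript">
    // Copy the question paragraphs into an array, shuffle them (Fisher-Yates),
    // and re-append them so each respondent sees a different question order.
    var container = document.getElementById("questions");
    var nodes = container.getElementsByTagName("p");
    var items = [];
    for (var i = 0; i < nodes.length; i++) items.push(nodes[i]);
    for (var i = items.length - 1; i > 0; i--) {
      var j = Math.floor(Math.random() * (i + 1));
      var t = items[i]; items[i] = items[j]; items[j] = t;
    }
    for (var i = 0; i < items.length; i++) container.appendChild(items[i]);
  </script>

Because each input name stays attached to its question, the shuffled order does not change how the answers are recorded.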

 

++++++++++

Types of Web-based surveys

Web-based surveys are everywhere on the Internet. Couper (2000) stated that there is speculation Web surveys will replace traditional methods of data collection. Data that had once been collected by other survey modes is now being collected with Web surveys (Dillman and Bowker, 2001). An informal search for Web-based surveys on Yahoo! by Solomon (2001) revealed over 2,000 Web-based surveys in 59 different categories. Not all of these were serious surveys. Surveys on the Web run the gamut from entertainment questionnaires to those with a probability-based design.

Non-probability and Probability-based surveys

Couper (2000) described the various categories of Web-based surveys that he had encountered. He grouped these as either probability-based or non-probability-based. Couper included the following in the non-probability-based category:

  1. Entertainment surveys;
  2. Self-selected Web surveys; and,
  3. Surveys made up of volunteer panels of Internet users.

People who spend a lot of time on the Internet have probably completed an entertainment survey. Such surveys consist of questionnaires that request a vote on particular questions and other instant polls. These surveys don't lead to generalizations of viewpoints across populations, nor were they intended to. Other non-probability-based surveys include dedicated survey sites maintained by owners of Web sites. Such surveys may allow multiple submissions, and make no attempt to be representative of the whole Internet population.

Couper (2000) described several types of probability-based Web-based surveys:

  1. Intercept surveys;
  2. Surveys that obtain respondents from an e-mail request;
  3. Mixed-mode surveys where one of the options is a Web survey; and,
  4. Pre-recruited panels of a particular population as a probability sample.

An intercept survey polls every nth visitor to a Web site. Multiple submissions from the same computer can be prevented with cookies. Couper (2000) said this type of survey is frequently used for customer satisfaction research. Some Web surveys are completed by respondents who agree to participate in response to an e-mail invitation. Non-response is a big concern with this type of Web survey (Couper, 2000). Other Web-based surveys are part of mixed-mode surveys, where participants are offered the choice of completing the survey on the Web or on paper. With a pre-recruited panel as a probability sample, respondents are provided with passwords or personal identification numbers. Non-response is still a big issue even with pre-recruited panels, and it occurs at various stages in the survey process (Couper, 2000). In some cases, probability-based samples of the full population are recruited by providing equipment in exchange for participation in the survey (Couper, 2000). Even with equipment provided, Couper explained, non-response was still a concern.
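
As a rough illustration of the intercept idea, a page script along the following lines could invite a fraction of visitors and use a cookie to avoid inviting the same browser twice; a similar cookie check on the survey form itself could block repeat submissions. The sampling rate, cookie name, and survey URL are hypothetical, and a strict every-nth-visitor design would need a server-side counter rather than the random draw used here:

  <script type="text/javascript">
    // Invite roughly one visitor in ten; the cookie records that this browser
    // has already been invited so the same machine is not asked again.
    var RATE = 10;
    if (document.cookie.indexOf("surveyInvited=yes") === -1) {
      if (Math.floor(Math.random() * RATE) === 0) {
        document.cookie = "surveyInvited=yes; path=/";
        if (window.confirm("Would you complete a short customer satisfaction survey?")) {
          window.location.href = "/survey/start.html";
        }
      }
    }
  </script>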

Differences in Presentation

Dillman and Bowker (2000) distinguished between Web-based surveys according to their method of presentation: screen-by-screen surveys and surveys that allow scrolling. Screen-by-screen Web surveys allow the respondent to view only one question at a time, and the question displayed must be completed before proceeding to the next one. Other Web surveys allow horizontal or vertical scrolling, giving the respondent a view of the entire questionnaire. Even within these same categories of Web surveys, the surveys themselves can differ greatly because of variations in layout and patterns of navigation (Dillman and Bowker, 2000).

 

++++++++++

Advantages of Web-based surveys

"There is no other method of collecting survey data that offers so much potential for so little cost as Web surveys" [ 3]. Zanutto (2001) described many of the reasons for the popularity with Web surveys in her presentation for her course in survey design and construction. She explained that Web-based surveys are relatively cheap. An analysis of the cost of paper vs. Web surveys by Schaefer (2001), for the Students Life Experiences Survey conducted at the Illinois Institute of Technology, determined that the average cost of paper surveys was $US2.07 per student compared to the average cost of $US.88 for Web-based surveys. Zanutto described other advantages of Web surveys as a faster response rate; easier to send reminders to participants; easier to process data, since responses could be downloaded to a spreadsheet, data analysis package, or a database; dynamic error checking capability; option of putting questions in random order; the ability to make complex skip pattern questions easier to follow; the inclusion of pop-up instructions for selected questions; and, the use of drop-down boxes. These are possibilities that cannot be included in paper surveys. Couper (2000) saw the multimedia capability of Web surveys as a real advantage, as well as the option to customize survey options for particular groups of respondents. It is interesting to note that despite many of these advantages of Web surveys, Dillman,Tortora, et al. (1998) found that the response rate was greater for plain rather than fancy surveys that employed tables, graphics, and different colors. This led the authors of this study to question the use of fancy designs and layouts in Web questionnaires.

 

++++++++++

Concerns with Web-based Surveys

Web-based surveys are not without problems. Zanutto (2001) discussed a number of issues concerning Web surveys:

  1. Questionnaires do not look the same in different browsers and on different monitors. Therefore, respondents may see different views of the same question, and not receive the same visual stimulus.
  2. Respondents may have different levels of computer expertise. This lack of computer expertise can be a source of error or non-response.
  3. The surveyor is faced with concerns about data security on the server.

  4. The sample in a Web survey isn't really a random sample, and there is no method for selecting random samples from general e-mail addresses.
  5. Since information can be collected about respondents without their knowledge or permission, respondents may be concerned with privacy of the data they are entering. The surveyor can determine the time of day the survey was completed, how long the respondent took to complete each question, how long the respondent took to finish the entire survey, what browser was used, and the respondent's IP address.

Although some participants in Web-based surveys might be concerned with privacy issues, Bosnjak and Tuten (2001) saw the metadata that can be collected about participants without their knowledge, through CGI scripts, Java applets, and user log files, as a benefit to the surveyor, and they described the questionnaire design necessary to gather this data. Thus, privacy issues become a double-edged sword: a concern to the respondent, and a benefit to the surveyor.
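
Bosnjak and Tuten's instruments are not reproduced here; one common way to gather such paradata is to write timings and browser details into hidden form fields so they travel back with the answers. The question and field names below are hypothetical:

  <form name="survey" method="post" action="/cgi-bin/collect">
    <p>How satisfied are you with this site?
       <input type="radio" name="q1" value="satisfied" onclick="stamp('q1');"> Satisfied
       <input type="radio" name="q1" value="dissatisfied" onclick="stamp('q1');"> Dissatisfied
    </p>
    <!-- hidden fields carry the paradata back with the answers -->
    <input type="hidden" name="q1_time" value="">
    <input type="hidden" name="browser" value="">
    <input type="submit" value="Submit">
  </form>
  <script type="text/javascript">
    var started = new Date().getTime();                    // when the questionnaire was displayed
    document.survey.browser.value = navigator.userAgent;   // which browser was used
    function stamp(q) {
      // seconds between display of the page and the respondent's latest answer to q
      document.survey[q + "_time"].value = (new Date().getTime() - started) / 1000;
    }
  </script>

The server receives q1_time and browser along with the answer itself; the respondent's IP address and the time of day are already recorded in the server's own log files.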

Jeavons (n.d.) stated that Web surveys are quite unlike other survey methods of data collection in their execution, and this difference can lead to participants acting differently when responding to Web-based surveys. Using an analysis of log files, Jeavons was able to demonstrate the number of failures and the number of repeats experienced by respondents to Web surveys with various questions and question types. He noticed that the first question often caused immediate refusal, or confusion marked by many repeated attempts to answer it.

 

++++++++++

Validity of Web-based Surveys

"When generalized to the context of survey research, validity refers to the accuracy of the specific conclusions and inferences drawn from non-experimental data" [ 4]. The White Paper on validity in Web surveys, prepared by Satmetrix in 2001, cited the work of Krosnick and Chang (2001) which found Web participants' responses, "contained fewer random and systematic errors than their telephone counterparts, as demonstrated by notably higher reliability coefficients" [ 5]. This paper offered three explanations for these differences. One is the recency effect, which can occur when questions are presented aurally, and respondents, lacking sufficient time to process all responses and place them in long-term memory, select the last response offered. A second explanation offered was social compliance which can occur in telephone interviews, where respondents may tend to agree because of the presence of the interviewer. For a third explanation, Satmetrix turned to the work of Dillman et al. (2001), which explained that Web surveys are comprehended and controlled by respondents at their own pace. Satmetrix concluded that, although there were concerns and limitations with Web-based surveys, these limitations could be overcome when data is collected from an identifiable, known population.

Sources of error

The use of Web-based surveys has increased dramatically, but this growth has not been accompanied by a focus on survey error reduction (Dillman, 2000; Dillman and Bowker, 2001). The major sources of error in any survey include sampling, coverage, non-response, and measurement, i.e., what is actually being measured (Couper, 2000). Couper discussed how these sources of error are particularly relevant for Web surveys. Coverage is a major concern in Web-based surveys. Estimates of household access to the Internet vary greatly, and household access does not mean that all age groups in the household actually are Internet users. The Internet population differs from the general population in many ways, and there is great variation in Internet access between rural and urban areas and among different ethnic groups (Couper, 2000). However, there are some communities where connectivity is almost universal. Some university campuses, for example, have universal Internet access, making sample bias less of a concern in those populations. Consequently, Web surveys are a more common survey method on university campuses than with the general population (Couper, 2000).

Non-response errors result when not everyone in a sample is willing to complete the survey, or when respondents fail to finish it. Generally, Web surveys have a lower response rate than mail surveys (Couper, 2000; Solomon, 2001), and failure to complete a questionnaire, or abandonment, is a major concern in Web surveys (Couper et al., 2001). Couper et al. (2001) hypothesized that progress indicators would improve the response rate in Web surveys where only one question appeared on a screen at a time; however, their results were inconclusive, since their progress indicators took too long to load and slowed the download time for the survey.
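
Progress indicators need not be graphical. As an assumed lighter-weight alternative to the image-based indicators that slowed the Couper et al. surveys, a plain-text indicator could be generated in the browser; the question counts below are placeholders:

  <p>Question 4 of 20 <span id="progress"></span></p>
  <script type="text/javascript">
    // Build a text-only progress bar, avoiding slow-loading graphics.
    var current = 4, total = 20;
    var done = Math.round((current / total) * 10);
    var bar = "";
    for (var i = 0; i < 10; i++) bar += (i < done ? "#" : "-");
    document.getElementById("progress").innerHTML =
        "[" + bar + "] " + Math.round((current / total) * 100) + "% complete";
  </script>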

There are many reasons why respondents fail to complete a Web-based survey. Bosnjak and Tuten (2001) cited research on why respondents drop out of Web-based surveys: open-ended questions, questions arranged in tables, fancy or graphically complex design, pull-down menus, unclear instructions, and the absence of navigation aids were all identified as reasons for abandonment. Dillman and Bowker (2001) showed how survey design is related to survey error. Jeavons (n.d.) determined that the first question was a significant dropout point for many Web survey respondents. Solomon (2001) wondered whether the lower response rate of Web-based surveys is due to our lack of knowledge of how to increase response rates in this new type of data collection. Solomon described two points in a Web survey at which respondents stop completing it: (1) when respondents encounter a complex grid of questions and responses, and (2) when respondents are asked to give their e-mail address. He noted that user logs do not show any difference in the failure to complete surveys based on gender, age, or education.

 

++++++++++

Design of Web-based Surveys

The design of a survey can affect the response rate, the dropout rate, and even the responses themselves (Couper et al., 2001). Couper et al. studied the differences in response rate and responses between scrollable Web surveys and interactive Web surveys, a design that displays one question on a screen at a time. They found that when they altered the single-item screen to allow multiple items to appear together, completion time for the survey was faster, there were fewer non-answered questions, and there was more similarity in answers than when questions were presented individually. They also tested the response rate for the same survey questions with radio boxes and entry check boxes, and had mixed results depending on the type of question asked. Some of their questions required addition, and they had hypothesized that the radio box version, with its dense screen layout and horizontal format, would require more eye-hand coordination and bring more response errors or greater non-response than the vertical format. Addition is normally done in vertical format, and Couper et al. found that responses requiring addition were completed more accurately with the vertical format and the entry check boxes. They concluded that, rather than advocating a generic design format for all Web-based questionnaires (i.e., radio boxes or entry check boxes; screen-by-screen or scrollable design), the design should reflect the purpose of the survey, and that some designs are more suitable for certain purposes or types of questions than others.

Web-based surveys are in the early stages of development, and researchers still have a lot to learn about conducting effective surveys on the Web (Solomon, 2001). However, many of the same principles that govern other surveys apply to Web surveys (Shannon et al., 2002). Frary (1996) provided tips for designing quality questionnaires that are useful for Web survey design. Frary emphasized the importance of keeping the questionnaire brief and concise; getting feedback on the initial list of questions with a field trial; placing confidential or personal questions at the end of the questionnaire; having response categories in progressive order, usually from lowest to highest; and combining categories such as "seldom" and "never". Frary also provided recommendations of what to avoid: open-ended questions; the response category "other", which can keep respondents from selecting one of the provided categories for a trivial reason; response scale proliferation, i.e., using a six- or seven-point scale when a four- or five-point scale would be sufficient and more distinguishable; and asking participants to rank responses, since research has shown respondents experience difficulty with ranking, especially with a list of more than six items.

Web-based surveys can take advantage of the power of the visual design elements far more than paper surveys, and the graphic nature of the Web makes the addition of graphics, color, and images quite inexpensive (Couper, 2001). Couper explained that, "the Web vastly expands the range of design opportunities and ... the skills brought to the design of Web surveys focus more on programming and general Web design than on survey design" [ 6]. He described the wide array of response options for Web-based surveys: radio boxes, check boxes, Likert scales, drop-down menus, and skip patterns, as well as the inclusion of graphics, color and sound.

General Principles of Design

Design in Web surveys is of greater importance than in other modes of surveying because of the visual emphasis of the Web and the way the survey appears in different browsers and on different computer screens (Couper, 2000). Couper believed that the audience and the purpose of the survey should affect the design, and that a Web-based survey for teenagers and one for seniors might be designed quite differently. "The notion of a one-size-fits-all approach to Web survey design is premature" [ 7]. Solomon (2001) noted that Web-based survey development is still in its early stages, and, since HTML forms have their own unique design concerns, it is yet to be seen how knowledge from other surveying techniques will be transferred to this new mode of surveying.

Writing from a marketing perspective, Gaddis (1998) described how to design user-friendly online surveys that would get responses. She explained that the first question is the most critical on the questionnaire and should be tied to the survey's purpose. This advice has been given by others (Burgess, 2001; Dillman, 2000; Zanutto, 2001). Gaddis' most important advice was to "edit, edit, edit" [ 8]. She stated that many ideas for effective online surveys come from traditional surveys:

  1. Pretest questions before they go online;
  2. Write an introduction for the survey that will encourage cooperation from participants;
  3. Use filtering questions and have questionnaires appropriate for filtered groups;
  4. Divide long surveys into sections;
  5. Use open-ended questions sparingly; and,
  6. Use incentives to get people to respond.

Smith (1997), who used Web surveys as part of her doctoral research, discussed Web survey design and offered suggestions for HTML code. She advised that long Web surveys be divided into sections, with "Clear" and "Reset" buttons for each of the survey sections, so that respondents don't have to reset the entire survey if they want to clear one answer. She believed "Clear" and "Submit" buttons need to be in separate locations so they aren't confused, and that word length needs to be specified for open-ended questions.
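
A minimal form following Smith's suggestions might be sketched as follows; the section names, field names, and word limit are hypothetical. Each "Clear" button resets only its own section, the Submit button sits well away from the Clear buttons, and the open-ended answer is held to a stated word limit:

  <form name="survey" method="post" action="/cgi-bin/collect">
    <fieldset id="section1">
      <legend>Section 1: Background</legend>
      <p>Years of Internet use: <input type="text" name="years" size="3"></p>
      <p><input type="button" value="Clear Section 1" onclick="clearSection('section1');"></p>
    </fieldset>
    <fieldset id="section2">
      <legend>Section 2: Comments</legend>
      <p><textarea name="comments" rows="4" cols="50" onkeyup="limitWords(this, 50);"></textarea></p>
      <p><input type="button" value="Clear Section 2" onclick="clearSection('section2');"></p>
    </fieldset>
    <!-- the Submit button sits apart from the Clear buttons so they are not confused -->
    <p align="right"><input type="submit" value="Submit the survey"></p>
  </form>
  <script type="text/javascript">
    function clearSection(id) {
      // Reset only the fields inside one section, not the whole questionnaire.
      var fields = document.getElementById(id).getElementsByTagName("*");
      for (var i = 0; i < fields.length; i++) {
        if (fields[i].name) fields[i].value = "";
      }
    }
    function limitWords(box, max) {
      // Hold the open-ended answer to the stated word limit.
      var words = box.value.split(/\s+/);
      if (words.length > max) box.value = words.slice(0, max).join(" ");
    }
  </script>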

Principles for Constructing Web Surveys

Dillman, Tortora, and Bowker (1998) were concerned about the principles of what they called "respondent-friendly" Web survey design. They described respondent-friendly design as "the construction of Web questionnaires in a manner that increases the likelihood that sampled individuals will respond to the survey request, and that they will do so accurately, i.e., by answering each question in the manner intended by the surveyor" [ 9]. Dillman, Tortora, and Bowker stated that it is essential that Web questionnaires, like paper questionnaires, have design features that are easy to understand, don't take a lot of time to comprehend, and are interesting to complete. They gave three criteria for Web survey design and explained eleven principles of design for Web questionnaires:

  1. Use a welcome screen that is motivating, that emphasizes the ease of responding, and that shows respondents how to move to the next page.
  2. Have the first question fully visible on the first screen, and ensure that it is easy to understand and applies to all respondents. The first question is not the place for filtering questions or drop-down boxes.
  3. Use a conventional format similar to a paper questionnaire. Dillman, Tortora, and Bowker warned about lack of spacing, crowded horizontal design, and unconventional layout. They cited research to show that brightness, fonts, and spacing can greatly assist respondents in navigating questionnaires, and that a conventional vertical layout with numbered questions and distinct space between questions and responses makes Web questionnaires respondent-friendly.
  4. Limit line length. Respondents are less likely to skip words when lines are short.
  5. Provide instructions for the necessary computer actions, e.g., erasing radio button answers, using drop-down menus, and clearing open-ended questions.
  6. Provide these computer instructions with the questions where this action is necessary, not at the beginning of the questionnaire.
  7. Don't make it necessary for respondents to answer each question before going on to the next one.
  8. Use a scrolling design that allows respondents to see all questions unless skip patterns are important.
  9. Make sure that all responses can be displayed on one screen, using double rows if necessary and navigational aids to achieve this.
  10. Use symbols or words to give respondents some idea of their progress in the survey. Web survey respondents, like paper respondents, need some indication of how near they are to completion.
  11. Exercise caution with question structures that are known to have measurement problems in paper surveys, such as check-all-that-apply and open-ended questions.

Dillman (2000) expanded on these design principles, and Dillman and Bowker (2000) explained the relationship between these principles of design and traditional sources of error in sampling, coverage, measurement, and non-response.

Zanutto (2001) repeated many of these instructions in her course presentation on Web survey design. Her other suggestions included:

  1. Use a cover letter with the questionnaire;
  2. Make the survey simple, and have it take no longer than 20 minutes;
  3. Give an estimated time that it will take to complete the survey;
  4. Be sure the first question is interesting, easy to answer, and related to the topic of the survey;
  5. Be concerned about privacy issues for the respondents and the data that is collected (don't use cookies, and make sure the data is secure on the server, even if it means using encryption software); and,
  6. Allow an alternate mode of completion, e.g., printing and mailing in the survey, if people are concerned about privacy.

Shannon et al. (2002) surveyed over 60 experienced members of the American Research Association regarding the use of electronic surveys. They wanted to know what these survey professionals thought about Web surveys. They discovered that most of these researchers did not believe that pencil-and-paper surveys and electronic surveys yield the same results. However, these researchers were positive about the use of online surveys because of the cost savings and the ease of data analysis. These experts were less positive about the ease of use of Web surveys, the response rate, errors in response, security of data, and privacy issues. Their recommendations regarding the design of electronic surveys included using a simple design and making the questionnaire graphically pleasing.

Dillman and Bowker (2000) turned to the literature of human-computer interaction to gain insight for Web survey design. They examined the placement of response boxes and skip directions for Web surveys. Traditionally, answer boxes are placed on the left in paper questionnaires, and this practice has been transferred to Web-based questionnaires. The Web, however, displays questions differently than they appear on paper, and there are differences in the way people work on paper and the way they work on a computer (Dillman and Bowker, 2000). Human-computer interaction research suggests that people with greater computer skills and experience prefer right-justified response boxes (Dillman and Bowker, 2000). Even though left alignment is more familiar, they hypothesized that right alignment would improve navigation and reduce the hand-eye-keyboard-mouse coordination required. They were able to show that placing response boxes and skip directions on the right reduced skip pattern errors.
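
In HTML terms, right alignment of this kind is often achieved with a simple table; the question wording and numbers below are illustrative only:

  <!-- The response buttons and the skip direction sit in a right-aligned column,
       so the eye and the mouse travel down the same edge of the screen. -->
  <table width="100%">
    <tr>
      <td>3. Do you own a computer?</td>
      <td align="right">
        <input type="radio" name="q3" value="yes"> Yes
        <input type="radio" name="q3" value="no"> No
        (If No, skip to question 6)
      </td>
    </tr>
  </table>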

In addition to their work with right and left alignment of response boxes and the effect of alignment on response rate, Dillman and Bowker (2000) examined some of the other principles of design for Web surveys. They ensured that their Web surveys were designed for the least compliant browser, so that all respondents would have the same visual stimulus. They used the most basic HTML code with alternate code for compliance with all browsers, and they made sure that the code was validated on different browsers.

Dillman, Tortora, and Bowker (1999), in their examination of the impact of principles of Web survey design, showed that, although more Web designers have been using fancy features, such as HTML tables, several colors, dynamic HTML, animation, Java applets, and sound, to get people to respond to surveys, these very same features that entice people to participate may keep them from completing the survey. Plain surveys had a higher response rate than those with a fancy design (Dillman, Tortora, et al., 1998).

Even before the widespread use of Web-based surveys, Smith (1993) recognized the importance of the visual aspects of questionnaires. "Non-verbal aspects of surveys such as physical layout and visual presentation can also notably influence answers" [ 10]. Smith, who is frequently quoted by Web-based survey designers, noted how little things really do matter in survey design. He advised survey developers to pay close attention to physical layout and what might appear to be trivial issues, i.e., the placement of skip or filtered questions, overly-crowded design, visual and graphic images, and the misalignment of response boxes.

 

++++++++++

The Language of Survey Questionnaires

Even though the primary form of communication in these questionnaires is textual, much of the language of the survey is visual (Couper, 2001; Redline and Dillman, 1999). Unlike interviewer-administered questionnaires, which are given aurally, Web-based surveys are presented in several languages: textual, graphic, symbolic, and numeric (Redline and Dillman, 1999). The textual language of surveys includes the wording of the questions and the instructions in the responses. Couper (2001) included font size, font type, color, layout, symbols, images, animation, and other graphics as components of visual language. Although visual language is intended to add meaning and supplement the written language, Couper (2001) observed that it could actually draw attention away from text and alter the meaning of words.

Redline and Dillman (1999) distinguished between three different types of visual languages: graphic language, symbolic language, and numeric language, and emphasized the importance of these languages in a Web questionnaire. Redline and Dillman referred to these languages as the auxiliary languages of questionnaires. Graphic language, consisting of fonts, font sizes and variations (bold, italics), borders, and tables, helps respondents move their eyes across the page and comprehend the questionnaire. Symbolic language is used in questionnaires when arrows or other symbols are employed to help guide the respondent through the survey questions. Numeric language is used in numbering questions, and sometimes in numbering response items. Redline and Dillman pointed out that these various languages of the questionnaire work together to affect the respondents' perception of the survey.

Sometimes, these different languages of questionnaires send conflicting messages to respondents (Couper, 2001; Redline and Dillman, 1999). Redline and Dillman indicated this is particularly true of instructions for skipping questions. Redline and Dillman stated that cognitive research about questionnaires suggests that respondents believe they are to answer every question and to answer questions in sequence. Skip instructions tell respondents to disobey what they are culturally trained to do. Redline et al. (1999) demonstrated that the manipulation of these auxiliary languages (increasing font size, boldness, and arrows, as well as adjusting the placement of verbal instructions) affected responses in questionnaires with skip questions. They were able to reduce errors of commission, those errors that occur when a respondent is instructed to skip a question but answers it anyway, by more than half.

 

++++++++++

Web-based Survey Respondents

The Role of Respondents

Respondents in any questionnaire are expected to comprehend the question, recall relevant information to answer the question, make a judgment, and select a response (Redline and Dillman, 1999). Redline and Dillman (1999) explained that respondents are required to perceive both the questions and the instructions in self-administered questionnaires. They emphasized that perception, comprehension, and judgment involve not only the written language of the questionnaire, but also the other languages of the survey: numeric, symbolic, and graphic. They said that respondents group elements from these various languages together using the Law of Proximity, a Gestalt grouping principle, which states that people will group together information that is in close proximity and then draw inferences from the grouping (Redline and Dillman, 1999).

Culture, too, plays an important part in the perception of a survey. Culture affects the way symbols and graphics are perceived, and the way different people respond to certain questions (Dillman et al., 2000). Acquiescence is a predisposition in some cultures, and this characteristic can affect responses, particularly in agree/disagree questions (Javeline, 1999). Some individuals within any cultural group may also be more predisposed to agree to questions than others.

Types of respondents

Redline and Dillman (1999) cited work by Krosnick (1991) that classified survey respondents into two types: optimizers and satisficers. Optimizers devote their full attention to the completion of the survey. Satisficers go through the motions of answering the questions, but look for ways to expend as little effort as possible doing the survey.

Bosnjak and Tuten (2001), through the use of metadata collected during Web-based surveys, were able to identify seven distinct response types for Web-based surveys:

  1. Unit non-responders;
  2. Complete responders;
  3. Answering drop-outs;
  4. Lurkers;
  5. Lurking drop-outs;
  6. Item non-responders; and,
  7. Item non-responding dropouts.

This is a more detailed analysis of respondents than the traditional categories of non-response, unit non-response, and complete response. Bosnjak and Tuten (2001) described lurkers as people who viewed all the questions but answered none. Complete responders were the survey participants who viewed all questions and answered all questions. Unit non-responders did not participate in the survey; they may have viewed the welcome screen and gone no further, or, for technical reasons, been unable to participate. Answering drop-outs answered every question they viewed, but quit before viewing all the questions. Item non-responders viewed the entire questionnaire, but answered only some of the questions. Item non-responding dropouts were a mixture of the dropout categories. Motivation is important in determining whether someone will complete a questionnaire or drop out (Bosnjak and Tuten, 2001).
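
The seven types can be read as a simple decision rule over the log data: how many questions a respondent viewed and how many of those were answered. A hypothetical JavaScript sketch of that rule, using made-up counts rather than Bosnjak and Tuten's actual coding scheme, is:

  // Assign one of the seven response types from per-respondent counts.
  function classifyRespondent(viewed, answered, totalQuestions) {
    if (viewed === 0) return "unit non-responder";        // never got past the welcome screen
    if (viewed === totalQuestions) {
      if (answered === totalQuestions) return "complete responder";
      if (answered === 0) return "lurker";                // viewed everything, answered nothing
      return "item non-responder";                        // viewed everything, answered some
    }
    // dropped out before viewing every question
    if (answered === viewed) return "answering drop-out";
    if (answered === 0) return "lurking drop-out";
    return "item non-responding drop-out";                // viewed some, answered only some of those
  }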

Ways to Improve Response Rate

Response rates for all types of surveys have been on the decline since the 1990s (Dillman et al., 2001). Although response rates for Web surveys are noted to be lower than those for mail surveys, several ways to improve response rates have been verified in research studies. Solomon (2001) stated that personalized e-mail cover letters, follow-up reminders by e-mail, pre-notification of the intent of the survey, simpler formats, and plain design have all been shown to improve response rates for Web-based surveys. Dillman et al. (2001) showed that mixed-mode surveys can improve response rates, since some people prefer to be surveyed in one mode rather than another; for example, some people prefer responding to a mail survey rather than a phone survey or a Web-based survey. Conducting surveys in two modes is costly, but it does reduce non-response error (Dillman et al., 2001). Collecting data by two different modes of communication, i.e., aural and visual, introduces measurement differences (Dillman et al., 2001). However, Carini et al. (2001) demonstrated that the responses of first-year and senior college students to traditional paper surveys and Web surveys did not show substantial differences, even after controlling for student and school characteristics.

 

++++++++++

Uniqueness of Web-based Surveys

Web-based surveys are self-administered questionnaires. They are physical entities in themselves that must be manipulated, and respondents differ in their skill at this Web manipulation (Redline and Dillman, 1999). Navigation and flow are important in any questionnaire, but they are particularly important in Web-based surveys (Redline and Dillman, 1999). Web surveys are a visual stimulus, and respondents have control over how, and even whether, they read and comprehend each question (Dillman et al., 2001). Participants in Web surveys are less likely to take extreme positions in their responses than people who take part in a telephone survey (Satmetrix, 2001). Web surveys provide opportunities for variety in question structure, layout, and design not available in paper surveys (Couper, 2000; Couper, 2001; Couper et al., 2001; Zanutto, 2001). It has been demonstrated that there are various ways to manipulate both the verbal and the auxiliary languages of self-administered questionnaires to improve the design of skip instructions and, in turn, the response rate for skip pattern questions.

Dillman, Tortora, et al. (1998) demonstrated that plain Web surveys produced a better response rate than those with a fancy design containing colors, graphics, and tables. They gave several explanations for their findings. They stated that longer questionnaires have lower response rates; in other words, there seems to be a limit to the time people are willing to spend completing surveys. Web surveys with a fancy design take longer to download on slower Internet connections, making them take as long to complete as longer questionnaires. In addition, not all features of fancy questionnaires may appear on older browsers or hardware.

 

++++++++++

Conclusion

Web-based surveys have had a profound influence on the survey process in a number of ways. The survey-taking process has become more democratized as a result of Web surveys. Since the ability to gather data through Web surveys is quite widely available, ordinary citizens, as well as government organizations, university researchers, and big businesses, are now conducting surveys on the Web. Leadership in Web-based survey design is coming from people with a strong technology background, not just the experts in survey methodology. The visual aspect of surveys is even more important in Web-based surveys than in other surveys. What was visible in a paper survey can be made invisible in a Web survey, and vice versa. Web surveys have reduced the cost of data collection and made data analysis more efficient. Although there are concerns about Web-based surveys, and many aspects of conducting surveys on the Web have yet to be studied, a number of researchers have produced a body of literature that is improving the design and effectiveness of the Web-based survey process.

 

About the Author

Holly Gunn is a teacher-librarian with the Halifax Regional School Board in Nova Scotia, Canada.
E-mail: hgunn@accesscable.net

 

Acknowledgments

The research for this paper was carried out during a sabbatical leave granted by the Halifax Regional School Board.

 

Notes

1. Couper, 2000, Summary and Conclusion section, para. 1.

2. Dillman and Bowker, 2001, para. 3.

3. Dillman, 2000, p. 400.

4. Satmetrix, 2001, Defining Validity section, para. 3.

5. Krosnick and Chang, 2001, p. 5.

6. Couper, 2001, Introduction section, para. 9.

7. Couper, 2000, p. 10.

8. Gaddis, 1998, para. 7.

9. Dillman, Tortora, and Bowker, 1998, Criteria for Respondent Friendly Design section, para. 8.

10. Smith, 1993, para. 2.

 

References

Jonathan Baron and Michael Siepmann, 1999. "Techniques for Creating and Using Web Questionnaires in Research and Teaching," at http://www.psych.upenn.edu/~baron/examples/baron4.htm, accessed 6 November 2002.

Michael M. Bosnjak and Tracey L. Tuten, 2001. "Classifying Response Behaviors in Web-based Surveys," Journal of Computer-Mediated Communication, volume 6, number 3 (April), at http://www.ascusc.org/jcmc/vol6/issue3/boznjak.html, accessed 6 November 2002.

Thomas F. Burgess, 2001. A General Introduction to the Design of Questionnaires for Survey Research. University of Leeds, Information System Services, at http://www.leeds.ac.uk/iss/documentation/top/top2.pdf, accessed 6 November 2002.

Robert M. Carini, John C. Hayek, George D. Kuh, and Judith A. Ouimet, 2001. "College Student Responses to Web and Paper Surveys: Does Mode Matter," at http://www.indiana.edu/~nsse/acrobat/mode.pdf, accessed 6 November 2002.

Mick P. Couper, 2001. "Web Surveys: the Questionnaire Design Challenge," Proceedings of the 53rd session of the ISI, at http://134.75.100.178/isi2001/, accessed 6 November 2002.

Mick P. Couper, 2000. "Web Surveys: A Review of Issues and Approaches," Public Opinion Quarterly, volume 64, number 4 (Winter), pp. 464-481.

Mick P. Couper, Michael W. Traugott, and Mark J. Lamias, 2001. "Web Survey Design and Administration," Public Opinion Quarterly, volume 65, number 2 (Summer), pp. 230-253.

Don A. Dillman, 2000. Mail and Internet Surveys: The Tailored Design Method. Second edition. New York: Wiley.

Don A. Dillman and Dennis K. Bowker, 2001. "The Web Questionnaire Challenge to Survey Methodologists," at http://survey.sesrc.wsu.edu/dillman/zuma_paper_dillman_bowker.pdf, accessed 6 November 2002.

Don A. Dillman, Glenn Phelps, Robert D. Tortora, Karen Swift, Julie Kohrell, and Jodi Berck, 2001. "Response Rate and Measurement Differences in Mixed Mode Surveys Using Mail, Telephone, Interactive Voice Response and the Internet," at http://survey.sesrc.wsu.edu/dillman/papers/Mixed%20Mode%20ppr%20_with%20Gallup_%20POQ.pdf, accessed 6 November 2002.

Don Dillman, Cleo D. Redline, and Lisa Carley-Baxter, 1999. "Influence of Type of Question on Skip Pattern Compliance in Self-administered Questionnaires," at http://survey.sesrc.wsu.edu/dillman/papers/AMSTAT%20'99%20proceedings%20paper.pdf, accessed 6 November 2002.

Don A. Dillman, Robert D. Tortora, and Dennis Bowker, 1998. "Principles for Constructing Web Surveys," Pullman, Washington. SESRC Technical Report 98-50, at http://survey.sesrc.wsu.edu/dillman/papers/websurveyppr.pdf, accessed 6 November 2002.

Don A. Dillman, Robert D. Tortora, John Conradt, and Dennis Bowker, 1998. "Influence of Plain vs. Fancy Design on Response Rates for Web Surveys," presented at Joint Statistical Meetings, Dallas, Texas, at http://survey.sesrc.wsu.edu/dillman/papers/asa98ppr.pdf, accessed 6 November 2002.

Robert B. Frary, 1996. "Hints for Designing Effective Questionnaires," ERIC AE Digest. ERIC Clearinghouse on Assessment and Evaluation, at http://ericae.net/pare/getvn.asp?v=5&n=3, accessed 6 November 2002.

Susanne E. Gaddis, 1998. "How to Design Online Surveys," Training and Development, volume 52, number 6 (June), pp. 67-72.

Debra Javeline, 1999. "Response Effects in Polite Cultures," Public Opinion Quarterly, volume 63, number 1 (Spring), pp. 1-28.

Andrew Jeavons, n.d. "Ethology and the Web: Observing Respondent Behavior in Web Surveys," at http://w3.one.net/~andrewje/ethology.html, accessed 6 November 2002.

Cleo D. Redline and Don A. Dillman, 1999. "The Influence of Auxiliary, Symbolic, Numeric, and Verbal Language on Navigational Compliance in Self-administered Questionnaires," at http://survey.sesrc.wsu.edu/dillman/papers/Auxiliary,Symbolic,Numeric%20paper--with%20Cleo.pdf, accessed 6 November 2002.

Cleo Redline, Don Dillman, Richard Smiley, Lisa Carley-Baxter, and Arrick Jackson, 1999. "Making Visible the Invisible: An Experiment with Skip Pattern Instructions on Paper Questionnaires," at http://survey.sesrc.wsu.edu/dillman/papers/Making%20Visible%20the%20Invisible.pdf, accessed 6 November 2002.

Satmetrix, 2001. "Investigating Validity in Web surveys," at http://www.satmetrix.com/public/pdfs/validity_wp4.pdf, accessed 6 November 2002.

Edward Schaefer, 2001. "Web Surveying: How to Collect Important Assessment Data Without Any Paper," Office of Information & Institutional Research, Illinois Institute of Technology, at http://oiir.iit.edu/oiir/Presentations/WebSurveying/WebSurveying_20010424.pdf, accessed 6 November 2002.

David M. Shannon, Todd E. Johnson, Shelby Searcy, and Alan Lott, 2002. "Using Electronic Surveys: Advice from Survey Professionals," Practical Assessment, Research & Evaluation, volume 8, number 1, at http://ericae.net/pare/getvn.asp?v=8&n=1, accessed 6 November 2002.

Christine B. Smith, 1997. "Casting the 'Net: Surveying an Internet Population," Journal of Computer-Mediated Communication, volume 3, number 1 (June), at http://jcmc.huji.ac.il/vol3/issue1/smith.html, accessed 6 November 2002.

Tom W. Smith, 1993. "Little Things Matter: A Sampler of How Differences in Questionnaire Format Can Affect Survey Responses," at http://www.icpsr.umich.edu:81/GSS/rnd1998/reports/m-reports/meth78.htm, accessed 6 November 2002.

David J. Solomon, 2001. "Conducting Web-based Surveys," Practical Assessment, Research & Evaluation, volume 7, number 19, at http://ericae.net/pare/getvn.asp?v=7&n=19, accessed 6 November 2002.

Elaine Zanutto, 2001. "Web & E-mail Surveys," at http://www-stat.wharton.upenn.edu/~zanutto/Annenberg2001/docs/websurveys01.pdf, accessed 6 November 2002.


Editorial history

Paper received 9 November 2002; accepted 29 November 2002.



Copyright ©2002, First Monday

Copyright ©2002, Holly Gunn

Web-based Surveys: Changing the Survey Process by Holly Gunn
First Monday, volume 7, number 12 (December 2002),
URL: http://firstmonday.org/issues/issue7_12/gunn/index.html