2 Theoretical background


One of the most prevalent threats to Web survey inference is breakoff, also called dropout: respondents quit before completing the survey (Bosnjak and Tuten 2001). Breakoff, as a part of the nonresponse rate, can harm the quality of survey statistics; the larger this rate, the larger the risk of nonresponse error. As a result, much of survey researchers' effort has focused on reducing this rate (Groves et al. 2004). Within the Web survey methodology literature, most research on how to improve data quality by reducing this rate has focused on the use of follow-ups, incentives, and the length, wording, and presentation of the questionnaire (Groves et al. 2004, page 22). Most of these response-enhancing features involve changes to the content of the questionnaire and its lay-out. To our knowledge, limited attention has so far been given to guidelines concerning the lay-out and wording of the welcome screen. For example, Dillman's (2007, page 377) recommendation on how to construct an effective welcome screen only highlights that this particular page has to be motivational, emphasize the ease of responding, and instruct respondents on how to proceed to the next page. Detailed and practical instructions on how to design an effective welcome screen are lacking. This is surprising because most respondents who drop out do so after the first screen (i.e., the so-called unit nonresponders) (Couper 2008; Bosnjak and Tuten 2001). Moreover, this particular splash screen provides the potential respondent with a first impression of the survey: it evokes emotions towards the questionnaire that might induce respondents not only to start the Web survey, but also to provide answers faster, to overlook imperfections of the design, and perhaps even to answer more honestly (Dillman, Gertseva and Mahon-Haft, 2005). Furthermore, it is a matter of aesthetics, as visual traits shape an individual's feelings and emotional reactions. In Web surveys, the welcome screen is the first visual contact with the respondent, and its lay-out therefore affects the respondents' feelings toward the whole survey. It can even be assumed that an appealing survey design can distract from the poor quality of the questionnaire itself (Mahon-Haft and Dillman 2010).
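This risk can be expressed with the standard approximation for the bias of the respondent mean (see, e.g., Groves et al. 2004):

\[
\mathrm{Bias}(\bar{y}_r) \;\approx\; \frac{n_{nr}}{n}\,\bigl(\bar{y}_r - \bar{y}_{nr}\bigr),
\]

where $\bar{y}_r$ is the mean among respondents, $\bar{y}_{nr}$ the mean among nonrespondents (including breakoffs), and $n_{nr}/n$ the nonresponse rate. The bias thus grows with the nonresponse rate, but only to the extent that respondents and nonrespondents differ on the variable of interest.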

To fill this gap in empirical research, we test the effect of three design factors embedded in the welcome screen on the breakoff rates of a Web survey. These factors are: the use of background color, the number of words devoted to explaining privacy rights and data security to the potential respondent, and the announced length of the Web survey. These three elements were chosen because they are not only essential elements of welcome screens but also shape the first impression respondents get of the survey.
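To illustrate how such a manipulation might be implemented in practice, the sketch below randomly assigns respondents to the cells of a full factorial design. The factor levels shown are hypothetical placeholders, not the exact experimental conditions of this study.

    import itertools
    import random

    # Hypothetical levels for the three welcome-screen factors;
    # the actual experimental conditions may differ.
    FACTORS = {
        "background": ["plain_white", "long_wavelength_color"],
        "privacy_text": ["short", "long"],
        "announced_length": ["10 minutes", "20 minutes"],
    }

    # Full 2 x 2 x 2 factorial: every combination of factor levels.
    CONDITIONS = [dict(zip(FACTORS, levels))
                  for levels in itertools.product(*FACTORS.values())]

    def assign_condition(respondent_id: int) -> dict:
        """Randomly assign one welcome-screen condition to a respondent."""
        rng = random.Random(respondent_id)  # seeded so assignment is reproducible
        return rng.choice(CONDITIONS)

    for rid in range(3):
        print(rid, assign_condition(rid))

Seeding the random generator with the respondent identifier keeps the assignment stable if the welcome screen is reloaded, while still spreading respondents across all eight cells.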

2.1 Background color

Unlike paper surveys, the Web opens up a wide range of visual possibilities. This visual potential matters because respondents attend to many features of survey questions, not just the words that convey the question or their literal meaning (Tourangeau, Couper and Conrad, 2007). These nonverbal features comprise numeric, symbolic, and graphic language (Redline and Dillman 2002; Dillman 2007). Because numbers and symbols are hardly ever used in designing effective welcome screens, the graphical nonverbal elements (i.e., brightness, size, shape, spatial arrangement, contrast, figure/ground, and even color) might play an important role in increasing response rates.

With the flexibility of the Web, it is simple, for example, for the designer of the survey to combine text and background in a variety of colors (Hall and Hanna 2004). As a result, a myriad of different color combinations has proliferated in Web surveys. The choice of a particular color affects the visual contrast of the verbal information presented on the colored background, which is partly determined by the colors' wavelengths. Saturated colors of different wavelengths, for example, need to be focused at different depths behind the lens of the eye, which can lead to visual fatigue (Couper 2008, page 164). In addition, research has shown that respondents tend to find short-wavelength colors (blues and greens) more pleasant than long-wavelength colors (reds and yellows) (Hall and Hanna 2004). Pope and Baker (2005), for example, used either a blue or a pink background in a survey of college students on alcohol-related issues. The survey with the blue background took less time to complete (although the difference was not statistically significant).

Besides wavelength, colors may also direct communication in other ways. Color has meaning, whether through cultural conventions, learned associations, or the actions associated with color in the instrument itself (Couper 2008, page 168). That is to say, color can affect respondents emotionally. The color red, for example, is often associated with danger or heat, especially when paired with blue for cold (see, for example, Gorn, Chattopadhyay, Yi and Dahl, 1997). A few studies have focused on respondents' emotions when filling out colored questionnaires. Weller and Livingston (1988), for example, found that the color of the questionnaire did indeed affect the responses received: the color pink produced a less emotional response than the color blue.

Some studies have examined the influence of color on response rates. Etter, Cucherat and Perneger (2002), for example, concluded in their meta-analysis of 10 experimental studies that printing questionnaires on colored paper does not substantially influence the speed of response or the proportion of missing items. More importantly, when all colors (blue, green, or yellow) were pooled, the meta-analysis found no statistically significant effect of colored paper (versus white paper) on the response rate; the only color with a minor effect compared to white was pink. Studies examining the influence of color on response rates in self-administered Web surveys likewise show that background color may have some effect, although not in all cases and not always a large one (Couper 2008). Dillman, Conradt and Bowker (1998) and Hall and Hanna (2004), for example, show that black letters on a white background are the most effective design in terms of response rates. This is confirmed by a recent meta-analysis of mail surveys by Edwards, Roberts, Clarke, Diguiseppi, Wentz, Kwan, Cooper, Felix and Pratap (2009), who found that the odds of response increased by a third with a white background. Extending this argument to the use of color on welcome screens, we expect that respondents receiving a welcome screen with long-wavelength colors, which evoke a negative emotional response, will have a higher breakoff rate than respondents receiving a plain welcome screen without many colors.
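As a worked illustration of what an odds increase of one third means in probability terms (the baseline response probability below is hypothetical), recall that

\[
\mathrm{odds} = \frac{p}{1-p}, \qquad p = \frac{\mathrm{odds}}{1+\mathrm{odds}}.
\]

If the response probability on a colored background were 0.50 (odds of 1.0), an odds ratio of about 1.33 for a white background would imply odds of 1.33 and hence a response probability of $1.33/2.33 \approx 0.57$.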

2.2 Privacy rights

One possible explanation for why Web surveys suffer from lower response rates than other modes of survey research relates to confidentiality concerns about electronic mail and the Web in general (Couper 2000). Although self-administered Web surveys can collect sensitive information with less social desirability bias, concerns about the security of the Web may negate this benefit, potentially producing higher nonresponse rates (or less honest reporting).

It is therefore no surprise that most Web survey researchers, depending on the legal regulations of their country, tell potential respondents what will be done with the information they provide. In addition, they emphasize the voluntary nature of the survey and often give assurances that respondents' names will never be matched with the results in any way. These privacy rights are mentioned not only in the invitation e-mail but also on the welcome screen, which plays an important part in reassuring respondents and motivating them to start the Web survey.

A few studies have examined the effect of privacy and confidentiality assurances on the response rate. Most of the early studies concerned the U.S. household decennial census and were based on the assumption that privacy assurances were a 'good' thing: that they increase response rates by overcoming respondents' concerns (Singer, Hippler and Schwarz 1992, page 258; Singer, Von Thurn and Miller 1995, pages 66-67). However, these early studies showed the contrary: such assurances reduced the willingness to participate (Fay, Bates and Moore 1991; Singer, Mathiowetz and Couper 1993; Singer, Van Hoewyk and Neugebauer 2003; Hillygus, Nie, Prewitt and Pals 2006). For example, Singer et al. (1992) demonstrated that mentioning privacy rights negatively affects response, whether measured as item nonresponse, unit nonresponse, response rate, or response quality. These studies uncovered unanticipated consequences: assurances of confidentiality and privacy protection might actually increase participants' concerns about the survey content. Such assurances seem to change respondents' perception of the threat posed by the survey: they suggest that it might contain unpleasant, difficult, or even embarrassing questions. In other words, these guarantees produce a priming effect; they activate the concept of confidentiality and privacy rights in the respondent's memory, which is then given increased weight in the subsequent decision whether or not to participate. Extending this to the mention of privacy rights on the welcome screen, we expect that the more words researchers use to explain these rights, the more likely potential respondents are to become aware of possible problems with the issue, and the less likely they are to start the Web survey. However, we have no clear expectations concerning the influence of stressing privacy rights on breakoff rates during the Web survey.

2.3 Announced length

The decision to fill out a Web survey and to carry it through to the end is influenced to a great extent by the effort required of the respondent (Vicente and Reis 2010). This effort is partly determined by the perceived length of the survey (Bradburn 1978). Common sense suggests that longer surveys increase the perceived costs of participation and make it more likely that people will break off the survey prematurely.

Several studies have examined the effect of questionnaire length on response rates in Web surveys, with mixed results. The meta-analysis by Cook, Heath and Thompson (2000), for example, found no significant correlation between questionnaire length and response rates in Web surveys. However, questionnaire length was found to affect response rates in subsequent studies (Vicente and Reis 2010, page 256). For example, Deutskens et al. (2004) and Ganassali (2008) found that the breakoff rate was higher in the long version of their Web surveys than in the short version. Marcus, Bosnjak, Linder, Pilishenko and Schütz (2007) also tested the relationship between the length of the Web survey and response rates in a field experiment. They found a significant effect: 30.8% responded to the short survey but only 18.6% to the longer one. This strong effect remained significant across several other models in which they controlled for alternative explanations, such as the salience of the survey topic or the use of incentives.
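As a rough illustration of how such a difference can be tested, the sketch below runs a two-proportion z-test on the reported rates. The group sizes are hypothetical, since they are not reported here, so the exact z and p values are illustrative only.

    # Minimal sketch: two-proportion z-test on 30.8% vs. 18.6% response,
    # assuming hypothetical group sizes of 1,000 invitations each.
    from statsmodels.stats.proportion import proportions_ztest

    responded = [308, 186]   # number of respondents per group (assumed)
    invited = [1000, 1000]   # invitations per group (assumed)

    z, p = proportions_ztest(responded, invited)
    print(f"z = {z:.2f}, p = {p:.4g}")  # a gap this large yields z of about 6, far beyond conventional thresholds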

A related issue is the a priori announcement of questionnaire length. The relationship between this announcement and the response rate, however, has more to do with the perceived length than with the actual length of the survey. The announced length is also an indicator of the respondent's anticipated burden and influences the decision to participate and to continue participating. A few studies have experimented with this announcement. For example, Crawford et al. (2001) conducted an experiment to evaluate whether announcing the questionnaire length in advance would affect the percentage of people who begin the survey, and whether breakoffs would be higher when the survey took longer than the promised completion time. As hypothesized, the authors found that respondents who were told the survey would take only eight to ten minutes to complete had a lower overall nonresponse rate than those who were told it would take 20 minutes. However, the 20-minute group had a lower rate of breakoffs once they started the survey. Similar results were found in other studies, such as those of Hogg and Mill (2003), Baker-Prewitt (2003), and Galesic (2006).
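To keep the two quantities in this comparison distinct, they can be written out formally: with $I$ invited persons, of whom $S$ start the survey and $C$ complete it,

\[
\text{unit nonresponse rate} = 1 - \frac{S}{I}, \qquad
\text{breakoff rate} = \frac{S - C}{S}, \qquad
\text{overall nonresponse rate} = 1 - \frac{C}{I}.
\]

In Crawford et al.'s experiment, the shorter announcement lowered the overall nonresponse rate (more people started), while the 20-minute announcement lowered the breakoff rate among those who had started.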

The literature on the effect of the announced length on response rates is closely related to the discussion of the advantages and disadvantages of using a progress indicator (see, for example, Galesic and Bosnjak 2009; Heerwegh 2004). Yan, Conrad, Tourangeau and Couper (2010), for example, found that the effect of a progress indicator depends on respondents' expectations and the degree to which they are met: the presence of a progress indicator led to fewer breakoffs when respondents expected a short task based on the invitation and when the questionnaire was indeed shorter than they had expected.

In accordance with Crawford et al. (2001), and unlike for the other two design factors, we expect that the length announced on the welcome screen influences not only initial nonresponse but also the breakoff rate later in the survey. To be more precise, we expect that fewer respondents will start a Web survey when the announced length on the welcome screen is longer. In addition, those respondents who do start are less likely to drop out, since the real length of the survey will hardly exceed the duration they expect.
