4 Results

Roos Haer and Nadine Meidert


Before presenting the statistical results, we first compared the composition of our sample (i.e., the students who participated in the survey) with that of the population (i.e., all students who received an invitation). The invitation was sent to 2,629 students, of whom just over 47 percent were male. Of those who participated (1,419), just over 44 percent were male, so response behavior does not seem to differ much between men and women. Some differences are observable, however, when comparing the faculties. Whereas the Science faculty seems to be adequately represented in the sample (around 30 percent in both the sample and the population), the faculty of Humanities seems to be overrepresented (43 percent in the sample versus 29 percent in the population) and the faculty of Politics, Law and Economics underrepresented (25 percent in the sample versus 40 percent in the population).
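The authors do not report a formal test of these differences. A minimal sketch of how the faculty shares could be checked, assuming a simple one-sample proportion z-test on the rounded shares quoted above (the function name `prop_ztest` is ours, and treating the population share as a fixed benchmark ignores the overlap between sample and population):

```python
import math

def prop_ztest(p_hat: float, p0: float, n: int) -> float:
    """z statistic for a sample share p_hat against a benchmark
    share p0, using the normal approximation."""
    se = math.sqrt(p0 * (1.0 - p0) / n)
    return (p_hat - p0) / se

# Rounded shares from the text: Humanities is 43% of the 1,419
# respondents but 29% of the invited population.
z_humanities = prop_ztest(0.43, 0.29, 1419)
print(round(z_humanities, 1))  # far beyond the usual |z| > 1.96 cutoff
```

A statistic of this magnitude is consistent with the clear overrepresentation of the Humanities faculty described above.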

Table 4.1 displays, per treatment group, the breakoff rates and the absolute number of students who participated in the Web survey. Notice that 83 students broke off directly after seeing the welcome screen, while 218 others broke off on later pages of the Web survey, for an overall total of 301 breakoffs. This descriptive table also shows that the breakoff rates (whether directly after the welcome screen or on all other pages) are generally lower for respondents who received a welcome screen that did not emphasize privacy rights and that announced the shorter (eight-minute) survey duration. The influence of the color of the welcome screen on the breakoff rates, however, seems to be mixed.

Table 4.1
Breakoff in the different experimental groups

                         Breakoff on        Overall            Total respondents
                         welcome screen     breakoff rate      per treatment group
                         n       %          n       %
White, short, link        2      1.07       35     18.72       187
White, short, screen      4      2.30       31     17.83       174
White, long, link        15      7.89       49     25.79       190
White, long, screen      18     10.40       50     28.90       173
Red, short, link          3      1.79       18     10.71       168
Red, short, screen       12      6.78       35     19.77       177
Red, long, link          13      7.10       37     20.22       183
Red, long, screen        16      9.58       46     27.54       167
Total n                  83                301                1,419
Mean of n (Std. Dev.)    10.4 (6.4)        37.6 (10.7)        177.4 (8.5)
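The percentage columns in Table 4.1 are simply the breakoff counts divided by each group's total number of respondents. A quick sketch reproducing three rows of the table from the raw counts:

```python
# Counts from Table 4.1: (welcome-screen breakoffs, overall breakoffs,
# total respondents) per treatment group; three groups shown.
groups = {
    "White, short, link":  (2, 35, 187),
    "White, long, screen": (18, 50, 173),
    "Red, long, screen":   (16, 46, 167),
}

for name, (welcome, overall, n) in groups.items():
    pct_welcome = round(100 * welcome / n, 2)  # welcome-screen breakoff %
    pct_overall = round(100 * overall / n, 2)  # overall breakoff %
    print(name, pct_welcome, pct_overall)
```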

To test whether the patterns observed in Table 4.1 are robust and statistically significant, we conducted Logit regressions with three different dependent variables: first, a dichotomous variable indicating whether the respondent broke off directly on the welcome screen (coded as 1, and 0 otherwise); second, a dichotomous variable taking the value of 1 if the respondent broke off on any page other than the welcome screen (and 0 otherwise); and third, a dichotomous variable measuring whether the respondent broke off on any page of the survey, the welcome screen included (coded as 1, and 0 otherwise). In the case of the latter two measures, however, we cannot clearly determine whether an observed effect is primarily due to the welcome screen or to an interaction between the welcome screen and the Web survey as a whole (or particular pages).
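As a sketch of how the three dependent variables relate, assume a hypothetical per-respondent record of the page on which breakoff occurred (the variable name `breakoff_page` and the convention that page 1 is the welcome screen and `None` marks a completer are ours, not the authors'):

```python
def code_outcomes(breakoff_page):
    """Derive the three dichotomous dependent variables from a single
    hypothetical breakoff-page record: None = completed the survey,
    1 = broke off on the welcome screen, >1 = broke off later."""
    on_welcome = 1 if breakoff_page == 1 else 0
    on_other = 1 if (breakoff_page is not None and breakoff_page > 1) else 0
    any_page = 1 if breakoff_page is not None else 0
    return on_welcome, on_other, any_page

print(code_outcomes(1))     # (1, 0, 1)  welcome-screen breakoff
print(code_outcomes(7))     # (0, 1, 1)  breakoff on a later page
print(code_outcomes(None))  # (0, 0, 0)  completed the survey
```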

The results of the Logit regressions are presented in Table 4.2. The second column of this table presents the effects of the different treatments on the likelihood that respondents break off on the welcome screen. The third column shows the impact of the different treatments on the breakoff likelihood during the survey, excluding those respondents who broke off on the welcome screen. The fourth column looks at the effect of the design features on the overall breakoff likelihood. We also estimated the models including all possible interaction effects between the experimental variables; however, the results did not clearly show that a combination of treatments consistently strengthens the effects. Furthermore, we included interaction effects between the experimental variables and subgroup variables such as gender or faculty. Since unambiguous subgroup differences could not be identified, or the variation within the subgroups was too small to estimate the model, the results of these interactions are likewise not presented; in the following discussion we present only the parsimonious models.

Table 4.2
Logit regression

                                               (1) Breakoff on     (2) Breakoff on any page   (3) Breakoff at any
                                               the welcome screen  except the welcome screen  time of the survey
Background color: red                           0.17 (0.23)        -0.33** (0.15)             -0.20 (0.13)
Announced duration: 20 minutes                  1.15*** (0.26)      0.23 (0.15)                0.53*** (0.13)
Data security information: available via link  -0.52** (0.23)     -0.14 (0.15)               -0.28** (0.13)
Constant                                       -3.34*** (0.27)    -1.61*** (0.15)            -1.37*** (0.13)
N                                               1,419               1,419                      1,419
Pseudo R-squared                                0.04                0.01                       0.02
Prob > chi-squared                              0.00                0.05                       0.00
Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1
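To give the coefficients a concrete scale, the overall-breakoff model in column (3) can be turned into predicted breakoff probabilities via the logistic function. This is only an illustration using the rounded coefficients from Table 4.2, and it ignores the fact that the color coefficient is not statistically significant:

```python
import math

def logistic(x: float) -> float:
    """Inverse of the logit link: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Rounded coefficients from column (3) of Table 4.2 (overall breakoff).
CONST, RED, LONG, LINK = -1.37, -0.20, 0.53, -0.28

# Predicted overall breakoff probability for two welcome-screen designs.
best = logistic(CONST + RED + LINK)   # red, 8 minutes, privacy via link
worst = logistic(CONST + LONG)        # white, 20 minutes, privacy on screen
print(round(best, 3), round(worst, 3))
```

These fitted values bracket the observed group rates in Table 4.1 reasonably well, though the model without interactions does not reproduce each cell exactly.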

In line with previous research, we expected respondents who received a red welcome screen to be more likely to break off than those who received a white screen. Although the positive Logit coefficient in the second column indicates that the red background is indeed associated with more breakoffs on the welcome screen, this relationship is not statistically significant. There is, however, a statistically significant negative effect of the red welcome screen on the breakoff rate on any screen except the welcome screen; that is, the combination of a red welcome screen and the white screens of the rest of the questionnaire seems to encourage participants to continue. When looking at the overall breakoff rate, the coefficient presented in the fourth column suggests that the color of the welcome screen has no significant effect. This observation indicates that the welcome screen, although important, is just one screen of the Web survey. As one of our pre-testers suggested, it might be that the color red evokes such negative feelings that respondents immediately click onward without looking at the screen. We tested this idea with an Ordinary Least Squares (OLS) regression of the time spent on the welcome screen on the color treatment, with the other treatments as controls. The results (available upon request), however, did not reveal any statistically significant effect. Note that the Pseudo R-squared values reported across all models are quite low. This is, however, a common result for Logit regressions analyzing experimental outcomes: for example, Marcus et al. (2007) report a Nagelkerke's R-squared of 0.041, and Bandilla, Couper and Kaczmirek (2012) a Pseudo R-squared of 0.05.

We also suspected that respondents receiving a welcome screen announcing that the survey takes 20 minutes would be less likely to start than those who received a welcome screen stating that the Web survey would take only eight minutes. This theoretical expectation is supported by a positive and statistically significant Logit coefficient of 1.15. Additionally, we assumed that those respondents who did start the 'long' survey would be less inclined to break off during the Web survey, but we found no support for this hypothesis: the non-significant coefficient of 0.23 means that the breakoff rates on any screen except the first page do not differ significantly between those who were announced an 'eight-minute' survey and those who were told the survey would take 20 minutes to complete. In sum, the positive and significant coefficient of 0.53 in the fourth column indicates that respondents who received a welcome screen stating that the Web survey would take 20 minutes are more likely to break off from the survey than those who were told it would take only eight minutes. Overall, the coefficients of the announced duration are the largest across the different models; that is, the announced length on the welcome screen is the most important factor explaining the breakoff rates. This result is in line with the study by Galesic and Bosnjak (2009), who found that the longer the stated length, the fewer respondents started and completed the questionnaire.
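A hedged way to read the size of these duration effects is through odds ratios: exponentiating a Logit coefficient gives the multiplicative change in the odds of breaking off when the announced duration is 20 rather than eight minutes (using the rounded coefficients from Table 4.2):

```python
import math

# exp(beta) = odds ratio for the 20-minute announcement.
or_welcome = math.exp(1.15)  # welcome-screen breakoff, column (1)
or_overall = math.exp(0.53)  # overall breakoff, column (3)
print(round(or_welcome, 2), round(or_overall, 2))  # 3.16 1.7
```

Taken at face value, announcing 20 minutes roughly triples the odds of breaking off on the welcome screen and raises the odds of breaking off overall by about 70 percent.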

The last design feature we varied on the welcome screen was the amount of emphasis placed on respondents' privacy rights. We expected that the more these rights were emphasized, the more respondents would become aware of possible problems with these rights, and the less likely they would be to start the Web survey in the first place. The results of the Logit models support this idea. The negative coefficient of -0.52 indicates that when the privacy rights are explained via a link on the Web survey (only six respondents actually opened this hyperlink), i.e., when few words are spent explaining these rights on the screen itself, the breakoff rate on the welcome screen decreases. In other words, priming the privacy rights on the welcome screen increases nonresponse. In addition, explaining the privacy rights in more depth on the welcome screen also influences the breakoff rates over the complete survey. However, we cannot be sure whether the decline in response is due to the emphasis on privacy rights as such or to the length of the welcome screen (explaining the privacy rights on the welcome screen resulted in a longer screen). Further research should try to disentangle these two related processes.

