Keyword search

Results

All (30)

  • Notices and consultations: 92-140-X2016001
    Description:

    The 2016 Census Program Content Test was conducted from May 2 to June 30, 2014. The Test was designed to assess the impact of proposed content changes to the 2016 Census Program and to measure the effect of including a social insurance number (SIN) question on data quality.

    This quantitative test used a split-panel design involving 55,000 dwellings, divided into 11 panels of 5,000 dwellings each: five panels were dedicated to the Content Test, while the remaining six panels were for the SIN Test. Two models of test questionnaire were developed to meet the objectives: one with all the proposed changes EXCEPT the SIN question, and one with all the proposed changes INCLUDING the SIN question. A third, 'control' model with the 2011 content was also developed. The test targeted the population living in private dwellings in mail-out areas of the ten provinces, and both paper and electronic response channels were used.

    This report presents the Test objectives, the design and a summary of the analysis used to determine potential content for the 2016 Census Program. Results from the data analysis of the Test were not the only elements used to determine the content for 2016; other elements, such as response burden, comparability over time and users’ needs, were also considered.

    Release date: 2016-04-01
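
    As a quick illustration of the panel arithmetic above (a sketch in Python, not part of the report; only the panel counts and the 5,000-dwelling panel size come from the description):

        # Sketch of the split-panel allocation described in the abstract.
        PANEL_SIZE = 5_000                           # dwellings per panel
        panels = {"Content Test": 5, "SIN Test": 6}  # 11 panels in total

        dwellings = {name: n * PANEL_SIZE for name, n in panels.items()}
        print(dwellings)                             # {'Content Test': 25000, 'SIN Test': 30000}
        assert sum(dwellings.values()) == 55_000     # matches the 55,000 dwellings in the Test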

  • Articles and reports: 11-522-X200800010920
    Description:

    On behalf of Statistics Canada, I would like to welcome you all, friends and colleagues, to Symposium 2008. This is the 24th International Symposium on survey methodology organized by Statistics Canada.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010947
    Description:

    This paper addresses the efforts of the U.S. Energy Information Administration (EIA) to design, test and implement new and substantially redesigned surveys. The need to change EIA's surveys has become increasingly important as U.S. energy industries have moved from highly regulated to deregulated businesses, which has substantially affected both their ability and their willingness to report data. The paper focuses on how EIA has deployed current tools for designing and testing surveys, and on the reasons these methods have not always yielded the desired results. It suggests some new tools and methods that EIA would like to try in order to improve the quality of its data.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010950
    Description:

    The next census will be conducted in May 2011. Being a major survey, it presents a formidable challenge for Statistics Canada and requires a great deal of time and resources. Careful planning has been done to ensure that all deadlines are met. A number of steps have been planned in the questionnaire testing process. These tests apply to both census content and the proposed communications strategy. This paper presents an overview of the strategy, with a focus on combining qualitative studies with the 2008 quantitative study so that the results can be analyzed and the proposals properly evaluated.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010954
    Description:

    Over the past year, Statistics Canada has been developing and testing a new way to monitor the performance of interviewers conducting computer-assisted personal interviews (CAPI). A formal process already exists for monitoring centralized telephone interviews. Monitors listen to telephone interviews as they take place to assess the interviewer's performance using pre-defined criteria and provide feedback to the interviewer on what was well done and what needs improvement. For the CAPI program, we have developed and are testing a pilot approach whereby interviews are digitally recorded and later a monitor listens to these recordings to assess the field interviewer's performance and provide feedback in order to help improve the quality of the data. In this paper, we will present an overview of the CAPI monitoring project at Statistics Canada by describing the CAPI monitoring methodology and the plans for implementation.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010957
    Description:

    Business surveys differ from surveys of populations of individual persons or households in many respects. Two of the most important differences are (a) that respondents in business surveys do not answer questions about characteristics of themselves (such as their experiences, behaviours, attitudes and feelings) but about characteristics of organizations (such as their size, revenues, policies and strategies), and (b) that they answer these questions as an informant for that organization. Academic business surveys also differ in many respects from other business surveys, such as those of national statistical agencies. The most important difference is that academic business surveys usually do not aim at generating descriptive statistics but at testing hypotheses, i.e. relations between variables. Response rates in academic business surveys are very low, which implies a huge risk of non-response bias. Usually no attempt is made to assess the extent of non-response bias, and published survey results might therefore not be a correct reflection of actual relations within the population, which in turn increases the likelihood that the reported test result is not correct.

    This paper provides an analysis of how (the risk of) non-response bias is discussed in research papers published in top management journals. It demonstrates that non-response bias is not assessed to a sufficient degree and that, if attempted at all, correction of non-response bias is difficult or very costly in practice. Three approaches to dealing with this problem are presented and discussed: (a) obtaining data by means other than questionnaires; (b) conducting surveys of very small populations; and (c) conducting surveys of very small samples.

    The paper discusses why these approaches are appropriate means of testing hypotheses in populations, as well as the trade-offs involved in selecting an approach.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800011007
    Description:

    The Questionnaire Design Resource Centre (QDRC) is the focal point of expertise at Statistics Canada for questionnaire design and evaluation. As it stands now, cognitive interviewing to test questionnaires is most often done near the end of the questionnaire development process. By participating earlier in the process, the QDRC could test new survey topics using cognitive methods adapted to each step of questionnaire development. This would require fewer participants for each phase of testing, thus reducing both the cost and the recruitment challenge.

    Based on a review of the literature and Statistics Canada's existing questionnaire evaluation projects, this paper will describe how the QDRC could help clients in making appropriate improvements to their questionnaire in a timely manner.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800011008
    Description:

    In one sense, a questionnaire is never complete. Test results, paradata and research findings constantly provide reasons to update and improve the questionnaire. In addition, establishments change over time and questions need to be updated accordingly. In reality, it doesn't always work like this. At Statistics Sweden there are several examples of questionnaires that were designed at one point in time and rarely improved later on. However, we are currently trying to shift the perspective on questionnaire design from a linear to a cyclic one. We are developing a cyclic model in which the questionnaire can be improved continuously in multiple rounds. In this presentation, we will discuss this model and how we work with it.

    Release date: 2009-12-03

  • Articles and reports: 82-003-S200700010361
    Description:

    This article summarizes the background, history and rationale for the Canadian Health Measures Survey, and provides an overview of the objectives, methods and analysis plans.

    Release date: 2007-12-05

  • Articles and reports: 82-003-S200700010363
    Description:

    This overview describes the sampling strategy used to meet the collection and estimation requirements of the Canadian Health Measures Survey.

    Release date: 2007-12-05
Data (0)

No content available at this time.

Analysis (29)

  • Articles and reports: 82-003-S200700010364
    Description:

    This article describes how the Canadian Health Measures Survey has addressed the ethical, legal and social issues (ELSI) arising from the survey. The development of appropriate procedures and the rationale behind them are discussed in detail for some specific ELSI.

    Release date: 2007-12-05
Reference (1)

  • Notices and consultations: 92-140-X2016001
    Release date: 2016-04-01