
Results

All (17) (results 1 to 10 of 17)

  • Articles and reports: 11-522-X200800010920
    Description:

    On behalf of Statistics Canada, I would like to welcome you all, friends and colleagues, to Symposium 2008. This is the 24th International Symposium on survey methodology organized by Statistics Canada.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010946
    Description:

    In the mid-1990s, the first question testing unit was set up in the UK Office for National Statistics (ONS). The key objective of the unit was to develop and test the questions and questionnaire for the 2001 Census. Since the establishment of this unit, the area has been expanded into a Data Collection Methodology (DCM) Centre of Expertise, which now sits in the Methodology Directorate. The DCM centre has three branches which support DCM work for social surveys, business surveys, the Census and external organisations.

    In the past ten years DCM has achieved a great deal. For example, it has introduced survey methodology involvement in the development and testing of business survey questions and questionnaires; introduced a mixed-method approach to the development of questions and questionnaires; developed and implemented standards, e.g. for the 2011 Census questionnaire and showcards; and developed and delivered DCM training events.

    This paper will provide an overview of data collection methodology at the ONS from the perspective of achievements and challenges. It will cover areas such as methods, staff (e.g. recruitment, development and field security), and integration with the survey process.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010951
    Description:

    Missing values caused by item nonresponse represent one type of non-sampling error that occurs in surveys. When cases with missing values are discarded in statistical analyses, estimates may be biased because of differences between responders with missing values and responders without them. Also, when variables in the data have different patterns of missingness among sampled cases, and cases with missing values are discarded, those analyses may yield inconsistent results because they are based on different subsets of sampled cases that may not be comparable. However, analyses that discard cases with missing values may be valid provided those values are missing completely at random (MCAR). Are those missing values MCAR?

    To compensate, missing values are often imputed or survey weights are adjusted using weighting class methods. Subsequent analyses based on those compensations may be valid provided that missing values are missing at random (MAR) within each of the categorizations of the data implied by the independent variables of the models that underlie those adjustment approaches. Are those missing values MAR?

    Because missing values are not observed, MCAR and MAR assumptions made by statistical analyses are infrequently examined. This paper describes a selection model from which statistical significance tests for the MCAR and MAR assumptions can be examined although the missing values are not observed. Data from the National Immunization Survey conducted by the U.S. Department of Health and Human Services are used to illustrate the methods.

    Release date: 2009-12-03
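The abstract above contrasts the MCAR and MAR assumptions. Its selection-model test is not reproduced here, but the intuition behind simpler MCAR screens can be sketched: under MCAR, cases with and without missing values should look alike on fully observed covariates. A minimal illustration with a hypothetical data layout and a pooled-variance two-sample t statistic (a crude screen, not the authors' selection model):

```python
import math
from statistics import mean, variance

def mcar_check(records):
    """Compare an always-observed covariate x between cases where y is
    missing and cases where y is observed. Under MCAR the two groups
    should look alike, so a large |t| casts doubt on the assumption.
    Uses a pooled-variance two-sample t statistic."""
    x_obs = [r["x"] for r in records if r["y"] is not None]
    x_mis = [r["x"] for r in records if r["y"] is None]
    n1, n2 = len(x_obs), len(x_mis)
    # pooled sample variance of the two groups
    sp2 = ((n1 - 1) * variance(x_obs) + (n2 - 1) * variance(x_mis)) / (n1 + n2 - 2)
    return (mean(x_obs) - mean(x_mis)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# y is missing exactly when x > 5: missingness clearly depends on x,
# so the statistic is far from zero and MCAR looks implausible.
recs = [{"x": i, "y": i if i <= 5 else None} for i in range(1, 11)]
t = mcar_check(recs)  # -5.0 for this constructed example
```

A real test would also need the survey weights and a proper reference distribution; this only conveys the comparison being made.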

  • Articles and reports: 11-522-X200800010957
    Description:

    Business surveys differ from surveys of populations of individual persons or households in many respects. Two of the most important differences are (a) that respondents in business surveys do not answer questions about characteristics of themselves (such as their experiences, behaviours, attitudes and feelings) but about characteristics of organizations (such as their size, revenues, policies, and strategies) and (b) that they answer these questions as informants for those organizations. Academic business surveys also differ in many respects from other business surveys, such as those of national statistical agencies. The most important difference is that academic business surveys usually aim not at generating descriptive statistics but at testing hypotheses, i.e. relations between variables. Response rates in academic business surveys are very low, which implies a huge risk of non-response bias. Usually no attempt is made to assess the extent of non-response bias, and published survey results might therefore not correctly reflect actual relations within the population, which in turn increases the likelihood that the reported test result is not correct.

    This paper provides an analysis of how (the risk of) non-response bias is discussed in research papers published in top management journals. It demonstrates that non-response bias is not assessed to a sufficient degree and that, if attempted at all, correction of non-response bias is difficult or very costly in practice. Three approaches to dealing with this problem are presented and discussed: (a) obtaining data by means other than questionnaires; (b) conducting surveys of very small populations; and (c) conducting surveys of very small samples.

    The paper discusses why these approaches are appropriate means of testing hypotheses in populations, as well as the trade-offs involved in selecting an approach.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010971
    Description:

    Keynote address

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010989
    Description:

    At first sight, web surveys seem to be an interesting and attractive means of data collection. They provide simple, cheap and fast access to a large group of people. However, web surveys also suffer from methodological problems. Outcomes of web surveys may be severely biased, particularly if self-selection of respondents is applied instead of proper probability sampling. Under-coverage is also a serious problem. This raises the question of whether web surveys can be used for data collection in official statistics. This paper addresses the problems of under-coverage and self-selection in web surveys, and attempts to describe how Internet data collection can be incorporated into the normal data collection practices of official statistics.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010990
    Description:

    The purpose of the Quebec Health and Social Services User Satisfaction Survey was to provide estimates of user satisfaction for three types of health care institutions (hospitals, medical clinics and CLSCs). Since a user could have visited one, two or all three types, and since the questionnaire could cover only one type, a procedure was established to select the type of institution at random. The selection procedure, which required variable selection probabilities, was unusual in that it was adjusted during the collection process to adapt increasingly to regional disparities in the use of health and social services.

    Release date: 2009-12-03
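The random selection of one institution type per user, with variable selection probabilities, can be sketched as follows. The type names and probability values here are illustrative only; the survey's actual probabilities were adjusted during collection to reflect regional usage.

```python
import random

def pick_institution_type(visited, probs):
    """visited: list of institution types the user reported visiting
    (e.g. a subset of hospital / clinic / CLSC).
    probs: current selection probability assigned to each type.
    Renormalizes over the types actually visited and draws one, so the
    questionnaire covers a single type per respondent."""
    weights = [probs[t] for t in visited]
    return random.choices(visited, weights=weights, k=1)[0]

# Hypothetical current probabilities (not from the survey):
probs = {"hospital": 0.5, "clinic": 0.3, "CLSC": 0.2}
choice = pick_institution_type(["hospital", "clinic"], probs)
```

In the survey itself these probabilities were updated during collection, which is what makes later weighting non-trivial.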

  • Articles and reports: 11-522-X200800010992
    Geography: Canada
    Description:

    The Canadian Community Health Survey (CCHS) was redesigned in 2007 so that it could use the continuous data collection method. Since then, a new sample has been selected every two months, and the data have also been collected over a two-month period. The survey uses two collection techniques: computer-assisted personal interviewing (CAPI) for the sample drawn from an area frame, and computer-assisted telephone interviewing (CATI) for the sample selected from a telephone list frame. Statistics Canada has recently implemented some data collection initiatives to reduce the response burden and survey costs while maintaining or improving data quality. The new measures include the use of a call management tool in the CATI system and a limit on the number of calls. They help manage telephone calls and limit the number of attempts made to contact a respondent. In addition, with the paradata that became available very recently, reports are now being generated to assist in evaluating and monitoring collection procedures and efficiency in real time. The CCHS has also been selected to implement further collection initiatives in the future. This paper provides a brief description of the survey, explains the advantages of continuous collection and outlines the impact that the new initiatives have had on the survey.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010993
    Description:

    Until now, years of experience in questionnaire design were required to estimate how long it would take a respondent, on average, to complete a CATI questionnaire for a new survey. This presentation focuses on a new method which produces interview time estimates for questionnaires at the development stage. The method uses Blaise Audit Trail data and previous surveys. It was developed, tested and verified for accuracy on some large-scale surveys.

    First, audit trail data was used to determine the average time previous respondents have taken to answer specific types of questions. These would include questions that require a yes/no answer, scaled questions, "mark all that apply" questions, etc. Second, for any given questionnaire, the paths taken by population sub-groups were mapped to identify the series of questions answered by different types of respondents, and timed to determine what the longest possible interview time would be. Finally, the overall expected time it takes to complete the questionnaire is calculated using estimated proportions of the population expected to answer each question.

    So far, we have used paradata to estimate average respondent interview completion times accurately. We note that the method we developed could also be used to estimate interview completion times for specific respondents.

    Release date: 2009-12-03
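The three-step method above (per-question-type timings from audit trails, path mapping, population-weighted aggregation) reduces, in its final step, to a weighted sum. A minimal sketch, with made-up timing constants standing in for the audit-trail estimates:

```python
# Hypothetical average answer times in seconds by question type, as
# would be estimated from Blaise Audit Trail data of earlier surveys.
AVG_SECONDS = {"yes_no": 5.0, "scaled": 9.0, "mark_all": 14.0}

def expected_interview_time(questions):
    """questions: list of (question_type, proportion) pairs, where
    proportion is the share of respondents whose path reaches that
    question. Each question contributes its average answer time
    weighted by that share; the sum is the expected completion time."""
    return sum(AVG_SECONDS[qtype] * prop for qtype, prop in questions)

# Everyone answers the first question; routing sends 60% to the scaled
# question and 25% to the mark-all question.
path = [("yes_no", 1.0), ("scaled", 0.6), ("mark_all", 0.25)]
est = expected_interview_time(path)  # 13.9 seconds
```

The longest-path timing mentioned in the abstract is the same sum with every proportion set to 1 along a single route.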

  • Articles and reports: 11-522-X200800010996
    Description:

    In recent years, the use of paradata has become increasingly important to the management of collection activities at Statistics Canada. Particular attention has been paid to social surveys conducted over the phone, like the Survey of Labour and Income Dynamics (SLID). For recent SLID data collections, the number of call attempts was capped at 40 calls. Investigations of the SLID Blaise Transaction History (BTH) files were undertaken to assess the impact of the cap on calls. The first study was intended to inform decisions on capping call attempts; the second focused on the nature of nonresponse given the limit of 40 attempts.

    The use of paradata as auxiliary information for studying and accounting for survey nonresponse was also examined. Nonresponse adjustment models using different paradata variables gathered at the collection stage were compared to the current models based on available auxiliary information from the Labour Force Survey.

    Release date: 2009-12-03

Analysis (17) (results 11 to 17 of 17)

  • Articles and reports: 11-522-X200800010997
    Description:

    Over the past few years, Statistics Canada has conducted several analytical studies using paradata to learn more about various issues surrounding the data collection process and practices. In particular, these investigations have attempted to better understand how data collection progresses through its cycle, to identify strategic opportunities, to evaluate new collection initiatives and to improve the way the agency conducts and manages its surveys. The main objective of this paper is to present the results of these past and ongoing investigations, describing Statistics Canada's experience with paradata. Future research plans that focus on identifying viable operational strategies that could improve efficiency or data quality are also discussed.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800011005
    Description:

    In 2006 Statistics New Zealand started developing a strategy aimed at coordinating new and existing initiatives focused on respondent load. The development of the strategy lasted more than a year and the resulting commitment to reduce respondent load has meant that the organisation has had to confront a number of issues that impact on the way we conduct our surveys.

    The next challenge for Statistics NZ is the transition from the project based initiatives outlined in the strategy to managing load on an ongoing basis.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800011015
    Description:

    Statistics South Africa (StatsSA) prides itself in the accuracy and validity of data collected, processed and disseminated. The introduction of a Real Time Management System (RTMS) and the Global Positioning System (GPS) into field operations is aimed at enhancing the process of data collection and minimising errors with regard to locating sampled dwelling units and tracking material from one point in the survey chain to another.

    The Quarterly Labour Force Survey (QLFS) is a pioneering project at Stats SA in which the Master Sample (MS) is linked to a GPS database: every record in the MS listing book has a corresponding GPS coordinate captured for it. These GPS points allow the Survey Officer to record the spatial location of each listed record on the ground (e.g. shops, houses, schools, churches). The captured information is then linked to a shape file showing where the structures are on the ground in relation to the manual listing records.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800011016
    Description:

    Now that we have come to the end of a day of workshops plus three very full days of sessions, I have the very pleasant task of offering a few closing remarks and, more importantly, of recognizing the efforts of those who have contributed to the success of this year's symposium. And it has clearly been a success.

    Release date: 2009-12-03

  • Stats in brief: 13-605-X200900111029
    Description:

    Quarterly international merchandise trade statistics are published approximately six weeks after the reference period. Two weeks later, these data are incorporated into the Income and Expenditure Accounts, at which point they are subject to revision. This note outlines the primary sources of the revisions.

    Release date: 2009-11-19

  • Articles and reports: 12-001-X200900110881
    Description:

    Regression diagnostics are geared toward identifying individual points or groups of points that have an important influence on a fitted model. When fitting a model with survey data, the sources of influence are the response variable Y, the predictor variables X, and the survey weights, W. This article discusses the use of the hat matrix and leverages to identify points that may be influential in fitting linear models due to large weights or values of predictors. We also contrast findings that an analyst will obtain if ordinary least squares is used rather than survey weighted least squares to determine which points are influential.

    Release date: 2009-06-22
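For survey-weighted least squares the hat matrix is H = X(X'WX)^{-1}X'W, so the i-th leverage is h_i = w_i x_i'(X'WX)^{-1}x_i: a point can be influential through an extreme predictor value or a large weight. A minimal sketch for the intercept-plus-one-predictor case (not the article's code):

```python
def wls_leverages(x, w):
    """Diagonal of the hat matrix H = X (X'WX)^{-1} X'W for a model with
    an intercept and one predictor, where X'WX is the 2x2 matrix
    [[sum w, sum w*x], [sum w*x, sum w*x^2]].  Expanding
    h_i = w_i [1, x_i] (X'WX)^{-1} [1, x_i]' gives the closed form below.
    The leverages sum to 2, the number of fitted parameters."""
    s0 = sum(w)
    s1 = sum(wi * xi for wi, xi in zip(w, x))
    s2 = sum(wi * xi * xi for wi, xi in zip(w, x))
    det = s0 * s2 - s1 * s1  # determinant of X'WX
    return [wi * (s2 - 2 * s1 * xi + s0 * xi * xi) / det
            for wi, xi in zip(w, x)]

# The last point has an ordinary x value but a weight five times the
# others, and ends up with by far the largest leverage.
h = wls_leverages([1, 2, 3, 4], [1, 1, 1, 5])
```

Replacing W with the identity gives the ordinary-least-squares leverages, which is the contrast the article draws.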

  • Articles and reports: 12-001-X200900110888
    Description:

    In the selection of a sample, a current practice is to define a sampling design stratified on subpopulations. This reduces the variance of the Horvitz-Thompson estimator in comparison with direct sampling if the strata are highly homogeneous with respect to the variable of interest. If auxiliary variables are available for each individual, sampling can be improved through balanced sampling within each stratum, and the Horvitz-Thompson estimator will be more precise if the auxiliary variables are strongly correlated with the variable of interest. However, if the sample allocation is small in some strata, balanced sampling will be only very approximate. In this paper, we propose a method of selecting a sample that is balanced across the entire population while maintaining a fixed allocation within each stratum. We show that in the important special case of size-2 sampling in each stratum, the precision of the Horvitz-Thompson estimator is improved if the variable of interest is well explained by balancing variables over the entire population. An application to rotational sampling is also presented.

    Release date: 2009-06-22
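The abstract compares designs by the precision of the Horvitz-Thompson estimator. The balanced (cube-method) selection itself is beyond a short sketch, but the estimator being compared is simple: each sampled value is weighted by the inverse of its inclusion probability. A sketch under stratified simple random sampling, not the paper's balanced design:

```python
def horvitz_thompson(strata):
    """strata: list of (sampled_y_values, stratum_size_N) pairs, each
    stratum sampled by simple random sampling, so every unit in a
    stratum of size N with n sampled units has inclusion probability
    n / N.  The Horvitz-Thompson estimator of the population total
    weights each sampled y by 1 / pi_i."""
    total = 0.0
    for ys, N in strata:
        pi = len(ys) / N  # inclusion probability within the stratum
        total += sum(y / pi for y in ys)
    return total

# Two strata: 2 of 4 units sampled (pi = 1/2), then 1 of 10 (pi = 1/10).
est = horvitz_thompson([([10, 20], 4), ([5], 10)])  # (10+20)*2 + 5*10 = 110.0
```

Balanced sampling keeps this estimator unbiased while choosing samples whose HT estimates of the auxiliary totals match the known population totals, which is where the variance reduction comes from.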
