Quality assurance

Results

All (250) (40 to 50 of 250 results)

  • Articles and reports: 11-522-X201300014288
    Description:

    Probability-based surveys, those with samples selected through a known randomization mechanism, are considered by many to be the gold standard, in contrast to non-probability samples. Probability sampling theory was first developed in the early 1930s and continues today to justify the estimation of population values from these data. Conversely, studies using non-probability samples have gained attention in recent years, but they are not new. Touted as cheaper and faster (some say even better) than probability designs, these surveys capture participants through various “on the ground” methods (e.g., opt-in web surveys). But which type of survey is better? This paper is the first in a series on the quest for a quality framework under which all surveys, probability- and non-probability-based, may be measured on a more equal footing. First, we highlight a few frameworks currently in use, noting that “better” is almost always relative to a survey’s fit for purpose. Next, we focus on the question of validity, particularly external validity when population estimates are desired. Estimation techniques used to date for non-probability surveys are reviewed, along with a few comparative studies of these estimates against those from a probability-based sample. Finally, the next research steps in the quest are described, followed by a few parting comments.

    Release date: 2014-10-31

  • Articles and reports: 11F0019M2013351
    Geography: Canada
    Description:

    Measures of subjective well-being are increasingly prominent in international policy discussions about how best to measure "societal progress" and the well-being of national populations. This has implications for national statistical offices, as calls have been made for them to include measures of subjective well-being in their household surveys (Organization for Economic Cooperation and Development 2013). Statistics Canada has included measures of subjective well-being, particularly life satisfaction, in its surveys for twenty-five years, although the wording of these questions and the response categories have evolved over time. Statistics Canada's General Social Survey (GSS) and Canadian Community Health Survey (CCHS) offer a valuable opportunity to examine the stability of life satisfaction responses and their correlates from year to year using a consistent analytical framework.

    Release date: 2013-10-11

  • Articles and reports: 82-003-X201300811857
    Geography: Canada
    Description:

    Using data from the Canadian Cancer Registry, vital statistics and population statistics, this study examines the assumption of stable age-standardized sex- and cancer-site-specific incidence-to-mortality rate ratios across regions, which underlies the North American Association of Central Cancer Registries' (NAACCR) completeness of case indicator.

    Release date: 2013-08-21

  • Survey Quality (archived)
    Articles and reports: 12-001-X201200211751
    Description:

    Survey quality is a multi-faceted concept that originates from two different development paths. One path is the total survey error paradigm, which rests on four pillars providing principles that guide survey design, survey implementation, survey evaluation, and survey data analysis. We should design surveys so that the mean squared error of an estimate is minimized given budget and other constraints. It is important to take all known error sources into account, to monitor major error sources during implementation, to periodically evaluate major error sources and combinations of these sources after the survey is completed, and to study the effects of errors on the survey analysis. In this context, survey quality can be measured by the mean squared error, controlled by observations made during implementation, and improved by evaluation studies. The paradigm has both strengths and weaknesses: one strength is that research can be defined by error sources; one weakness is that most total survey error assessments are incomplete, in the sense that it is not possible to include the effects of all the error sources.

    The second path is influenced by ideas from the quality management sciences. These sciences concern business excellence in providing products and services, with a focus on customers and competition from other providers. These ideas have had a great influence on many statistical organizations. One effect is the acceptance among data providers that product quality cannot be achieved without a sufficient underlying process quality, and process quality cannot be achieved without a good organizational quality. These levels can be controlled and evaluated by service level agreements, customer surveys, paradata analysis using statistical process control, and organizational assessment using business excellence models or other sets of criteria. All levels can be improved by conducting improvement projects chosen by means of priority functions.

    The ultimate goal of improvement projects is that the processes involved should gradually approach a state where they are error-free. Of course, this might be an unattainable goal, albeit one to strive for. It is not realistic to hope for continuous measurements of the total survey error using the mean squared error. Instead, one can hope that continuous quality improvement using management science ideas and statistical methods can minimize biases and other survey process problems, so that the variance becomes an approximation of the mean squared error. If that can be achieved, the two development paths approximately coincide.

    Release date: 2012-12-19
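The total survey error paradigm described in the abstract above rests on the standard decomposition of the mean squared error, which can be stated compactly (notation ours, for illustration; it does not appear in the abstract):

```latex
\operatorname{MSE}(\hat{\theta})
  = \operatorname{E}\!\left[(\hat{\theta}-\theta)^{2}\right]
  = \operatorname{Var}(\hat{\theta}) + \operatorname{Bias}(\hat{\theta})^{2}
```

The paper's closing hope, that continuous quality improvement drives biases toward zero so that the variance approximates the MSE, corresponds to the Bias term vanishing in this identity.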

  • Articles and reports: 12-001-X201200111680
    Description:

    Survey data are potentially affected by interviewer falsifications with data fabrication being the most blatant form. Even a small number of fabricated interviews might seriously impair the results of further empirical analysis. Besides reinterviews, some statistical approaches have been proposed for identifying this type of fraudulent behaviour. With the help of a small dataset, this paper demonstrates how cluster analysis, which is not commonly employed in this context, might be used to identify interviewers who falsify their work assignments. Several indicators are combined to classify 'at risk' interviewers based solely on the data collected. This multivariate classification seems superior to the application of a single indicator such as Benford's law.

    Release date: 2012-06-27
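The abstract above argues that a multivariate classification of interviewers outperforms any single indicator such as Benford's law. As a rough illustration of what one such single indicator looks like, the sketch below (interviewer labels and reported amounts are invented, not from the paper's dataset) scores each interviewer by the deviation of the leading digits of their reported values from the Benford distribution:

```python
import math
from collections import Counter

# Expected leading-digit frequencies under Benford's law: log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    """First significant digit of a number, or None if there is none."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0]) if s and s[0].isdigit() else None

def benford_distance(values):
    """Chi-square-style distance between the observed leading-digit
    frequencies and the Benford expectation (larger = more suspicious)."""
    digits = [d for d in (leading_digit(v) for v in values) if d]
    counts = Counter(digits)
    n = len(digits)
    return sum((counts.get(d, 0) / n - p) ** 2 / p for d, p in BENFORD.items())

# Hypothetical reported amounts for two interviewers: A looks natural,
# B's values cluster implausibly around a single leading digit.
amounts = {
    "A": [123, 187, 290, 1042, 15, 162, 233, 37, 118, 96, 140, 1710],
    "B": [555, 555, 565, 545, 550, 560, 555, 570, 540, 555],
}
scores = {name: benford_distance(vals) for name, vals in amounts.items()}
flagged = max(scores, key=scores.get)
print(flagged)  # → B
```

In practice, as the paper suggests, such a score would be only one of several indicators fed into a cluster analysis rather than a decision rule on its own.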

  • Articles and reports: 82-003-X201200111625
    Geography: Canada
    Description:

    This study compares estimates of the prevalence of cigarette smoking based on self-report with estimates based on urinary cotinine concentrations. The data are from the 2007 to 2009 Canadian Health Measures Survey, which included self-reported smoking status and the first nationally representative measures of urinary cotinine.

    Release date: 2012-02-15

  • Surveys and statistical programs – Documentation: 62F0026M2011001
    Description:

    This report describes the quality indicators produced for the 2009 Survey of Household Spending. These quality indicators, such as coefficients of variation, nonresponse rates, slippage rates and imputation rates, help users interpret the survey data.

    Release date: 2011-06-16
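Of the quality indicators listed in the report above, the coefficient of variation and the nonresponse rate are the most commonly cited. A minimal sketch of how they are computed (all figures hypothetical, not from the Survey of Household Spending):

```python
def coefficient_of_variation(estimate, std_error):
    """Relative standard error of a survey estimate, in percent.
    Larger values signal a less reliable estimate."""
    return 100.0 * std_error / estimate

def nonresponse_rate(sampled, responded):
    """Share of sampled units that did not respond, in percent."""
    return 100.0 * (sampled - responded) / sampled

# Hypothetical spending estimate and sample counts
cv = coefficient_of_variation(estimate=52000.0, std_error=1300.0)
nr = nonresponse_rate(sampled=20000, responded=17100)
print(round(cv, 1), round(nr, 1))  # → 2.5 14.5
```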

  • Articles and reports: 82-003-X201100111404
    Geography: Canada
    Description:

    This study assesses three child-reported parenting behaviour scales (nurturance, rejection and monitoring) in the National Longitudinal Survey of Children and Youth.

    Release date: 2011-02-16

  • Articles and reports: 82-003-X201000411391
    Geography: Canada
    Description:

    This analysis uses data from the Cognition Module of the 2009 Canadian Community Health Survey - Healthy Aging to validate a categorization of levels of cognitive functioning in the household population aged 45 or older.

    Release date: 2010-12-15

  • Surveys and statistical programs – Documentation: 62F0026M2010004
    Description:

    This report describes the quality indicators produced for the 2007 Survey of Household Spending. These quality indicators, such as coefficients of variation, nonresponse rates, slippage rates and imputation rates, help users interpret the survey data.

    Release date: 2010-12-13

Data (0) (0 results)

No content available at this time.


Analysis (171) (0 to 10 of 171 results)

  • Journals and periodicals: 75F0002M
    Description: This series provides detailed documentation on income developments, including survey design issues, data quality evaluation and exploratory research.
    Release date: 2024-04-26

  • Articles and reports: 13-604-M2024001
    Description: This documentation outlines the methodology used to develop the Distributions of household economic accounts published in January 2024 for the reference years 2010 to 2023. It describes the framework and the steps implemented to produce distributional information aligned with the National Balance Sheet Accounts and other national accounts concepts. It also includes a report on the quality of the estimated distributions.
    Release date: 2024-01-22

  • Articles and reports: 13-604-M2023001
    Description: This documentation outlines the methodology used to develop the Distributions of household economic accounts published in March 2023 for the reference years 2010 to 2022. It describes the framework and the steps implemented to produce distributional information aligned with the National Balance Sheet Accounts and other national accounts concepts. It also includes a report on the quality of the estimated distributions.
    Release date: 2023-03-31

  • Articles and reports: 13-604-M2022002
    Description:

    This documentation outlines the methodology used to develop the Distributions of household economic accounts published in August 2022 for the reference years 2010 to 2021. It describes the framework and the steps implemented to produce distributional information aligned with the National Balance Sheet Accounts and other national accounts concepts. It also includes a report on the quality of the estimated distributions.

    Release date: 2022-08-03

  • Articles and reports: 11-522-X202100100015
    Description: National statistical agencies such as Statistics Canada have a responsibility to convey the quality of statistical information to users. The methods traditionally used to do this are based on measures of sampling error. As a result, they are not adapted to the estimates produced using administrative data, for which the main sources of error are not due to sampling. A more suitable approach to reporting the quality of estimates presented in a multidimensional table is described in this paper. Quality indicators were derived for various post-acquisition processing steps, such as linkage, geocoding and imputation, by estimation domain. A clustering algorithm was then used to combine domains with similar quality levels for a given estimate. Ratings to inform users of the relative quality of estimates across domains were assigned to the groups created. This indicator, called the composite quality indicator (CQI), was developed and experimented with in the Canadian Housing Statistics Program (CHSP), which aims to produce official statistics on the residential housing sector in Canada using multiple administrative data sources.

    Keywords: Unsupervised machine learning, quality assurance, administrative data, data integration, clustering.

    Release date: 2021-10-22
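The CQI workflow described above clusters estimation domains with similar quality levels before assigning ratings. The abstract does not specify the clustering algorithm, so the toy sketch below (domain names and scores invented) merely illustrates the idea of rating groups of domains rather than individual estimates, by splitting sorted quality scores at their widest gap:

```python
def two_group_split(scores):
    """Split a {domain: error_score} dict into (better, worse) groups
    at the widest gap in the sorted scores — a crude stand-in for a
    proper clustering algorithm."""
    ordered = sorted(scores, key=scores.get)
    gaps = [scores[ordered[i + 1]] - scores[ordered[i]]
            for i in range(len(ordered) - 1)]
    cut = gaps.index(max(gaps)) + 1
    return ordered[:cut], ordered[cut:]

# Hypothetical per-domain error scores (higher = worse quality), e.g.
# summarizing linkage, geocoding and imputation error rates
domain_scores = {"Ontario": 0.04, "Quebec": 0.06,
                 "Yukon": 0.31, "Nunavut": 0.38}
good, poor = two_group_split(domain_scores)
print(good, poor)  # → ['Ontario', 'Quebec'] ['Yukon', 'Nunavut']
```

Ratings would then be attached to the groups, so users see the relative quality of estimates across domains at a glance.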

  • Articles and reports: 11-522-X202100100023
    Description:

    Our increasingly digital society provides multiple opportunities to maximise our use of data for the public good – using a range of sources, data types and technologies to enable us to better inform the public about social and economic matters and contribute to the effective development and evaluation of public policy. Ensuring use of data in ethically appropriate ways is an important enabler for realising the potential to use data for public good research and statistics. Earlier this year the UK Statistics Authority launched the Centre for Applied Data Ethics to provide applied data ethics services, advice, training and guidance to the analytical community across the United Kingdom. The Centre has developed a framework and portfolio of services to empower analysts to consider the ethics of their research quickly and easily, at the research design phase thus promoting a culture of ethics by design. This paper will provide an overview of this framework, the accompanying user support services and the impact of this work.

    Keywords: data ethics, data, research and statistics

    Release date: 2021-10-22

  • Articles and reports: 13-604-M2021001
    Description:

    This documentation outlines the methodology used to develop the Distributions of household economic accounts published in September 2021 for the reference years 2010 to 2020. It describes the framework and the steps implemented to produce distributional information aligned with the National Balance Sheet Accounts and other national accounts concepts. It also includes a report on the quality of the estimated distributions.

    Release date: 2021-09-07

  • Stats in brief: 89-20-00062020001
    Description:

    In this video, you will be introduced to the fundamentals of data quality, which can be summed up in six dimensions—or six different ways to think about quality. You will also learn how each dimension can be used to evaluate the quality of data.

    Release date: 2020-09-23

  • Stats in brief: 89-20-00062020008
    Description:

    Accuracy is one of the six dimensions of data quality used at Statistics Canada. Accuracy refers to how well the data reflect the truth or what actually happened. In this video, we will present methods for describing accuracy in terms of validity and correctness. We will also discuss methods for validating and checking the accuracy of data values.

    Release date: 2020-09-23

  • Articles and reports: 13-604-M2020002
    Description:

    This documentation outlines the methodology used to develop the Distributions of household economic accounts published in June 2020 for the reference years 2010 to 2019. It describes the framework and the steps implemented to produce distributional information aligned with the National balance sheet accounts and other national accounts concepts. It also includes a report on the quality of the estimated distributions.

    Release date: 2020-06-26

Reference (78) (30 to 40 of 78 results)

  • Surveys and statistical programs – Documentation: 11-522-X20010016229
    Description:

    This paper discusses the approach that Statistics Canada has taken to improve the quality of annual business surveys through their integration in the Unified Enterprise Survey (UES). The primary objective of the UES is to measure the final annual sales of goods and services accurately by province, in sufficient detail and in a timely manner.

    This paper describes the methodological approaches that the UES has used to improve financial and commodity data quality in four broad areas. These include improved coherence of the data collected from different levels of the enterprise, better coverage of industries, better depth of information (in the sense of more content detail and estimates for more detailed domains) and better consistency of the concepts and methods across industries.

    The approach, in achieving quality, has been to (a) establish a base measure of the quality of the business survey program prior to the UES, (b) measure the annual data quality of the UES, and (c) carry out specific studies to better understand the quality of UES data and methods.

    Release date: 2002-09-12

  • Surveys and statistical programs – Documentation: 62F0026M2002001
    Description:

    This report describes the quality indicators produced for the 2000 Survey of Household Spending. It covers the usual quality indicators that help users interpret the data, such as coefficients of variation, non-response rates, slippage rates and imputation rates.

    Release date: 2002-06-28

  • Surveys and statistical programs – Documentation: 62F0026M2001001
    Description:

    This report describes the quality indicators produced for the 1998 Survey of Household Spending. It covers the usual quality indicators that help users interpret data, such as coefficients of variation, nonresponse rates, imputation rates and the impact of imputed data on the estimates. Added to these are various less often used indicators such as slippage rates and measures of the representativity of the sample for particular characteristics that are useful for evaluating the survey methodology.

    Release date: 2001-10-15

  • Surveys and statistical programs – Documentation: 62F0026M2001002
    Description:

    This report describes the quality indicators produced for the 1999 Survey of Household Spending. It covers the usual quality indicators that help users interpret data, such as coefficients of variation, nonresponse rates, imputation rates and the impact of imputed data on the estimates. Added to these are various less often used indicators such as slippage rates and measures of the representativity of the sample for particular characteristics that are useful for evaluating the survey methodology.

    Release date: 2001-10-15

  • Surveys and statistical programs – Documentation: 11-522-X19990015638
    Description:

    The focus of Symposium '99 is on techniques and methods for combining data from different sources and on analysis of the resulting data sets. In this talk we illustrate the usefulness of taking such an "integrating" approach when tackling a complex statistical problem. The problem itself is easily described - it is how to approximate, as closely as possible, a "perfect census", and in particular, how to obtain census counts that are "free" of underenumeration. Typically, underenumeration is estimated by carrying out a post-enumeration survey (PES) following the census. In the UK in 1991 the PES failed to identify the full size of the underenumeration and so demographic methods were used to estimate the extent of the undercount. The problems with the "traditional" PES approach in 1991 resulted in a joint research project between the Office for National Statistics and the Department of Social Statistics at the University of Southampton aimed at developing a methodology which will allow a "One Number Census" in the UK in 2001. That is, underenumeration will be accounted for not just at high levels of aggregation, but right down to the lowest levels at which census tabulations are produced. In this way all census outputs will be internally consistent, adding up to the national population estimates. The basis of this methodology is the integration of information from a number of data sources in order to achieve this "One Number".

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015640
    Description:

    This paper describes how SN is preparing for a new era in the making of statistics, triggered by technological and methodological developments. An essential feature of the turn to the new era is the farewell to the stovepipe way of data processing. The paper discusses how new technological and methodological tools will affect processes and their organization. Special emphasis is put on one of the major opportunities and challenges the new tools offer: establishing coherence in the content of statistics and in its presentation to users.

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015644
    Description:

    One method of enriching survey data is to supplement information collected directly from the respondent with that obtained from administrative systems. The aims of such a practice include being able to collect data which might not otherwise be possible, provision of better quality information for data items which respondents may not be able to report accurately (or not at all), reduction of respondent load, and maximising the utility of information held in administrative systems. Given the direct link with administrative information, the data set resulting from such techniques is potentially a powerful basis for policy-relevant analysis and evaluation. However, the processes involved in effectively combining data from different sources raise a number of challenges which need to be addressed by the parties involved. These include issues associated with privacy, data linking, data quality, estimation, and dissemination.

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015648
    Description:

    We estimate the parameters of a stochastic model for labour force careers involving distributions of correlated durations employed, unemployed (with and without job search) and not in the labour force. If the model is to account for sub-annual labour force patterns as well as advancement towards retirement, then no single data source is adequate to inform it. However, it is possible to build up an approximation from a number of different sources.

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015652
    Description:

    Objective: To create an occupational surveillance system by collecting, linking, evaluating and disseminating data relating to occupation and mortality with the ultimate aim of reducing or preventing excess risk among workers and the general population.

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015656
    Description:

    Time series studies have shown associations between air pollution concentrations and morbidity and mortality. These studies have largely been conducted within single cities, and with varying methods. Critics of these studies have questioned the validity of the data sets used and the statistical techniques applied to them; the critics have noted inconsistencies in findings among studies and even in independent re-analyses of data from the same city. In this paper we review some of the statistical methods used to analyze a subset of a national data base of air pollution, mortality and weather assembled during the National Morbidity and Mortality Air Pollution Study (NMMAPS).

    Release date: 2000-03-02