This document brings together guidelines and checklists on many issues that need to be considered in the pursuit of quality objectives in the execution of statistical activities. Its focus is on how to assure quality through effective and appropriate design or redesign of a statistical project or program from inception through to data evaluation, dissemination and documentation. These guidelines draw on the collective knowledge and experience of many Statistics Canada employees. It is expected that Quality Guidelines will be useful to staff engaged in the planning and design of surveys and other statistical projects, as well as to those who evaluate and analyze the outputs of these projects.
Since the publication of the first edition of Quality Guidelines in 1985, there has been much discussion among national and international statistical agencies on the subject of quality, and this continues. While there is no standard definition of quality for official statistics, there is general acceptance among these agencies that quality embodies a broad notion of "fitness for use". Fitness for use encompasses not only the statistical quality concepts of variance and bias, but also other characteristics such as relevance and timeliness that determine how effectively statistical information can be used.
This broader definition of quality parallels similar views propounded by the Total Quality Management (TQM) movement. To achieve and maintain a level of quality or fitness acceptable to users, TQM advocates, in part: knowing and understanding clients’ needs; involving employees in the decision making associated with meeting those needs; and continuously seeking to improve methods and processes. That attention to these three tenets leads to quality improvement is as true for a statistical agency as it is for any other organization. Quality Guidelines reflects these three principles, as well as Statistics Canada's long-standing efforts to develop and disseminate reliable and objective statistical information that satisfies and anticipates critical needs.
Statistics Canada defines quality or "fitness for use" of statistical information in terms of six constituent elements or dimensions: relevance, accuracy, timeliness, accessibility, interpretability, and coherence (Statistics Canada, 2002c).
The relevance of statistical information reflects the degree to which it meets the real needs of clients. It is concerned with whether the available information sheds light on the issues that are important to users. Assessing relevance is subjective and depends upon the varying needs of users. The Agency’s challenge is to weigh and balance the conflicting needs of current and potential users to produce a program that goes as far as possible in satisfying the most important needs within given resource constraints.
The accuracy of statistical information is the degree to which the information correctly describes the phenomena it was designed to measure. It is usually characterized in terms of error in statistical estimates and is traditionally decomposed into bias (systematic error) and variance (random error) components. It may also be described in terms of the major sources of error that potentially cause inaccuracy (e.g., coverage, sampling, nonresponse, response).
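The decomposition of error into bias and variance can be illustrated with a small simulation (a hypothetical sketch, not part of the Guidelines; the offset of 0.5 stands in for a systematic error such as undercoverage):

```python
import random

random.seed(42)

TRUE_MEAN = 10.0   # the quantity the survey is designed to measure
REPLICATES = 20000  # repeated realizations of the survey
SAMPLE_SIZE = 25

def biased_estimate(sample):
    # Sample mean plus a constant offset, representing systematic error.
    return sum(sample) / len(sample) + 0.5

estimates = []
for _ in range(REPLICATES):
    sample = [random.gauss(TRUE_MEAN, 3.0) for _ in range(SAMPLE_SIZE)]
    estimates.append(biased_estimate(sample))

mean_est = sum(estimates) / REPLICATES
bias = mean_est - TRUE_MEAN                                          # systematic error
variance = sum((e - mean_est) ** 2 for e in estimates) / REPLICATES  # random error
mse = sum((e - TRUE_MEAN) ** 2 for e in estimates) / REPLICATES      # total error

# Mean squared error decomposes exactly as bias squared plus variance.
```

Under these assumptions, `bias` comes out near 0.5 and `variance` near 0.36 (i.e., 9/25), and `mse` equals `bias**2 + variance` by algebraic identity, which is why reducing either the systematic or the random component improves accuracy.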
The timeliness of statistical information refers to the delay between the reference point (or the end of the reference period) to which the information pertains, and the date on which the information becomes available. It is typically involved in a trade-off against accuracy. The timeliness of information will influence its relevance.
The accessibility of statistical information refers to the ease with which it can be obtained from the Agency. This includes the ease with which the existence of information can be ascertained, as well as the suitability of the form or medium through which the information can be accessed. The cost of the information may also be an aspect of accessibility for some users.
The interpretability of statistical information reflects the availability of the supplementary information and metadata necessary to interpret and utilize it appropriately. This information normally includes the underlying concepts, variables and classifications used, the methodology of data collection and processing, and indications or measures of the accuracy of the statistical information.
The coherence of statistical information reflects the degree to which it can be successfully brought together with other statistical information within a broad analytic framework and over time. The use of standard concepts, classifications and target populations promotes coherence, as does the use of common methodology across surveys. Coherence does not necessarily imply full numerical consistency.
These dimensions of quality are overlapping and interrelated. There is no general model that brings them together to optimize or to prescribe a level of quality. Achieving an acceptable level of quality is the result of addressing, managing and balancing these elements of quality over time with careful attention to program objectives, costs, respondent burden and other factors that may affect information quality or user expectations. This balance is a critical aspect of the design of the Agency's surveys.
The term survey is used here generically to cover any activity that collects or acquires statistical data. This includes censuses, sample surveys, the collection of data from administrative records, and derived statistical activities that produce estimates from data already collected for other purposes.
The guidelines are written with censuses and sample surveys as the main focus. While many of the guidelines also apply to the processing of administrative records, an additional section (Administrative data use) on the topic has been added in order to highlight considerations specific to that activity. The quality of derived statistical activities is, of course, largely determined by the quality of the component parts, and as such, derived statistical activities are not the direct focus of this document.
The term design is used here to cover the delineation of all aspects of a survey from the establishment of a need for data to the production of final outputs (the microdata file, statistical series, and analysis).
The core of this document (Survey steps) concentrates on quality issues as they relate to the design of individual surveys. It is, however, important to keep in mind that the context in which each individual survey is developed imposes constraints on its design. Each new survey, while aiming to satisfy some immediate information needs, is also contributing information to a base of statistical data that may be used for a range of purposes that go well beyond those identified at the time of the survey’s design. It is therefore important to ensure that the output from each individual survey can, to the extent possible, be integrated with, and used in conjunction with, data on related topics derived from other surveys. This implies a need to consider and respect the statistical standards on content or subject-matter that have been put in place to achieve coherence and harmony of data within the national statistical system. These include statistical frameworks (such as the System of National Accounts), statistical classification systems (such as those for industry or geography), as well as other concepts and definitions that specify the statistical variables to be measured. The usefulness of new statistical data is enhanced to the extent that they can be utilized in conjunction with existing data.
The design process also takes place within an organizational context. These guidelines are written in the context of a centralized statistical agency within which the design of a survey is normally conducted through a multi-disciplinary project team. The principal players in the project team are a project manager and a group of specialists. The specialists generally include a subject matter specialist, a methodologist, an informatics specialist, and an operations specialist. Sometimes one player will play more than one role, and sometimes several other roles must be added to the team; for example, specialists may be needed for geographic systems, public communications and dissemination.
The Management context section outlines the management context within which these Quality Guidelines are applied. Based on the Quality assurance framework, this description draws together policies, managerial processes, consultative mechanisms, and technical procedures that have a bearing on the management of quality at Statistics Canada. While the Survey steps section focuses mainly on the conduct of individual statistical activities, the Management context section provides a broader corporate perspective on quality assurance.
Statistics Canada (2002c). Statistics Canada's Quality Assurance Framework - 2002. Statistics Canada Catalogue no. 12-586-XIE.