Filter results by
Subject
- Selected: Statistical methods (54)
- Administrative data (1)
- Collection and questionnaires (4)
- Data analysis (3)
- Disclosure control and data dissemination (1)
- Editing and imputation (6)
- Frames and coverage (1)
- History and context (3)
- Inference and foundations (1)
- Quality assurance (1)
- Response and nonresponse (5)
- Statistical techniques (5)
- Survey design (12)
- Time series (1)
- Weighting and estimation (17)
- Other content related to Statistical methods (4)
Survey or statistical program
- Survey of Labour and Income Dynamics (5)
- Census of Population (2)
- Survey of Service Industries: Film and Video Distribution (1)
- Annual Survey of Service Industries: Heritage Institutions (1)
- Survey of Service Industries: Performing Arts (1)
- Uniform Crime Reporting Survey (1)
- Survey of Household Spending (1)
- Time Use Survey (1)
Results
All (54) (0 to 10 of 54 results)
- 1. Population-based case control studies (Archived). Articles and reports: 12-001-X20060029546. Description:
We discuss methods for the analysis of case-control studies in which the controls are drawn using a complex sample survey. The most straightforward method is the standard survey approach based on weighted versions of population estimating equations. We also look at more efficient methods and compare their robustness to model mis-specification in simple cases. Case-control family studies, where the within-cluster structure is of interest in its own right, are also discussed briefly.
Release date: 2006-12-21
- 2. Articles and reports: 12-001-X20060029547. Description:
Calibration weighting can be used to adjust for unit nonresponse and/or coverage errors under appropriate quasi-randomization models. Alternative calibration adjustments that are asymptotically identical in a purely sampling context can diverge when used in this manner. Introducing instrumental variables into calibration weighting makes it possible for nonresponse (say) to be a function of a set of characteristics other than those in the calibration vector. When the calibration adjustment has a nonlinear form, a variant of the jackknife can remove the need for iteration in variance estimation.
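For readers new to the technique, here is a minimal sketch of a linear (chi-squared distance) calibration adjustment in Python. The toy design weights, covariates and benchmark totals are illustrative assumptions, not material from the article, and the instrumental-variable and jackknife variants the article discusses are not shown.

```python
import numpy as np

def linear_calibration_weights(d, X, totals):
    """Linearly calibrate design weights d so the weighted sums of the
    columns of X match known population totals (chi-squared distance)."""
    d = np.asarray(d, dtype=float)
    X = np.asarray(X, dtype=float)
    # Gap between the known totals and the design-weighted estimates.
    gap = totals - d @ X
    # Lagrange multipliers of the chi-squared calibration problem.
    lam = np.linalg.solve(X.T @ (d[:, None] * X), gap)
    # Calibrated weights: w_i = d_i * (1 + x_i' lambda).
    return d * (1.0 + X @ lam)

# Toy example: 5 sampled units, calibrating on a count (intercept)
# and one covariate; all numbers are invented for illustration.
rng = np.random.default_rng(0)
d = np.full(5, 20.0)                # design weights (n = 5, N = 100)
X = np.column_stack([np.ones(5), rng.uniform(1, 10, 5)])
totals = np.array([100.0, 550.0])   # assumed known population totals
w = linear_calibration_weights(d, X, totals)
print(w @ X)                        # reproduces the benchmark totals
```

The closed form above exists only for the linear distance; nonlinear calibration adjustments require iteration, which is the setting the article's jackknife variant addresses.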
Release date: 2006-12-21
- 3. Articles and reports: 12-001-X20060029548. Description:
The theory of multiple imputation for missing data requires that imputations be made conditional on the sampling design. However, most standard software packages for performing model-based multiple imputation assume simple random samples, leading many practitioners not to account for complex sample design features, such as stratification and clustering, in their imputations. Theory predicts that analyses of such multiply-imputed data sets can yield biased estimates from the design-based perspective. In this article, we illustrate through simulation that (i) the bias can be severe when the design features are related to the survey variables of interest, and (ii) the bias can be reduced by controlling for the design features in the imputation models. The simulations also illustrate that conditioning on irrelevant design features in the imputation models can yield conservative inferences, provided that the models include other relevant predictors. These results suggest a prescription for imputers: the safest course of action is to include design variables in the specification of imputation models. Using real data, we demonstrate a simple approach for incorporating complex design features that can be used with some of the standard software packages for creating multiple imputations.
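As a concrete illustration of the prescription above (include design variables in the imputation model), here is a small Python sketch that adds stratum indicators to a normal-model imputation. It is a simplified, not fully "proper", multiple imputation, and all variable names and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def impute_with_design(y, X, strata, m=5, rng=rng):
    """Create m imputed copies of y, conditioning on covariates X and on
    stratum indicators (the design feature), via a normal linear model.
    Simplified sketch: regression parameters are not redrawn per copy,
    so this is not a fully 'proper' multiple imputation."""
    dummies = (strata[:, None] == np.unique(strata)[None, :]).astype(float)
    Z = np.column_stack([np.ones_like(y), X, dummies[:, 1:]])  # drop one dummy
    obs = ~np.isnan(y)
    beta, *_ = np.linalg.lstsq(Z[obs], y[obs], rcond=None)
    resid = y[obs] - Z[obs] @ beta
    sigma = resid.std(ddof=Z.shape[1])
    imputations = []
    for _ in range(m):
        y_imp = y.copy()
        miss = ~obs
        y_imp[miss] = Z[miss] @ beta + rng.normal(0, sigma, miss.sum())
        imputations.append(y_imp)
    return imputations

# Toy data: the stratum shifts the mean of y, so an imputation model that
# ignored it would pull imputed values toward the overall mean.
n = 200
strata = rng.integers(0, 4, n)
x = rng.normal(size=n)
y = 2.0 * strata + x + rng.normal(size=n)
y[rng.random(n) < 0.3] = np.nan        # 30% of values set missing
completed = impute_with_design(y, x[:, None], strata)
print(np.mean([c.mean() for c in completed]))
```

Dropping the stratum dummies from `Z` in this toy setup would bias the completed-data means, mirroring the bias the simulations report when design features related to the survey variables are omitted.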
Release date: 2006-12-21
- 4. Articles and reports: 12-001-X20060029549. Description:
In this article, we propose a Bernoulli-type bootstrap method that can easily handle multi-stage stratified designs where sampling fractions are large, provided simple random sampling without replacement is used at each stage. The method provides a set of replicate weights which yield consistent variance estimates for both smooth and non-smooth estimators. The method's strength is in its simplicity. It can easily be extended to any number of stages without much complication. The main idea is to either keep or replace a sampling unit at each stage with preassigned probabilities, to construct the bootstrap sample. A limited simulation study is presented to evaluate performance and, as an illustration, we apply the method to the 1997 Japanese National Survey of Prices.
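A minimal single-stage sketch of the keep-or-replace idea follows. The keep probability is left as a free parameter, because the preassigned value that yields consistent variance estimates depends on the sampling fraction and is derived in the article; the multi-stage extension and the expansion weights are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def bernoulli_bootstrap_weights(n, n_reps, keep_prob, rng=rng):
    """Replicate-weight matrix for a single-stage sample of size n.
    In each replicate, every unit is independently kept with probability
    keep_prob; otherwise it is replaced by a unit redrawn with replacement
    from the sample. A unit's replicate weight is the number of times it
    appears (the original expansion factor is omitted here). The keep
    probability that gives consistent variance estimates is derived in
    the article; here it is simply a user-supplied parameter."""
    W = np.empty((n_reps, n))
    for r in range(n_reps):
        kept = rng.random(n) < keep_prob
        counts = kept.astype(int)
        # Redraw replacements from the full sample, with replacement.
        replacements = rng.integers(0, n, size=(~kept).sum())
        np.add.at(counts, replacements, 1)
        W[r] = counts
    return W

# Replicate variance of a sample mean under the sketch above.
y = rng.normal(10, 2, size=50)
W = bernoulli_bootstrap_weights(len(y), n_reps=500, keep_prob=0.7)
rep_means = (W * y).sum(axis=1) / W.sum(axis=1)
print(rep_means.var(ddof=1))   # bootstrap variance estimate of the mean
```

Note that each replicate contains exactly n appearances (every unit is either kept or replaced one-for-one), so replicate totals stay stable across replicates.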
Release date: 2006-12-21
- 5. Articles and reports: 12-001-X20060029550. Description:
In this paper, the geometric, optimization-based, and Lavallée and Hidiroglou (LH) approaches to stratification are compared. The geometric stratification method is an approximation, whereas the other two approaches, which employ numerical methods, may be seen as optimal stratification methods. The geometric stratification algorithm is very simple compared with the other two, but it does not provide for the construction of a take-all stratum, which is usually built when a positively skewed population is stratified. In optimization-based stratification, one may consider any form of objective function and constraints. In a comparative numerical study based on five positively skewed artificial populations, the optimization approach was more efficient than geometric stratification in each of the cases studied. The geometric and optimization approaches are also compared with the LH algorithm: geometric stratification was found to be less efficient than the LH algorithm, whereas the efficiency of the optimization approach was similar to that of the LH algorithm. Nevertheless, strata boundaries obtained via geometric stratification may serve as efficient starting points for the optimization approach.
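For reference, the geometric rule referred to above can be stated in a few lines: stratum break points form a geometric progression between the minimum and maximum of the stratification variable. The sketch below assumes a positive stratification variable and uses invented data; it omits the take-all stratum that, as noted, the rule does not construct.

```python
import numpy as np

def geometric_boundaries(x, n_strata):
    """Geometric stratification: break points form a geometric progression
    between the minimum and maximum of the (positive, typically skewed)
    stratification variable x."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    ratio = (hi / lo) ** (1.0 / n_strata)
    # Boundaries b_h = lo * ratio**h for h = 0..n_strata (ends included).
    return lo * ratio ** np.arange(n_strata + 1)

# Toy skewed population: the breaks widen geometrically toward the tail.
rng = np.random.default_rng(3)
x = rng.lognormal(mean=3.0, sigma=1.0, size=1000) + 1.0
print(geometric_boundaries(x, n_strata=4))
```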
Release date: 2006-12-21
- 6. Articles and reports: 12-001-X20060029551. Description:
To select a survey sample, one sometimes has no frame containing the desired collection units, but rather another frame of units linked in some way to those collection units. A sample can then be selected from this available frame, and estimates produced for the desired target population by using the links between the two. This approach is known as Indirect Sampling.
Estimation for a target population surveyed through Indirect Sampling can be a considerable challenge, in particular when the links between the units of the two populations are not one-to-one. The difficulty lies mainly in assigning a selection probability, or an estimation weight, to the surveyed units of the target population. To solve this type of estimation problem, the Generalized Weight Share Method (GWSM) was developed by Lavallée (1995, 2002). The GWSM provides an estimation weight for every surveyed unit of the target population.
This paper first describes Indirect Sampling, which forms the foundation of the GWSM. Second, an overview of the GWSM is given, formulated in a theoretical framework using matrix notation. Third, we present some properties of the GWSM, such as unbiasedness and transitivity. Fourth, we consider the special case where the links between the two populations are expressed by indicator variables. Fifth, some typical linkages are studied to assess their impact on the GWSM. Finally, we consider the problem of optimality. We obtain optimal weights in a weak sense (for specific values of the variable of interest), and conditions under which these weights are also optimal in a strong sense and independent of the variable of interest.
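A minimal sketch of the basic weight share computation with indicator links follows. It covers only the unit-level step, with invented toy links and weights, and none of the clustered, transitivity or optimality results developed in the paper.

```python
import numpy as np

def gwsm_weights(d_sample, links_sample, links_total):
    """Basic weight share: target unit k receives
        w_k = sum_{i in sample} d_i * l_ik / L_k,
    where d_i is the design weight of sampled frame unit i, l_ik is the
    link indicator between i and target unit k, and L_k is the total
    number of links of k to the whole frame."""
    d_sample = np.asarray(d_sample, dtype=float)
    links_sample = np.asarray(links_sample, dtype=float)  # (n_sample, n_target)
    return (d_sample @ links_sample) / np.asarray(links_total, dtype=float)

# Toy example: 3 sampled frame units, 4 target units, 0/1 links.
d = np.array([10.0, 10.0, 5.0])
l_sample = np.array([[1, 1, 0, 0],
                     [0, 1, 0, 1],
                     [0, 0, 1, 1]])
L = np.array([2, 3, 1, 2])  # total links of each target unit to the full frame
print(gwsm_weights(d, l_sample, L))
```

The division by the total link count L_k is what "shares" each sampled unit's weight across the target units it is linked to, which is why many-to-many links are handled naturally.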
Release date: 2006-12-21
- 7. Articles and reports: 12-001-X20060029552. Description:
A survey of tourist visits originating both within and outside the region was needed for Brittany. For practical reasons, "border surveys" could no longer be used. The major problem is the lack of a sampling frame that allows for direct contact with tourists. This problem was addressed by applying the indirect sampling method, the weighting for which is obtained using the generalized weight share method developed recently by Lavallée (1995), Lavallée (2002) and Deville (1999), and also presented recently in Lavallée and Caron (2001). This article shows how to adapt the method to the survey. A number of extensions are required. One of these extensions, designed to estimate the total of a population from which a Bernoulli sample has been taken, is developed here.
Release date: 2006-12-21
- 8. Articles and reports: 12-001-X20060029553. Description:
Félix-Medina and Thompson (2004) proposed a variant of link-tracing sampling in which it is assumed that a portion of the population, not necessarily the major portion, is covered by a frame of disjoint sites where members of the population can be found with high probability. A sample of sites is selected and the people in each selected site are asked to nominate other members of the population. They proposed maximum likelihood estimators of the population sizes which perform acceptably provided that, for each site, the probability that a member is nominated by that site, called the nomination probability, is not small. In this research we consider Félix-Medina and Thompson's variant and propose three sets of estimators of the population sizes derived under the Bayesian approach. Two of the sets of estimators were obtained using improper prior distributions of the population sizes, and the other using Poisson prior distributions. However, we use the Bayesian approach only to assist us in the construction of estimators, while inferences about the population sizes are made under the frequentist approach. We propose two types of partly design-based variance estimators and confidence intervals: one obtained using a bootstrap, and the other using the delta method along with the assumption of asymptotic normality. The results of a simulation study indicate that (i) when the nomination probabilities are not small, each of the proposed sets of estimators performs well and very similarly to the maximum likelihood estimators; (ii) when the nomination probabilities are small, the set of estimators derived using Poisson prior distributions still performs acceptably and does not have the bias problems of the maximum likelihood estimators; and (iii) the previous results do not depend on the size of the fraction of the population covered by the frame.
Release date: 2006-12-21
- 9. Articles and reports: 12-001-X20060029554. Description:
Survey sampling to estimate a Consumer Price Index (CPI) is quite complicated, generally requiring a combination of data from at least two surveys: one giving prices, one giving expenditure weights. Fundamentally different approaches to the sampling process - probability sampling and purposive sampling - have each been strongly advocated and are used by different countries in the collection of price data. By constructing a small "world" of purchases and prices from scanner data on cereal and then simulating various sampling and estimation techniques, we compare the results of two design and estimation approaches: the probability approach of the United States and the purposive approach of the United Kingdom. For the same amount of information collected, but given the use of different estimators, the United Kingdom's methods appear to offer better overall accuracy in targeting a population superlative consumer price index.
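For context, the "superlative" target index mentioned above is commonly the Fisher index, the geometric mean of the Laspeyres and Paasche indexes. The toy prices and quantities below are invented for illustration and are unrelated to the scanner data used in the study.

```python
import numpy as np

def fisher_index(p0, p1, q0, q1):
    """Superlative (Fisher) price index: geometric mean of the
    Laspeyres (base-period basket) and Paasche (current-period basket)
    indexes."""
    laspeyres = (p1 @ q0) / (p0 @ q0)
    paasche = (p1 @ q1) / (p0 @ q1)
    return np.sqrt(laspeyres * paasche)

# Tiny "world" of 4 items with base and current prices and quantities.
p0 = np.array([2.0, 3.0, 5.0, 4.0]); q0 = np.array([10, 6, 2, 4])
p1 = np.array([2.2, 2.9, 5.5, 4.4]); q1 = np.array([9, 7, 2, 3])
print(fisher_index(p0, p1, q0, q1))
```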
Release date: 2006-12-21
- 10. An evaluation of matrix sampling methods using data from the National Health and Nutrition Examination Survey (Archived). Articles and reports: 12-001-X20060029555. Description:
Researchers and policy makers often use data from nationally representative probability sample surveys. The number of topics covered by such surveys, and hence the amount of interviewing time involved, have typically increased over the years, resulting in increased costs and respondent burden. A potential solution to this problem is to carefully form subsets of the items in a survey and administer one such subset to each respondent. Designs of this type are called "split-questionnaire" designs or "matrix sampling" designs. The administration of only a subset of the survey items to each respondent in a matrix sampling design creates what can be considered missing data. Multiple imputation (Rubin 1987), a general-purpose approach developed for handling data with missing values, is appealing for the analysis of data from a matrix sample, because once the multiple imputations are created, data analysts can apply standard methods for analyzing complete data from a sample survey. This paper develops and evaluates a method for creating matrix sampling forms, each form containing a subset of items to be administered to randomly selected respondents. The method can be applied in complex settings, including situations in which skip patterns are present. Forms are created in such a way that each form includes items that are predictive of the excluded items, so that subsequent analyses based on multiple imputation can recover some of the information about the excluded items that would have been collected had there been no matrix sampling. The matrix sampling and multiple-imputation methods are evaluated using data from the National Health and Nutrition Examination Survey, one of many nationally representative probability sample surveys conducted by the National Center for Health Statistics, Centers for Disease Control and Prevention. The study demonstrates the feasibility of the approach applied to a major national health survey with complex structure, and it provides practical advice about appropriate items to include in matrix sampling designs in future surveys.
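As a small illustration of the form-construction idea (each form should retain items predictive of the items it excludes), here is a generic split-questionnaire layout in which every pair of item blocks appears together in some form. This is a standard layout for such designs, not the specific method the paper develops, and the item names are invented.

```python
from itertools import combinations

def split_questionnaire_forms(core, blocks):
    """Build forms pairing every two item blocks at least once, so each
    form keeps items correlated with the items it excludes; core items
    are administered to every respondent."""
    return [core + blocks[i] + blocks[j]
            for i, j in combinations(range(len(blocks)), 2)]

core = ["age", "sex", "region"]                 # asked of everyone
blocks = [["diet1", "diet2"], ["exam1"], ["lab1", "lab2"]]
for form in split_questionnaire_forms(core, blocks):
    print(form)
```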
Release date: 2006-12-21
Data (0) (0 results)
No content available at this time.
Analysis (43) (0 to 10 of 43 results)
The first ten Analysis results are the same ten articles and reports listed under All (54) above.
Reference (11) (0 to 10 of 11 results)
- 1. Surveys and statistical programs – Documentation: 62F0026M2006001. Geography: Province or territory. Description:
This guide presents information of interest to users of data from the Survey of Household Spending, which gathers information on the spending habits, dwelling characteristics and household equipment of Canadian households. The survey covers private households in the 10 provinces. (The territories are surveyed every second year, starting in 1999.)
This guide includes definitions of survey terms and variables, as well as descriptions of survey methodology and data quality. One section describes the various statistics that can be created using expenditure data (e.g., budget share, market share, aggregates and medians).
Release date: 2006-12-12
- 2. Surveys and statistical programs – Documentation: 68-514-X. Description:
Statistics Canada's approach to gathering and disseminating economic data has developed over several decades into a highly integrated system for collection and estimation that feeds the framework of the Canadian System of National Accounts.
The key to this approach was creation of the Unified Enterprise Survey, the goal of which was to improve the consistency, coherence, breadth and depth of business survey data.
The UES did so by bringing many of Statistics Canada's individual annual business surveys under a common framework. This framework included a single survey frame, a sample design framework, conceptual harmonization of survey content, means of using relevant administrative data, common data collection, processing and analysis tools, and a common data warehouse.
Release date: 2006-11-20
- 3. Surveys and statistical programs – Documentation: 89-622-X2006003. Description:
The General Social Survey (GSS) is an annual survey that monitors changes and emerging trends in Canadian society. For the fourth time in Canada, the GSS has collected national-level time use data. The GSS is funded through a government initiative aimed at filling data gaps for policy research. In this paper we present the policy framework that supports the survey and discuss the impact of that framework on the content decisions the GSS has made. Following a brief review of the major findings from the first three cycles of time use data, we discuss the lessons learned and best practices in the development, collection and processing of these data in Canada. Finally, we compare the methods and content of the Canadian time use survey with those of the US survey.
Release date: 2006-11-20
- 4. Producing Hours Worked for the SNA in Order to Measure Productivity: The Canadian Experience (Archived). Surveys and statistical programs – Documentation: 15-206-X2006004. Description:
This paper provides a brief description of the methodology currently used to produce the annual volume of hours worked consistent with the System of National Accounts (SNA). These data are used for labour input in the annual and quarterly measures of labour productivity, as well as in the annual measures of multifactor productivity. For this purpose, hours worked are broken down by educational level and age group, so that changes in the composition of the labour force can be taken into account. They are also used to calculate hourly compensation and the unit labour cost and for simulations of the SNA Input-Output Model; as such, they are integrated as labour force inputs into most SNA satellite accounts (i.e., environment, tourism).
Release date: 2006-10-27
- 5. Preview of Products and Services, 2006 Census (Archived). Surveys and statistical programs – Documentation: 92-565-X. Description:
The Preview of Products and Services offers a complete overview of the proposed products and services that will be released based on the 2006 Census of Population and 2006 Census of Agriculture results. Information (where applicable) will include major characteristics and content, "What's new?" in comparison to 2001, levels of geography, availability/delivery methods, release timeframe and pricing.
The preview is now exclusively an Internet product for 2006 and is no longer available in a formalized print format (i.e., a newsletter publication); however, a "print-friendly" format is available via the Internet. This product will be updated periodically as details regarding products and services become finalized.
Release date: 2006-10-17
- 6. Death clearance overview, 2006 edition (Archived). Surveys and statistical programs – Documentation: 82-225-X20060099205. Description:
The Death Clearance Overview document describes the Death Clearance module of the Canadian Cancer Registry, its structure, its function and its role in the operation of the national cancer registry. Inputs and outputs are listed and briefly described, as well as the different steps constituting the Death Clearance process.
Release date: 2006-07-07
- 7. Notices and consultations: 87-004-X20030039213. Description:
The Culture Statistics Program (CSP) has been Statistics Canada's chief source for analysis of the culture sector since the program's inception in 1972, and this role will continue. However, the CSP is making substantial changes to the way it collects culture data and, in effect, to the data themselves. This article is intended to inform users of these data about the scope of the upcoming changes and how the CSP is managing the challenges presented by this transition.
Release date: 2006-06-12
- 8. Survey of Labour and Income Dynamics (SLID): Preliminary Interview Questionnaire for Reference Year 2004 (Archived). Surveys and statistical programs – Documentation: 75F0002M2006001. Description:
A Preliminary interview of background information is collected for all respondents aged 16 and over who enter the sample for the Survey of Labour and Income Dynamics (SLID). For the majority of longitudinal respondents, this occurs when a new panel is introduced and the preliminary information is collected during the first Labour interview. However, all persons living with a longitudinal respondent are also interviewed for SLID, so Preliminary interviews are conducted for new household members during their first Labour interview after they join the household. Longitudinal persons who turn 16 while their household is in the SLID sample also become eligible for SLID interviews, so they are asked the Preliminary interview questions during their first Labour interview.
The purpose of this document is to present the questions, possible responses and question flows for the 2005 Preliminary questionnaire (for the 2004 reference year).
Release date: 2006-04-06
- 9. Surveys and statistical programs – Documentation: 75F0002M2006003. Description:
The Survey of Labour and Income Dynamics (SLID) interview is conducted using computer-assisted interviewing (CAI). CAI is paperless interviewing. This document is therefore a written approximation of the CAI interview, or the questionnaire.
In previous years, SLID conducted a Labour interview each January and a separate Income interview in May. In 2005 (reference year 2004) the two interviews were combined and collected in one interview in January.
A labour and income interview is collected for all respondents 16 years of age and over. Respondents have the option of answering income questions during the interview, or of giving Statistics Canada permission to use their income tax records.
In January 2005, data were collected for reference year 2004 from panels 3 and 4. Panel 3, in its sixth and final year, consisted of approximately 17,000 households, and panel 4, in its third year, also consisted of approximately 17,000 households.
This document outlines the structure of the January 2005 Labour and Income interview (for the 2004 reference year) including question wording, possible responses, and flows of questions.
Release date: 2006-04-06
- 10. Survey of Labour and Income Dynamics (SLID): Entry Exit Component Interview Questionnaire for Reference Year 2004 (Archived). Surveys and statistical programs – Documentation: 75F0002M2006002. Description:
In previous years, the Survey of Labour and Income Dynamics (SLID) conducted a Labour interview each January and a separate Income interview in May. In 2005 (reference year 2004) the two interviews were combined and collected in one interview in January.
The data are collected using computer-assisted interviewing. Thus there are no paper questionnaires required for data collection. The questions, responses and interview flow for Labour and Income are documented in other SLID research papers. This document presents the information for the 2005 Entry Exit portion of the Labour Income interview (for the 2004 reference year).
The Entry Exit Component consists of five separate modules. The Entry module is the first set of data collected. It is information collected to update the place of residence, housing conditions and expenses, as well as the household composition. For each person identified in Entry, the Demographics module collects (or updates) the person's name, date of birth, sex and marital status. Then the Relationships module identifies (or updates) the relationship between each respondent and every other household member. The Exit module includes questions on who to contact for the next interview and the names, phone numbers and addresses of two contacts to be used only if future tracing of respondents is required. An overview of the Tracing component is also included in this document.
Release date: 2006-03-27