Results

All (9,968) (7,300 to 7,310 of 9,968 results)

  • Articles and reports: 11-522-X20020016743
    Description:

    There is much interest in using data from longitudinal surveys to help understand life history processes such as education, employment, fertility, health and marriage. The analysis of data on the durations of spells or sojourns that individuals spend in certain states (e.g., employment, marriage) is a primary tool in studying such processes. This paper examines methods for analysing duration data that address important features associated with longitudinal surveys: the use of complex survey designs in heterogeneous populations; missing or inaccurate information about the timing of events; and the possibility of non-ignorable dropout or censoring mechanisms. Parametric and non-parametric techniques for estimation and for model checking are considered. Both new and existing methods are presented and applied to duration data from Canada's Survey of Labour and Income Dynamics (SLID).
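
    The following is a minimal sketch, not the authors' method: a design-weighted Kaplan-Meier estimator for spell durations, showing one way survey weights can enter a nonparametric duration analysis. The toy data and variable names are hypothetical.

    ```python
    # Sketch: design-weighted Kaplan-Meier survival curve for spell durations.
    # Each record carries a duration, a censoring indicator and a survey weight;
    # the data below are illustrative only.
    import numpy as np

    def weighted_km(duration, event, weight):
        """Return (distinct event times, weighted product-limit survival curve)."""
        times = np.unique(duration[event == 1])
        surv, s = [], 1.0
        for t in times:
            at_risk = weight[duration >= t].sum()                  # weighted risk set
            failed = weight[(duration == t) & (event == 1)].sum()  # weighted failures
            s *= 1.0 - failed / at_risk                            # product-limit step
            surv.append(s)
        return times, np.array(surv)

    # Toy spells: durations in months, 1 = spell ended, 0 = censored.
    dur = np.array([3.0, 5.0, 5.0, 8.0, 12.0, 12.0, 15.0])
    evt = np.array([1, 1, 0, 1, 1, 0, 1])
    wgt = np.array([1.2, 0.8, 1.5, 1.0, 2.0, 0.7, 1.1])
    print(weighted_km(dur, evt, wgt))
    ```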

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016744
    Description:

    A developmental trajectory describes the course of a behaviour over age or time. This technical paper provides an overview of a semi-parametric, group-based method for analysing developmental trajectories. This methodology provides an alternative to assuming a homogeneous population of trajectories, as is done in standard growth modelling.

    Four capabilities are described: (1) the capability to identify, rather than assume, distinctive groups of trajectories; (2) the capability to estimate the proportion of the population following each such trajectory group; (3) the capability to relate group membership probability to individual characteristics and circumstances; and (4) the capability to use the group membership probabilities for various other purposes, such as creating profiles of group members.

    In addition, two important extensions of the method are described: the capability to add time-varying covariates to trajectory models and the capability to estimate joint trajectory models of distinct but related behaviours. The former provides the statistical capacity for testing if a contemporary factor, such as an experimental intervention or a non-experimental event like pregnancy, deflects a pre-existing trajectory. The latter provides the capability to study the unfolding of distinct but related behaviours such as problematic childhood behaviour and adolescent drug abuse.
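
    As a rough illustration of the group-based idea (not the semi-parametric method itself), the sketch below fits a two-group mixture of linear trajectories with normal errors by EM and recovers the group shares and group-specific intercepts and slopes. The number of groups, the simulated data and the starting values are all assumptions.

    ```python
    # Sketch: EM for a two-group mixture of linear developmental trajectories
    # (normal errors, common variance). A toy stand-in for group-based
    # trajectory modelling; data and group count are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    T = np.arange(5.0)                         # ages / measurement occasions
    n = 200
    g = rng.integers(0, 2, n)                  # true (unobserved) group labels
    Y = np.where(g[:, None] == 0, 1 + 0.5 * T, 4 - 0.3 * T) \
        + rng.normal(0, 0.5, (n, T.size))

    X = np.column_stack([np.ones_like(T), T])  # per-occasion design matrix
    beta = np.array([[0.0, 0.0], [3.0, 0.0]])  # starting values, one row per group
    sigma2, pi = 1.0, np.array([0.5, 0.5])

    for _ in range(50):
        # E-step: posterior group-membership probabilities
        ll = np.stack([-0.5 * ((Y - X @ b) ** 2).sum(axis=1) / sigma2 for b in beta],
                      axis=1) + np.log(pi)
        ll -= ll.max(axis=1, keepdims=True)
        post = np.exp(ll)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per group, then variance and shares
        for k in range(2):
            w = np.repeat(post[:, k], T.size)
            Xs = np.tile(X, (n, 1))
            beta[k] = np.linalg.lstsq(Xs * w[:, None] ** 0.5,
                                      Y.ravel() * w ** 0.5, rcond=None)[0]
        resid = np.stack([Y - X @ b for b in beta], axis=1)
        sigma2 = (post[:, :, None] * resid ** 2).sum() / (n * T.size)
        pi = post.mean(axis=0)

    print("group shares:", pi.round(2))
    print("intercept/slope per group:\n", beta.round(2))
    ```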

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016745
    Description:

    The attractiveness of the Regression Discontinuity Design (RDD) rests on its close similarity to a normal experimental design. On the other hand, it is of limited applicability, since it is not often the case that units are assigned to the treatment group on the basis of a pre-program measure observable to the analyst. Moreover, it only allows identification of the mean impact on a very specific subpopulation. In this technical paper, we show that the RDD straightforwardly generalizes to instances in which units' eligibility is established on the basis of an observable pre-program measure and eligible units are allowed to freely self-select into the program. This set-up also proves to be very convenient for building a specification test on conventional non-experimental estimators of the program's mean impact. The data requirements are clearly described.
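
    For concreteness, here is a minimal numerical sketch of the basic sharp-design case (local linear fits on each side of an assumed cutoff). The cutoff, bandwidth and simulated data are illustrative, and the paper's generalization to self-selection among eligible units is not implemented here.

    ```python
    # Sketch: sharp regression-discontinuity estimate via separate local linear
    # fits on each side of the cutoff. Cutoff, bandwidth and data are toy values.
    import numpy as np

    def rdd_effect(x, y, cutoff=0.0, bandwidth=1.0):
        """Difference in fitted values at the cutoff (right minus left)."""
        est = {}
        for side, mask in (("left", (x < cutoff) & (x > cutoff - bandwidth)),
                           ("right", (x >= cutoff) & (x < cutoff + bandwidth))):
            X = np.column_stack([np.ones(mask.sum()), x[mask] - cutoff])
            b = np.linalg.lstsq(X, y[mask], rcond=None)[0]
            est[side] = b[0]                     # intercept = fitted value at cutoff
        return est["right"] - est["left"]

    rng = np.random.default_rng(1)
    x = rng.uniform(-2, 2, 1_000)                # assignment / eligibility score
    y = 0.4 * x + 1.5 * (x >= 0) + rng.normal(0, 1, x.size)   # true jump = 1.5
    print(round(rdd_effect(x, y, bandwidth=0.75), 2))
    ```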

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016746
    Description:

    In 1961, the European Commission launched a harmonized qualitative survey program directed at consumers and business managers (industry, services, construction, retail trade, investment) that today covers more than 40 countries. These qualitative surveys aim to gauge the economic situation of these businesses. Results are available a few days after the end of the reference period, well before the results of the quantitative surveys.

    Although qualitative, these surveys have quickly become an essential tool for cyclical diagnosis and short-term economic forecasting. This product shows how these surveys are used by the European Commission, in particular by the Directorate-General for Economic and Financial Affairs (DG ECFIN) and the Statistical Office of the European Communities (EUROSTAT), to evaluate the economic situation of the euro zone.

    The first part of this product briefly presents the harmonized European business and consumer survey program. In the second part, we look at how DG ECFIN calculates a coincident indicator of economic activity, using a dynamic factor analysis of the questions in the industry survey. This type of indicator also makes it possible to study the convergence of the economic cycles of the member states. The quantitative short-term indicators for the euro zone are often criticized for the delay with which they are published. In the third part, we look at how EUROSTAT plans to publish flash estimates of the industrial product price index (IPPI) derived from econometric models that integrate the business survey series. Lastly, we show how these surveys can be used to forecast gross domestic product (GDP) and to define proxies for some unavailable key indicators (new orders in industry, etc.).
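
    A much-simplified stand-in for the dynamic factor analysis mentioned above: the sketch below extracts the first principal component of standardized survey balances as a crude coincident indicator. The series, dimensions and names are simulated assumptions, not the DG ECFIN methodology.

    ```python
    # Sketch: a single common factor extracted from standardized survey balances
    # as a crude coincident indicator (static principal-component stand-in for a
    # dynamic factor model). The monthly balance series are simulated.
    import numpy as np

    rng = np.random.default_rng(2)
    months, n_series = 120, 6
    common = np.cumsum(rng.normal(0, 1, months))              # latent cycle
    balances = common[:, None] * rng.uniform(0.5, 1.5, n_series) \
               + rng.normal(0, 1, (months, n_series))

    Z = (balances - balances.mean(axis=0)) / balances.std(axis=0)   # standardize
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    indicator = Z @ vt[0]                                     # first principal component
    # |correlation| with the latent cycle (sign of a PC is arbitrary)
    print(round(float(abs(np.corrcoef(indicator, common)[0, 1])), 2))
    ```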

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016747
    Description:

    This project seeks to shed light not only on the degree to which individuals are stuck in the low-income range, but also on those who have sufficient opportunity to move into the upper part of the income distribution. It also seeks to compare patterns of mobility through the income distribution in North America and Europe, shedding light on the impact of different models of integration. Cross-National Equivalent File data from the British Household Panel Survey (BHPS) for the United Kingdom, the German Socio-Economic Panel (GSOEP) for Germany, the Panel Study of Income Dynamics (PSID) for the United States and the Survey of Labour and Income Dynamics (SLID) for Canada are used in a comparative analysis of the dynamics of household income during the 1990s, with particular attention to both low- and high-income dynamics. Canadian administrative data drawn from income tax files are also used. These panel datasets range in length from six years (for the SLID) to almost 20 years (for the PSID and the Canadian administrative data). The analysis focuses on developments during the 1990s, but also explores the sensitivity of the results to changes in the length of the period analysed.

    The analysis begins by offering a broad descriptive overview of the major characteristics and events (demographic versus labour market) that determine levels and changes in adjusted household incomes. Attention is paid to movements into and out of low- and high-income ranges. A number of definitions are used, incorporating absolute and relative notions of poverty. The sensitivity of the results to the use of various equivalence scales is examined. An overview offers a broad picture of the state of household income in each country and the relative roles of family structure, the labour market and the welfare state in determining income mobility. The paper employs discrete-time hazard methods to model the dynamics of entry to and exit from both low and high income.

    Both observed and unobserved heterogeneity are controlled for with the intention of highlighting differences in the determinants of the transition rates between the countries. This is done in a way that assesses the importance of the relative roles of family, market and state. Attention is also paid to important institutional changes, most notably the increasing integration of product and labour markets in North America and Europe.
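
    To make the modelling step concrete, here is a minimal sketch of a discrete-time hazard model fitted as a logistic regression on a person-period file. The data are simulated and no survey weights, country effects or unobserved heterogeneity are included; it illustrates the general technique, not the paper's specification.

    ```python
    # Sketch: discrete-time hazard of exit from low income, fitted as a logistic
    # regression on a simulated person-period file.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    rows = []
    for person in range(500):
        married = rng.integers(0, 2)
        for year in range(1, 7):                     # up to 6 years at risk
            p_exit = 1 / (1 + np.exp(-(-2.0 + 0.3 * year + 0.8 * married)))
            exit_ = rng.random() < p_exit
            rows.append((year, married, int(exit_)))
            if exit_:                                 # leave the risk set after exit
                break

    data = np.array(rows, dtype=float)
    X = sm.add_constant(data[:, :2])                  # duration, married
    y = data[:, 2]
    fit = sm.Logit(y, X).fit(disp=False)
    print(fit.params.round(2))                        # roughly [-2.0, 0.3, 0.8]
    ```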

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016748
    Description:

    Practitioners often use data collected from complex surveys (such as labour force and health surveys involving stratified cluster sampling) to fit logistic regression and other models of interest. A great deal of effort over the last two decades has been spent on developing methods to analyse survey data that take account of design features. This paper looks at an alternative method known as inverse sampling.

    Specialized programs, such as SUDAAN and WESVAR, are also available to implement some of the methods developed to take these design features into account. However, these methods require additional information, such as survey weights, design effects or cluster identification on the microdata, and thus an alternative method is of interest.

    Inverse sampling (Hinkins et al., Survey Methodology, 1997) provides an alternative approach by undoing the complex data structures so that standard methods can be applied. Repeated subsamples with a simple random structure are drawn; each subsample is analysed by standard methods, and the results are combined to increase efficiency. Although computer-intensive, this method has the potential to preserve the confidentiality of microdata files. A drawback of the method is that it can lead to biased estimates of regression parameters when the subsample sizes are small (as in the case of stratified cluster sampling).

    In this paper, we propose using the estimating equation approach that combines the subsamples before estimation and thus leads to nearly unbiased estimates of regression parameters regardless of subsample sizes. This method is computationally less intensive than the original method. We apply the method to cluster-correlated data generated from a nested error linear regression model to illustrate its advantages. A real dataset from a Statistics Canada survey will also be analysed using the estimating equation method.
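
    A toy illustration of the "combine before estimating" idea, under assumed details (a logistic model and one unit drawn per cluster in each subsample): summing the subsample score functions before solving is equivalent to fitting a single logistic regression on the stacked subsamples. Proper variance estimation is omitted.

    ```python
    # Sketch: repeated one-unit-per-cluster subsamples (each with a simple random
    # structure) are pooled, and one logistic estimating equation is solved on the
    # pooled data instead of averaging per-subsample estimates. Data are simulated.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n_clusters, cluster_size, B = 100, 8, 50
    cluster_effect = rng.normal(0, 0.5, n_clusters)
    x = rng.normal(0, 1, (n_clusters, cluster_size))
    eta = -0.5 + 1.0 * x + cluster_effect[:, None]
    y = (rng.random(x.shape) < 1 / (1 + np.exp(-eta))).astype(float)

    pooled_x, pooled_y = [], []
    for _ in range(B):                                   # B inverse subsamples
        pick = rng.integers(0, cluster_size, n_clusters) # one unit per cluster
        pooled_x.append(x[np.arange(n_clusters), pick])
        pooled_y.append(y[np.arange(n_clusters), pick])

    X = sm.add_constant(np.concatenate(pooled_x))
    fit = sm.Logit(np.concatenate(pooled_y), X).fit(disp=False)
    print(fit.params.round(2))    # point estimates; variances need extra care
    ```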

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016749
    Description:

    Survey sampling is a statistical domain that has been slow to take advantage of flexible regression methods. In this technical paper, two approaches are discussed that could be used to make these regression methods accessible: adapt the techniques to the complex survey design that has been used or sample the survey data so that the standard techniques are applicable.

    In following the former route, we introduce techniques that account for the complex survey structure of the data for scatterplot smoothing and additive models. The use of penalized least squares in the sampling context is studied as a tool for the analysis of a general trend in a finite population. We focus on smooth regression with a normal error model. Ties in covariates abound in large-scale surveys, so scatterplot smoothers are in effect applied to covariate-level means. The estimation of smooths (for example, smoothing splines) depends on the sampling design only through the sampling weights, meaning that standard software can be used for estimation. Inference for these curves is more challenging, as a result of correlations induced by the sampling design. We propose and illustrate tests that account for the sampling design. Illustrative examples are given using the Ontario Health Survey, including scatterplot smoothing, additive models and model diagnostics. In an attempt to resolve the problem by appropriate sampling of the survey data file, we discuss some of the hurdles that are faced when using this approach.
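
    A bare-bones sketch of a design-weighted penalized scatterplot smoother, using a truncated-linear spline basis with a ridge penalty as a stand-in for the smoothing splines discussed above. The data, weights, knots and smoothing parameter are all illustrative, and no design-based inference is attempted.

    ```python
    # Sketch: weighted penalized least squares smoother.
    # Solve (B'WB + lam*D) beta = B'Wy, penalizing only the knot coefficients.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 400
    x = np.sort(rng.uniform(0, 1, n))
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)
    w = rng.uniform(0.5, 2.0, n)                       # sampling weights

    knots = np.linspace(0.05, 0.95, 20)
    B = np.column_stack([np.ones(n), x] +
                        [np.clip(x - k, 0, None) for k in knots])   # spline basis
    D = np.diag([0.0, 0.0] + [1.0] * knots.size)       # penalize knot terms only
    lam = 1.0

    W = np.diag(w)
    beta = np.linalg.solve(B.T @ W @ B + lam * D, B.T @ W @ y)
    smooth = B @ beta
    print(np.round(smooth[::80], 2))                   # fitted curve at a few points
    ```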

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016750
    Description:

    Analyses of data from social and economic surveys sometimes use generalized variance function models to approximate the design variance of point estimators of population means and proportions. Analysts may use the resulting standard error estimates to compute associated confidence intervals or test statistics for the means and proportions of interest. In comparison with design-based variance estimators computed directly from survey microdata, generalized variance function models have several potential advantages, as will be discussed in this paper, including operational simplicity; increased stability of standard errors; and, for cases involving public-use datasets, reduction of disclosure limitation problems arising from the public release of stratum and cluster indicators.

    These potential advantages, however, may be offset in part by several inferential issues. First, the properties of inferential statistics based on generalized variance functions (e.g., confidence interval coverage rates and widths) depend heavily on the relative empirical magnitudes of the components of variability associated, respectively, with:

    (a) the random selection of a subset of items used in estimation of the generalized variance function model;
    (b) the selection of sample units under a complex sample design;
    (c) the lack of fit of the generalized variance function model; and
    (d) the generation of a finite population under a superpopulation model.

    Second, under certain conditions, one may link each of components (a) through (d) with different empirical measures of the predictive adequacy of a generalized variance function model. Consequently, these measures of predictive adequacy can offer us some insight into the extent to which a given generalized variance function model may be appropriate for inferential use in specific applications.

    Some of the proposed diagnostics are applied to data from the US Survey of Doctoral Recipients and the US Current Employment Survey. For the Survey of Doctoral Recipients, components (a), (c) and (d) are of principal concern. For the Current Employment Survey, components (b), (c) and (d) receive principal attention, and the availability of population microdata allows the development of especially detailed models for components (b) and (c).
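
    For concreteness, a small sketch of the basic generalized variance function idea: fit a model such as relvar(x) = a + b/x to a handful of direct estimates and their relative variances, then use it to predict standard errors for other estimates. The figures are invented and the model form is just one common choice, not necessarily the one used for the surveys above.

    ```python
    # Sketch: fitting a generalized variance function relvar(x) = a + b/x to toy
    # estimated totals and their direct standard errors, then predicting the SE
    # of a new estimate from the fitted model.
    import numpy as np

    estimates = np.array([5e3, 2e4, 8e4, 3e5, 1.2e6, 5e6])       # estimated totals
    direct_se = np.array([1.1e3, 3.0e3, 7.5e3, 1.8e4, 5.0e4, 1.5e5])
    relvar = (direct_se / estimates) ** 2

    # Least squares fit of relvar = a + b / x
    X = np.column_stack([np.ones_like(estimates), 1.0 / estimates])
    a, b = np.linalg.lstsq(X, relvar, rcond=None)[0]

    def gvf_se(x):
        """Model-based standard error for an estimated total x."""
        return x * np.sqrt(a + b / x)

    print(gvf_se(1e5))        # predicted SE for a new estimate
    ```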

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016751
    Description:

    Closing remarks

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016752
    Description:

    Opening remarks of the Symposium 2002: Modelling Survey Data for Social and Economic Research, presented by David Binder.

    Release date: 2004-09-13
Articles and reports (6,983) (60 to 70 of 6,983 results)

  • Articles and reports: 11-522-X202200100001
    Description: Record linkage aims at identifying record pairs related to the same unit and observed in two different data sets, say A and B. Fellegi and Sunter (1969) suggest that each record pair be tested as to whether it was generated from the set of matched or unmatched pairs. The decision function consists of the ratio between m(y) and u(y), the probabilities of observing a comparison y of a set of k > 3 key identifying variables in a record pair under the assumption that the pair is a match or a non-match, respectively. These parameters are usually estimated by means of the EM algorithm, using as data the comparisons on all the pairs of the Cartesian product Ω = A × B. These observations (on the comparisons and on the pairs' status as match or non-match) are assumed to be generated independently of other pairs, an assumption that characterizes most of the literature on record linkage and is implemented in software tools (e.g., RELAIS, Cibella et al. 2012). On the contrary, comparisons y and matching status in Ω are deterministically dependent. As a result, estimates of m(y) and u(y) based on the EM algorithm are usually poor. This fact jeopardizes the effective application of the Fellegi-Sunter method, as well as the automatic computation of quality measures and the possibility of applying efficient methods for model estimation on linked data (e.g., regression functions), as in Chambers et al. (2015). We propose to explore Ω by a set of samples, each one drawn so as to preserve the independence of comparisons among the selected record pairs. Simulations are encouraging.
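
    For reference, a compact sketch of the standard EM estimation of the m- and u-probabilities under conditional independence, applied to simulated binary agreement patterns; that is, the conventional set-up whose independence assumption the abstract questions. All values are simulated.

    ```python
    # Sketch: EM for Fellegi-Sunter m/u probabilities under conditional
    # independence, on simulated agreement vectors for k = 3 key variables.
    import numpy as np

    rng = np.random.default_rng(6)
    k, n_pairs, p_match = 3, 20_000, 0.01
    true_m, true_u = np.array([0.95, 0.9, 0.85]), np.array([0.1, 0.05, 0.2])
    is_match = rng.random(n_pairs) < p_match
    probs = np.where(is_match[:, None], true_m, true_u)
    gamma = (rng.random((n_pairs, k)) < probs).astype(float)   # agreement patterns

    m, u, p = np.full(k, 0.8), np.full(k, 0.2), 0.05           # starting values
    for _ in range(100):
        # E-step: posterior probability that each pair is a match
        lm = p * np.prod(m ** gamma * (1 - m) ** (1 - gamma), axis=1)
        lu = (1 - p) * np.prod(u ** gamma * (1 - u) ** (1 - gamma), axis=1)
        g = lm / (lm + lu)
        # M-step: update m, u and the match proportion
        m = (g[:, None] * gamma).sum(axis=0) / g.sum()
        u = ((1 - g)[:, None] * gamma).sum(axis=0) / (1 - g).sum()
        p = g.mean()

    print(np.round(m, 2), np.round(u, 2), round(p, 3))
    ```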
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100002
    Description: The authors used the Splink probabilistic linkage package developed by the UK Ministry of Justice, to link census data from England and Wales to itself to find duplicate census responses. A large gold standard of confirmed census duplicates was available meaning that the results of the Splink implementation could be quality assured. This paper describes the implementation and features of Splink, gives details of the settings and parameters that we used to tune Splink for our particular project, and gives the results that we obtained.
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100003
    Description: Estimation at fine levels of aggregation is necessary to better describe society. Small area estimation model-based approaches that combine sparse survey data with rich data from auxiliary sources have been proven useful to improve the reliability of estimates for small domains. Considered here is a scenario where small area model-based estimates, produced at a given aggregation level, needed to be disaggregated to better describe the social structure at finer levels. For this scenario, an allocation method was developed to implement the disaggregation, overcoming challenges associated with data availability and model development at such fine levels. The method is applied to adult literacy and numeracy estimation at the county-by-group-level, using data from the U.S. Program for the International Assessment of Adult Competencies. In this application the groups are defined in terms of age or education, but the method could be applied to estimation of other equity-deserving groups.
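
    A minimal sketch of one possible allocation step, assuming a simple proportional rule (the paper's actual method is more involved): a county-level model-based estimate is split across groups in proportion to auxiliary population shares. All figures are hypothetical.

    ```python
    # Sketch: proportional disaggregation of a county-level estimate to age groups
    # using auxiliary population counts. All numbers are hypothetical.
    county_estimate = 12_400            # e.g., estimated adults at the lowest literacy level

    aux_population = {                  # auxiliary counts by age group (hypothetical)
        "16-34": 21_000,
        "35-54": 26_000,
        "55-74": 18_000,
    }
    total = sum(aux_population.values())
    allocated = {grp: county_estimate * n / total for grp, n in aux_population.items()}
    print(allocated)
    ```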
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100004
    Description: In accordance with Statistics Canada’s long-term Disaggregated Data Action Plan (DDAP), several initiatives have been implemented in the Labour Force Survey (LFS). One of the more direct initiatives was a targeted increase in the size of the monthly LFS sample. Furthermore, a regular Supplement program was introduced, where an additional series of questions is asked of a subset of LFS respondents and analyzed in a monthly or quarterly production cycle. Finally, the production of modelled estimates based on Small Area Estimation (SAE) methodologies resumed for the LFS and will have a wider scope and more analytical value than in the past. This paper will give an overview of these three initiatives.
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100005
    Description: Sampling variance smoothing is an important topic in small area estimation. In this paper, we propose sampling variance smoothing methods for small area proportion estimation. In particular, we consider the generalized variance function and design effect methods for sampling variance smoothing. We evaluate and compare the smoothed sampling variances and small area estimates based on the smoothed variance estimates through analysis of survey data from Statistics Canada. The results from real data analysis indicate that the proposed sampling variance smoothing methods work very well for small area estimation.
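
    A small sketch of the design-effect variant of variance smoothing, under assumed inputs: unstable direct variance estimates of small-area proportions are replaced by deff × p(1 − p)/n, with the design effect pooled across areas. The numbers are illustrative, not survey data.

    ```python
    # Sketch: design-effect smoothing of sampling variances for small-area
    # proportions. All inputs are illustrative.
    import numpy as np

    p_hat = np.array([0.12, 0.45, 0.08, 0.30])        # direct proportions by area
    n = np.array([60, 45, 30, 80])                    # area sample sizes
    v_direct = np.array([0.0031, 0.0090, 0.0042, 0.0038])   # direct variance estimates

    # Pooled design effect: ratio of direct to SRS variance, averaged over areas
    deff = np.mean(v_direct / (p_hat * (1 - p_hat) / n))
    v_smooth = deff * p_hat * (1 - p_hat) / n          # smoothed variances
    print(round(deff, 2), np.round(v_smooth, 4))
    ```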
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100006
    Description: The Australian Bureau of Statistics (ABS) is committed to improving access to more microdata, while ensuring privacy and confidentiality are maintained, through its virtual DataLab, which supports researchers in undertaking complex research more efficiently. Currently, DataLab research outputs must follow strict rules to minimise disclosure risks before clearance. However, the clerical-review process is not cost-effective and has the potential to introduce errors. The increasing number of statistical outputs from different projects can also introduce differencing risks, even when the outputs from each project have met the strict output rules. The ABS has been exploring the possibility of providing automatic output checking using the ABS cellkey methodology to ensure that all outputs across different projects are protected consistently, minimising differencing risks and reducing the costs associated with output checking.
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100007
    Description: With the availability of larger and more diverse data sources, statistical institutes in Europe are inclined to publish statistics on smaller groups than they used to. Moreover, high-impact global events such as the COVID-19 crisis and the situation in Ukraine may also call for statistics on specific subgroups of the population. Publishing on small, targeted groups not only raises questions about the statistical quality of the figures, it also raises issues concerning statistical disclosure risk. The principle of statistical disclosure control does not depend on the size of the groups the statistics are based on. However, the risk of disclosure does depend on the group size: the smaller a group, the higher the risk. Traditional ways to deal with statistical disclosure control and small group sizes include suppressing information and coarsening categories. These methods essentially increase the (mean) group sizes. More recent approaches include perturbative methods that aim to keep the group sizes small in order to preserve as much information as possible while reducing the disclosure risk sufficiently. In this paper we mention some European examples of special focus group statistics and discuss the implications for statistical disclosure control. Additionally, we discuss some issues that the use of perturbative methods brings along: the impact on disclosure risk and utility, as well as the challenges in communicating it properly.
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100008
    Description: The publication of more disaggregated data can increase transparency and provide important information on underrepresented groups. Developing more readily available access options increases the amount of information available to and produced by researchers. Increasing the breadth and depth of the information released allows for a better representation of the Canadian population, but also puts a greater responsibility on Statistics Canada to do this in a way that preserves confidentiality, and thus it is helpful to develop tools which allow Statistics Canada to quantify the risk from the additional data granularity. In an effort to evaluate the risk of a database reconstruction attack on Statistics Canada’s published Census data, this investigation follows the strategy of the US Census Bureau, who outlined a method to use a Boolean satisfiability (SAT) solver to reconstruct individual attributes of residents of a hypothetical US Census block, based just on a table of summary statistics. The technique is expanded to attempt to reconstruct a small fraction of Statistics Canada’s Census microdata. This paper will discuss the findings of the investigation, the challenges involved in mounting a reconstruction attack, and the effect of an existing confidentiality measure in mitigating these attacks. Furthermore, the existing strategy is compared to other potential methods used to protect data – in particular, releasing tabular data perturbed by some random mechanism, such as those suggested by differential privacy.
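
    A toy illustration of the reconstruction idea, using the Z3 constraint solver (`pip install z3-solver`) in place of a pure SAT solver: given published summary statistics for a hypothetical three-person block (mean, minimum and maximum age), the solver recovers a consistent set of individual ages. This sketches the attack concept only, not the investigation's actual implementation or scale.

    ```python
    # Sketch: reconstructing individual ages in a tiny hypothetical block from
    # published summary statistics, using the Z3 solver.
    from z3 import Ints, Solver, And, sat

    a1, a2, a3 = Ints("a1 a2 a3")
    s = Solver()
    s.add(And(a1 >= 0, a2 >= 0, a3 >= 0, a1 <= a2, a2 <= a3))   # ages, sorted
    s.add(a1 + a2 + a3 == 3 * 30)        # published mean age = 30
    s.add(a1 == 24, a3 == 39)            # published minimum and maximum ages

    if s.check() == sat:
        print(s.model())                 # the unique consistent set of ages
    ```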
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100009
    Description: Education and training are acknowledged as fundamental to the development of a society. They form a complex, multidimensional phenomenon whose determinants are ascribable to several interrelated family and socio-economic conditions. To respond to the demand for statistical information supporting policymaking and its monitoring and evaluation, the Italian National Statistical Institute (Istat) is renewing its education and training statistical production system by implementing a new thematic statistical register. It will be part of the Istat Integrated System of Registers, thus allowing education and training to be related to other relevant phenomena, e.g., the transition to work.
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100010
    Description: Growing Up in Québec is a longitudinal population survey that began in the spring of 2021 at the Institut de la statistique du Québec. Among the children targeted by this longitudinal follow-up, some will experience developmental difficulties at some point in their lives. Those same children often have characteristics associated with higher sample attrition (low-income family, parents with a low level of education). This article describes the two main challenges we encountered when trying to ensure sufficient representativeness of these children, in both the overall results and the subpopulation analyses.
    Release date: 2024-03-25
Journals and periodicals (323) (90 to 100 of 323 results)

  • Journals and periodicals: 88-204-X
    Description:

    This report provides statistical information on the federal government's activities in science and technology. It covers expenditures and person-years by type of science, performing sectors, provinces and federal departments and agencies. Technical notes, definitions, a bibliography and a subject index are included.

    Release date: 2014-06-06

  • Journals and periodicals: 61-205-X
    Description:

    This publication presents capital and repair expenditures on construction and on machinery and equipment for divisions and industries at the Canada level and by division at the provincial level. The report also provides the split between private and public investment. The tabulations focus on capital spending intentions for the coming year, preliminary estimates of actual investment for the current year and the actual investment for the previous year. The investment data are gathered from about 25,000 establishments and establishment groups in Canadian businesses, institutions and governments.

    Release date: 2014-02-28

  • Journals and periodicals: 89-555-X
    Description:

    The Programme for the International Assessment of Adult Competencies (PIAAC), an initiative of the OECD, provides internationally comparable measures of three skills that are essential to processing information: literacy, numeracy, and problem-solving in technology-rich environments (referred to as PS-TRE). Canada is one of 24 countries and sub-national regions participating in this initiative. This study aims to provide a picture of the competencies of the Canadian population aged 16 to 65 in all three skill domains.

    Release date: 2013-10-18

  • Journals and periodicals: 89-637-X
    Geography: Canada
    Description:

    The Aboriginal Peoples Survey is a national survey of Aboriginal peoples (First Nations people living off-reserve, Métis and Inuit) living in urban, rural and northern locations throughout Canada. The survey provides valuable data on the social and economic conditions of Aboriginal children and youth (6-14 years) and Aboriginal people (15 years and over). It was conducted previously in 1991 and in 2001. The survey was designed and implemented in partnership with national Aboriginal organizations. The purpose of the Aboriginal Peoples Survey was to provide data on the social and economic conditions of Aboriginal people in Canada. More specifically, its purpose was to identify the needs of Aboriginal people and focus on issues such as health, language, employment, income, schooling, housing, and mobility. More detailed information about the survey is available in the APS 2006 Concepts and Methods Guide.

    Release date: 2013-03-27

  • Journals and periodicals: 11-526-X
    Description:

    Statistics Canada periodically conducts the Household and the Environment Survey to measure household actions that have, or are perceived to have, positive or negative impacts on the environment. The survey provides baseline information to use in measuring progress towards sound environmental practices at the household level. The subjects examined include consumption and conservation of energy, consumption and conservation of water, indoor environment, use of pesticides and fertilizers, outdoor air quality and consumer decisions.

    Release date: 2013-03-18

  • Journals and periodicals: 11-402-X
    Geography: Canada
    Description:

    Presented in almanac style, the 2012 Canada Year Book contains more than 500 pages of tables, charts and succinct analytical articles on every major area of Statistics Canada's expertise. The Canada Year Book is the premier reference on the social and economic life of Canada and its citizens.

    Release date: 2012-12-24

  • Journals and periodicals: 89-651-X
    Description:

    This article presents employment and unemployment rates, and some information on the salaries and industrial sectors of employees, for official-language minorities. These data are based on the Labour Force Survey and enable comparisons between official-language minorities and the majority with respect to their labour market situation, for provinces or groups of provinces.

    Release date: 2012-11-01

  • Journals and periodicals: 88-001-X
    Description:

    This series, which consists of about six issues per year, presents a variety of science and technology statistics. Each issue concerns a different topic, for example: research and development expenditures and personnel in business enterprises, science and technology expenditures and personnel in the federal government or provincial governments; and estimates of higher education expenditures on research and development.

    Release date: 2012-09-20

  • Journals and periodicals: 67-001-X
    Description:

    This publication presents Canada's transactions with non-residents on a quarterly basis. These transactions are grouped under two main accounts: the current account which includes goods, services, investment income and current transfers; and the capital and financial account which includes information on a country's investing and financing activities. The transactions are further broken down by major geographical region: United States, United Kingdom, other countries of the European Union, Japan, other countries of the Organization for Economic Co-operation and Development, and all other countries. The data are presented quarterly and annually for the six most recent years.

    Each publication includes several pages of data analysis accompanied by graphics, definitions, CANSIM data bank numbers, data quality measures and a list of occasional articles and research papers. The first quarter issue includes revisions to quarterly and annual data for the most recent four years. Statistics are derived from surveys, administrative data and other sources.

    Release date: 2012-09-04

  • Journals and periodicals: 75-001-X
    Geography: Canada
    Description: This publication brings together and analyzes a wide range of labour and income data. Topics include youth in the labour market, pensions and retirement, work arrangements, education and training, and trends in family income. One section highlights new products, surveys, research projects and conferences. Another section uses charts and text to describe a variety of subjects related to labour and income. Each winter print issue contains an index of all published articles.

    To find the latest updates on labour market and household issues such as gambling, minimum wage, retirement and unionization, please visit: Topics of interest on labour and income.

    Release date: 2012-08-22