Results

All (9,956) (7,290 to 7,300 of 9,956 results)

  • Articles and reports: 11-522-X20020016745
    Description:

    The attractiveness of the Regression Discontinuity Design (RDD) rests on its close similarity to a normal experimental design. On the other hand, it is of limited applicability since it is not often the case that units are assigned to the treatment group on the basis of an observable (to the analyst) pre-program measure. Besides, it only allows identification of the mean impact on a very specific subpopulation. In this technical paper, we show that the RDD straightforwardly generalizes to the instances in which the units' eligibility is established on an observable pre-program measure with eligible units allowed to freely self-select into the program. This set-up also proves to be very convenient for building a specification test on conventional non-experimental estimators of the program mean impact. The data requirements are clearly described.

    Release date: 2004-09-13
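The sharp-RDD setting that the paper generalizes can be illustrated with a small simulation: fit a linear regression on each side of the cutoff and difference the fitted values at the cutoff. The data, cutoff, bandwidth and true effect below are all assumptions of this sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated units: pre-program score x, with treatment assigned when x >= 0
# (the cutoff). The true effect at the cutoff is 2.0, an assumed value.
n = 2000
x = rng.uniform(-1, 1, n)
treated = x >= 0.0
y = 1.0 + 0.5 * x + 2.0 * treated + rng.normal(0, 0.3, n)

def rdd_estimate(x, y, cutoff=0.0, bandwidth=0.5):
    """Sharp-RDD estimate: fit a separate linear regression on each side of
    the cutoff within a bandwidth and take the difference of the two fitted
    values at the cutoff itself."""
    left = (x < cutoff) & (x >= cutoff - bandwidth)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    # np.polyfit returns [slope, intercept]; np.polyval evaluates at the cutoff.
    fit_left = np.polyfit(x[left], y[left], 1)
    fit_right = np.polyfit(x[right], y[right], 1)
    return float(np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff))

effect = rdd_estimate(x, y)
```

The estimate recovers the simulated effect at the cutoff; the paper's point is that the same identification logic extends to eligibility rules with self-selection.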

  • Articles and reports: 11-522-X20020016746
    Description:

    In 1961, the European Commission launched a harmonized qualitative survey program aimed at consumers and at the heads of companies (industry, services, construction, retail trade, investments); it covers more than 40 countries today. These qualitative surveys are designed to gauge the economic situation of these companies. Results are available a few days after the end of the reference period, well before the results of the quantitative surveys.

    Although qualitative, these surveys have quickly become an essential tool for cyclical diagnosis and short-term economic forecasting. This product shows how these surveys are used by the European Commission, in particular by the Directorate-General for Economic and Financial Affairs (DG ECFIN) and the Statistical Office of the European Communities (EUROSTAT), to evaluate the economic situation of the Euro zone.

    The first part of this product briefly presents the harmonized European business and consumer survey program. In the second part, we look at how DG ECFIN calculates a coincident indicator of economic activity, using a dynamic factor analysis of the questions of the industry survey. This type of indicator also makes it possible to study the convergence of the economic cycles of the member states. The quantitative short-term indicators for the Euro zone are often criticized for the delay with which they are published. In the third part, we look at how EUROSTAT plans to publish flash estimates of the industrial product price index (IPPI) resulting from econometric models that integrate the business survey series. Lastly, we show how these surveys can be used to forecast gross domestic product (GDP) and to define proxies for some unavailable key indicators (new orders in industry, etc.).

    Release date: 2004-09-13
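DG ECFIN's indicator is built with dynamic factor analysis; as a rough static stand-in, the first principal component of standardized survey balances can serve as a single coincident indicator. The series below are simulated, not actual Commission survey data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical monthly survey balances: 60 months of answers to 4 industry
# questions, all driven by one common cyclical factor plus idiosyncratic noise.
t = 60
cycle = np.sin(np.linspace(0, 4 * np.pi, t))
loadings = np.array([0.9, 0.8, 0.7, 0.6])
balances = cycle[:, None] * loadings + rng.normal(0, 0.3, (t, 4))

# Static stand-in for the dynamic factor model: take the first principal
# component of the standardized series as the coincident indicator.
z = (balances - balances.mean(axis=0)) / balances.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
indicator = z @ eigvecs[:, -1]  # eigenvector with the largest eigenvalue

# The indicator tracks the common cycle up to sign and scale.
tracking = abs(float(np.corrcoef(indicator, cycle)[0, 1]))
```

A dynamic factor model additionally models the factor's time-series dynamics; the static component here only shows why a common factor summarizes several balance series.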

  • Articles and reports: 11-522-X20020016747
    Description:

    This project seeks to shed light not only on the degree to which individuals are stuck in the low-income range, but also on those who have sufficient opportunity to move into the upper part of the income distribution. It also seeks to compare patterns of mobility through the income distribution in North America and Europe, shedding light on the impact of different models of integration. Cross-National Equivalent File data from the British Household Panel Survey (BHPS) for the United Kingdom, the German Socio-Economic Panel (GSOEP) for Germany, the Panel Study of Income Dynamics (PSID) for the United States and the Survey of Labour Income Dynamics (SLID) for Canada offer a comparative analysis of the dynamics of household income during the 1990s, paying particular attention to both low- and high-income dynamics. Canadian administrative data drawn from income tax files are also used. These panel datasets range in length from six years (for the SLID) to almost 20 years (for the PSID and the Canadian administrative data). The analysis focuses on developments during the 1990s, but also explores the sensitivity of the results to changes in the length of the period analysed.

    The analysis begins by offering a broad descriptive overview of the major characteristics and events (demographic versus labour market) that determine levels and changes in adjusted household incomes. Attention is paid to movements into and out of low- and high- income ranges. A number of definitions are used, incorporating absolute and relative notions of poverty. The sensitivity of the results to the use of various equivalence scales is examined. An overview offers a broad picture of the state of household income in each country and the relative roles of family structure, the labour market and welfare state in determining income mobility. The paper employs discrete time-hazard methods to model the dynamics of entry to and exit from both low and high income.

    Both observed and unobserved heterogeneity are controlled for with the intention of highlighting differences in the determinants of the transition rates between the countries. This is done in a way that assesses the importance of the relative roles of family, market and state. Attention is also paid to important institutional changes, most notably the increasing integration of product and labour markets in North America and Europe.

    Release date: 2004-09-13
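The discrete time-hazard methods mentioned above can be illustrated with a simple nonparametric life-table version on simulated spell data; the constant true hazard and six-year observation window are assumptions of this sketch, not SLID or PSID values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical panel of 5,000 low-income spells observed yearly until the
# household exits low income or the 6-year window ends (censoring). The true
# exit hazard is assumed constant at 0.3 per year for this sketch.
n, window = 5000, 6
durations = rng.geometric(0.3, n)          # year of exit: 1, 2, 3, ...
exited = durations <= window               # spells still open at year 6 are censored
durations = np.minimum(durations, window)

def hazard(durations, exited, t):
    """Discrete-time hazard at duration t: spells ending at t divided by the
    spells still at risk (neither exited nor censored) entering year t."""
    at_risk = np.sum(durations >= t)
    exits = np.sum((durations == t) & exited)
    return float(exits / at_risk)

h1 = hazard(durations, exited, 1)
```

The paper's models replace these raw hazards with regressions that condition on observed covariates and unobserved heterogeneity.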

  • Articles and reports: 11-522-X20020016748
    Description:

    Practitioners often use data collected from complex surveys (such as labour force and health surveys involving stratified cluster sampling) to fit logistic regression and other models of interest. A great deal of effort over the last two decades has been spent on developing methods to analyse survey data that take account of design features. This paper looks at an alternative method known as inverse sampling.

    Specialized programs, such as SUDAAN and WESVAR, are available to implement some of the methods developed to take the design features into account. However, these methods require additional information, such as survey weights, design effects or cluster identifiers on the microdata, which is not always available; in that case, another method is necessary.

    Inverse sampling (Hinkins et al., Survey Methodology, 1997) provides an alternative approach by undoing the complex data structures so that standard methods can be applied. Repeated subsamples with simple random structure are drawn; each subsample is analysed by standard methods, and the results are combined to increase efficiency. Although computer-intensive, this method has the potential to preserve the confidentiality of microdata files. A drawback of the method is that it can lead to biased estimates of regression parameters when the subsample sizes are small (as in the case of stratified cluster sampling).

    In this paper, we propose using the estimating equation approach that combines the subsamples before estimation and thus leads to nearly unbiased estimates of regression parameters regardless of subsample sizes. This method is computationally less intensive than the original method. We apply the method to cluster-correlated data generated from a nested error linear regression model to illustrate its advantages. A real dataset from a Statistics Canada survey will also be analysed using the estimating equation method.

    Release date: 2004-09-13
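A minimal sketch of the basic inverse-sampling idea (not the authors' estimating-equation refinement), assuming a toy one-stage cluster sample:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-stage cluster sample: 50 clusters of 20 units each, with cluster
# effects supplying the complex structure that standard methods ignore.
n_clusters, m = 50, 20
cluster_effect = rng.normal(0, 1, n_clusters)
data = cluster_effect[:, None] + rng.normal(0, 1, (n_clusters, m))

def inverse_sampling_mean(data, n_subsamples=200):
    """Basic inverse-sampling recipe: repeatedly draw one unit at random from
    each cluster (a subsample whose observations are independent), apply the
    standard estimator (here, the sample mean) to each subsample, and combine
    the subsample estimates by averaging to recover efficiency."""
    estimates = []
    for _ in range(n_subsamples):
        picks = rng.integers(0, data.shape[1], size=data.shape[0])
        estimates.append(data[np.arange(data.shape[0]), picks].mean())
    return float(np.mean(estimates))

est = inverse_sampling_mean(data)
```

The paper's contribution is to combine the subsamples *before* estimation via estimating equations, which removes the small-subsample bias that averaging per-subsample regression estimates can leave behind.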

  • Articles and reports: 11-522-X20020016749
    Description:

    Survey sampling is a statistical domain that has been slow to take advantage of flexible regression methods. In this technical paper, two approaches are discussed that could be used to make these regression methods accessible: adapt the techniques to the complex survey design that has been used or sample the survey data so that the standard techniques are applicable.

    In following the former route, we introduce techniques that account for the complex survey structure of the data for scatterplot smoothing and additive models. The use of penalized least squares in the sampling context is studied as a tool for the analysis of a general trend in a finite population. We focus on smooth regression with a normal error model. Ties in covariates abound in large-scale surveys, resulting in the application of scatterplot smoothers to means. The estimation of smooths (for example, smoothing splines) depends on the sampling design only via the sampling weights, meaning that standard software can be used for estimation. Inference for these curves is more challenging, as a result of correlations induced by the sampling design. We propose and illustrate tests that account for the sampling design. Illustrative examples are given using the Ontario health survey, including scatterplot smoothing, additive models and model diagnostics. In an attempt to resolve the problem by appropriate sampling of the survey data file, we discuss some of the hurdles that are faced when using this approach.

    Release date: 2004-09-13
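The point that the design enters estimation only through the sampling weights can be illustrated with a design-weighted kernel smoother; the data and weights below are simulated assumptions, not the Ontario health survey.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical survey records: covariate x, response y and design weights w.
n = 1000
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)
w = rng.uniform(0.5, 2.0, n)  # unequal sampling weights (assumed, not real data)

def weighted_smooth(x0, x, y, w, bandwidth=0.05):
    """Design-weighted Nadaraya-Watson smoother: an ordinary kernel smoother
    in which each observation's kernel weight is multiplied by its survey
    weight, so the design enters only through the weights."""
    k = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2) * w
    return float(np.sum(k * y) / np.sum(k))

fit_at_peak = weighted_smooth(0.25, x, y, w)  # true curve equals 1.0 at x = 0.25
```

As the abstract notes, point estimation like this is routine; the hard part is inference, because design-induced correlations invalidate the usual standard errors for the fitted curve.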

  • Articles and reports: 11-522-X20020016750
    Description:

    Analyses of data from social and economic surveys sometimes use generalized variance function models to approximate the design variance of point estimators of population means and proportions. Analysts may use the resulting standard error estimates to compute associated confidence intervals or test statistics for the means and proportions of interest. In comparison with design-based variance estimators computed directly from survey microdata, generalized variance function models have several potential advantages, as will be discussed in this paper, including operational simplicity; increased stability of standard errors; and, for cases involving public-use datasets, reduction of disclosure limitation problems arising from the public release of stratum and cluster indicators.

    These potential advantages, however, may be offset in part by several inferential issues. First, the properties of inferential statistics based on generalized variance functions (e.g., confidence interval coverage rates and widths) depend heavily on the relative empirical magnitudes of the components of variability associated, respectively, with:

    (a) the random selection of a subset of items used in estimation of the generalized variance function model;
    (b) the selection of sample units under a complex sample design;
    (c) the lack of fit of the generalized variance function model;
    (d) the generation of a finite population under a superpopulation model.

    Second, under certain conditions, one may link each of components (a) through (d) with different empirical measures of the predictive adequacy of a generalized variance function model. Consequently, these measures of predictive adequacy can offer some insight into the extent to which a given generalized variance function model may be appropriate for inferential use in specific applications.

    Some of the proposed diagnostics are applied to data from the US Survey of Doctoral Recipients and the US Current Employment Survey. For the Survey of Doctoral Recipients, components (a), (c) and (d) are of principal concern. For the Current Employment Survey, components (b), (c) and (d) receive principal attention, and the availability of population microdata allows the development of especially detailed models for components (b) and (c).

    Release date: 2004-09-13
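A common GVF specification relates the relative variance of an estimated total x to relvar = a + b/x; the sketch below fits that model by least squares on simulated inputs (the values of a, b and the noise level are assumptions of this sketch).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical inputs: direct estimates of totals x_hat and their direct
# relative variances, generated from an assumed GVF law relvar = a + b / x
# with a = 1e-4 and b = 50, plus estimation noise.
x_hat = rng.uniform(1e3, 1e6, 80)
relvar = 1e-4 + 50.0 / x_hat + rng.normal(0, 1e-5, 80)

# Fit the GVF by least squares of relvar on the regressors (1, 1/x).
design = np.column_stack([np.ones_like(x_hat), 1.0 / x_hat])
(a_fit, b_fit), *_ = np.linalg.lstsq(design, relvar, rcond=None)

def gvf_se(x):
    """Model-based standard error of an estimated total x from the fitted GVF,
    used in place of a direct design-based variance estimate."""
    return float(x * np.sqrt(a_fit + b_fit / x))
```

Components (a) through (d) above correspond to distinct reasons this fitted curve can mislead: which estimates were used to fit it, the design variability of each input, lack of fit of the a + b/x form, and superpopulation variability.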

  • Articles and reports: 11-522-X20020016751
    Description:

    Closing remarks

    Release date: 2004-09-13

  • Articles and reports: 11-522-X20020016752
    Description:

    Opening remarks of the Symposium 2002: Modelling Survey Data for Social and Economic Research, presented by David Binder.

    Release date: 2004-09-13

  • Keynote address
    Articles and reports: 11-522-X20020016753
    Description:

    Keynote Address.

    Release date: 2004-09-13

  • Articles and reports: 11F0019M2004229
    Geography: Canada
    Description:

    This study examines trends in the internal migration of the Canadian-born and long-term immigrants into and out of Canada's three largest metropolitan areas.

    Release date: 2004-09-13
Stats in brief (2,659) (60 to 70 of 2,659 results)

Articles and reports (6,974) (50 to 60 of 6,974 results)

  • Articles and reports: 36-28-0001202400300006
    Description: Research generally supports the idea that technological change has favoured the demand for workers in occupations requiring higher levels of education and skills and negatively affected employment in occupations requiring lower skill levels. This article assesses the changes over the past two decades in the occupational skill level of employment in Canada, with a focus on the role of immigration in the changing occupational structure.
    Release date: 2024-03-27

  • Articles and reports: 11-522-X202200100001
    Description: Record linkage aims at identifying record pairs related to the same unit and observed in two different data sets, say A and B. Fellegi and Sunter (1969) suggest testing whether each record pair is generated from the set of matched or unmatched pairs. The decision function consists of the ratio between m(y) and u(y), the probabilities of observing a comparison y of a set of k>3 key identifying variables in a record pair under the assumptions that the pair is a match or a non-match, respectively. These parameters are usually estimated by means of the EM algorithm using as data the comparisons on all the pairs of the Cartesian product Ω = A×B. These observations (on the comparisons and on the pairs' status as match or non-match) are assumed to be generated independently of other pairs, an assumption characterizing most of the literature on record linkage and implemented in software tools (e.g. RELAIS, Cibella et al. 2012). On the contrary, comparisons y and matching status in Ω are deterministically dependent. As a result, estimates of m(y) and u(y) based on the EM algorithm are usually poor. This fact jeopardizes the effective application of the Fellegi-Sunter method, as well as the automatic computation of quality measures and the possibility of applying efficient methods for model estimation on linked data (e.g. regression functions), as in Chambers et al. (2015). We propose to explore Ω by a set of samples, each one drawn so as to preserve independence of comparisons among the selected record pairs. Simulations are encouraging.
    Release date: 2024-03-25
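The Fellegi-Sunter decision rule based on the ratio of m(y) to u(y) can be sketched as a composite log-weight over the key variables; the m- and u-probabilities below are illustrative assumptions, not estimates from real data.

```python
import math

# Illustrative m- and u-probabilities for three key identifying variables
# (assumed values): m[k] = P(agree on variable k | pair is a match), and
# u[k] is the same probability given the pair is a non-match.
m = [0.95, 0.90, 0.85]
u = [0.10, 0.05, 0.20]

def match_weight(agreement):
    """Fellegi-Sunter composite weight: add log2(m/u) for each agreeing
    variable and log2((1-m)/(1-u)) for each disagreeing one. Pairs with a
    large positive weight are declared links; large negative, non-links."""
    w = 0.0
    for mk, uk, agree in zip(m, u, agreement):
        w += math.log2(mk / uk) if agree else math.log2((1 - mk) / (1 - uk))
    return w

full_agreement = match_weight([True, True, True])
full_disagreement = match_weight([False, False, False])
```

The abstract's concern is precisely that m and u estimated by EM over all of Ω can be poor, which corrupts these weights; the proposed sampling of Ω is meant to restore the independence the EM step assumes.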

  • Articles and reports: 11-522-X202200100002
    Description: The authors used the Splink probabilistic linkage package developed by the UK Ministry of Justice to link census data from England and Wales to itself and find duplicate census responses. A large gold standard of confirmed census duplicates was available, meaning that the results of the Splink implementation could be quality assured. This paper describes the implementation and features of Splink, gives details of the settings and parameters that we used to tune Splink for our particular project, and gives the results that we obtained.
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100003
    Description: Estimation at fine levels of aggregation is necessary to better describe society. Small area estimation model-based approaches that combine sparse survey data with rich data from auxiliary sources have been proven useful to improve the reliability of estimates for small domains. Considered here is a scenario where small area model-based estimates, produced at a given aggregation level, needed to be disaggregated to better describe the social structure at finer levels. For this scenario, an allocation method was developed to implement the disaggregation, overcoming challenges associated with data availability and model development at such fine levels. The method is applied to adult literacy and numeracy estimation at the county-by-group level, using data from the U.S. Program for the International Assessment of Adult Competencies. In this application the groups are defined in terms of age or education, but the method could be applied to estimation for other equity-deserving groups.
    Release date: 2024-03-25
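The allocation step described above can be sketched as proportional disaggregation that preserves the published aggregate; the county estimate and shares below are hypothetical, not PIAAC figures.

```python
def allocate(total_estimate, aux_shares):
    """Disaggregate an aggregate-level estimate to finer groups in proportion
    to shares taken from an auxiliary source, so that the group figures sum
    back to the published aggregate."""
    s = sum(aux_shares.values())
    return {group: total_estimate * share / s for group, share in aux_shares.items()}

# Hypothetical county-level estimate of adults with low literacy, split by
# age group using invented auxiliary shares (not PIAAC data).
by_age = allocate(12000.0, {"25-34": 0.20, "35-54": 0.45, "55-65": 0.35})
```

Normalizing by the sum of the shares makes the allocation benchmark-consistent even when the auxiliary shares do not add exactly to one.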

  • Articles and reports: 11-522-X202200100004
    Description: In accordance with Statistics Canada’s long-term Disaggregated Data Action Plan (DDAP), several initiatives have been implemented in the Labour Force Survey (LFS). One of the more direct initiatives was a targeted increase in the size of the monthly LFS sample. Furthermore, a regular Supplement program was introduced, in which an additional series of questions is asked of a subset of LFS respondents and analyzed in a monthly or quarterly production cycle. Finally, the production of modelled estimates based on Small Area Estimation (SAE) methodologies resumed for the LFS and will include a wider scope with more analytical value than what had existed in the past. This paper will give an overview of these three initiatives.
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100005
    Description: Sampling variance smoothing is an important topic in small area estimation. In this paper, we propose sampling variance smoothing methods for small area proportion estimation. In particular, we consider the generalized variance function and design effect methods for sampling variance smoothing. We evaluate and compare the smoothed sampling variances and small area estimates based on the smoothed variance estimates through analysis of survey data from Statistics Canada. The results from real data analysis indicate that the proposed sampling variance smoothing methods work very well for small area estimation.
    Release date: 2024-03-25
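The design-effect route to sampling variance smoothing can be sketched directly: replace the unstable direct variance estimate with the binomial variance inflated by a design effect borrowed from larger domains. The deff value below is an assumed input, not an estimate from Statistics Canada data.

```python
import math

def smoothed_variance(p_hat, n, deff):
    """Design-effect smoothing for a small-area proportion: replace the
    unstable direct variance estimate with the binomial variance p(1-p)/n
    inflated by a design effect borrowed from larger domains."""
    return deff * p_hat * (1 - p_hat) / n

# A small domain: 40 respondents, estimated proportion 0.25, assumed deff 1.5.
v = smoothed_variance(0.25, 40, 1.5)
se = math.sqrt(v)
```

A generalized variance function, the paper's other option, instead fits a model for the variance across many domains and reads the smoothed value off the fitted curve.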

  • Articles and reports: 11-522-X202200100006
    Description: The Australian Bureau of Statistics (ABS) is committed to improving access to more microdata, while ensuring that privacy and confidentiality are maintained, through its virtual DataLab, which supports researchers in undertaking complex research more efficiently. Currently, DataLab research outputs must follow strict rules to minimise disclosure risks before clearance. However, the clerical-review process is not cost-effective and has the potential to introduce errors. The increasing number of statistical outputs from different projects can introduce differencing risks even when each output has met the strict output rules. The ABS has been exploring the possibility of providing automatic output checking using the ABS cellkey methodology to ensure that all outputs across different projects are protected consistently, to minimise differencing risks and reduce the costs associated with output checking.
    Release date: 2024-03-25
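A toy illustration of the cell-key principle (an invented scheme, not the ABS cellkey methodology itself): noise is a deterministic function of a key derived from the records in a cell, so the same cell is perturbed identically in every output, which is what removes differencing risk between tables.

```python
def cell_key_perturb(count, record_keys, magnitude=2):
    """Toy cell-key perturbation: every record carries a fixed key; the cell's
    key is derived from its records' keys, and the noise added to the cell
    count is a deterministic function of that cell key. The same cell thus
    receives the same perturbed value in every output."""
    cell_key = (sum(record_keys) % 1000) / 1000.0     # cell key in [0, 1)
    noise = round((2 * cell_key - 1) * magnitude)     # noise in [-magnitude, magnitude]
    return max(count + noise, 0)

# The same cell, appearing in two different outputs, perturbs identically.
first = cell_key_perturb(10, [123, 456, 789])
second = cell_key_perturb(10, [123, 456, 789])
```

With independent random noise instead, subtracting two overlapping tables would let the noise cancel and the true counts leak; keying the noise to the cell's records closes that channel.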

  • Articles and reports: 11-522-X202200100007
    Description: With the availability of larger and more diverse data sources, statistical institutes in Europe are inclined to publish statistics on smaller groups than they used to. Moreover, high-impact global events like the COVID-19 crisis and the situation in Ukraine may also call for statistics on specific subgroups of the population. Publishing on small, targeted groups not only raises questions about the statistical quality of the figures, it also raises issues concerning statistical disclosure risk. The principle of statistical disclosure control does not depend on the size of the groups the statistics are based on. However, the risk of disclosure does depend on the group size: the smaller a group, the higher the risk. Traditional ways to deal with statistical disclosure control and small group sizes include suppressing information and coarsening categories. These methods essentially increase the (mean) group sizes. More recent approaches include perturbative methods that intend to keep the group sizes small in order to preserve as much information as possible while reducing the disclosure risk sufficiently. In this paper we mention some European examples of special focus group statistics and discuss the implications for statistical disclosure control. Additionally, we discuss some issues that the use of perturbative methods brings along: their impact on disclosure risk and utility, as well as the challenges in communicating them properly.
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100008
    Description: The publication of more disaggregated data can increase transparency and provide important information on underrepresented groups. Developing more readily available access options increases the amount of information available to and produced by researchers. Increasing the breadth and depth of the information released allows for a better representation of the Canadian population, but also puts a greater responsibility on Statistics Canada to do this in a way that preserves confidentiality, and thus it is helpful to develop tools which allow Statistics Canada to quantify the risk from the additional data granularity. In an effort to evaluate the risk of a database reconstruction attack on Statistics Canada’s published Census data, this investigation follows the strategy of the US Census Bureau, who outlined a method to use a Boolean satisfiability (SAT) solver to reconstruct individual attributes of residents of a hypothetical US Census block, based just on a table of summary statistics. The technique is expanded to attempt to reconstruct a small fraction of Statistics Canada’s Census microdata. This paper will discuss the findings of the investigation, the challenges involved in mounting a reconstruction attack, and the effect of an existing confidentiality measure in mitigating these attacks. Furthermore, the existing strategy is compared to other potential methods used to protect data – in particular, releasing tabular data perturbed by some random mechanism, such as those suggested by differential privacy.
    Release date: 2024-03-25
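The reconstruction idea can be shown at toy scale without a SAT solver: exhaustively enumerate the microdata consistent with a published summary table. The block, statistics and sizes below are invented; a SAT solver performs the same search far more cleverly on realistic inputs.

```python
from itertools import combinations_with_replacement

# Invented example: a block of 5 residents whose ages are confidential, with a
# published summary giving only count, mean, median, minimum and maximum.
published = {"count": 5, "mean": 30.0, "median": 30, "low": 18, "high": 44}

def consistent_age_sets(pub):
    """Enumerate every non-decreasing age tuple consistent with the published
    statistics: brute force standing in for the SAT search used at real scale.
    combinations_with_replacement yields sorted tuples, so ages[0] is the
    minimum, ages[-1] the maximum and ages[2] the median of five values."""
    hits = []
    ages_range = range(pub["low"], pub["high"] + 1)
    for ages in combinations_with_replacement(ages_range, pub["count"]):
        if (ages[0] == pub["low"] and ages[-1] == pub["high"]
                and ages[2] == pub["median"]
                and sum(ages) / len(ages) == pub["mean"]):
            hits.append(ages)
    return hits

solutions = consistent_age_sets(published)
```

Only a handful of candidate databases survive even these five statistics; publishing more cross-tabulations shrinks the solution set further, which is why perturbation of the published cells is the mitigation the paper examines.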

  • Articles and reports: 11-522-X202200100009
    Description: Education and training is acknowledged as fundamental for the development of a society. It is a complex, multidimensional phenomenon whose determinants are ascribable to several interrelated familial and socio-economic conditions. To respond to the demand for supporting statistical information for policymaking and its monitoring and evaluation process, the Italian National Statistical Institute (Istat) is renewing the education and training statistical production system by implementing a new thematic statistical register. It will be part of the Istat Integrated System of Registers, thus making it possible to relate education and training to other relevant phenomena, e.g. the transition to work.
    Release date: 2024-03-25
Journals and periodicals (323)

Journals and periodicals (323) (40 to 50 of 323 results)

  • Journals and periodicals: 98-20-0003
    Description: Once every five years, the Census of Population provides a detailed and comprehensive statistical portrait of Canada that is vital to our country. It is the primary source of sociodemographic data for specific population groups such as lone-parent families, Indigenous peoples, immigrants, seniors and language groups.

    In order to help users of census products to better understand the various Census of Population concepts, Statistics Canada has developed, in the context of the activities of the 2021 Census and previous censuses, a collection of short videos. These videos are a reference source for users who are new to census concepts or those who have some experience with these concepts, but may need a refresher or would like to expand their knowledge.

    Release date: 2023-11-15

  • Journals and periodicals: 45-26-0001
    Description: The Departmental Sustainable Development Strategy (DSDS) outlines departmental actions, with measurable performance indicators, that support the implementation strategies of the 2022-2026 Federal Sustainable Development Strategy. The DSDS further outlines Statistics Canada’s sustainable development vision to produce data to help track whether Canada is moving toward a more sustainable future and highlights projects with links to supporting sustainable development goals.
    Release date: 2023-11-14

  • Journals and periodicals: 62F0026M
    Description: This series provides detailed documentation on the issues, concepts, methodology, data quality and other relevant research related to household expenditures from the Survey of Household Spending, the Homeowner Repair and Renovation Survey and the Food Expenditure Survey.
    Release date: 2023-10-18

  • Journals and periodicals: 12-206-X
    Description: This report summarizes the annual achievements of the Methodology Research and Development Program (MRDP) sponsored by the Modern Statistical Methods and Data Science Branch at Statistics Canada. This program covers research and development activities in statistical methods with potentially broad application in the agency’s statistical programs; these activities would otherwise be less likely to be carried out during the provision of regular methodology services to those programs. The MRDP also includes activities that provide support in the application of past successful developments in order to promote the use of the results of research and development work. Selected prospective research activities are also presented.
    Release date: 2023-10-11

  • Journals and periodicals: 16-001-M
    Description: The series covers environment accounts and indicators, environmental surveys, spatial environmental information and other research related to environmental statistics. The technical paper series is intended to stimulate discussion on a range of environmental topics.
    Release date: 2023-09-13

  • Table: 51-004-X
    Description: This bulletin presents the most up-to-date available information extracted from all of the Aviation Statistics Centre's surveys. Regular features include releases on principal statistics for Canada's major air carriers, airport data, fare basis statistics and traffic data for Canada's most important markets.
    Release date: 2023-07-28

  • Journals and periodicals: 21-006-X
    Geography: Canada
    Description: This series of analytical articles provides insights on the socio-economic environment in rural communities in Canada. New articles will be released periodically.
    Release date: 2023-07-24

  • Journals and periodicals: 89-20-0006
    Description: Statistics Canada is committed to sharing its knowledge and expertise to help all Canadians develop their data literacy skills, and has developed a series of data literacy training resources. Data literacy is a key skill needed in the 21st century. It is generally described as the ability to derive meaning from data. Data literacy focuses on the competencies or skills involved in working with data, including the ability to read, analyze, interpret and visualize data, as well as to drive good decision-making.
    Release date: 2023-07-17

  • Journals and periodicals: 81-599-X
    Geography: Canada
    Description: The fact sheets in this series provide an "at-a-glance" overview of particular aspects of education in Canada and summarize key data trends in selected tables published as part of the Pan-Canadian Education Indicators Program (PCEIP).

    The PCEIP mission is to publish a set of statistical measures on education systems in Canada for policy makers, practitioners and the general public to monitor the performance of education systems across jurisdictions and over time. PCEIP is a joint venture of Statistics Canada and the Council of Ministers of Education, Canada (CMEC).

    Release date: 2023-06-21

  • Journals and periodicals: 14-28-0001
    Description: Statistics Canada's Quality of Employment in Canada publication is intended to provide Canadians and Canadian organizations with a better understanding of quality of employment, using an internationally supported statistical framework. Quality of employment is approached as a multidimensional concept, characterized by different elements that relate to human needs in various ways. To cover all relevant aspects, the framework identifies seven dimensions and twelve sub-dimensions of quality of employment.
    Release date: 2023-06-13