Results

All (197) (30 to 40 of 197 results)

  • Articles and reports: 13-604-M2015077
    Description:

    This new dataset increases the information available for comparing the performance of provinces and territories across a range of measures. It combines provincial time series that were previously fragmented and, as such, of limited use for examining the evolution of provincial economies over extended periods. More advanced statistical methods, and models with greater breadth and depth, are difficult to apply to fragmented Canadian data; the longitudinal nature of the new provincial dataset remedies this shortcoming. This report explains the construction of the latest vintage of the dataset, which contains the most up-to-date information available.

    Release date: 2015-02-12

  • Articles and reports: 11F0027M2014092
    Geography: Province or territory
    Description:

    Using data from the Provincial KLEMS database, this paper asks whether provincial economies have undergone structural change in their business sectors since 2000. It does so by applying a measure of industrial change (the dissimilarity index) using measures of output (real GDP) and hours worked. The paper also develops a statistical methodology to test whether the shifts in the industrial composition of output and hours worked over the period are due to random year-over-year changes in industrial structure or long-term systematic change in the structure of provincial economies. The paper is designed to inform discussion and analysis of recent changes in industrial composition at the national level, notably, the decline in manufacturing output and the concomitant rise of resource industries, and the implications of this change for provincial economies.

    Release date: 2014-05-07
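
The dissimilarity index used in the paper above compares a province's industrial composition in two periods: half the sum of absolute differences between the two sets of industry shares, so it ranges from 0 (identical composition) to 1 (complete reallocation). A minimal sketch, with hypothetical industry shares rather than the paper's data:

```python
# Illustrative sketch (hypothetical numbers, not the paper's code):
# dissimilarity index D = 0.5 * sum(|share_a - share_b|).

def dissimilarity_index(shares_a, shares_b):
    """Half the sum of absolute differences between two share vectors.

    Each argument maps industry -> share of output (or hours worked);
    shares are normalized so each vector sums to 1 before comparing.
    """
    industries = set(shares_a) | set(shares_b)
    tot_a = sum(shares_a.values())
    tot_b = sum(shares_b.values())
    return 0.5 * sum(
        abs(shares_a.get(i, 0.0) / tot_a - shares_b.get(i, 0.0) / tot_b)
        for i in industries
    )

# Hypothetical output shares for one province in 2000 vs 2010.
gdp_2000 = {"manufacturing": 0.25, "resources": 0.10, "services": 0.65}
gdp_2010 = {"manufacturing": 0.15, "resources": 0.20, "services": 0.65}

print(dissimilarity_index(gdp_2000, gdp_2010))  # ~0.1: 10% of output shifted
```

A value near 0.1 here says that 10% of output would have to be reallocated across industries to turn the 2000 composition into the 2010 one.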

  • Articles and reports: 12-001-X201300111823
    Description:

    Although weights are widely used in survey sampling, their ultimate justification from the design perspective is often problematic. Here we argue for a stepwise Bayes justification for weights that does not depend explicitly on the sampling design. This approach makes use of the standard kind of information present in auxiliary variables; however, it does not assume a model relating the auxiliary variables to the characteristic of interest. The resulting weight for a unit in the sample can be given the usual interpretation as the number of units in the population that it represents.

    Release date: 2013-06-28

  • Articles and reports: 12-001-X201300111824
    Description:

    In most surveys all sample units receive the same treatment and the same design features apply to all selected people and households. In this paper, it is explained how survey designs may be tailored to optimize quality given constraints on costs. Such designs are called adaptive survey designs. The basic ingredients of such designs are introduced, discussed and illustrated with various examples.

    Release date: 2013-06-28

  • Articles and reports: 12-001-X201300111825
    Description:

    A considerable limitation of current methods for automatic data editing is that they treat all edits as hard constraints. That is to say, an edit failure is always attributed to an error in the data. In manual editing, however, subject-matter specialists also make extensive use of soft edits, i.e., constraints that identify (combinations of) values that are suspicious but not necessarily incorrect. The inability of automatic editing methods to handle soft edits partly explains why in practice many differences are found between manually edited and automatically edited data. The object of this article is to present a new formulation of the error localisation problem which can distinguish between hard and soft edits. Moreover, it is shown how this problem may be solved by an extension of the error localisation algorithm of De Waal and Quere (2003).

    Release date: 2013-06-28

  • Articles and reports: 12-001-X201300111827
    Description:

    SILC (Statistics on Income and Living Conditions) is an annual European survey that measures the population's income distribution, poverty and living conditions. It has been conducted in Switzerland since 2007, based on a four-panel rotation scheme that yields both cross-sectional and longitudinal estimates. This article examines the problem of estimating the variance of the cross-sectional poverty and social exclusion indicators selected by Eurostat. Our calculations take into account the non-linearity of the estimators, total non-response at different survey stages, indirect sampling and calibration. We adapt the method proposed by Lavallée (2002) for estimating variance in cases of non-response after weight sharing, and we obtain a variance estimator that is asymptotically unbiased and very easy to program.

    Release date: 2013-06-28

  • Articles and reports: 12-001-X201300111828
    Description:

    A question that commonly arises in longitudinal surveys is how to combine differing cohorts. In this paper we present a novel method for combining different cohorts, using all available data, to estimate parameters of a semiparametric model that relates the response variable to a set of covariates. The procedure builds upon the Weighted Generalized Estimation Equation method for handling missing waves in longitudinal studies. Our method is set up under a joint-randomization framework for estimation of model parameters, which takes into account the superpopulation model as well as the survey design randomization. We also propose design-based and joint-randomization variance estimation methods. To illustrate the methodology we apply it to the Survey of Doctorate Recipients, conducted by the U.S. National Science Foundation.

    Release date: 2013-06-28

  • Articles and reports: 12-001-X201300111830
    Description:

    We consider two different self-benchmarking methods for the estimation of small area means based on the Fay-Herriot (FH) area level model: the method of You and Rao (2002) applied to the FH model and the method of Wang, Fuller and Qu (2008) based on augmented models. We derive an estimator of the mean squared prediction error (MSPE) of the You-Rao (YR) estimator of a small area mean that, under the true model, is correct to second-order terms. We report the results of a simulation study on the relative bias of the MSPE estimator of the YR estimator and the MSPE estimator of the Wang, Fuller and Qu (WFQ) estimator obtained under an augmented model. We also study the MSPE and the estimators of MSPE for the YR and WFQ estimators obtained under a misspecified model.

    Release date: 2013-06-28

  • Articles and reports: 12-001-X201300111831
    Description:

    We consider conservative variance estimation for the Horvitz-Thompson estimator of a population total in sampling designs with zero pairwise inclusion probabilities, known as "non-measurable" designs. We decompose the standard Horvitz-Thompson variance estimator under such designs and characterize the bias precisely. We develop a bias correction that is guaranteed to be weakly conservative (nonnegatively biased) regardless of the nature of the non-measurability. The analysis sheds light on conditions under which the standard Horvitz-Thompson variance estimator performs well despite non-measurability and where the conservative bias correction may outperform commonly-used approximations.

    Release date: 2013-06-28
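
To make the non-measurability problem in the article above concrete, here is a minimal sketch (an illustration under simplifying assumptions, not the article's code) of the Horvitz-Thompson total estimator and its standard variance estimator. Pairs with zero joint inclusion probability contribute no term to the double sum, which is where the bias the article characterizes comes from:

```python
# Illustrative sketch: Horvitz-Thompson (HT) estimation. The joint
# probabilities and data below are a toy example, not from the article.

def ht_total(y, pi):
    """HT estimate of the population total from sampled values y
    with first-order inclusion probabilities pi."""
    return sum(yi / pii for yi, pii in zip(y, pi))

def ht_variance_estimate(y, pi, pi_joint):
    """Standard HT variance estimator over sampled pairs.

    pi_joint[(i, j)] holds second-order inclusion probabilities; a pair
    with pi_ij = 0 cannot be included (division by zero), so its term is
    silently lost -- exactly the non-measurability problem.
    """
    n = len(y)
    var = 0.0
    for i in range(n):
        for j in range(n):
            pij = pi[i] if i == j else pi_joint.get((i, j), 0.0)
            if pij == 0.0:
                continue  # non-measurable pair: no contribution
            var += (pij - pi[i] * pi[j]) / pij * (y[i] / pi[i]) * (y[j] / pi[j])
    return var

# Toy SRSWOR example: N = 4, n = 2, so pi_i = 1/2 and
# pi_ij = n(n-1) / (N(N-1)) = 1/6 for i != j.
y = [3.0, 5.0]
pi = [0.5, 0.5]
joint = {(0, 1): 1 / 6, (1, 0): 1 / 6}
print(ht_total(y, pi))                     # 16.0
print(ht_variance_estimate(y, pi, joint))  # ~8.0
```

Dropping the `(0, 1)` and `(1, 0)` entries from `joint` mimics a non-measurable design: the estimator still returns a number, but the cross terms are gone, which is the bias the article's conservative correction guards against.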

  • Articles and reports: 89-648-X2013002
    Geography: Canada
    Description:

    Data matching is a common practice used to reduce respondent burden and to improve the quality of the information collected, provided the linkage method does not introduce bias. However, historical linkage, which consists of linking external records from previous years to the year of the initial wave of a survey, is relatively rare and, until now, had not been used at Statistics Canada. The present paper describes the method used to link records from the Living in Canada Survey pilot to historical tax data on income and labour (T1 and T4 files). It presents the evolution of the linkage rate going back over time and compares earnings data collected from personal income tax returns with those collected from employer files. To illustrate the new analytical possibilities offered by this type of linkage, the study concludes with an earnings profile by age and sex for different cohorts based on year of birth.

    Release date: 2013-01-24
Data (0) (0 results)

Analysis (197) (60 to 70 of 197 results)

  • Articles and reports: 11F0027M2010065
    Geography: Canada
    Description:

    The purpose of this paper is twofold. First, the authors provide a detailed social accounting matrix (SAM), which incorporates income and financial flows into the standard input-output matrix, for the Canadian economy for 2004. Second, they use the SAM to assess the strength of real-financial linkages by calculating and comparing real SAM multipliers and financial social accounting matrix (FSAM) multipliers. For FSAM multipliers, financial flows are endogenous, whereas for real SAM multipliers they are not.

    The results show that taking financial flows into account increases the impact of a final demand shock on Canadian output. Financial flows also play an important role in determining the cumulative effect of an income shock or the availability of investment funds. Between 2008 and the first half of 2009 (2009H1), financial institutions shifted their investments toward government bonds, short-term paper, and foreign investments. This shift, together with the fact that non-financial institutions were unwilling or unable to increase their financial liabilities, led to estimated declines in all GDP multipliers over the period.

    The main advantage of the extended input-output analysis is that it provides a simple framework, with very few assumptions, for assessing the strength of real-financial linkages by means of multipliers. The methodology is, however, subject to the Lucas critique: as shocks shift prices, agents cannot adjust. Such a framework is nevertheless appropriate for short-term impact analysis such as this study.

    Release date: 2011-05-20
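
The multiplier logic the SAM above extends can be illustrated with the standard open input-output model: given a technical coefficient matrix A, the output required to meet final demand f is x = (I - A)^{-1} f, and the column sums of (I - A)^{-1} are the output multipliers. A minimal sketch with a hypothetical two-industry economy (the numbers are invented, not from the paper's 2004 SAM):

```python
# Illustrative sketch (hypothetical coefficients): the Leontief
# input-output multiplier calculation that SAM multipliers generalize.

def leontief_inverse_2x2(A):
    """(I - A)^{-1} for a 2x2 coefficient matrix, by the closed form."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical economy: A[i][j] is the input from industry i needed
# per dollar of industry j's output.
A = [[0.2, 0.3],
     [0.1, 0.4]]
L = leontief_inverse_2x2(A)

f = [100.0, 50.0]  # final demand shock, by industry
x = [L[0][0] * f[0] + L[0][1] * f[1],
     L[1][0] * f[0] + L[1][1] * f[1]]
print(x)  # total output needed: x[0] ~166.7, x[1] ~111.1
```

Note that total output exceeds the demand shock itself because each dollar of output requires intermediate inputs from both industries; the FSAM multipliers in the paper add endogenous financial flows on top of this mechanism.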

  • Articles and reports: 12-001-X201000211375
    Description:

    The paper explores and assesses the approaches used by statistical offices to ensure effective methodological input into their statistical practice. The tension between independence and relevance is a common theme: generally, methodologists have to work closely with the rest of the statistical organisation for their work to be relevant; but they also need to have a degree of independence to question the use of existing methods and to lead the introduction of new ones where needed. And, of course, there is a need for an effective research program which, on the one hand, has a degree of independence needed by any research program, but which, on the other hand, is sufficiently connected so that its work is both motivated by and feeds back into the daily work of the statistical office. The paper explores alternative modalities of organisation; leadership; planning and funding; the role of project teams; career development; external advisory committees; interaction with the academic community; and research.

    Release date: 2010-12-21

  • Articles and reports: 12-001-X201000211378
    Description:

    One key to poverty alleviation or eradication in the third world is reliable information on the poor and their location, so that interventions and assistance can be effectively targeted to the neediest people. Small area estimation is one statistical technique used to monitor poverty and to decide on aid allocation in pursuit of the Millennium Development Goals. Elbers, Lanjouw and Lanjouw (ELL) (2003) proposed a small area estimation methodology for income-based or expenditure-based poverty measures, which the World Bank implements in its poverty mapping projects through the central statistical agencies of many third world countries, including Cambodia, Lao PDR, the Philippines, Thailand and Vietnam, and which is incorporated into the World Bank software program PovMap. In this paper, the ELL methodology, which consists of first modeling survey data and then applying that model to census information, is presented and discussed, with strong emphasis on the first phase (the fitting of regression models) and on the standard errors estimated at the second phase. Other regression model fitting procedures, such as the General Survey Regression (GSR) (as described in Lohr (1999), Chapter 11) and those used in existing small area estimation techniques, namely the Pseudo-Empirical Best Linear Unbiased Prediction (Pseudo-EBLUP) approach (You and Rao 2002) and the Iterative Weighted Estimating Equation (IWEE) method (You, Rao and Kovacevic 2003), are presented and compared with the ELL modeling strategy. The most significant difference between the ELL method and the other techniques lies in the theoretical underpinning of the ELL model fitting procedure.

    An example based on the Philippines Family Income and Expenditure Survey shows the differences in the parameter estimates, their standard errors and the variance components generated by the different methods, and the discussion is extended to the effect of these differences on the estimated accuracy of the final small area estimates. The need for sound estimation of variance components, as well as of regression coefficients and their standard errors, for small area estimation of poverty is emphasized.

    Release date: 2010-12-21

  • Articles and reports: 12-001-X201000111244
    Description:

    This paper considers the problem of selecting nonparametric models for small area estimation, which has recently received much attention. We develop a procedure based on the idea of the fence method (Jiang, Rao, Gu and Nguyen 2008) for selecting the mean function for the small areas from a class of approximating splines. Simulation results show impressive performance of the new procedure even when the number of small areas is fairly small. The method is applied to a hospital graft failure dataset to select a nonparametric Fay-Herriot type model.

    Release date: 2010-06-29

  • Articles and reports: 12-001-X201000111247
    Description:

    In this paper, the problem of estimating the variance of various estimators of the population mean in two-phase sampling is considered by jackknifing the two-phase calibrated weights of Hidiroglou and Särndal (1995, 1998). Several estimators of the population mean available in the literature are shown to be special cases of the technique developed here, including those suggested by Rao and Sitter (1995) and Sitter (1997). Following Raj (1965) and Srivenkataramana and Tracy (1989), some new estimators of the population mean are introduced and their variances estimated through the proposed jackknife procedure. The variances of the chain ratio and regression-type estimators of Chand (1975) are also estimated using the jackknife. A simulation study is conducted to assess the efficiency of the proposed jackknife estimators relative to the usual variance estimators.

    Release date: 2010-06-29
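
The calibrated two-phase jackknife in the paper above builds on the basic delete-one jackknife variance estimator, which can be sketched as follows (a generic illustration, not the paper's two-phase weighting):

```python
# Illustrative sketch: delete-one jackknife variance of an arbitrary
# point estimator. `estimator` maps a list of observations to a number;
# each replicate recomputes it with one unit removed.

def jackknife_variance(data, estimator):
    n = len(data)
    theta_reps = [
        estimator(data[:i] + data[i + 1:])  # delete unit i, re-estimate
        for i in range(n)
    ]
    theta_bar = sum(theta_reps) / n
    # (n - 1)/n is the standard delete-one jackknife scaling factor.
    return (n - 1) / n * sum((t - theta_bar) ** 2 for t in theta_reps)

mean = lambda xs: sum(xs) / len(xs)
sample = [2.0, 4.0, 6.0, 8.0]
print(jackknife_variance(sample, mean))  # ~1.667, i.e. s**2/n for the mean
```

For the sample mean the jackknife reproduces the textbook variance estimate s²/n exactly; its value is in handling estimators (ratio, regression, calibrated) for which no closed-form variance is convenient, which is the setting of the paper.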

  • Articles and reports: 12-001-X201000111249
    Description:

    For many designs, there is a nonzero probability of selecting a sample that provides poor estimates for known quantities. Stratified random sampling reduces the set of such possible samples by fixing the sample size within each stratum. However, undesirable samples are still possible with stratification. Rejective sampling removes poor performing samples by only retaining a sample if specified functions of sample estimates are within a tolerance of known values. The resulting samples are often said to be balanced on the function of the variables used in the rejection procedure. We provide modifications to the rejection procedure of Fuller (2009a) that allow more flexibility on the rejection rules. Through simulation, we compare estimation properties of a rejective sampling procedure to those of cube sampling.

    Release date: 2010-06-29
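
The rejection rule described above can be sketched in a few lines: draw simple random samples and keep one only when a sample estimate of a known auxiliary quantity falls within a stated tolerance of its known value. The setup below is hypothetical (a uniform auxiliary variable and a mean-based rule), not Fuller's procedure:

```python
# Illustrative sketch: rejective sampling balanced on the mean of a
# known auxiliary variable x. All names and numbers are hypothetical.
import random

def rejective_sample(population_x, n, tolerance, rng, max_tries=10000):
    x_bar = sum(population_x) / len(population_x)
    indices = list(range(len(population_x)))
    for _ in range(max_tries):
        sample = rng.sample(indices, n)
        sample_mean = sum(population_x[i] for i in sample) / n
        if abs(sample_mean - x_bar) <= tolerance:  # rejection rule
            return sample
    raise RuntimeError("no acceptable sample found; loosen the tolerance")

rng = random.Random(12345)           # fixed seed for reproducibility
x = [float(i) for i in range(100)]   # hypothetical auxiliary variable
s = rejective_sample(x, n=10, tolerance=2.0, rng=rng)

# Every retained sample is balanced on x by construction:
print(abs(sum(x[i] for i in s) / 10 - sum(x) / 100) <= 2.0)  # True
```

Tightening `tolerance` shrinks the set of acceptable samples (improving balance at the cost of more rejections), which is the flexibility in the rejection rules that the paper's modifications address.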

  • Articles and reports: 12-001-X201000111250
    Description:

    We propose a Bayesian Penalized Spline Predictive (BPSP) estimator for a finite population proportion in an unequal probability sampling setting. This new method allows the probabilities of inclusion to be directly incorporated into the estimation of a population proportion, using a probit regression of the binary outcome on the penalized spline of the inclusion probabilities. The posterior predictive distribution of the population proportion is obtained using Gibbs sampling. The advantages of the BPSP estimator over the Hájek (HK), Generalized Regression (GR), and parametric model-based prediction estimators are demonstrated by simulation studies and a real example in tax auditing. Simulation studies show that the BPSP estimator is more efficient, and its 95% credible interval provides better confidence coverage with shorter average width than the HK and GR estimators, especially when the population proportion is close to zero or one or when the sample is small. Compared to linear model-based predictive estimators, the BPSP estimators are robust to model misspecification and influential observations in the sample.

    Release date: 2010-06-29

  • Articles and reports: 65-507-M2010009
    Description:

    This issue presents importer statistics from 2002 to 2007 including the number of importers, the value of their imports by industry, importer size, origin and province of residence. The data in this issue are at the establishment level and are derived from the Importer Register Database.

    Release date: 2010-06-25

  • Articles and reports: 75F0002M2010002
    Description:

    This report compares aggregate income estimates as published by four different statistical programs. The System of National Accounts provides a portrait of economic activity at the macroeconomic level. The three other programs considered generate data from a microeconomic perspective: two are survey-based (the Census of Population and the Survey of Labour and Income Dynamics) and the third derives all its results from administrative data (Annual Estimates for Census Families and Individuals). A review of the conceptual differences across the sources is followed by a discussion of coverage issues and processing discrepancies that might influence the estimates. Aggregate income estimates, adjusted where possible to account for known conceptual differences, are compared. Even allowing for statistical variability, some reconciliation issues remain. These are sometimes explained by the use of different methodologies or data-gathering instruments, but some also remain unexplained.

    Release date: 2010-04-06

  • Articles and reports: 65-507-M2010008
    Description:

    This issue presents exporter statistics from 1993 to 2007 including the number of exporters, the value of their domestic exports by industry, exporter size, destination and province of residence as well as employment statistics of exporting establishments for the year 2007. The data in this issue are at the establishment level and are derived from the Exporter Register Database.

    Release date: 2010-01-27
Reference (0) (0 results)
