Results

All (52) (0 to 10 of 52 results)

  • Journals and periodicals: 11-633-X
    Description: Papers in this series provide background discussions of the methods used to develop data for economic, health, and social analytical studies at Statistics Canada. They are intended to provide readers with information on the statistical methods, standards and definitions used to develop databases for research purposes. All papers in this series have undergone peer and institutional review to ensure that they conform to Statistics Canada's mandate and adhere to generally accepted standards of good professional practice.
    Release date: 2024-01-22

  • Articles and reports: 12-001-X202300100010
    Description: Precise and unbiased estimates of response propensities (RPs) play a decisive role in the monitoring, analysis, and adaptation of data collection. In a fixed survey climate, those parameters are stable and their estimates ultimately converge when sufficient historic data is collected. In survey practice, however, response rates gradually vary in time. Understanding time-dependent variation in predicting response rates is key when adapting survey design. This paper illuminates time-dependent variation in response rates through multi-level time-series models. Reliable predictions can be generated by learning from historic time series and updating with new data in a Bayesian framework. As an illustrative case study, we focus on Web response rates in the Dutch Health Survey from 2014 to 2019.
    Release date: 2023-06-30
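
The update-with-new-data idea above can be sketched with a toy conjugate model (all counts hypothetical; the paper itself uses multi-level time-series models in a Bayesian framework, not this Beta-Binomial simplification):

```python
def update_beta(alpha, beta, responses, nonresponses):
    """Conjugate Beta posterior after observing a new collection period."""
    return alpha + responses, beta + nonresponses

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Prior built from historic series: roughly a 40% response rate over 1,000 cases.
alpha, beta = 400.0, 600.0

# New monthly data arrive: 52 responses out of 120 sampled units.
alpha, beta = update_beta(alpha, beta, 52, 68)
rp_hat = posterior_mean(alpha, beta)
```

The posterior mean shrinks the new month's observed rate (52/120 ≈ 0.43) toward the historic level, with the prior's weight set by how much history it encodes.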

  • Articles and reports: 11-522-X202100100020
    Description: Seasonal adjustment of time series at Statistics Canada is performed using the X-12-ARIMA method. For most statistical programs performing seasonal adjustment, subject matter experts (SMEs) are responsible for managing the program and for verification, analysis and dissemination of the data, while methodologists from the Time Series Research and Analysis Center (TSRAC) are responsible for developing and maintaining the seasonal adjustment process and for providing support on seasonal adjustment to SMEs. A visual summary report called the seasonal adjustment dashboard has been developed in R Shiny by the TSRAC to build capacity to interpret seasonally adjusted data and to reduce the resources needed to support seasonal adjustment. It is currently being made available internally to assist SMEs to interpret and explain seasonally adjusted results. The summary report includes graphs of the series across time, as well as summaries of individual seasonal and calendar effects and patterns. Additionally, key seasonal adjustment diagnostics are presented and the net effect of seasonal adjustment is decomposed into its various components. This paper gives a visual representation of the seasonal adjustment process, while demonstrating the dashboard and its interactive functionality.

    Key Words: Time Series; X-12-ARIMA; Summary Report; R Shiny.

    Release date: 2021-10-15

  • Articles and reports: 12-001-X201800154927
    Description:

    Benchmarking monthly or quarterly series to annual data is a common practice in many National Statistical Institutes. The benchmarking problem arises when time series data for the same target variable are measured at different frequencies and there is a need to remove discrepancies between the sums of the sub-annual values and their annual benchmarks. Several benchmarking methods are available in the literature. The Growth Rates Preservation (GRP) benchmarking procedure is often considered the best method. It is often claimed that this procedure is grounded on an ideal movement preservation principle. However, we show that there are important drawbacks to GRP, relevant for practical applications, that are unknown in the literature. Alternative benchmarking models will be considered that do not suffer from some of GRP’s side effects.

    Release date: 2018-06-21
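
The simplest benchmarking method, pro-rata adjustment, makes the discrepancy-removal idea concrete (hypothetical figures; GRP and the alternative models studied in the paper distribute the discrepancy far more smoothly across periods):

```python
def prorate_benchmark(subannual, annual_benchmark):
    """Scale sub-annual values so they sum exactly to the annual benchmark.
    Naive baseline only: it can distort period-to-period growth rates,
    which is what movement-preservation methods such as GRP try to avoid."""
    factor = annual_benchmark / sum(subannual)
    return [x * factor for x in subannual]

quarters = [100.0, 110.0, 90.0, 100.0]            # hypothetical quarterly survey values
benchmarked = prorate_benchmark(quarters, 440.0)  # annual total from the annual source
```

After scaling, the quarters sum to the benchmark while keeping their original proportions.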

  • Articles and reports: 82-003-X201800254908
    Description:

    This study examined nine national surveys of the household population that collected information about drug use from 1985 through 2015. The surveys are examined for comparability, and the data are used to estimate past-year (current) cannabis use, in total and by sex and age. Based on the most comparable data, trends in use from 2004 through 2015 are estimated.

    Release date: 2018-02-21

  • Articles and reports: 12-001-X201700254871
    Description:

    This paper addresses how alternative data sources, such as administrative and social media data, can be used in the production of official statistics. Since most surveys at national statistical institutes are conducted repeatedly over time, a multivariate structural time series modelling approach is proposed to model the series observed by a repeated survey together with related series obtained from such alternative data sources. Generally, this improves the precision of the direct survey estimates by using sample information observed in preceding periods and information from related auxiliary series. The model also makes it possible to exploit the higher frequency of the social media data to produce more precise estimates for the sample survey in real time, at the moment the social media statistics become available but the sample data are not. The concept of cointegration is applied to assess the extent to which the alternative series represent the same phenomena as the series observed with the repeated survey. The methodology is applied to the Dutch Consumer Confidence Survey and a sentiment index derived from social media.

    Release date: 2017-12-21
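
A deliberately naive sketch of the nowcasting idea: regress the survey series on the higher-frequency auxiliary series over their common history, then predict the survey outcome from the auxiliary value that is already available (made-up numbers; the paper uses a multivariate structural time series model and cointegration tests rather than plain OLS):

```python
def ols_fit(x, y):
    """Least-squares fit of y = a + b*x, stdlib only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical common history: sentiment index vs. survey consumer confidence.
sentiment  = [-2.0, -1.0, 0.0, 1.0, 2.0]
confidence = [-4.1, -1.9, 0.2, 2.1, 3.9]
a, b = ols_fit(sentiment, confidence)

# Sentiment for the current month arrives before the survey estimate does:
nowcast = a + b * 1.5
```

The value of the auxiliary series lies precisely in this timing gap: a provisional estimate exists before the sample data are in.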

  • Articles and reports: 12-001-X201700114819
    Description:

    Structural time series models are a powerful technique for variance reduction in the framework of small area estimation (SAE) based on repeatedly conducted surveys. Statistics Netherlands implemented a structural time series model to produce monthly figures about the labour force with the Dutch Labour Force Survey (DLFS). Such models, however, contain unknown hyperparameters that have to be estimated before the Kalman filter can be launched to estimate state variables of the model. This paper describes a simulation aimed at studying the properties of hyperparameter estimators in the model. Simulating distributions of the hyperparameter estimators under different model specifications complements standard model diagnostics for state space models. Uncertainty around the model hyperparameters is another major issue. To account for hyperparameter uncertainty in the mean squared errors (MSE) estimates of the DLFS, several estimation approaches known in the literature are considered in a simulation. Apart from the MSE bias comparison, this paper also provides insight into the variances and MSEs of the MSE estimators considered.

    Release date: 2017-06-22
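
For readers unfamiliar with the machinery: once the hyperparameters (the disturbance variances) are fixed, the Kalman filter is a short recursion. A minimal sketch for the local level model, with hypothetical inputs:

```python
def kalman_local_level(y, var_eps, var_eta, a0=0.0, p0=1e7):
    """Kalman filter for y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
    var_eps and var_eta are the hyperparameters that must be estimated
    before the filter can be launched; here they are simply given.
    A large p0 acts as a diffuse prior on the initial level."""
    a, p = a0, p0
    filtered = []
    for yt in y:
        p = p + var_eta              # prediction: level may have drifted
        k = p / (p + var_eps)        # Kalman gain
        a = a + k * (yt - a)         # update with the new observation
        p = (1.0 - k) * p
        filtered.append(a)
    return filtered

est = kalman_local_level([5.0, 5.2, 4.9, 5.1], var_eps=0.1, var_eta=0.01)
```

With a diffuse prior, the first filtered value is essentially the first observation; later values smooth the incoming data. Misstating var_eps or var_eta changes the gain, which is why hyperparameter uncertainty matters for the MSE estimates studied in the paper.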

  • Articles and reports: 13-604-M2015077
    Description:

    This new dataset increases the information available for comparing the performance of provinces and territories across a range of measures. It consolidates provincial time series data that were previously fragmented and, as such, of limited utility for examining the evolution of provincial economies over extended periods. More advanced statistical methods, and models with greater breadth and depth, are difficult to apply to existing fragmented Canadian data; the longitudinal nature of the new provincial dataset remedies this shortcoming. This report explains the construction of the latest vintage of the dataset, which contains the most up-to-date information available.

    Release date: 2015-02-12

  • Articles and reports: 12-001-X201400214110
    Description:

    In developing the sample design for a survey we attempt to produce a good design for the funds available. Information on costs can be used to develop sample designs that minimise the sampling variance of an estimator of total for fixed cost. Improvements in survey management systems mean that it is now sometimes possible to estimate the cost of including each unit in the sample. This paper develops relatively simple approaches to determine whether the potential gains arising from using this unit level cost information are likely to be of practical use. It is shown that the key factor is the coefficient of variation of the costs relative to the coefficient of variation of the relative error on the estimated cost coefficients.

    Release date: 2014-12-19

  • Articles and reports: 11-010-X201000311141
    Geography: Canada
    Description:

    A review of what seasonal adjustment does, and how it helps analysts focus on recent movements in the underlying trend of economic data.

    Release date: 2010-03-18
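
A crude sketch of the idea (additive model, made-up data): estimate each month's average deviation from the overall level and subtract it, leaving the trend and irregular movements visible. Production methods such as X-12-ARIMA instead estimate seasonal factors that evolve over time.

```python
def seasonally_adjust(y, period=12):
    """Remove each month's average deviation from the overall mean
    (a fixed additive seasonal pattern; deliberately simplistic)."""
    overall = sum(y) / len(y)
    seasonal = []
    for m in range(period):
        vals = y[m::period]
        seasonal.append(sum(vals) / len(vals) - overall)
    return [yt - seasonal[t % period] for t, yt in enumerate(y)]

# Two years of toy data with a pure seasonal pattern and no trend:
y = [10.0 + m for _ in range(2) for m in range(12)]
adj = seasonally_adjust(y)   # flat series: the movement was all seasonal
```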

Data (0) (0 results)

No content available at this time.

Analysis (46) (30 to 40 of 46 results)

  • Articles and reports: 12-001-X199300214457
    Description:

    The maximum likelihood estimation of a non-linear benchmarking model, proposed by Laniel and Fyfe (1989; 1990), is considered. This model takes into account the biases and sampling errors associated with the original series. Since the maximum likelihood estimators of the model parameters are not obtainable in closed forms, two iterative procedures to find the maximum likelihood estimates are discussed. The closed form expressions for the asymptotic variances and covariances of the benchmarked series, and of the fitted values are also provided. The methodology is illustrated using published Canadian retail trade data.

    Release date: 1993-12-15

  • Articles and reports: 12-001-X199100214505
    Description:

    The X-11-ARIMA seasonal adjustment method and the Census X-11 variant use a standard ANOVA F-test to assess the presence of stable seasonality. This F-test is applied to a series consisting of estimated seasonals plus irregulars (residuals) which may be (and often are) autocorrelated, thus violating the basic assumption of the F-test. This limitation has long been known by producers of seasonally adjusted data, and the nominal value of the F statistic has rarely been used as a criterion for seasonal adjustment. Instead, producers of seasonally adjusted data have used rules of thumb, such as F equal to or greater than 7. This paper introduces an exact test which takes into account autocorrelated residuals following a seasonal moving average (SMA) process of the (0, q)(0, Q)_s type. Comparisons of this modified F-test and the standard ANOVA test of X-11-ARIMA are made for a large number of Canadian socio-economic series.

    Release date: 1991-12-16
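
The standard test in question is one-way ANOVA applied to the seasonal-irregular (SI) values grouped by month. A stdlib sketch with toy data (the paper's exact test, which corrects for autocorrelated residuals, is more involved):

```python
def stable_seasonality_F(si, period=12):
    """One-way ANOVA F statistic for stable seasonality: between-month
    variance over within-month variance of SI values. Its nominal p-value
    is unreliable when residuals are autocorrelated, hence rules of thumb
    such as F >= 7 in X-11 practice."""
    groups = [si[m::period] for m in range(period)]
    n, grand = len(si), sum(si) / len(si)
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (between / (period - 1)) / (within / (n - period))

# Three years of strongly seasonal toy SI values with a small drift:
si = [m + 0.1 * yr for yr in range(3) for m in range(12)]
F = stable_seasonality_F(si)   # far above any rule-of-thumb threshold
```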

  • Articles and reports: 12-001-X199000214531
    Description:

    Benchmarking is a method of improving estimates from a sub-annual survey with the help of corresponding estimates from an annual survey. For example, estimates of monthly retail sales might be improved using estimates from the annual survey. This article deals, first with the problem posed by the benchmarking of time series produced by economic surveys, and then reviews the most relevant methods for solving this problem. Next, two new statistical methods are proposed, based on a non-linear model for sub-annual data. The benchmarked estimates are then obtained by applying weighted least squares.

    Release date: 1990-12-14

  • Articles and reports: 12-001-X199000214532
    Description:

    Births by census division are studied via graphs and maps for the province of Saskatchewan for the years 1986-87. The goal of the work is to see how births are related to time and geography by obtaining contour maps that display the birth phenomenon in a smooth fashion. A principal difficulty is that the data are aggregate. A secondary goal is to examine the extent to which, for count data, the Poisson-lognormal model can play the role that the normal regression model plays for continuous variates. To this end, a hierarchy of models for count-valued random variates is fitted to the birth data by maximum likelihood. These models include the simple Poisson, the Poisson with year and weekday effects, and the Poisson-lognormal with year and weekday effects. The use of the Poisson-lognormal is motivated by the idea that important covariates are unavailable for inclusion in the fitting. As the discussion indicates, the work is preliminary.

    Release date: 1990-12-14
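
For the simplest model in such a hierarchy, the i.i.d. Poisson, the maximum likelihood fit is available in closed form (the sample mean); the richer year/weekday and Poisson-lognormal models need iterative fitting. A sketch with invented counts:

```python
import math

def poisson_loglik(lmbda, counts):
    """Poisson log-likelihood up to a constant: sum(x*log(lambda) - lambda)."""
    return sum(x * math.log(lmbda) - lmbda for x in counts)

counts = [4, 7, 5, 6, 3, 5]        # hypothetical daily birth counts
mle = sum(counts) / len(counts)    # MLE of the Poisson mean is the sample mean
ll_at_mle = poisson_loglik(mle, counts)
```

Checking nearby values of lambda confirms the sample mean maximizes the likelihood; with year and weekday effects, group-specific means play the same role.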

  • Articles and reports: 12-001-X199000214533
    Description:

    A commonly used model for the analysis of time series models is the seasonal ARIMA model. However, the survey errors of the input data are usually ignored in the analysis. We show, through the use of state-space models with partially improper initial conditions, how to estimate the unknown parameters of this model using maximum likelihood methods. As well, the survey estimates can be smoothed using an empirical Bayes framework and model validation can be performed. We apply these techniques to an unemployment series from the Labour Force Survey.

    Release date: 1990-12-14

  • Articles and reports: 12-001-X199000214534
    Description:

    The common approach to small area estimation is to exploit the cross-sectional relationships of the data in an attempt to borrow information from one small area to assist in the estimation in others. However, in the case of repeated surveys, further gains in efficiency can be secured by modelling the time series properties of the data as well. We illustrate the idea by considering regression models with time varying, cross-sectionally correlated coefficients. The use of past relationships to estimate current means raises the question of how to protect against model breakdowns. We propose a modification which guarantees that the model dependent predictors of aggregates of the small area means coincide with the corresponding survey estimators and we explore the statistical properties of the modification. The proposed procedure is applied to data on home sale prices used for the computation of housing price indexes.

    Release date: 1990-12-14

  • Articles and reports: 12-001-X199000214535
    Description:

    Papers by Scott and Smith (1974) and Scott, Smith, and Jones (1977) suggested the use of signal extraction results from time series analysis to improve estimates in repeated surveys, what we call the time series approach to estimation in repeated surveys. We review the underlying philosophy of this approach, pointing out that it stems from recognition of two sources of variation - time series variation and sampling variation - and that the approach can provide a unifying framework for other problems where the two sources of variation are present. We obtain some theoretical results for the time series approach regarding design consistency of the time series estimators, and uncorrelatedness of the signal and sampling error series. We observe that, from a design-based perspective, the time series approach trades some bias for a reduction in variance and a reduction in average mean squared error relative to classical survey estimators. We briefly discuss modeling to implement the time series approach, and then illustrate the approach by applying it to time series of retail sales of eating places and of drinking places from the U.S. Census Bureau’s Retail Trade Survey.

    Release date: 1990-12-14
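
The bias-for-variance trade can be seen in a two-number toy: combine the unbiased but noisy direct estimate with a stable but possibly biased model prediction, weighting each by the other's error variance. All values are hypothetical; the estimators in the paper come from full signal-extraction models.

```python
def composite_estimate(direct, var_direct, model_pred, mse_model):
    """Precision-weighted combination of a direct survey estimate and a
    time-series prediction; weights are inversely proportional to each
    component's error variance."""
    w = mse_model / (var_direct + mse_model)
    return w * direct + (1.0 - w) * model_pred

est = composite_estimate(direct=500.0, var_direct=100.0,
                         model_pred=480.0, mse_model=25.0)
```

Here the composite (484) sits much closer to the stable model prediction than to the noisy direct estimate; if the model is slightly biased, the composite inherits some of that bias in exchange for the variance reduction.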

  • Articles and reports: 12-001-X198900114579
    Description:

    Estimation of the means of a characteristic for a population at different points in time, based on a series of repeated surveys, is briefly reviewed. By imposing a stochastic parametric model on these means, it is possible to estimate the parameters of the model and to obtain alternative estimators of the means themselves. We describe the case where the population means follow an autoregressive-moving average (ARMA) process and the survey errors can also be formulated as an ARMA process. An example using data from the Canadian Travel Survey is presented.

    Release date: 1989-06-15

  • Articles and reports: 12-001-X198700114509
    Description:

    This paper discusses three problems that have been a major preoccupation among researchers and practitioners of seasonal adjustment in statistical bureaus for the last ten years. These problems are: (l) the use of concurrent seasonal factors versus seasonal factor forecasts for current seasonal adjustment; (2) finding an optimal pattern of revisions for series seasonally adjusted with concurrent factors; and (3) smoothing highly irregular seasonally adjusted data.

    Release date: 1987-06-15

  • Articles and reports: 12-001-X198600214448
    Description:

    The seasonal adjustment of a time series is not a straightforward procedure, particularly when the level of a series nearly doubles in just one year. The 1981-82 recession had a sudden and severe impact not only on the structure of the series but also on the estimation of the trend-cycle and seasonal components at the end of the series. Serious seasonal adjustment problems can occur: selecting the wrong decomposition model may produce underadjustment in the seasonally high months and overadjustment in the seasonally low months, and may also signal a false turning point. This article analyses these two aspects of the interplay between a severe recession and seasonal adjustment.

    Release date: 1986-12-15

Reference (6) (6 results)

  • Surveys and statistical programs – Documentation: 11-522-X19990015648
    Description:

    We estimate the parameters of a stochastic model for labour force careers involving distributions of correlated durations employed, unemployed (with and without job search) and not in the labour force. If the model is to account for sub-annual labour force patterns as well as advancement towards retirement, then no single data source is adequate to inform it. However, it is possible to build up an approximation from a number of different sources.

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015656
    Description:

    Time series studies have shown associations between air pollution concentrations and morbidity and mortality. These studies have largely been conducted within single cities, and with varying methods. Critics of these studies have questioned the validity of the data sets used and the statistical techniques applied to them; the critics have noted inconsistencies in findings among studies and even in independent re-analyses of data from the same city. In this paper we review some of the statistical methods used to analyze a subset of a national data base of air pollution, mortality and weather assembled during the National Morbidity and Mortality Air Pollution Study (NMMAPS).

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015688
    Description:

    The geographical and temporal relationship between outdoor air pollution and asthma was examined by linking together data from multiple sources. These included the administrative records of 59 general practices widely dispersed across England and Wales for half a million patients and all their consultations for asthma, supplemented by a socio-economic interview survey. Postcode enabled linkage with: (i) computed local road density; (ii) emission estimates of sulphur dioxide and nitrogen dioxides, (iii) measured/interpolated concentration of black smoke, sulphur dioxide, nitrogen dioxide and other pollutants at practice level. Parallel Poisson time series analysis took into account between-practice variations to examine daily correlations in practices close to air quality monitoring stations. Preliminary analyses show small and generally non-significant geographical associations between consultation rates and pollution markers. The methodological issues relevant to combining such data, and the interpretation of these results will be discussed.

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19980015031
    Description:

    The U.S. Third National Health and Nutrition Examination Survey (NHANES III) was carried out from 1988 to 1994. This survey was intended primarily to provide estimates of cross-sectional parameters believed to be approximately constant over the six-year data collection period. However, for some variables (e.g., serum lead, body mass index and smoking behavior), substantive considerations suggest the possible presence of nontrivial changes in level between 1988 and 1994. For these variables, NHANES III is potentially a valuable source of time-change information, compared to other studies involving more restricted populations and samples. Exploration of possible change over time is complicated by two issues. First, some variables displayed substantial regional differences in level, which was of practical concern. Second, nontrivial changes in level over time can lead to nontrivial biases in some customary NHANES III variance estimators. This paper considers these two problems and discusses some related implications for statistical policy.

    Release date: 1999-10-22

  • Surveys and statistical programs – Documentation: 11-522-X19980015033
    Description:

    Victimizations are not randomly scattered through the population, but tend to be concentrated in relatively few victims. Data from the U.S. National Crime Victimization Survey (NCVS), a multistage rotating panel survey, are employed to estimate the conditional probabilities of being a crime victim at time t given the victimization status in earlier interviews. Models are presented and fitted to allow use of partial information from households that move in or out of the housing unit during the study period. The estimated probability of being a crime victim at interview t given the status at interview (t-1) is found to decrease with t. Possible implications for estimating cross-sectional victimization rates are discussed.

    Release date: 1999-10-22
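
Ignoring the rotating-panel complications, the core quantities are simple conditional relative frequencies over consecutive-interview pairs. A toy tabulation with invented data:

```python
def conditional_victimization(pairs):
    """Estimate P(victim at t | victim at t-1) and
    P(victim at t | not victim at t-1) from (prev, curr) pairs coded 0/1."""
    n11 = sum(1 for p, c in pairs if p == 1 and c == 1)
    n1 = sum(1 for p, _ in pairs if p == 1)
    n01 = sum(1 for p, c in pairs if p == 0 and c == 1)
    n0 = sum(1 for p, _ in pairs if p == 0)
    return n11 / n1, n01 / n0

# Hypothetical pairs in which victimization concentrates in repeat victims:
pairs = [(1, 1)] * 30 + [(1, 0)] * 70 + [(0, 1)] * 50 + [(0, 0)] * 850
p_repeat, p_new = conditional_victimization(pairs)
```

The repeat-victim probability (0.30) dwarfs the fresh-victim probability (about 0.06): the kind of concentration the NCVS models capture, net of households moving in and out.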

  • Notices and consultations: 62-010-X19970023422
    Description:

    The current official time base of the Consumer Price Index (CPI) is 1986=100. This time base was first used when the CPI for June 1990 was released. Statistics Canada is about to convert all price index series to the time base 1992=100. As a result, all constant dollar series will be converted to 1992 dollars. The CPI will shift to the new time base when the CPI for January 1998 is released on February 27th, 1998.

    Release date: 1997-11-17
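
Rebasing an index is arithmetically simple: divide every value by the level in the new base period and multiply by 100. A sketch with hypothetical index values (not actual CPI figures):

```python
def rebase(index, base_pos):
    """Re-express an index series so the chosen period equals 100."""
    base = index[base_pos]
    return [100.0 * v / base for v in index]

# Hypothetical annual values on the old 1986=100 time base, 1986..1992:
cpi_old = [100.0, 104.4, 108.6, 114.0, 119.5, 126.2, 128.1]
cpi_new = rebase(cpi_old, base_pos=6)   # now 1992=100
```

Relative movements are unchanged: every growth rate is identical on both time bases, since one series is the other multiplied by a constant.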