Survey design

Results

All (266) (10 to 20 of 266 results)

  • Articles and reports: 12-001-X202200100010
    Description:

    This study combines simulated annealing with delta evaluation to solve the joint stratification and sample allocation problem. In this problem, atomic strata are partitioned into mutually exclusive and collectively exhaustive strata. Each partition of atomic strata is a possible solution to the stratification problem, the quality of which is measured by its cost. The Bell number of possible solutions is enormous even for a moderate number of atomic strata, and an additional layer of complexity is added by the evaluation time of each solution. Many larger-scale combinatorial optimisation problems cannot be solved to optimality because the search for an optimum solution requires a prohibitive amount of computation time. A number of local search heuristic algorithms have been designed for this problem, but these can become trapped in local minima, preventing any further improvement. We add, to the existing suite of local search algorithms, a simulated annealing algorithm that allows for an escape from local minima and uses delta evaluation to exploit the similarity between consecutive solutions, thereby reducing the evaluation time. We compare the simulated annealing algorithm with two recent algorithms; in both cases, the simulated annealing algorithm attains a solution of comparable quality in considerably less computation time.

    Release date: 2022-06-21
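
The heuristic in the abstract above can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it assumes a simple Neyman-style cost (the sum of N_h·S_h over strata) and a move that reassigns one atomic stratum, so delta evaluation only has to recost the two strata a move touches.

```python
import math
import random

def group_cost(n, s, ss):
    # Neyman-style cost N_h * S_h from a stratum's count, sum and sum of squares
    if n == 0:
        return 0.0
    var = max(ss / n - (s / n) ** 2, 0.0)
    return n * math.sqrt(var)

def anneal(atoms, k, iters=20000, t0=1.0, cool=0.9995, seed=1):
    # atoms: list of lists of unit values (the atomic strata)
    # returns (assignment of atoms to k strata, total cost of that partition)
    rng = random.Random(seed)
    stats = [(len(a), sum(a), sum(x * x for x in a)) for a in atoms]
    assign = [rng.randrange(k) for _ in atoms]
    agg = [[0, 0.0, 0.0] for _ in range(k)]
    for (n, s, ss), g in zip(stats, assign):
        agg[g][0] += n; agg[g][1] += s; agg[g][2] += ss
    costs = [group_cost(*a) for a in agg]
    total = sum(costs)
    t = t0
    for _ in range(iters):
        i = rng.randrange(len(atoms))
        old, new = assign[i], rng.randrange(k)
        t *= cool
        if new == old:
            continue
        n, s, ss = stats[i]
        # delta evaluation: only the two strata touched by the move are recosted
        ao = [agg[old][0] - n, agg[old][1] - s, agg[old][2] - ss]
        an = [agg[new][0] + n, agg[new][1] + s, agg[new][2] + ss]
        co, cn = group_cost(*ao), group_cost(*an)
        delta = (co + cn) - (costs[old] + costs[new])
        # annealing acceptance rule: always accept improvements, accept
        # worsening moves with probability exp(-delta / t)
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            agg[old], agg[new] = ao, an
            costs[old], costs[new] = co, cn
            assign[i] = new
            total += delta
    return assign, total
```

Because each stratum's cost depends only on its running count, sum and sum of squares, evaluating a move costs O(1) rather than a full re-evaluation of the partition.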

  • Articles and reports: 12-001-X202100200008
    Description:

    Multiple-frame surveys, in which independent probability samples are selected from each of Q sampling frames, have long been used to improve coverage, to reduce costs, or to increase sample sizes for subpopulations of interest. Much of the theory has been developed assuming that (1) the union of the frames covers the population of interest, (2) a full-response probability sample is selected from each frame, (3) the variables of interest are measured in each sample with no measurement error, and (4) sufficient information exists to account for frame overlap when computing estimates. After reviewing design, estimation, and calibration for traditional multiple-frame surveys, I consider modifications of the assumptions that allow a multiple-frame structure to serve as an organizing principle for other data combination methods such as mass imputation, sample matching, small area estimation, and capture-recapture estimation. Finally, I discuss how results from multiple-frame survey research can be used when designing and evaluating data collection systems that integrate multiple sources of data.

    Release date: 2022-01-06
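
For Q = 2 frames, the classical composite estimator reviewed in this literature (due to Hartley) can be written in a few lines. The tuple encoding and the fixed mixing weight theta below are illustrative assumptions; in practice theta would be chosen to minimise variance.

```python
def hartley_estimate(sample_a, sample_b, theta=0.5):
    # Classical Hartley composite estimator for two overlapping frames.
    # sample_a, sample_b: lists of (y, design_weight, in_overlap) tuples.
    # Overlap units are down-weighted by theta in frame A and by
    # (1 - theta) in frame B, so the overlap domain is not double counted.
    total = 0.0
    for y, w, in_overlap in sample_a:
        total += w * y * (theta if in_overlap else 1.0)
    for y, w, in_overlap in sample_b:
        total += w * y * ((1.0 - theta) if in_overlap else 1.0)
    return total
```

With a full enumeration (all weights 1), the overlap domain is counted theta times from frame A and (1 − theta) times from frame B, so the estimator reproduces the true total for any theta.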

  • Articles and reports: 11-522-X202100100024
    Description: The Economic Directorate of the U.S. Census Bureau is developing coordinated design and sample selection procedures for the Annual Integrated Economic Survey. The unified sample will replace the directorate’s existing practice of independently developing sampling frames and sampling procedures for a suite of separate annual surveys, which optimizes sample design features at the cost of increased response burden. Size attributes of business populations, e.g., revenues and employment, are highly skewed. A high percentage of companies operate in more than one industry. Therefore, many companies are sampled into multiple surveys compounding the response burden, especially for “medium sized” companies.

    This component of response burden is reduced by selecting a single coordinated sample but will not be completely alleviated. Response burden is a function of several factors, including (1) questionnaire length and complexity, (2) accessibility of data, (3) expected number of repeated measures, and (4) frequency of collection. The sample design can have profound effects on the third and fourth factors. To help inform decisions about the integrated sample design, we use regression trees to identify covariates from the sampling frame that are related to response burden. Using historic frame and response data from four independently sampled surveys, we test a variety of algorithms, then grow regression trees that explain relationships between expected levels of response burden (as measured by response rate) and frame covariates common to more than one survey. We validate initial findings by cross-validation, examining results over time. Finally, we make recommendations on how to incorporate our robust findings into the coordinated sample design.
    Release date: 2021-10-29
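
The core step of the regression-tree approach described above can be sketched without any library: scan every frame covariate and every threshold, and keep the split that most reduces squared error in the burden measure. The covariate names below are invented for illustration, and a real application would grow a full tree rather than a single split.

```python
def best_split(rows, response):
    # rows: list of dicts of frame covariates; response: burden measure
    # (e.g. response rates). Returns (covariate, threshold, sse) for the
    # single split that most reduces squared error -- the step a
    # regression tree repeats recursively.
    def sse(ys):
        if not ys:
            return 0.0
        m = sum(ys) / len(ys)
        return sum((y - m) ** 2 for y in ys)
    best = (None, None, sse(response))
    for key in rows[0]:
        values = sorted({r[key] for r in rows})
        for lo, hi in zip(values, values[1:]):
            thr = (lo + hi) / 2
            left = [y for r, y in zip(rows, response) if r[key] <= thr]
            right = [y for r, y in zip(rows, response) if r[key] > thr]
            total = sse(left) + sse(right)
            if total < best[2]:
                best = (key, thr, total)
    return best
```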

  • Articles and reports: 11-522-X202100100007
    Description: The National Center for Health Statistics (NCHS) annually administers the National Ambulatory Medical Care Survey (NAMCS) to assess practice characteristics and ambulatory care provided by office-based physicians in the United States, including interviews with sampled physicians. After the onset of the COVID-19 pandemic, NCHS adapted NAMCS methodology to assess the impacts of COVID-19 on office-based physicians, including: shortages of personal protective equipment; COVID-19 testing in physician offices; providers testing positive for COVID-19; and telemedicine use during the pandemic. This paper describes challenges and opportunities in administering the 2020 NAMCS and presents key findings regarding physician experiences during the COVID-19 pandemic.

    Key Words: National Ambulatory Medical Care Survey (NAMCS); Office-based physicians; Telemedicine; Personal protective equipment.

    Release date: 2021-10-22

  • Articles and reports: 11-522-X202100100016
    Description: To build data capacity and address the U.S. opioid public health emergency, the National Center for Health Statistics received funding for two projects. The projects involve development of algorithms that use all available structured and unstructured data submitted for the 2016 National Hospital Care Survey (NHCS) to enhance identification of opioid-involvement and the presence of co-occurring disorders (coexistence of a substance use disorder and a mental health issue). A description of the algorithm development process is provided, and lessons learned from integrating data science methods like natural language processing to produce official statistics are presented. Efforts to make the algorithms and analytic datafiles accessible to researchers are also discussed.

    Key Words: Opioids; Co-Occurring Disorders; Data Science; Natural Language Processing; Hospital Care

    Release date: 2021-10-22

  • Articles and reports: 12-001-X202100100002
    Description:

    We consider the problem of deciding on sampling strategy, in particular sampling design. We propose a risk measure, whose minimizing value guides the choice. The method makes use of a superpopulation model and takes into account uncertainty about its parameters through a prior distribution. The method is illustrated with a real dataset, yielding satisfactory results. As a baseline, we use the strategy that couples probability proportional-to-size sampling with the difference estimator, as it is known to be optimal when the superpopulation model is fully known. We show that, even under moderate misspecifications of the model, this strategy is not robust and can be outperformed by some alternatives.

    Release date: 2021-06-24

  • Articles and reports: 12-001-X202000200001
    Description:

    This paper constructs a probability-proportional-to-size (PPS) ranked-set sample from a stratified population. A PPS ranked-set sample partitions the units in a PPS sample into groups of similar observations. The construction of similar groups relies on relative positions (ranks) of units in small comparison sets. Hence, the ranks induce more structure (stratification) in the sample in addition to the data structure created by unequal selection probabilities in a PPS sample. This added data structure makes the PPS ranked-set sample more informative than a PPS sample. The stratified PPS ranked-set sample is constructed by selecting a PPS ranked-set sample from each stratum population. The paper constructs unbiased estimators of the population mean and total, and of their variances. The new sampling design is applied to apple production data to estimate the total apple production in Turkey.

    Release date: 2020-12-15
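
The ranked-set step on its own is simple to state. This sketch shows basic ranked-set sampling with exact ranks and equal-probability comparison sets; the paper's design additionally draws the comparison sets with PPS within each stratum.

```python
import random

def ranked_set_sample(population, m, rng=None):
    # One ranked-set sample of size m: draw m comparison sets of m units,
    # rank each set, and keep the i-th ranked unit from the i-th set.
    # (Basic RSS; in practice the ranking may be judgment-based.)
    rng = rng or random.Random()
    sample = []
    for i in range(m):
        comparison = rng.sample(population, m)
        comparison.sort()              # the ranking step
        sample.append(comparison[i])   # i-th order statistic
    return sample
```

Taking the i-th order statistic from the i-th comparison set spreads the sample across the distribution, which is the extra structure that makes the sample more informative than a same-size simple random sample.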

  • Articles and reports: 12-001-X202000100002
    Description:

    Model-based methods are required to estimate small area parameters of interest, such as totals and means, when traditional direct estimation methods cannot provide adequate precision. Unit level and area level models are the most commonly used ones in practice. In the case of the unit level model, efficient model-based estimators can be obtained if the sample design is such that the sample and population models coincide: that is, the sampling design is non-informative for the model. If on the other hand, the sampling design is informative for the model, the selection probabilities will be related to the variable of interest, even after conditioning on the available auxiliary data. This will imply that the population model no longer holds for the sample. Pfeffermann and Sverchkov (2007) used the relationships between the population and sample distribution of the study variable to obtain approximately unbiased semi-parametric predictors of the area means under informative sampling schemes. Their procedure is valid for both sampled and non-sampled areas.

    Release date: 2020-06-30

  • Articles and reports: 12-001-X202000100005
    Description:

    Selecting the right sample size is central to ensuring the quality of a survey. The state of the art is to account for complex sampling designs by calculating effective sample sizes. These effective sample sizes are determined using the design effect of central variables of interest. However, in face-to-face surveys, empirical estimates of design effects are often suspected to be conflated with the impact of the interviewers. This typically leads to an overestimation of design effects and consequently risks misallocating resources towards a higher sample size instead of using more interviewers or improving measurement accuracy. We therefore propose a corrected design effect that separates the interviewer effect from the effects of the sampling design on the sampling variance. The ability to estimate the corrected design effect is tested in a simulation study, in which we address disentangling cluster and interviewer variance. Corrected design effects are estimated for data from the European Social Survey (ESS) round 6 and compared with conventional design effect estimates. Furthermore, we show that for some countries in the ESS round 6 the conventional design effect estimates are indeed strongly inflated by interviewer effects.

    Release date: 2020-06-30
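
The conflation the abstract describes is easy to see through the Kish approximation deff ≈ 1 + (b − 1)ρ, where b is the average cluster size and ρ the intraclass correlation. The sketch below only illustrates that arithmetic with made-up correlation values; the paper itself estimates the interviewer and cluster variance components from the data.

```python
def design_effect(avg_cluster_size, rho):
    # Kish approximation for a cluster sample: deff = 1 + (b - 1) * rho
    return 1.0 + (avg_cluster_size - 1.0) * rho

def corrected_design_effect(avg_cluster_size, rho_total, rho_interviewer):
    # Sketch of the correction: subtract the interviewer share of the
    # intraclass correlation before computing the design effect, so the
    # planned sample size is not inflated to pay for interviewer variance.
    rho_design = max(rho_total - rho_interviewer, 0.0)
    return design_effect(avg_cluster_size, rho_design)
```

With b = 10 and ρ_total = 0.08, of which 0.05 is interviewer-driven, the conventional design effect is 1.72 but the corrected one is 1.27, so 1,000 interviews are worth an effective sample of about 787 rather than 581.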

  • Articles and reports: 12-001-X201900300001
    Description:

    Standard linearization estimators of the variance of the general regression estimator are often too small, leading to confidence intervals that do not cover at the desired rate. Hat matrix adjustments, which help remedy this problem, can be used in two-stage sampling. We present theory for several new variance estimators and compare them to standard estimators in a series of simulations. The proposed estimators correct negative biases and improve confidence interval coverage rates in a variety of situations that mirror ones met in practice.

    Release date: 2019-12-17
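
The hat-matrix idea can be illustrated in the simplest single-stage, single-covariate case (the paper's estimators cover the general regression estimator under two-stage sampling). Dividing each residual by 1 − h_ii, as in HC2-type variance estimators, inflates the residuals to offset the downward bias of plain linearization.

```python
def leverage_adjusted_residuals(x, y):
    # Fit y = a + b*x by least squares and return plain residuals e_i and
    # leverage-adjusted residuals e_i / (1 - h_ii), with h_ii the diagonal
    # of the hat matrix of a one-covariate regression (closed form here).
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    e = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    h = [1.0 / n + (xi - xbar) ** 2 / sxx for xi in x]
    return e, [ei / (1.0 - hi) for ei, hi in zip(e, h)]
```

Since 0 < h_ii < 1, every adjusted residual is at least as large in magnitude as the raw one, which is what counters the negative bias of the standard variance estimator.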
Data (0)

No content available at this time.

Analysis (266) (260 to 270 of 266 results)

  • Articles and reports: 12-001-X197900254834
    Description: An alternative to direct sample selection is suggested which, while retaining the same level of efficiency, simplifies the selection and variance estimation processes in a wide variety of situations. If n* is the largest feasible pPS sample size that can be drawn from a given population of size N, then the proposed method entails selecting m (= N - n*) units using a pPS scheme and rejecting these units from the population so that the remainder is a pPS sample of n* units; the final sample of n units is then selected as a subsample from the remainder set. This method for selecting the pPS sample can be seen as an analogue of SRS, where it is well known that the “unsampled” part of the population, as well as any subsample from this part, is also an SRS from the entire population. The method is very practical for situations where m is less than the actual sample size n. Moreover, the method has an additional advantage in the context of continuing surveys, e.g., the Canadian Labour Force Survey (LFS), where the number of primary sampling units (PSUs) may have to be increased (or decreased) subsequent to the initial selection of the sample. The method also has advantages in the case of sample rotation. The main features of the proposed scheme and its limitations are given, and the efficiency of the method is evaluated empirically.
    Release date: 1979-12-15
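
The SRS analogue invoked above is easy to verify exhaustively for a small population: reject a simple random sample of m units, subsample n from the remainder, and every unit's overall inclusion probability is still n/N.

```python
from itertools import combinations
from fractions import Fraction

def inclusion_probs(N, m, n):
    # Exhaustive enumeration: remove an SRS of size m, then draw an SRS of
    # size n from the N - m remaining units. Returns each unit's inclusion
    # probability in the final sample, as an exact fraction.
    probs = {u: Fraction(0) for u in range(N)}
    rejects = list(combinations(range(N), m))
    for rej in rejects:
        remainder = [u for u in range(N) if u not in rej]
        subs = list(combinations(remainder, n))
        for s in subs:
            for u in s:
                probs[u] += Fraction(1, len(rejects) * len(subs))
    return probs
```

For N = 6, m = 2, n = 2 every unit's inclusion probability works out to exactly 1/3 = n/N, which is the property the proposed method carries over to the pPS setting.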

  • Articles and reports: 12-001-X197900254835
    Description: The problem considered in this paper is the estimation of various agricultural variables using a multiple frame approach. The list frame is completely contained within the area frame. The stratification for the list and area frames are based on different criteria. Overall, the multiple frame shows some gains in terms of variance over the area frame. However, a more careful analysis reveals problem areas associated with the list frame such as the method of stratification and the degeneration of list strata over time.
    Release date: 1979-12-15

  • Articles and reports: 12-001-X197900100004
    Description: Let U = {1, 2, …, i, …, N} be a finite population of N identifiable units. A known “size measure” x_i is associated with unit i; i = 1, 2, ..., N. A sampling procedure for selecting a sample of size n (2 < n < N) with probability proportional to size (PPS) and without replacement (WOR) from the population is proposed. With this method, the inclusion probability is proportional to size (IPPS) for each unit in the population.
    Release date: 1979-06-15
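
The paper proposes its own IPPS procedure; as a point of reference, systematic PPS sampling is one standard scheme that also achieves inclusion probabilities proportional to size, π_i = n·x_i / X, provided no unit has n·x_i > X.

```python
import random

def systematic_pps(sizes, n, rng=None):
    # Systematic PPS-without-replacement sample of size n: lay the size
    # measures end to end, drop n equally spaced points with a random
    # start, and select the unit under each point. Inclusion probability
    # of unit i is n * x_i / X (assumes n * x_i <= X for every unit).
    rng = rng or random.Random()
    X = sum(sizes)
    step = X / n
    start = rng.uniform(0, step)
    points = [start + k * step for k in range(n)]
    sample, cum, i = [], 0.0, 0
    for p in points:                      # points are increasing, so the
        while cum + sizes[i] <= p:        # cursor only moves forward
            cum += sizes[i]
            i += 1
        sample.append(i)
    return sample
```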

  • Articles and reports: 12-001-X197900100005
    Description: Approximate cutoff rules for stratifying a population into a take-all and take-some universe have been given by Dalenius (1950) and Glasser (1962). They expressed the cutoff value (that value which delineates the boundary of the take-all and take-some) as a function of the mean, the sampling weight and the population variance. Their cutoff values were derived on the assumption that a single random sample of size n was to be drawn without replacement from the population of size N.

    In the present context, exact and approximate cutoff rules have been worked out for a similar situation. Rather than specifying the sample size, the precision (coefficient of variation) is given. Note that in many sampling situations the sampler is given a set of objectives in terms of reliability rather than sample size. The result is particularly useful for determining the take-all/take-some boundary for samples drawn from a known population. The procedure is also extended to ratio estimation.
    Release date: 1979-06-15
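
A brute-force version of the same decision is easy to state, and useful for checking closed-form cutoff rules against a known population: for each candidate cutoff, compute the SRS size that meets the CV target in the take-some stratum, and keep the split that minimises the total sample size. This is a sketch of the decision problem, not the paper's exact or approximate rules.

```python
import math

def takeall_cutoff(values, cv_target):
    # For each candidate split of the sorted population into a take-all
    # stratum (the t largest units) and a take-some stratum sampled by SRS,
    # find the SRS size ns meeting cv_target for the estimated total, and
    # return the split minimising the total sample size t + ns.
    vals = sorted(values, reverse=True)
    Y = sum(vals)
    target_var = (cv_target * Y) ** 2
    best = (len(vals), len(vals), 0)      # (t + ns, t, ns); census fallback
    for t in range(len(vals)):
        rest = vals[t:]
        Nr = len(rest)
        mean = sum(rest) / Nr
        s2 = sum((v - mean) ** 2 for v in rest) / (Nr - 1) if Nr > 1 else 0.0
        if s2 == 0.0:
            ns = 1
        else:
            # SRS variance of the estimated take-some total:
            #   Nr^2 * s2 * (1/ns - 1/Nr) <= target_var, solved for ns
            denom = target_var / (Nr ** 2 * s2) + 1.0 / Nr
            ns = min(Nr, math.ceil(1.0 / denom))
        if t + ns < best[0]:
            best = (t + ns, t, ns)
    return best
```

For a skewed population with a few dominant units and many small similar ones, the search puts the dominant units in the take-all stratum, which is the behaviour the cutoff rules formalise.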

  • Articles and reports: 12-001-X197800154832
    Description: This paper describes a survey design established to measure truck commodity flows in Peru. The article addresses the conceptual and operational features of the survey design as well as describing its elements and implementation techniques in the context of a pilot project. Finally, the paper illustrates how the results of this pilot might be used to design and implement a full-scale national survey.
    Release date: 1978-06-15

  • Articles and reports: 12-001-X197500254824
    Description:

    Madow [1968] has proposed a two-phase sampling scheme under which response bias can be eliminated from sample surveys by obtaining “true” values for a subsample of the original sample. Often, in censuses or ongoing surveys, the subsample data are not used to correct the main survey estimates but to assess their reliability. The main purpose of this paper is to present methods by which reliability estimates can be obtained when true values can be determined for a subsample of units.

    Release date: 1975-12-15
Reference (1) (1 result)

  • Surveys and statistical programs – Documentation: 75F0002M1992001
    Description:

    Starting in 1994, the Survey of Labour and Income Dynamics (SLID) will follow individuals and families for at least six years, tracking their labour market experiences, changes in income and family circumstances. An initial proposal for the content of SLID, entitled "Content of the Survey of Labour and Income Dynamics : Discussion Paper", was distributed in February 1992.

    That paper served as a background document for consultation with and a review by interested users. The content underwent significant change during this process. Based upon the revised content, a large-scale test of SLID will be conducted in February and May 1993.

    The present document outlines the income and wealth content to be tested in May 1993. It is a continuation of SLID Research Paper Series 92-01A, which outlines the demographic and labour content used in the January/February 1993 test.

    Release date: 2008-02-29