# A comparison between nonparametric estimators for finite population distribution functions

## 2. Definition of the estimators

Let $\left({y}_{i},{x}_{i}\right)$ denote the values taken on by a study variable $Y$ and an auxiliary variable $X$ on unit $i$ of a finite population $U:=\left\{1,2,\dots ,N\right\}.$ Suppose that

${y}_{i}=m\left({x}_{i}\right)+{\epsilon }_{i},\text{ }\text{ }i\in U,\text{ }\text{ }\text{ }\text{ }\text{ }\left(2.1\right)$

where $m\left(x\right)$ is a smooth function and the ${\epsilon }_{i}$'s are independent zero-mean random variables whose distribution functions $P\left({\epsilon }_{i}\le \epsilon \right)=G\left(\epsilon |{x}_{i}\right)$ depend smoothly on ${x}_{i}.$ Let $s\subset U$ be a sample chosen from the population $U$ according to some sampling design. As usual in the context of complete auxiliary information, we assume that the ${x}_{i}$-values are known for all population units, while the ${y}_{i}$-values are observed only for the population units that belong to the sample $s.$

To estimate the unknown population distribution function

${F}_{N}\left(t\right):=\frac{1}{N}\sum _{i\in U}I\left({y}_{i}\le t\right),$

Kuo (1988) proposes the estimator given by

$\stackrel{^}{F}\left(t\right):=\frac{1}{N}\left(\sum _{j\in s}I\left({y}_{j}\le t\right)+\sum _{i\notin s}\sum _{j\in s}{w}_{i,j}I\left({y}_{j}\le t\right)\right),\text{ }\text{ }\text{ }\text{ }\text{ }\left(2.2\right)$

where for the weights ${w}_{i,j}$ she suggests using either the local constant regression weights

${w}_{i,j}:=\frac{K\left(\frac{{x}_{i}-{x}_{j}}{\lambda }\right)}{\sum _{k\in s}K\left(\frac{{x}_{i}-{x}_{k}}{\lambda }\right)}$

where $K\left(u\right)$ is some (integrable) kernel function and $\lambda >0$ is a bandwidth, or nearest-neighbor weights based on the $k$ sample units closest to ${x}_{i}.$

Note that in the definition of $\stackrel{^}{F}\left(t\right),$

${\stackrel{^}{G}}_{i}\left(t\right):=\sum _{j\in s}{w}_{i,j}I\left({y}_{j}\le t\right)\text{ }\text{ }\text{ }\text{ }\text{ }\left(2.3\right)$

is used as the fitted value in place of the unobserved indicator function $I\left({y}_{i}\le t\right)$ for $i\notin s.$
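As a concrete illustration (not part of the original text), the estimator in (2.2) with the local constant regression weights can be sketched in a few lines of Python. A Gaussian kernel is assumed so that the weights are always well-defined; the function names and the small synthetic data set are hypothetical.

```python
import numpy as np

def local_constant_weights(x_eval, x_s, lam):
    """Local constant weights w[i, j] = K((x_i - x_j)/lam) / sum_k K((x_i - x_k)/lam)."""
    u = (np.asarray(x_eval)[:, None] - np.asarray(x_s)[None, :]) / lam
    K = np.exp(-0.5 * u ** 2)  # Gaussian kernel: strictly positive, no zero denominators
    return K / K.sum(axis=1, keepdims=True)

def kuo_estimator(t, y_s, x_s, x_nonsample, N, lam):
    """Kuo's estimator (2.2): observed indicators for i in s, plus fitted
    values G_i(t) = sum_j w[i, j] I(y_j <= t) for i not in s."""
    ind = (y_s <= t).astype(float)
    w = local_constant_weights(x_nonsample, x_s, lam)
    return (ind.sum() + (w @ ind).sum()) / N

# hypothetical data: sample of n = 4 from a population of N = 8
x_s = np.array([0.1, 0.3, 0.6, 0.9])
y_s = np.array([1.0, 1.4, 2.1, 2.8])
x_ns = np.array([0.2, 0.4, 0.5, 0.8])   # x-values of the nonsample units
F_hat = kuo_estimator(1.5, y_s, x_s, x_ns, N=8, lam=0.2)
```

Because each weight vector sums to one, the estimate lies in $[0,1]$ and tends to $1$ as $t$ grows, as a distribution function estimate should.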

Following an idea put forward in the textbook of Chambers and Clark (2012), we shall analyze an estimator for ${F}_{N}\left(t\right)$ based on alternative fitted values which incorporate a nonparametric estimate for the mean regression function $m\left(x\right).$ The fitted values in question are given by

${\stackrel{^}{G}}_{i}^{*}\left(t\right):=\sum _{j\in s}{w}_{i,j}I\left({y}_{j}-{\stackrel{^}{m}}_{j}\le t-{\stackrel{^}{m}}_{i}\right)\text{ }\text{ }\text{ }\text{ }\text{ }\left(2.4\right)$

where

${\stackrel{^}{m}}_{i}:=\sum _{j\in s}{w}_{i,j}{y}_{j}$

is a nonparametric estimator for $m\left(x\right)$ at $x={x}_{i},$ and the resulting estimator for ${F}_{N}\left(t\right)$ is given by

${\stackrel{^}{F}}^{*}\left(t\right):=\frac{1}{N}\left(\sum _{j\in s}I\left({y}_{j}\le t\right)+\sum _{i\notin s}\sum _{j\in s}{w}_{i,j}I\left({y}_{j}-{\stackrel{^}{m}}_{j}\le t-{\stackrel{^}{m}}_{i}\right)\right).\text{ }\text{ }\text{ }\text{ }\text{ }\left(2.5\right)$
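A sketch of the modified estimator (2.5) follows the same pattern: first smooth the ${y}_{j}$ to obtain the ${\stackrel{^}{m}}_{i},$ then smooth the indicators of the centered residuals. This is an illustrative implementation, again assuming a Gaussian kernel and hypothetical names and data.

```python
import numpy as np

def nw_weights(x_eval, x_s, lam):
    """Local constant weights with a Gaussian kernel (always well-defined)."""
    u = (np.asarray(x_eval)[:, None] - np.asarray(x_s)[None, :]) / lam
    K = np.exp(-0.5 * u ** 2)
    return K / K.sum(axis=1, keepdims=True)

def modified_estimator(t, y_s, x_s, x_nonsample, N, lam):
    """Estimator (2.5): the fitted values G*_i(t) of (2.4) replace G_i(t)."""
    w_ns = nw_weights(x_nonsample, x_s, lam)   # rows: i not in s
    w_ss = nw_weights(x_s, x_s, lam)           # rows: j in s (for m_hat_j)
    m_s = w_ss @ y_s                           # m_hat_j, j in s
    m_ns = w_ns @ y_s                          # m_hat_i, i not in s
    resid = y_s - m_s                          # y_j - m_hat_j
    # G*_i(t) = sum_j w[i, j] I(y_j - m_hat_j <= t - m_hat_i)
    fitted = (w_ns * (resid[None, :] <= (t - m_ns)[:, None])).sum(axis=1)
    return ((y_s <= t).sum() + fitted.sum()) / N

# hypothetical data, as before: n = 4 observed out of N = 8
x_s = np.array([0.1, 0.3, 0.6, 0.9])
y_s = np.array([1.0, 1.4, 2.1, 2.8])
x_ns = np.array([0.2, 0.4, 0.5, 0.8])
F_star = modified_estimator(1.5, y_s, x_s, x_ns, N=8, lam=0.2)
```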

The fitted values in (2.3) and (2.4), or appropriately modified versions of them which include sample inclusion probabilities in the regression weights ${w}_{i,j},$ can obviously be computed also for $i\in s,$ and they can be employed for example in generalized difference estimators (Särndal et al. 1992, page 221) or in model calibrated estimators (see for example Wu and Sitter 2001; Chen and Wu 2002; Wu 2003; Montanari and Ranalli 2005; Rueda, Martínez, Martínez and Arcos 2007; Rueda, Sànchez-Borrego, Arcos and Martínez 2010). In addition to the model-based estimators in (2.2) and (2.5), we shall thus consider also the generalized difference estimators given by

$\stackrel{˜}{F}\left(t\right):=\frac{1}{N}\left(\sum _{i\in U}\sum _{j\in s}{\stackrel{˜}{w}}_{i,j}I\left({y}_{j}\le t\right)+\sum _{i\in s}{\pi }_{i}^{-1}\left(I\left({y}_{i}\le t\right)-\sum _{j\in s}{\stackrel{˜}{w}}_{i,j}I\left({y}_{j}\le t\right)\right)\right)$

and by

${\stackrel{˜}{F}}^{*}\left(t\right):=\frac{1}{N}\left(\sum _{i\in U}\sum _{j\in s}{\stackrel{˜}{w}}_{i,j}I\left({y}_{j}-{\stackrel{˜}{m}}_{j}\le t-{\stackrel{˜}{m}}_{i}\right)+\sum _{i\in s}{\pi }_{i}^{-1}\left(I\left({y}_{i}\le t\right)-\sum _{j\in s}{\stackrel{˜}{w}}_{i,j}I\left({y}_{j}-{\stackrel{˜}{m}}_{j}\le t-{\stackrel{˜}{m}}_{i}\right)\right)\right)$

where ${\pi }_{i}$ denotes the first-order sample inclusion probability of unit $i,$ the ${\stackrel{˜}{w}}_{i,j}$ are design-weighted regression weights whose definition is given below, and ${\stackrel{˜}{m}}_{i}:={\sum }_{k\in s}{\stackrel{˜}{w}}_{i,k}{y}_{k}.$ Note that $\stackrel{˜}{F}\left(t\right)$ and ${\stackrel{˜}{F}}^{*}\left(t\right)$ are based on design-weighted counterparts of the fitted values ${\stackrel{^}{G}}_{i}\left(t\right)$ and ${\stackrel{^}{G}}_{i}^{*}\left(t\right),$ which are given by

${\stackrel{˜}{G}}_{i}\left(t\right):=\sum _{j\in s}{\stackrel{˜}{w}}_{i,j}I\left({y}_{j}\le t\right)$

and

${\stackrel{˜}{G}}_{i}^{*}\left(t\right):=\sum _{j\in s}{\stackrel{˜}{w}}_{i,j}I\left({y}_{j}-{\stackrel{˜}{m}}_{j}\le t-{\stackrel{˜}{m}}_{i}\right),$

respectively.
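A generalized difference estimator of this form can also be sketched briefly. The snippet below computes $\stackrel{˜}{F}\left(t\right)$ using design-weighted local constant weights (for brevity, rather than the local linear weights the present work adopts); it assumes the sample units keep their population order, a Gaussian kernel, and equal inclusion probabilities, and all names and data are hypothetical.

```python
import numpy as np

def dw_weights(x_eval, x_s, pi_s, lam):
    """Design-weighted local constant weights: K((x_i - x_j)/lam)/pi_j, normalized."""
    u = (np.asarray(x_eval)[:, None] - np.asarray(x_s)[None, :]) / lam
    Kd = np.exp(-0.5 * u ** 2) / pi_s[None, :]   # Gaussian kernel over pi_j
    return Kd / Kd.sum(axis=1, keepdims=True)

def gd_estimator(t, y_s, x_pop, in_sample, pi_s, lam):
    """Generalized difference estimator F~(t): population sum of fitted
    values G~_i(t), plus a design-weighted correction over the sample."""
    x_s = x_pop[in_sample]                # assumes sample keeps population order
    ind = (y_s <= t).astype(float)
    G_all = dw_weights(x_pop, x_s, pi_s, lam) @ ind   # G~_i(t), i in U
    G_s = G_all[in_sample]                            # G~_i(t), i in s
    N = x_pop.size
    return (G_all.sum() + ((ind - G_s) / pi_s).sum()) / N

# hypothetical data: SRS of n = 4 from N = 8, so pi_i = 1/2
x_pop = np.linspace(0.0, 1.0, 8)
in_sample = np.zeros(8, dtype=bool)
in_sample[[0, 2, 5, 7]] = True
y_s = 1.0 + 2.0 * x_pop[in_sample]
pi_s = np.full(4, 0.5)
F_tilde = gd_estimator(1.5, y_s, x_pop, in_sample, pi_s, lam=0.3)
```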

As for the regression weights ${w}_{i,j}$ and ${\stackrel{˜}{w}}_{i,j},$ in the present work we use local linear regression weights. In what follows, ${w}_{i,j}$ and ${\stackrel{˜}{w}}_{i,j}$ are thus defined by

${w}_{i,j}:=\frac{1}{n\lambda }K\left(\frac{{x}_{i}-{x}_{j}}{\lambda }\right)\frac{{M}_{2,s}\left({x}_{i}\right)-\left(\frac{{x}_{i}-{x}_{j}}{\lambda }\right){M}_{1,s}\left({x}_{i}\right)}{{M}_{2,s}\left({x}_{i}\right){M}_{0,s}\left({x}_{i}\right)-{M}_{1,s}^{2}\left({x}_{i}\right)}$

and

${\stackrel{˜}{w}}_{i,j}:=\frac{1}{{\pi }_{j}n\lambda }K\left(\frac{{x}_{i}-{x}_{j}}{\lambda }\right)\frac{{\stackrel{˜}{M}}_{2,s}\left({x}_{i}\right)-\left(\frac{{x}_{i}-{x}_{j}}{\lambda }\right){\stackrel{˜}{M}}_{1,s}\left({x}_{i}\right)}{{\stackrel{˜}{M}}_{2,s}\left({x}_{i}\right){\stackrel{˜}{M}}_{0,s}\left({x}_{i}\right)-{\stackrel{˜}{M}}_{1,s}^{2}\left({x}_{i}\right)},$

where $n$ is the number of units in the sample $s,$

${M}_{r,s}\left(x\right):=\sum _{k\in s}\frac{1}{n\lambda }K\left(\frac{x-{x}_{k}}{\lambda }\right){\left(\frac{x-{x}_{k}}{\lambda }\right)}^{r},\text{ }\text{ }\text{ }r=0,1,2,$

and

${\stackrel{˜}{M}}_{r,s}\left(x\right):=\sum _{k\in s}\frac{1}{{\pi }_{k}n\lambda }K\left(\frac{x-{x}_{k}}{\lambda }\right){\left(\frac{x-{x}_{k}}{\lambda }\right)}^{r},\text{ }\text{ }\text{ }r=0,1,2.$
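Since the two families of weights differ only by the $1/{\pi }_{k}$ factors, a single routine can compute both. The sketch below (illustrative names, Gaussian kernel assumed) evaluates the moments ${M}_{r,s}\left(x\right)$ and returns the full matrix of local linear weights.

```python
import numpy as np

def local_linear_weights(x_eval, x_s, lam, pi_s=None):
    """Local linear weights w[i, j]; pass pi_s for the design-weighted w~[i, j]."""
    x_eval, x_s = np.asarray(x_eval, float), np.asarray(x_s, float)
    n = x_s.size
    inv_pi = 1.0 if pi_s is None else 1.0 / np.asarray(pi_s, float)
    u = (x_eval[:, None] - x_s[None, :]) / lam
    Kd = np.exp(-0.5 * u ** 2) * inv_pi / (n * lam)  # K(u)/(pi_j n lam), Gaussian K
    M0 = Kd.sum(axis=1, keepdims=True)               # M_{0,s}(x_i)
    M1 = (Kd * u).sum(axis=1, keepdims=True)         # M_{1,s}(x_i)
    M2 = (Kd * u ** 2).sum(axis=1, keepdims=True)    # M_{2,s}(x_i)
    return Kd * (M2 - u * M1) / (M2 * M0 - M1 ** 2)

# hypothetical evaluation points and sample x-values
x_s = np.array([0.1, 0.3, 0.55, 0.9])
x_eval = np.array([0.2, 0.5, 0.7])
W = local_linear_weights(x_eval, x_s, lam=0.3)
Wd = local_linear_weights(x_eval, x_s, lam=0.3, pi_s=np.array([0.5, 0.4, 0.5, 0.6]))
```

A useful sanity check is that each row of weights sums to one and reproduces the ${x}_{i}$-values exactly, i.e., local linear weights reproduce constant and linear functions; this holds for the design-weighted weights as well.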

It is worth noting that the nonparametric estimators of this section are not well-defined unless the regression weights ${w}_{i,j}$ and ${\stackrel{˜}{w}}_{i,j}$ appearing in their definitions are well-defined. This problem occurs, for example, when the support of the kernel function $K\left(u\right)$ is the interval $\left[-1,1\right]$ (e.g., the uniform or the Epanechnikov kernel) and there are fewer than two $j\in s$ with $|{x}_{i}-{x}_{j}|<\lambda .$ To overcome this problem one can use a kernel function supported on the whole real line (e.g., the Gaussian kernel) or choose the bandwidth adaptively; the latter solution may also lead to more efficient estimators (see e.g., Fan and Gijbels 1992). With reference to the estimators ${\stackrel{^}{F}}^{*}\left(t\right)$ and ${\stackrel{˜}{F}}^{*}\left(t\right)$ based on the modified fitted values, one could moreover in principle apply different bandwidths and/or regression weights to the ${y}_{i}$-values and to the indicator functions. For the sake of simplicity, in the present work we consider neither adaptive bandwidth selection nor different regression weights for estimating the mean regression function and the distributions of the error components.
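This failure mode is easy to reproduce: with a compactly supported kernel the normalizing sum in the weights can vanish when no sample point lies within $\lambda$ of ${x}_{i},$ whereas a Gaussian kernel never vanishes. The snippet below is a minimal hypothetical illustration, not part of the source.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, supported on [-1, 1]."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

x_s = np.array([0.1, 0.2])   # sample x-values
x_i = 2.0                    # a nonsample point far from every x_j
lam = 0.5

# with a compact-support kernel the denominator of the weights is zero here,
# so the weights (and hence the estimators) are undefined at x_i
denom_epa = epanechnikov((x_i - x_s) / lam).sum()
# with a Gaussian kernel the denominator is strictly positive
denom_gauss = np.exp(-0.5 * ((x_i - x_s) / lam) ** 2).sum()
```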

Comparing the definitions of the estimators based on the two types of fitted values, it is immediately obvious that $\stackrel{^}{F}\left(t\right)$ and $\stackrel{˜}{F}\left(t\right)$ are easier to compute, since they are linear combinations of the observed indicator functions $I\left({y}_{j}\le t\right).$ The coefficients of these linear combinations do not depend on the study variable $Y,$ and they can therefore be used to estimate averages of functions other than indicator functions, or of functions of several study variables, in particular when there are reasons to believe that the latter are related to the auxiliary variable $X.$ This fact is of particular value to practitioners who want estimates related to several study variables to be consistent with one another. However, there is also a strong argument in favor of the estimators ${\stackrel{^}{F}}^{*}\left(t\right)$ and ${\stackrel{˜}{F}}^{*}\left(t\right)$ based on the modified fitted values: if ${y}_{i}=a+b{x}_{i}$ for all $i\in U,$ then ${\stackrel{^}{F}}^{*}\left(t\right)={\stackrel{˜}{F}}^{*}\left(t\right)={F}_{N}\left(t\right)$ for every sample $s$ such that the estimators are well-defined. One would therefore expect ${\stackrel{^}{F}}^{*}\left(t\right)$ and ${\stackrel{˜}{F}}^{*}\left(t\right)$ to be more efficient than $\stackrel{^}{F}\left(t\right)$ and $\stackrel{˜}{F}\left(t\right)$ when there is a strong regression relationship between $Y$ and $X.$
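The exactness property can be checked numerically: local linear weights reproduce linear functions, so with ${y}_{i}=a+b{x}_{i}$ every residual ${y}_{j}-{\stackrel{^}{m}}_{j}$ is zero and ${\stackrel{^}{F}}^{*}\left(t\right)$ collapses to ${F}_{N}\left(t\right).$ The following self-contained sketch (hypothetical data, Gaussian kernel) verifies this for ${\stackrel{^}{F}}^{*}\left(t\right).$

```python
import numpy as np

def local_linear_weights(x_eval, x_s, lam):
    """Local linear weights with a Gaussian kernel (always well-defined)."""
    u = (np.asarray(x_eval)[:, None] - np.asarray(x_s)[None, :]) / lam
    n = np.asarray(x_s).size
    Kd = np.exp(-0.5 * u ** 2) / (n * lam)
    M0 = Kd.sum(axis=1, keepdims=True)
    M1 = (Kd * u).sum(axis=1, keepdims=True)
    M2 = (Kd * u ** 2).sum(axis=1, keepdims=True)
    return Kd * (M2 - u * M1) / (M2 * M0 - M1 ** 2)

def f_star(t, y_s, x_s, x_ns, N, lam):
    """Estimator (2.5) built from the fitted values (2.4)."""
    w_ns = local_linear_weights(x_ns, x_s, lam)
    w_ss = local_linear_weights(x_s, x_s, lam)
    m_s, m_ns = w_ss @ y_s, w_ns @ y_s      # m_hat at sample / nonsample points
    resid = y_s - m_s                        # exactly zero for a linear population
    fitted = (w_ns * (resid[None, :] <= (t - m_ns)[:, None])).sum(axis=1)
    return ((y_s <= t).sum() + fitted.sum()) / N

# exactly linear hypothetical population: y_i = 1 + 2 x_i, N = 10, n = 4
x_pop = np.linspace(0.0, 1.0, 10)
y_pop = 1.0 + 2.0 * x_pop
in_s = np.zeros(10, dtype=bool)
in_s[[0, 3, 5, 8]] = True
t = 2.0
F_N = (y_pop <= t).mean()
F_hat_star = f_star(t, y_pop[in_s], x_pop[in_s], x_pop[~in_s], 10, lam=0.3)
```

Up to floating-point rounding, `F_hat_star` coincides with the population distribution function `F_N`, as the exactness argument predicts.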
