Sample-based estimation of mean electricity consumption curves for small domains Section 3. Direct estimation methods in the design-based approach

In this section, we adopt the design-based approach. This means that the values ${Y}_{i}$ of the variable of interest are considered deterministic for each population unit, and the only source of randomness is the selection of the sample. Statistical inference then describes only the randomness induced by the sampling design.

We present two classical estimators, the Horvitz-Thompson estimator and the calibration estimator, which will serve as the benchmarks against which we evaluate the performance of our methods. These are direct estimators, i.e., estimators that use, for the estimation of the mean of each domain, only the units and auxiliary information belonging to the domain in question.

The functional Horvitz-Thompson estimator (Horvitz and Thompson, 1952; Cardot, Chaouch, Goga and Labruère, 2010) of ${\mu }_{d}$ is given by:

${\stackrel{^}{\mu }}_{d}^{\text{HT}}\left(t\right)=\frac{1}{{N}_{d}}\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}{Y}_{i}\left(t\right),\text{ }d=1,\text{\hspace{0.17em}}\dots ,\text{\hspace{0.17em}}D,\text{ }t\in \left[0,\text{\hspace{0.17em}}T\right],\text{ }\text{ }\text{ }\text{ }\text{ }\left(3.1\right)$

with ${d}_{i}=1/{\pi }_{i}$ the sampling weight of unit $i,$ also called the Horvitz-Thompson weight. This estimator clearly cannot be computed for unsampled domains (i.e., domains $d$ such that ${s}_{d}$ is empty), and it is extremely unstable for small domains. Moreover, it makes no use of the predictor variables available to us.
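As an illustration, estimator (3.1) amounts to a weighted average of the sampled curves of the domain, computed pointwise in $t.$ A minimal NumPy sketch follows; the function name and the toy data are ours, not part of the paper:

```python
import numpy as np

# Sketch of the functional Horvitz-Thompson estimator (3.1) for one domain d.
# Y_s: (n_d, T) array of sampled curves; pi: inclusion probabilities pi_i.
def ht_mean_curve(Y_s, pi, N_d):
    """Estimate the mean curve mu_d(t) over the T discretization instants."""
    d = 1.0 / pi                          # Horvitz-Thompson weights d_i = 1/pi_i
    return (d[:, None] * Y_s).sum(axis=0) / N_d

# Toy domain: 3 sampled curves observed at 4 instants, each unit drawn
# with inclusion probability n_d / N_d = 3/10 (illustrative values).
Y_s = np.array([[1., 2., 3., 4.],
                [2., 2., 2., 2.],
                [0., 1., 0., 1.]])
pi = np.full(3, 0.3)
mu_hat = ht_mean_curve(Y_s, pi, N_d=10)   # one estimate per instant t
```

With equal inclusion probabilities $n_d/N_d,$ the estimator reduces to the sample mean curve of the domain, as can be checked on the toy data above.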

To take advantage of the auxiliary information, again in a sampling design approach, we can use the calibration estimator proposed by Deville and Särndal (1992).

The calibration estimator for the mean ${\mu }_{d}$ is given by:

${\stackrel{^}{\mu }}_{d}^{\text{cal}}\left(t\right)=\frac{1}{{N}_{d}}\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{w}_{id}^{\text{cal}}{Y}_{i}\left(t\right),\text{ }d=1,\text{\hspace{0.17em}}\dots ,\text{\hspace{0.17em}}D,\text{ }t\in \left[0,\text{\hspace{0.17em}}T\right],\text{ }\text{ }\text{ }\text{ }\text{ }\left(3.2\right)$

where the calibration weights ${w}_{id}^{\text{cal}},\text{\hspace{0.17em}}i\in {s}_{d},$ are as close as possible to the sampling weights ${d}_{i}$ of the units of ${s}_{d},$ in the sense of a distance or pseudo-distance $G\left(w,\text{\hspace{0.17em}}d\right)$ chosen by the statistician:

$\underset{{w}_{id}}{\mathrm{min}}\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}G\left({w}_{id},\text{\hspace{0.17em}}{d}_{i}\right)\text{ }\text{ }\text{subject}\text{\hspace{0.17em}}\text{to}\text{ }\text{ }\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{w}_{id}{X}_{i}=\sum _{i\in {U}_{d}}\text{\hspace{0.17em}}{X}_{i}.\text{ }\text{ }\text{ }\text{ }\text{ }\left(3.3\right)$

For the chi-square distance, $G\left({w}_{id},\text{\hspace{0.17em}}{d}_{i}\right)={\left({w}_{id}-{d}_{i}\right)}^{2}/{d}_{i}^{2},$ so that the objective in (3.3) becomes ${\sum }_{i\in {s}_{d}}{\left({w}_{id}-{d}_{i}\right)}^{2}/{d}_{i},$ the weights are given by

${w}_{id}^{\text{cal}}={d}_{i}+{d}_{i}{\left(\sum _{i\in {U}_{d}}\text{\hspace{0.17em}}{X}_{i}-\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}{X}_{i}\right)}^{\prime }{\left(\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}{X}_{i}{X}_{i}^{\prime }\right)}^{-1}{X}_{i},\text{ }i\in {s}_{d}$

and the estimator becomes

${\stackrel{^}{\mu }}_{d}^{\text{cal}}\left(t\right)=\frac{1}{{N}_{d}}\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}{Y}_{i}\left(t\right)-\frac{1}{{N}_{d}}{\left(\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}{X}_{i}-\sum _{i\in {U}_{d}}\text{\hspace{0.17em}}{X}_{i}\right)}^{\prime }{\stackrel{^}{\beta }}_{d}\left(t\right),$

where ${\stackrel{^}{\beta }}_{d}\left(t\right)={\left({\sum }_{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}{X}_{i}{X}_{i}^{\prime }\right)}^{-1}{\sum }_{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}{X}_{i}{Y}_{i}\left(t\right).$ The calibration weights do not depend on time $t,$ but they do depend here on the domain $d;$ consequently, the estimator ${\stackrel{^}{\mu }}_{d}^{\text{cal}}\left(t\right)$ does not satisfy the additivity property ${\sum }_{d=1}^{D}\text{\hspace{0.17em}}\left({N}_{d}/N\right){\stackrel{^}{\mu }}_{d}^{\text{cal}}\left(t\right)={\stackrel{^}{\mu }}^{\text{cal}}\left(t\right),$ where ${\stackrel{^}{\mu }}^{\text{cal}}\left(t\right)$ is the calibration estimator of $\mu \left(t\right)={\sum }_{i\in U}\text{\hspace{0.17em}}{Y}_{i}\left(t\right)/N.$ When the vector $1={\left(1,\text{\hspace{0.17em}}1,\text{\hspace{0.17em}}\dots ,\text{\hspace{0.17em}}1\right)}^{\prime }$ is included in the model, we have

${\stackrel{^}{\mu }}_{d}^{\text{cal}}\left(t\right)=\frac{1}{{N}_{d}}\sum _{i\in {U}_{d}}\text{\hspace{0.17em}}{X}_{i}^{\prime }{\stackrel{^}{\beta }}_{d}\left(t\right)={\overline{X}}_{d}^{\prime }{\stackrel{^}{\beta }}_{d}\left(t\right),\text{ }t\in \left[0,\text{\hspace{0.17em}}T\right].$
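The closed-form chi-square weights can be computed directly from the sampled design matrix and the known domain totals. A sketch, with hypothetical function names and toy data of our own, illustrates that the resulting weights reproduce the calibration constraint in (3.3) exactly:

```python
import numpy as np

# Chi-square calibration weights w_id^cal for one domain d:
# w = d_i + d_i * (sum_{U_d} X_i - sum_{s_d} d_i X_i)' T^{-1} X_i,
# with T = sum_{s_d} d_i X_i X_i'.
def calibrate_chi2(X_s, d, X_tot):
    T = (d[:, None] * X_s).T @ X_s        # sum_{i in s_d} d_i X_i X_i'
    gap = X_tot - X_s.T @ d               # calibration gap on the totals
    lam = np.linalg.solve(T, gap)         # T is symmetric, so gap'T^{-1}X = X'lam
    return d * (1.0 + X_s @ lam)

def cal_mean_curve(Y_s, X_s, d, X_tot, N_d):
    """Calibration estimator (3.2) of the mean curve of domain d."""
    w = calibrate_chi2(X_s, d, X_tot)
    return (w[:, None] * Y_s).sum(axis=0) / N_d

# Toy domain: intercept + one covariate, equal weights d_i = 2 (pi_i = 1/2).
X_s = np.array([[1., 2.], [1., 4.], [1., 6.]])
d = np.full(3, 2.0)
X_tot = np.array([5., 20.])               # known totals sum_{U_d} X_i (N_d = 5)
w = calibrate_chi2(X_s, d, X_tot)
```

By construction the calibrated weights satisfy $\sum_{i\in s_d} w_{id} X_i = \sum_{i\in U_d} X_i,$ which can be verified numerically on the toy data.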

If the domain sample size ${n}_{d}$ is large, this estimator is approximately unbiased with respect to the sampling design. We can also consider the modified estimator:

${\stackrel{^}{\mu }}_{d}^{\text{mod}}\left(t\right)=\frac{1}{{N}_{d}}\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}{Y}_{i}\left(t\right)-\frac{1}{{N}_{d}}{\left(\sum _{i\in {s}_{d}}\text{\hspace{0.17em}}{d}_{i}{X}_{i}-\sum _{i\in {U}_{d}}\text{\hspace{0.17em}}{X}_{i}\right)}^{\prime }\stackrel{^}{\beta }\left(t\right),\text{ }t\in \left[0,\text{\hspace{0.17em}}T\right],\text{ }\text{ }\text{ }\text{ }\left(3.4\right)$

where

$\stackrel{^}{\beta }\left(t\right)={\left(\sum _{i\in s}\text{\hspace{0.17em}}{d}_{i}{X}_{i}{X}_{i}^{\prime }\right)}^{-1}\sum _{i\in s}\text{\hspace{0.17em}}{d}_{i}{X}_{i}{Y}_{i}\left(t\right),\text{ }t\in \left[0,\text{\hspace{0.17em}}T\right],\text{ }\text{ }\text{ }\text{ }\left(3.5\right)$

does not depend on the domain $d;$ therefore, the estimator ${\stackrel{^}{\mu }}_{d}^{\text{mod}}$ satisfies the additivity property ${\sum }_{d=1}^{D}\text{\hspace{0.17em}}\left({N}_{d}/N\right){\stackrel{^}{\mu }}_{d}^{\text{mod}}\left(t\right)={\stackrel{^}{\mu }}^{\text{cal}}\left(t\right),$ where ${\stackrel{^}{\mu }}^{\text{cal}}\left(t\right)$ is the calibration estimator of $\mu \left(t\right)={\sum }_{i\in U}\text{\hspace{0.17em}}{Y}_{i}\left(t\right)/N.$ Moreover, if the overall sample size $n$ is large, it is asymptotically unbiased even when ${n}_{d}$ is small. The asymptotic variance functions of ${\stackrel{^}{\mu }}_{d}^{\text{cal}}\left(t\right)$ and ${\stackrel{^}{\mu }}_{d}^{\text{mod}}\left(t\right)$ are equal to the variances of the Horvitz-Thompson estimators of the residuals ${Y}_{i}\left(t\right)-{X}_{i}^{\prime }{\stackrel{^}{\beta }}_{d}\left(t\right)$ and ${Y}_{i}\left(t\right)-{X}_{i}^{\prime }\stackrel{^}{\beta }\left(t\right),$ respectively (see Rao and Molina, 2015).
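The key point of the modified estimator (3.4) is that ${\stackrel{^}{\beta }}\left(t\right)$ is fitted once on the whole sample and then reused in every domain. A sketch under illustrative data of our own (two domains, an intercept plus one covariate, two time instants):

```python
import numpy as np

# (3.5): global regression coefficients, fitted on the whole sample s.
def beta_hat(Y_s, X_s, d):
    T = (d[:, None] * X_s).T @ X_s                 # sum_{i in s} d_i X_i X_i'
    return np.linalg.solve(T, (d[:, None] * X_s).T @ Y_s)   # (p, T) array

# (3.4): modified estimator for domain d, reusing the global beta.
def mod_mean_curve(Y_sd, X_sd, d_sd, X_tot_d, N_d, beta):
    ht = (d_sd[:, None] * Y_sd).sum(axis=0)        # sum_{s_d} d_i Y_i(t)
    gap = X_sd.T @ d_sd - X_tot_d                  # sum_{s_d} d_i X_i - sum_{U_d} X_i
    return (ht - gap @ beta) / N_d

# Toy sample with two domains (all values illustrative).
X1, Y1, d1 = np.array([[1., 1.], [1., 3.]]), np.array([[1., 2.], [3., 4.]]), np.full(2, 2.0)
X2, Y2, d2 = np.array([[1., 2.], [1., 4.]]), np.array([[0., 1.], [2., 0.]]), np.full(2, 3.0)
Xt1, Xt2, N1, N2 = np.array([4., 8.]), np.array([6., 18.]), 4, 6

X_s, Y_s, d = np.vstack([X1, X2]), np.vstack([Y1, Y2]), np.concatenate([d1, d2])
beta = beta_hat(Y_s, X_s, d)
mu1 = mod_mean_curve(Y1, X1, d1, Xt1, N1, beta)
mu2 = mod_mean_curve(Y2, X2, d2, Xt2, N2, beta)
```

Because the correction term is linear in the domain sums, aggregating the domain estimates with weights ${N}_{d}/N$ recovers the global calibration estimator exactly, which is the additivity property and can be checked numerically on this toy sample.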

Nonetheless, for each domain, these estimators use only the data from the domain in question (curves and explanatory variables), ignoring the rest of the sample. Like the Horvitz-Thompson estimator, they are therefore imprecise for small domains and cannot be computed for unsampled domains.

The methods presented in the following section posit a model, common to all units of the population, that describes the link between the variable of interest and the auxiliary information. This allows all of the sample data to be used jointly in the estimation for each domain, thereby increasing accuracy, and even makes it possible to provide estimates for unsampled domains.

