Sample-based estimation of mean electricity consumption curves for small domains
Section 6. Conclusions and outlook
In this article, we proposed four approaches for estimating mean curves for small domains from survey samples. The first two consist of projecting the curves onto a finite-dimensional space and applying the usual methods for estimating totals of real-valued variables to each basis vector of the projection space, using either unit-level linear mixed models or linear regression. The last two approaches consist of predicting the curve of each unsampled unit with a non-parametric model and aggregating those predictions to obtain the estimated mean curve for each domain. The predictions are built either with regression trees adapted to functional data using the Courbotree approach of Stéphan and Cogordan (2009), or with random forests adapted to functional data, obtained by aggregating randomized Courbotree trees. For each approach, we also proposed a bootstrap-based process for approximating the variance of the mean curve estimators.
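The predict-and-aggregate logic of the last two approaches can be sketched as follows. This is an illustrative skeleton, not the authors' implementation: the function and argument names are ours, and any fitted model (Courbotree, random forest, etc.) can be plugged in through the `predict` callable.

```python
import numpy as np

def domain_mean_curves(y_sample, x_sample, x_nonsample, d_sample, d_nonsample, predict):
    """Predict-and-aggregate estimator of the mean curve per domain.

    y_sample: (n_s, T) observed discretized curves for sampled units
    x_sample, x_nonsample: auxiliary covariates for sampled / unsampled units
    d_sample, d_nonsample: domain labels for sampled / unsampled units
    predict: callable (x_train, y_train, x_new) -> (n_new, T) predicted curves
    """
    # Predict the curve of every unsampled unit from the sampled ones.
    y_hat = predict(x_sample, y_sample, x_nonsample)
    domains = np.unique(np.concatenate([d_sample, d_nonsample]))
    means = {}
    for d in domains:
        # Observed curves for sampled units plus predicted curves for the rest.
        total = (y_sample[d_sample == d].sum(axis=0)
                 + y_hat[d_nonsample == d].sum(axis=0))
        N_d = (d_sample == d).sum() + (d_nonsample == d).sum()
        means[d] = total / N_d
    return means
```

The estimator is thus a census-like mean over the whole domain, with model predictions standing in for the unobserved curves.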
Our tests showed that the linear mixed models gave the best results and, for this particular data set, reduced the error by a factor of approximately seven relative to the Horvitz-Thompson estimators. The regression trees come next, followed by the linear functional regressions.
This work can be extended in various ways. In particular, we feel that the approach based on the aggregation of non-parametric estimates of curves using regression trees or random forests is promising. An interesting avenue for improvement would be the use of more relevant distances than the Euclidean distance in the split criterion used to build the regression trees. We could, for instance, use the Mahalanobis distance, the Manhattan distance, or a dynamic time warping distance.
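To make the role of the distance concrete, the sketch below computes a node heterogeneity (sum of distances of the node's curves to their mean curve, the quantity a Courbotree-style split seeks to reduce) with a pluggable distance. The function names are ours; the dynamic time warping shown is the classic quadratic-time recursion.

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(((a - b) ** 2).sum())

def manhattan(a, b):
    return np.abs(a - b).sum()

def dtw(a, b):
    """Classic O(T^2) dynamic time warping distance between two discretized curves."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment reaching (i, j): insertion, deletion, or match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def node_heterogeneity(curves, dist=euclidean):
    """Sum of distances of the node's curves to their mean curve;
    a split is chosen to reduce this quantity in the child nodes."""
    center = curves.mean(axis=0)
    return sum(dist(c, center) for c in curves)
```

Swapping `dist` changes only the split criterion, leaving the rest of the tree-growing algorithm untouched.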
Another possibility would be to apply the Euclidean distance in the split criterion not to the discretized curves themselves, but to a transformation of those curves: a projection onto a wavelet basis, or non-linear summaries such as those produced by variational autoencoders from deep learning (see, for example, LeCun, Bengio and Hinton, 2015).
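As a minimal sketch of the wavelet idea (our own illustration, assuming curves discretized on a power-of-two grid), an orthonormal Haar transform can replace a curve by its coefficients, of which only the largest need be kept as a compressed summary for the split distance:

```python
import numpy as np

def haar_transform(x):
    """Full orthonormal Haar wavelet transform of a curve
    whose length is a power of two."""
    x = np.asarray(x, dtype=float)
    out = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # coarser approximation
        det = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients at this scale
        out.append(det)
        x = avg
    out.append(x)  # final approximation coefficient
    return np.concatenate(out[::-1])

def sparse_summary(x, k):
    """Keep only the k largest-magnitude Haar coefficients (others zeroed):
    a compressed representation on which a split distance can be computed."""
    w = haar_transform(x)
    keep = np.argsort(np.abs(w))[-k:]
    s = np.zeros_like(w)
    s[keep] = w[keep]
    return s
```

Because the Haar transform is orthonormal, the Euclidean distance on the full coefficient vector equals the distance on the raw curves; the gain comes from truncating to a sparse summary.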
We can also question the choice of the depth of the regression trees, the minimum leaf size and the number of trees in the forest. The criteria commonly used in non-parametric statistics to answer this question are based on the principle of cross-validation. However, our objective here is not to obtain the best possible prediction for each population unit, but a prediction that yields the best estimate of the mean curve for each domain, which is not necessarily the same thing. It would therefore be preferable to adapt the cross-validation criteria to reflect this objective.
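One way such an adapted criterion could look (our own sketch, with illustrative names) is a K-fold scheme whose score compares domain mean curves on the held-out units, rather than unit-level prediction errors:

```python
import numpy as np

def domain_cv_score(y, X, d, fit_predict, n_folds=5, seed=0):
    """K-fold cross-validation score targeting the domain mean curve:
    in each fold the held-out units are predicted, and the error is the
    distance between the true and predicted mean curve of every domain
    present in the held-out fold.

    y: (n, T) discretized curves, X: covariates, d: domain labels,
    fit_predict: callable (X_train, y_train, X_test) -> (n_test, T) predictions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    errors = []
    for test in folds:
        train = np.setdiff1d(idx, test)
        y_hat = fit_predict(X[train], y[train], X[test])
        for dom in np.unique(d[test]):
            m = d[test] == dom
            # Error on the domain mean curve, not on individual units.
            errors.append(np.linalg.norm(y[test][m].mean(axis=0)
                                         - y_hat[m].mean(axis=0)))
    return float(np.mean(errors))
```

Tuning depth, leaf size or forest size against this score would align model selection with the small-domain objective rather than with unit-level accuracy.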
Finally, we note that introducing random effects into the linear models improves prediction, which suggests that some characteristics of the domains are not explained by the auxiliary information alone. It could therefore be relevant to adapt the functional regression trees to include random effects. One solution, for example, would be to extend the algorithm of Hajjem, Bellavance and Larocque (2014), which is based on an EM-type algorithm, to functional data.
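The core of such an EM-style alternation can be sketched in a deliberately simplified form: a scalar response with one random intercept per domain (for curves, the intercept would itself be a curve), any learner for the fixed part, and a single shrinkage constant standing in for the estimated variance-component ratio. The names and the shrinkage rule are ours, not those of Hajjem, Bellavance and Larocque (2014).

```python
import numpy as np

def merf_intercepts(X, y, groups, fit_predict, n_iter=10, shrink=1.0):
    """Simplified MERF-style iteration with one random intercept per domain:
    alternately fit the fixed-effects learner on the response minus the
    current intercepts, then re-estimate each domain's intercept from the
    residuals, shrinking it by `shrink` (a stand-in for the variance ratio).

    fit_predict: callable (X_train, y_train, X_new) -> predictions.
    Returns the dict of domain intercepts and the last fixed-part fit.
    """
    b = {g: 0.0 for g in np.unique(groups)}
    for _ in range(n_iter):
        offset = np.array([b[g] for g in groups])
        y_hat = fit_predict(X, y - offset, X)  # fixed part on adjusted response
        resid = y - y_hat
        for g in b:
            m = groups == g
            b[g] = resid[m].sum() / (m.sum() + shrink)  # shrunken intercept
    return b, y_hat
```

Replacing the scalar learner by a functional regression tree, and the scalar intercepts by per-domain mean residual curves, gives the kind of extension the paragraph above envisions.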
Acknowledgements
The authors thank Hervé Cardot for the fruitful discussions and the associate editor and two referees for their remarks and comments, which helped greatly improve this article.
References
Battese, G.E., Harter, R.M. and Fuller, W.A. (1988). An error-components model for prediction of county crop areas using survey and satellite data. Journal of the American Statistical Association, 83(401), 28-36.
Breiman, L. (1998). Arcing classifiers (with a discussion and a response from the author). The Annals of Statistics, 26(3), 801-849.
Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5-32.
Breiman, L., Friedman, J., Stone, C.J. and Olshen, R.A. (1984). Classification and Regression Trees. CRC Press.
Cardot, H., Degras, D. and Josserand, E. (2013). Confidence bands for Horvitz-Thompson estimators using sampled noisy functional data. Bernoulli, 19(5A), 2067-2097.
Cardot, H., Goga, C. and Lardin, P. (2013). Uniform convergence and asymptotic confidence bands for model-assisted estimators of the mean of sampled functional data. Electronic Journal of Statistics, 7, 562-596.
Cardot, H., Chaouch, M., Goga, C. and Labruère, C. (2010). Properties of design-based functional principal components analysis. Journal of Statistical Planning and Inference, 140(1), 75-91.
Cardot, H., Dessertaine, A., Goga, C., Josserand, E. and Lardin, P. (2013). Comparison of different sample designs and construction of confidence bands to estimate the mean of functional data: An illustration on electricity consumption. Survey Methodology, 39, 2, 283-301. Paper available at https://www150.statcan.gc.ca/n1/pub/12-001-x/2013002/article/11888-eng.pdf.
Cristianini, N., and Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines. Cambridge: Cambridge University Press.
Dauxois, J., Pousse, A. and Romain, Y. (1982). Asymptotic theory for the principal component analysis of a vector random function: Some applications to statistical inference. Journal of Multivariate Analysis, 12(1), 136-154.
De’Ath, G. (2002). Multivariate regression trees: A new technique for modeling species-environment relationships. Ecology, 83(4), 1105-1117.
Deville, J.-C. (1974). Méthodes statistiques et numériques de l’analyse harmonique. In Annales de l’INSEE, JSTOR, 3-101.
Deville, J.-C., and Särndal, C.-E. (1992). Calibration estimators in survey sampling. Journal of the American Statistical Association, 87(418), 376-382.
Faraway, J.J. (1997). Regression analysis for a functional response. Technometrics, 39(3), 254-261.
González-Manteiga, W., Lombardía, M.J., Molina, I., Morales, D. and Santamaría, L. (2008). Analytic and bootstrap approximations of prediction errors under a multivariate Fay-Herriot model. Computational Statistics & Data Analysis, 52(12), 5242-5252.
Hajjem, A., Bellavance, F. and Larocque, D. (2014). Mixed-effects random forest for clustered data. Journal of Statistical Computation and Simulation, 84(6), 1313-1328.
Hall, P., Müller, H.-G. and Wang, J.-L. (2006). Properties of principal component methods for functional and longitudinal data analysis. The Annals of Statistics, 1493-1517.
Horvitz, D.G., and Thompson, D.J. (1952). A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47(260), 663-685.
LeCun, Y., Bengio, Y. and Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
Mallat, S. (1999). A Wavelet Tour of Signal Processing. Academic Press.
Molina, I., and Rao, J.N.K. (2010). Small area estimation of poverty indicators. Canadian Journal of Statistics, 38(3), 369-385.
Pfeffermann, D., and Burck, L. (1990). Robust small area estimation combining time series and cross-sectional data. Survey Methodology, 16, 2, 217-237. Paper available at https://www150.statcan.gc.ca/n1/pub/12-001-x/1990002/article/14534-eng.pdf.
Ramsay, J.O., and Silverman, B.W. (2005). Functional Data Analysis, Second Edition. Springer Series in Statistics, New York.
Rao, J.N.K., and Molina, I. (2015). Small Area Estimation. New York: John Wiley & Sons, Inc.
Rao, J.N.K., and Yu, M. (1994). Small-area estimation by combining time-series and cross-sectional data. Canadian Journal of Statistics, 22(4), 511-528.
Segal, M., and Xiao, Y. (2011). Multivariate random forests. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1), 80-87.
Stéphan, V., and Cogordan, F. (2009). CourboTree: Application des arbres de régression multivariés pour la classification de courbes. La Revue MODULAD, June.
Toth, D., and Eltinge, J.L. (2011). Building consistent regression trees from complex sample data. Journal of the American Statistical Association, 106(496), 1626-1636.
Valliant, R., Dorfman, A.H. and Royall, R.M. (2000). Finite Population Sampling and Inference: A Prediction Approach. New York: John Wiley & Sons, Inc.