NEC Laboratories Europe



A Study on Ensemble Learning for Time Series Forecasting and the Need for Meta-Learning

Time series forecasting estimates how a sequence of observations continues into the future. In this blog post, we discuss the performance of ensemble methods for time series forecasting. Our insights come from an experiment comparing 12 ensemble methods for time series forecasting, their hyperparameters and the different strategies they use to select forecasting models. We also describe the meta-learning approach we developed, which automatically selects a subset of these ensemble methods (together with their hyperparameter configurations) to run on any given time series dataset.
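To make the combination idea concrete, here is a minimal sketch of one common ensemble strategy: weighting base forecasters by their inverse validation error. The base forecasters (naive, mean and drift) and the weighting rule are illustrative assumptions, not the 12 methods or selection strategies from our study.

```python
# Minimal sketch of forecast combination: base forecasters are weighted by
# their inverse error on a held-out validation window. Illustrative only.
import numpy as np

def naive_forecast(series, horizon):
    # Repeat the last observed value.
    return np.full(horizon, series[-1])

def mean_forecast(series, horizon):
    # Use the historical mean.
    return np.full(horizon, series.mean())

def drift_forecast(series, horizon):
    # Extrapolate the average first difference.
    slope = (series[-1] - series[0]) / (len(series) - 1)
    return series[-1] + slope * np.arange(1, horizon + 1)

def ensemble_forecast(series, horizon, base_models, val_size=10):
    # Weight each base model by its inverse validation MAE.
    train, val = series[:-val_size], series[-val_size:]
    errors = np.array([np.mean(np.abs(m(train, val_size) - val))
                       for m in base_models])
    weights = 1.0 / (errors + 1e-8)
    weights /= weights.sum()
    forecasts = np.stack([m(series, horizon) for m in base_models])
    return weights @ forecasts

series = np.sin(np.linspace(0, 8, 120)) + 0.1 * np.random.randn(120)
models = [naive_forecast, mean_forecast, drift_forecast]
print(ensemble_forecast(series, horizon=5, base_models=models))
```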


Multi-task Learning for Massive Numbers of Regression Tasks

Many real-world large-scale regression problems can be formulated as multi-task learning (MTL) problems with a massive number of tasks, as in the retail and transportation domains. However, existing MTL methods fail to offer both the generalization performance and the scalability such problems require; scaling MTL methods to a tremendous number of tasks remains a major challenge. Here, we propose a novel algorithm, Convex Clustering Multi-Task regression Learning (CCMTL), which integrates convex clustering on the k-nearest neighbor graph of the prediction models. CCMTL solves the underlying convex problem efficiently with a newly proposed optimization method. It is accurate, efficient to train, and empirically scales linearly in the number of tasks. On both synthetic and real-world datasets, CCMTL outperforms seven state-of-the-art (SoA) multi-task learning methods in both prediction accuracy and computational efficiency. On a real-world retail dataset with 23,812 tasks, CCMTL requires only around 30 seconds to train on a single thread, while the SoA methods need hours or even days.
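For intuition, the sketch below evaluates a CCMTL-style objective: a per-task squared loss plus a convex-clustering penalty over the edges of a k-nearest-neighbor graph on the tasks' weight vectors. The function names and the generic graph construction are illustrative assumptions; the paper's contribution is a dedicated solver for this convex problem, which is not reproduced here.

```python
# Sketch of a CCMTL-style objective (illustrative, not the paper's solver):
# 0.5 * sum_t ||y_t - X_t w_t||^2  +  lam * sum_{(i,j) in E} ||w_i - w_j||_2
import numpy as np

def knn_edges(W, k):
    # Edges of the k-nearest-neighbor graph over rows of W (one row per task).
    dists = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    edges = set()
    for i in range(W.shape[0]):
        for j in np.argsort(dists[i])[:k]:
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)

def ccmtl_objective(W, Xs, ys, edges, lam):
    # Per-task squared loss plus the clustering penalty over graph edges.
    loss = sum(0.5 * np.sum((y - X @ w) ** 2)
               for X, y, w in zip(Xs, ys, W))
    penalty = lam * sum(np.linalg.norm(W[i] - W[j]) for i, j in edges)
    return loss + penalty

# Toy usage: 5 tasks, 3 features each.
Xs = [np.random.randn(20, 3) for _ in range(5)]
ys = [X @ np.ones(3) + 0.1 * np.random.randn(20) for X in Xs]
W = np.random.randn(5, 3)
print(ccmtl_objective(W, Xs, ys, knn_edges(W, k=2), lam=1.0))
```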


MetaBags

Methods for learning heterogeneous regression ensembles have not yet been proposed on a large scale. Hitherto, in the classical ML literature, stacking, cascading and voting have mostly been restricted to classification problems. Regression poses distinct learning challenges that may result in poor performance, even with well-established homogeneous ensemble schemes such as bagging or boosting. In this paper, we introduce MetaBags, a novel stacking framework for regression. MetaBags learns a set of meta-decision trees designed to select one base model (i.e., expert) for each query, focusing on inductive bias reduction. The experts' predictions are then aggregated into a single prediction through a bagging procedure at the meta level. MetaBags is designed to learn a model with a fair bias-variance trade-off, and its improvement over base-model performance correlates with the prediction diversity of the experts on specific subregions of the input space. An exhaustive empirical evaluation of both the generalization error and the scalability of the approach on open, synthetic and real-world application datasets shows that our method outperforms existing state-of-the-art approaches.
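As a rough illustration of the idea, the sketch below trains a few base experts, labels each training point with the index of the expert that predicts it best, fits bagged decision trees on bootstrap samples of this meta-target, and averages the predictions of the experts each meta-tree selects. The choice of experts, the scikit-learn estimators and the use of plain input features at the meta level are assumptions for illustration; MetaBags builds its meta-decision trees over purpose-built meta-features, which this sketch omits.

```python
# Sketch of a MetaBags-style scheme: bagged meta-decision trees each pick one
# base expert per query; the chosen experts' predictions are averaged.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(400)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

# Base experts (illustrative choices) trained on the training split.
experts = [LinearRegression().fit(X_tr, y_tr),
           DecisionTreeRegressor(max_depth=4).fit(X_tr, y_tr),
           KNeighborsRegressor(5).fit(X_tr, y_tr)]

# Meta-target: index of the expert with the smallest error on each point.
errs = np.stack([np.abs(e.predict(X_tr) - y_tr) for e in experts], axis=1)
best = errs.argmin(axis=1)

# Bagging at the meta level: each meta-tree sees a bootstrap sample.
meta_trees = []
for _ in range(25):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    meta_trees.append(
        DecisionTreeClassifier(max_depth=5).fit(X_tr[idx], best[idx]))

# Final prediction: average over the experts selected by each meta-tree.
expert_preds = np.stack([e.predict(X_te) for e in experts], axis=1)  # (n, E)
choices = np.stack([t.predict(X_te) for t in meta_trees], axis=1)    # (n, T)
y_hat = np.take_along_axis(expert_preds, choices.astype(int),
                           axis=1).mean(axis=1)
print("test MAE:", np.mean(np.abs(y_hat - y_te)))
```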
