Training Data Subset Selection for Regression With Controlled Generalization Error
Durga Sivasubramanian, Rishabh Iyer, Ganesh Ramakrishnan and Abir De
Accepted paper at the 38th International Conference on Machine Learning (ICML 2021).
[Abstract] [PDF] [Code] [Video]
Data subset selection from a large number of training instances has been a successful approach toward
efficient and cost-effective machine learning. However, models trained on a smaller subset may show poor
generalization ability. In this paper, our goal is to design an algorithm for selecting a subset of the
training data so that the model can be trained quickly without significantly sacrificing accuracy.
More specifically, we focus on data subset selection for L2 regularized regression problems and provide a
novel problem formulation which seeks to minimize the training loss with respect to both the trainable
parameters and the subset of training data, subject to error bounds on the validation set. We tackle this
problem using several technical innovations. First, we represent this problem with simplified constraints
using the dual of the original training problem and show that the objective of this new representation is a
monotone and α-submodular function, for a wide variety of modeling choices. Such properties lead us to develop
SELCON, an efficient majorization-minimization algorithm for data subset selection that admits an
approximation guarantee even when the training provides an imperfect estimate of the trained model. Finally,
our experiments on several datasets show that SELCON trades off accuracy and efficiency more effectively than
the current state-of-the-art.
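In schematic form, the constrained formulation described above can be written as follows; the notation is illustrative rather than the paper's exact statement (the subset budget k, the validation error tolerance δ, and the squared loss are assumptions made here for concreteness):

\min_{S \subseteq D,\; |S| \le k} \;\; \min_{w} \;\; \sum_{i \in S} \bigl( w^\top x_i - y_i \bigr)^2 + \lambda \lVert w \rVert_2^2 \quad \text{subject to} \quad \frac{1}{|V|} \sum_{j \in V} \bigl( w^\top x_j - y_j \bigr)^2 \le \delta,

where D is the full training set, V the held-out validation set, w the model parameters, and λ the L2 regularization weight. As described in the abstract, SELCON works with the dual of the inner training problem to simplify the constraints, which is what yields the monotone, α-submodular objective.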
GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning
Krishnateja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan and Rishabh Iyer
In Proceedings of The 35th AAAI Conference on Artificial Intelligence (AAAI 2021).
[Abstract] [PDF] [Code] [Video]
Large-scale machine learning and deep models are extremely data-hungry. Unfortunately,
obtaining large amounts of labeled data is expensive, and training state-of-the-art models
(with hyperparameter tuning) requires significant computing resources and time. Moreover, real-world data
is noisy and imbalanced. As a result, several recent papers try to make the training process more efficient
and robust. However, most existing work focuses on either robustness or efficiency, but not both. In this
work, we introduce GLISTER, a GeneraLIzation based data Subset selecTion for Efficient and Robust learning
framework. We formulate GLISTER as a mixed discrete-continuous bi-level optimization problem to select a
subset of the training data, which maximizes the log-likelihood on a held-out validation set. We then analyze
GLISTER for simple classifiers such as Gaussian and multinomial Naive Bayes, the k-nearest neighbor classifier,
and linear regression and show connections to submodularity. Next, we propose an iterative online algorithm
GLISTER-ONLINE, which performs data selection iteratively along with the parameter updates and can be applied
to any loss-based learning algorithm. We then show that for a rich class of loss functions including
cross-entropy, hinge-loss, squared-loss, and logistic-loss, the inner discrete data selection is an
instance of (weakly) submodular optimization, and we analyze conditions for which GLISTER-ONLINE reduces
the validation loss and converges. Finally, we propose GLISTER-ACTIVE, an extension to batch active learning,
and we empirically demonstrate the performance of GLISTER on a wide range of tasks, including (a) data
selection to reduce training time, (b) robust learning under label noise and imbalance settings, and
(c) batch active learning with several deep and shallow models. We show that our framework improves upon the
state of the art in both efficiency and accuracy (in cases (a) and (c)) and is more efficient than other
state-of-the-art robust learning algorithms in case (b).
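As a rough illustration of the GLISTER-ONLINE idea, greedy data selection interleaved with parameter updates, where each candidate point is scored by the validation log-likelihood obtained after a one-step gradient update, here is a minimal NumPy sketch for logistic regression. All names here (val_log_likelihood, glister_style_selection, the step size lr) are hypothetical, and the released implementation uses Taylor-series approximations and batching for efficiency, so this is only a sketch of the selection criterion, not the actual algorithm.

import numpy as np

def val_log_likelihood(theta, X_val, y_val):
    # Log-likelihood of a logistic model on the held-out validation set.
    p = 1.0 / (1.0 + np.exp(-(X_val @ theta)))
    eps = 1e-12
    return np.sum(y_val * np.log(p + eps) + (1 - y_val) * np.log(1 - p + eps))

def train_grad(theta, X, y):
    # Gradient of the negative log-likelihood for a batch of training points.
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return X.T @ (p - y)

def glister_style_selection(theta, X_trn, y_trn, X_val, y_val, budget, lr=0.1):
    # Greedily pick training points whose one-step gradient update on the
    # current parameters most increases validation log-likelihood; this is a
    # crude stand-in for the inner step of the bi-level optimization.
    selected = []
    remaining = list(range(len(y_trn)))
    for _ in range(budget):
        best_i, best_gain = None, -np.inf
        for i in remaining:
            g = train_grad(theta, X_trn[i:i + 1], y_trn[i:i + 1])
            gain = val_log_likelihood(theta - lr * g, X_val, y_val)
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        remaining.remove(best_i)
        # Interleave a parameter update with the selection, as GLISTER-ONLINE does.
        theta = theta - lr * train_grad(theta, X_trn[best_i:best_i + 1], y_trn[best_i:best_i + 1])
    return selected, theta

The greedy loop mirrors the structure the paper analyzes: each point's marginal gain is evaluated against the current model, which is what makes the inner discrete selection an instance of (weakly) submodular optimization.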