We study the mutual information between (certain summaries of) the output of a learning algorithm and its n training data, conditional on a supersample of n + 1 i.i.d. data from which the training data is chosen at ra...
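A hedged sketch of the conditional mutual information (CMI) quantity this abstract describes, under a leave-one-out reading (the symbols W, U, \tilde{Z}, and the summary map f below are our own labels, not necessarily the paper's):

\[
\mathrm{CMI}(A) \;=\; I\bigl(f(W);\, U \,\bigm|\, \tilde{Z}\bigr), \qquad \tilde{Z} = (Z_1,\dots,Z_{n+1}) \ \text{i.i.d.}, \qquad W = A(\tilde{Z}_{-U}),
\]

where U is a uniformly random index selecting which n of the n + 1 supersample points form the training set (equivalently, which single point is left out), W is the output of the learning algorithm on the selected training data, and f extracts the summary of interest, e.g., the losses the output incurs on the supersample.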
Large-batch training has been essential in leveraging large-scale datasets and models in deep learning. While it is computationally beneficial to use large batch sizes, it often requires a specially designed learning ...
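As a hedged illustration of why large batch sizes typically call for a specially designed learning rate, here is a minimal Python sketch of the common linear-scaling-with-warmup heuristic; all constants and the schedule shape are illustrative assumptions, not taken from the abstract.

# Minimal sketch: linear learning-rate scaling with warmup for large-batch SGD.
# Base values below are illustrative assumptions.
def lr_at_step(step, base_lr=0.1, base_batch=256, batch=8192,
               warmup_steps=500, total_steps=10000):
    """Scale the LR linearly with batch size, ramp up during warmup,
    then decay linearly to zero."""
    peak_lr = base_lr * batch / base_batch      # linear scaling rule
    if step < warmup_steps:
        return peak_lr * step / warmup_steps    # warmup avoids early divergence
    frac = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * (1.0 - frac)               # linear decay to zero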
Inspired by recent work of Islamov et al. (2021), we propose a family of Federated Newton Learn (FedNL) methods, which we believe is a marked step in the direction of making second-order methods applicable to FL. In co...
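A minimal sketch of the kind of federated Newton step this abstract points to, as we understand the general FedNL template: clients maintain learned local Hessian estimates and send compressed corrections, and the server takes a regularized Newton step. The function names, the top-1 compressor, and the regularization are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def top1_compress(M):
    """Illustrative compressor: keep only the largest-magnitude entry of M."""
    out = np.zeros_like(M)
    i, j = np.unravel_index(np.argmax(np.abs(M)), M.shape)
    out[i, j] = M[i, j]
    return out

def fednl_round(x, H_locals, grad_fns, hess_fns, alpha=1.0, mu=1e-3):
    """One hedged FedNL-style round over clients given by per-client
    gradient/Hessian callables."""
    # Each client refines its learned Hessian estimate with a compressed correction.
    for i, hess in enumerate(hess_fns):
        H_locals[i] = H_locals[i] + alpha * top1_compress(hess(x) - H_locals[i])
    # Server averages the estimates and regularizes to keep the system solvable.
    H = sum(H_locals) / len(H_locals) + mu * np.eye(x.shape[0])
    g = sum(gf(x) for gf in grad_fns) / len(grad_fns)
    return x - np.linalg.solve(H, g), H_locals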
We investigate top-m arm identification, a basic problem in bandit theory, in a multi-agent learning model in which agents collaborate to learn an objective function. We are interested in designing collaborative learn...
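To fix ideas about the top-m arm identification objective, here is a hedged single-agent racing baseline in Python; the collaborative multi-agent algorithms the abstract is about are more involved, and the Hoeffding-style confidence radius below is a standard assumption, not the paper's method.

import math

def top_m_racing(arms, m, delta=0.05, batch=200):
    """Hedged single-agent baseline for top-m arm identification.
    `arms` is a list of zero-argument callables returning noisy rewards in [0, 1]."""
    sums, pulls = [0.0] * len(arms), [0] * len(arms)
    active, accepted, rnd = set(range(len(arms))), set(), 0
    while len(accepted) < m:
        rnd += 1
        if len(active) == m - len(accepted):
            return accepted | active            # everything left must be accepted
        for a in active:                        # sample all surviving arms
            sums[a] += sum(arms[a]() for _ in range(batch))
            pulls[a] += batch
        mean = {a: sums[a] / pulls[a] for a in active}
        rad = {a: math.sqrt(math.log(4 * len(arms) * rnd ** 2 / delta)
                            / (2 * pulls[a])) for a in active}
        order = sorted(active, key=mean.get, reverse=True)
        k = m - len(accepted)
        kth, nxt = order[k - 1], order[k]       # current k-th and (k+1)-th best
        for a in list(active):
            if mean[a] - rad[a] > mean[nxt] + rad[nxt]:
                active.remove(a); accepted.add(a)   # certainly among the top m
            elif mean[a] + rad[a] < mean[kth] - rad[kth]:
                active.remove(a)                    # certainly not in the top m
    return accepted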
Developing simple, sample-efficient learning algorithms for robust classification is a pressing issue in today’s tech-dominated world, and current theoretical techniques requiring exponential sample complexity and co...
Supervised deep learning-based approaches have been applied to task-oriented dialog and have proven to be effective for limited domain and language applications when a sufficient number of training examples are availa...
ISBN (Print): 9781479901784
In this paper, we consider a two-dimensional (2-D) formation problem for multi-agent systems subject to switching topologies that dynamically change along both a finite time axis and an infinite iteration axis. We present a distributed iterative learning control (ILC) algorithm based on nearest neighbor rules. By employing the 2-D approach, we establish both asymptotic and exponentially fast convergence of the formation ILC, guaranteed by conditions stated in terms of the spectral radius and matrix norms, respectively.
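A minimal sketch of a nearest-neighbor formation ILC update of the kind described; the agent model behind `simulate`, the scalar gain, the adjacency weighting, and the offset-based error definition are illustrative assumptions, not the paper's exact law.

import numpy as np

def ilc_formation_iteration(u, desired_offsets, A, simulate, gamma=0.5):
    """One ILC iteration of a hedged nearest-neighbor update law.
    u: (N, T) inputs for N agents over T time steps; A: (N, N) adjacency matrix
    (may switch between iterations); simulate(u) -> (N, T) output trajectories;
    desired_offsets[i, j]: desired position of agent i relative to agent j."""
    y = np.asarray(simulate(u))
    xi = np.zeros_like(u)
    N = u.shape[0]
    for i in range(N):
        for j in range(N):
            if A[i, j] > 0:  # nearest-neighbor information only
                xi[i] += A[i, j] * ((y[j] + desired_offsets[i, j]) - y[i])
    return u + gamma * xi    # correction carried across the iteration axis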
A shared goal of several machine learning communities like continual learning, meta-learning and transfer learning is to design algorithms and models that efficiently and robustly adapt to unseen tasks. An even more ambitious goal is to build models that never stop adapting, and that become increasingly efficient through time by suitably transferring the accrued knowledge. Beyond the study of the actual learning algorithm and model architecture, there are several hurdles in our quest to build such models, such as the choice of learning protocol, metric of success and data needed to validate research hypotheses. In this work, we introduce the Never-Ending VIsual-classification Stream (Nevis’22), a benchmark consisting of a stream of over 100 visual classification tasks, sorted chronologically and extracted from papers sampled uniformly from computer vision proceedings spanning the last three decades. The resulting stream reflects what the research community thought was meaningful at any point in time, and it serves as an ideal test bed to assess how well models can adapt to new tasks, and do so better and more efficiently as time goes by. Despite being limited to classification, the resulting stream has a rich diversity of tasks, from OCR to texture analysis, scene recognition, and so forth. The diversity is also reflected in the wide range of dataset sizes, spanning over four orders of magnitude. Overall, Nevis’22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks, yet with a low entry barrier as it is limited to a single modality and well understood supervised learning problems. Moreover, we provide a reference implementation including strong baselines and an evaluation protocol to compare methods in terms of their trade-off between accuracy and compute. We hope that Nevis’22 can be useful to researchers working on continual learning, meta-learning, AutoML and more generally sequential learning.
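The reference implementation is the authors'; below is only a hedged Python sketch of the accuracy-versus-compute evaluation protocol the abstract describes, with `method`, `task.train_data`, and the other interfaces as placeholder assumptions rather than the actual Nevis'22 API.

def evaluate_stream(method, task_stream):
    """Hedged sketch of a sequential evaluation protocol: train on each task in
    chronological order, tracking test accuracy and cumulative compute so the
    accuracy/compute trade-off can be compared across methods."""
    total_flops = 0.0
    records = []
    for task in task_stream:                    # tasks arrive chronologically
        flops = method.train(task.train_data)   # may transfer accrued knowledge
        total_flops += flops
        acc = method.evaluate(task.test_data)
        records.append({"task": task.name, "accuracy": acc,
                        "cumulative_flops": total_flops})
    return records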
Continual (or "incremental") learning approaches are employed when additional knowledge or tasks need to be learned from subsequent batches or from streaming data. However, these approaches are typically adve...
Federated learning (FL) has emerged as an important machine learning paradigm where a global model is trained based on the private data from distributed clients. However, most existing FL algorithms cannot guarante...
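As a hedged reminder of the FL paradigm this abstract starts from, here is a minimal generic federated-averaging round in Python; the data-size weighting and the client/`local_train` interfaces are generic assumptions, not the paper's algorithm.

import numpy as np

def fedavg_round(global_weights, clients, local_train):
    """One generic federated averaging round: each client trains locally on its
    private data, and the server aggregates by data-size-weighted averaging.
    `clients` is a list of (dataset, num_examples); `local_train` is assumed."""
    updates, sizes = [], []
    for data, n in clients:
        w = local_train(np.copy(global_weights), data)  # private data stays local
        updates.append(w)
        sizes.append(n)
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))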