We consider the Copeland voting rule, a classical and simple voting rule that takes a set of voters’ rankings over a set of candidates and outputs the candidate that wins the most pair-wise match-ups against other ca...
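The Copeland rule described above can be sketched in a few lines (a minimal illustration, not taken from the paper; ties between equal Copeland scores are broken arbitrarily here, and pairwise ties score zero, though some variants award half a point):

```python
from itertools import combinations

def copeland_winner(rankings):
    """Return the candidate winning the most pairwise majority match-ups.

    rankings: list of rankings, one per voter; each ranking is a list of
    candidates, most-preferred first.
    """
    candidates = rankings[0]
    score = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        # Count voters who rank a above b.
        a_wins = sum(r.index(a) < r.index(b) for r in rankings)
        if a_wins * 2 > len(rankings):
            score[a] += 1        # a wins the head-to-head
        elif a_wins * 2 < len(rankings):
            score[b] += 1        # b wins the head-to-head
    return max(score, key=score.get)

votes = [["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"]]
print(copeland_winner(votes))  # A beats both B and C head-to-head
```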
The class of very simple grammars is known to be polynomial-time identifiable in the limit from positive data. This paper gives a more general discussion of the efficiency of identification of very simple grammars from positive data, including both positive and negative results. In particular, we present an alternative efficient inconsistent learning algorithm for very simple grammars. (C) 2009 Elsevier B.V. All rights reserved.
It has recently been shown that calibration with an error less than Delta > 0 is almost surely guaranteed by a randomized forecasting algorithm in which forecasts are obtained by randomly rounding deterministic forecasts up to Delta. We show that this error cannot be improved for a vast majority of sequences: we prove that, using a probabilistic algorithm, we can effectively generate, with probability close to one, a sequence "resistant" to any randomized rounding forecasting with an error much smaller than Delta. We also reformulate this result by means of a probabilistic game. (C) 2009 Elsevier B.V. All rights reserved.
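The random-rounding scheme the abstract refers to can be illustrated as follows (a hypothetical sketch, not the paper's construction): a deterministic forecast p in [0, 1] is replaced by one of the two adjacent multiples of Delta, with probabilities chosen so that the rounded forecast is unbiased.

```python
import random

def randomized_round(p, delta, rng=random):
    """Round forecast p to an adjacent multiple of delta at random,
    so that the expected output equals p (unbiased rounding)."""
    lo = (p // delta) * delta
    hi = lo + delta
    # Pick hi with probability proportional to p's distance from lo.
    return hi if rng.random() < (p - lo) / delta else lo
```

For example, a forecast of 0.37 with delta = 0.1 becomes 0.4 with probability 0.7 and 0.3 with probability 0.3.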
In transfer learning, the aim is to solve new learning tasks using fewer examples by using information gained from solving related tasks. Existing transfer learning methods have been used successfully in practice, and PAC analyses of these methods have been developed. But the key notion of relatedness between tasks has not yet been defined clearly, which makes it difficult to understand, let alone answer, questions that naturally arise in the context of transfer, such as how much information to transfer, whether to transfer information at all, and how to transfer information across tasks. In this paper, we look at transfer learning from the perspective of algorithmic information theory / Kolmogorov complexity theory, and formally solve these problems in the same sense that Solomonoff induction solves the problem of inductive inference. We define universal measures of relatedness between tasks, and use these measures to develop universally optimal Bayesian transfer learning methods. We also derive results in AIT that are interesting in themselves. To address a concern that arises from the theory, we also briefly look at the notion of Kolmogorov complexity of probability measures. Finally, we present a simple practical approximation to the theory for transfer learning and show that even this is quite effective, allowing us to transfer across tasks that are superficially unrelated. The latter is an experimental feat which has not been achieved before, and thus shows the theory is also useful in constructing practical transfer algorithms. (C) 2009 Elsevier B.V. All rights reserved.
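The abstract does not specify its "simple practical approximation". A standard computable stand-in for Kolmogorov-complexity-based relatedness between tasks is the normalized compression distance, sketched below with zlib (an illustration of the general idea only, not the authors' method):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable proxy for the
    (uncomputable) Kolmogorov-complexity relatedness measure.
    Values near 0 mean closely related; near 1 means unrelated."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b = b"the quick brown fox jumps over the lazy cat" * 20
c = bytes(range(256)) * 4   # incompressible, unrelated data
print(round(ncd(a, b), 2), round(ncd(a, c), 2))
```

Related byte strings compress well together, so ncd(a, b) comes out much smaller than ncd(a, c).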
ISBN (digital): 9783319244860
ISBN (print): 9783319244853
This book constitutes the proceedings of the 26th International Conference on Algorithmic Learning Theory, ALT 2015, held in Banff, AB, Canada, in October 2015, and co-located with the 18th International Conference on Discovery Science, DS 2015. The 23 full papers presented in this volume were carefully reviewed and selected from 44 submissions. In addition, the book contains 2 full papers summarizing the invited talks and 2 abstracts of invited talks. The papers are organized in topical sections named: inductive inference; learning from queries, teaching complexity; computational learning theory and algorithms; statistical learning theory and sample complexity; online learning, stochastic optimization; and Kolmogorov complexity, algorithmic information theory.
Iterative learning (It-learning) is a Gold-style learning model in which each of a learner's output conjectures may depend only upon the learner's current conjecture and the current input element. Two extensions of the It-learning model are considered, each of which involves parallelism. The first is to run, in parallel, distinct instantiations of a single learner on each input element. The second is to run, in parallel, n individual learners incorporating the first extension, and to allow the n learners to communicate their results. In most contexts, parallelism is only a means of improving efficiency. However, as shown herein, learners incorporating the first extension are more powerful than It-learners, and collective learners resulting from the second extension increase in learning power as n increases. Attention is paid to how one would actually implement a learner incorporating each extension. Parallelism is the underlying mechanism employed. (C) 2009 Elsevier B.V. All rights reserved.
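The core It-learning restriction, where each conjecture depends only on the current conjecture and the current datum, can be shown with a toy learner (a hypothetical example, not from the paper) that learns the maximum element of a positive-data stream:

```python
def it_update(conj, x):
    """One It-learning step: the new conjecture may use only the
    current conjecture and the current datum (no memory of past data)."""
    return max(conj, x)

def it_learn(update, init, stream):
    """Run an iterative learner over a finite prefix of the data stream."""
    conj = init
    for x in stream:
        conj = update(conj, x)  # no access to earlier elements
    return conj

# Learning the initial segment {0, ..., n} from positive data: the
# conjecture (its maximum) stabilizes once n has appeared.
print(it_learn(it_update, 0, [2, 0, 5, 1, 5, 3]))  # 5
```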
ISBN (digital): 9783319242828
ISBN (print): 9783319242811; 9783319242828
This book constitutes the proceedings of the 17th International Conference on Discovery Science, DS 2015, held in Banff, AB, Canada, in October 2015. The 16 long and 12 short papers presented together with 4 invited talks in this volume were carefully reviewed and selected from 44 submissions. The combination of recent advances in the development and analysis of methods for discovering scientific knowledge, coming from machine learning, data mining, and intelligent data analysis, as well as their application in various scientific domains, on the one hand, with the algorithmic advances in machine learning theory, on the other hand, makes every instance of this joint event unique and attractive.
ISBN (print): 9783540752240
We study the parameterized complexity of learning k-juntas and some variations of juntas. We show the hardness of learning k-juntas and subclasses of k-juntas in the PAC model by reductions from a W[2]-complete problem. On the other hand, as a consequence of a more general result, we show that k-juntas are exactly learnable with improper equivalence queries and access to a W[P] oracle.
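A k-junta is a Boolean function on n variables that depends on at most k of them. A naive exact learner, sketched below, tries every size-k variable subset and keeps one consistent with the labelled sample; this takes roughly n^k time, and the W[2]-hardness in the abstract suggests the dependence on k in the exponent cannot be removed (illustration only; `learn_k_junta` is a hypothetical helper, not from the paper):

```python
from itertools import combinations, product

def learn_k_junta(examples, n, k):
    """Brute force: find a size-k variable subset and a truth table over
    those variables consistent with all labelled examples."""
    for subset in combinations(range(n), k):
        table = {}
        ok = True
        for x, y in examples:
            key = tuple(x[i] for i in subset)
            if table.setdefault(key, y) != y:
                ok = False   # same projection, different label
                break
        if ok:
            return subset, table
    return None

# f(x) = x0 XOR x2 on n = 4 variables: a 2-junta.
n, k = 4, 2
examples = [(x, x[0] ^ x[2]) for x in product((0, 1), repeat=n)]
subset, table = learn_k_junta(examples, n, k)
print(subset)  # (0, 2)
```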
ISBN (print): 9783540752240
Assume we are given a sample of points from some underlying distribution which contains several distinct clusters. Our goal is to construct a neighborhood graph on the sample points such that clusters are "identified": that is, the subgraph induced by points from the same cluster is connected, while subgraphs corresponding to different clusters are not connected to each other. We derive bounds on the probability that cluster identification is successful, and use them to predict "optimal" values of k for the mutual and symmetric k-nearest-neighbor graphs. We point out different properties of the mutual and symmetric nearest-neighbor graphs related to the cluster identification problem.
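The mutual k-NN graph mentioned above joins points i and j only when each is among the other's k nearest neighbors, which makes spurious edges between well-separated clusters unlikely. A minimal NumPy sketch (a hypothetical illustration, not the authors' code):

```python
import numpy as np

def mutual_knn_edges(X, k):
    """Edges of the mutual k-NN graph: i ~ j iff each point is among
    the other's k nearest neighbours (Euclidean distance)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest neighbours of each point
    neigh = [set(row) for row in nn]
    return {(i, j) for i in range(len(X)) for j in neigh[i]
            if i < j and i in neigh[j]}

# Two well-separated clusters: no mutual k-NN edge crosses the gap.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
edges = mutual_knn_edges(X, k=3)
assert all((i < 20) == (j < 20) for i, j in edges)  # clusters stay separate
```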