We consider exact attribute-efficient learning of functions from Post closed classes using membership queries and obtain bounds on learning complexity.
We investigate the computational efficiency of multitask learning of Boolean functions over the d-dimensional hypercube that are related through a feature representation of size k << d shared across all tasks. We present a polynomial-time multitask learning algorithm for the concept class of halfspaces with margin gamma, based on a simultaneous boosting technique, that requires only poly(k/gamma) samples per task and poly(k log(d)/gamma) samples in total. In addition, we prove a computational separation: assuming there exists a concept class that cannot be learned in the attribute-efficient model, we construct another concept class that can be learned in the attribute-efficient model but cannot be multitask learned efficiently; multitask learning this concept class either requires super-polynomial time or a much larger total number of samples.
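To make the shared-representation setting concrete, the following Python sketch simulates it with hypothetical names and dimensions: each task's label depends on the d-dimensional hypercube input only through a common k-dimensional linear feature map, and each task is a halfspace in that shared space. This is an illustration of the setting only, not the paper's simultaneous-boosting algorithm; the margin condition is not enforced and the feature map is handed to the learner.

```python
import numpy as np

# Illustration of the multitask setting only (hypothetical names and
# dimensions), not the paper's algorithm: every task's label depends on
# the d-dimensional input only through a shared k-dimensional linear
# map, and each task is fit with a plain Perceptron on those features.

rng = np.random.default_rng(0)
d, k, tasks, m = 500, 5, 8, 200                # ambient dim, shared dim, #tasks, samples per task

P = rng.standard_normal((k, d)) / np.sqrt(d)   # shared feature representation
W = rng.standard_normal((tasks, k))            # one halfspace per task in the shared space

def perceptron(Z, y, epochs=20):
    """Plain Perceptron run in the k-dimensional shared feature space."""
    w = np.zeros(Z.shape[1])
    for _ in range(epochs):
        for z, label in zip(Z, y):
            if label * (z @ w) <= 0:
                w += label * z
    return w

for t in range(tasks):
    X = rng.choice([-1.0, 1.0], size=(m, d))   # points on the hypercube
    y = np.sign(X @ P.T @ W[t])                # label depends on x only via P x
    w_hat = perceptron(X @ P.T, y)
    acc = np.mean(np.sign((X @ P.T) @ w_hat) == y)
    print(f"task {t}: training accuracy {acc:.2f}")
```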
We study the problem of learning parity functions that depend on at most k variables (k-parities) attribute-efficiently in the mistake-bound model. We design a simple, deterministic, polynomial-time algorithm for learning k-parities with mistake bound O(n^{1-1/k}). This is the first polynomial-time algorithm to learn ω(1)-parities in the mistake-bound model with mistake bound o(n). Using the standard conversion techniques from the mistake-bound model to the PAC model, our algorithm can also be used for learning k-parities in the PAC model. In particular, this implies a slight improvement over the results of Klivans and Servedio (2004) [1] for learning k-parities in the PAC model. We also show that the Õ(n^{k/2})-time algorithm from Klivans and Servedio (2004) [1] that PAC-learns k-parities with sample complexity O(k log n) can be extended to the mistake-bound model. (c) 2010 Elsevier B.V. All rights reserved.
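For intuition about the model, here is a minimal halving-style sketch in Python of learning a k-parity in the mistake-bound setting. It enumerates all C(n, k) candidate parities and predicts by majority vote, so each mistake removes at least half of the surviving candidates, giving a mistake bound of about k log n at the cost of roughly n^k time. This is an illustration only, not the polynomial-time O(n^{1-1/k})-mistake algorithm described above; the variable names and the simulated example stream are assumptions.

```python
import random
from itertools import combinations

def parity(subset, x):
    """Value of the parity over `subset` on a 0/1 vector x."""
    return sum(x[i] for i in subset) % 2

def halving_k_parity(stream, n, k):
    """Halving-style learner for k-parities in the mistake-bound model.

    Enumerates all C(n, k) candidates and predicts by majority vote;
    every mistake removes at least half of the surviving candidates, so
    the mistake bound is about log2(C(n, k)) <= k*log2(n).  The running
    time is exponential in k, purely for illustration.
    """
    candidates = [frozenset(c) for c in combinations(range(n), k)]
    mistakes = 0
    for x, y in stream:                       # x: 0/1 tuple, y: 0/1 label
        votes = sum(parity(s, x) for s in candidates)
        prediction = 1 if 2 * votes > len(candidates) else 0
        if prediction != y:
            mistakes += 1
        candidates = [s for s in candidates if parity(s, x) == y]
    return candidates, mistakes

# Usage with a simulated target parity on variables {1, 4} of n = 8 bits.
target = {1, 4}
stream = []
for _ in range(60):
    x = tuple(random.randint(0, 1) for _ in range(8))
    stream.append((x, sum(x[i] for i in target) % 2))
print(halving_k_parity(stream, n=8, k=2))
```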
Conditional preference networks (CP-nets) have recently emerged as a popular language capable of representing ordinal preference relations in a compact and structured manner. In this paper, we investigate the problem of learning CP-nets in the well-known model of exact identification with equivalence and membership queries. The goal is to identify a target preference ordering with a binary-valued CP-net by interacting with the user through a small number of queries. Each example supplied by the user or the learner is a preference statement on a pair of outcomes. In this model, we show that acyclic CP-nets are not learnable with equivalence queries alone, even if the examples are restricted to swaps for which dominance testing takes linear time. By contrast, acyclic CP-nets are attribute-efficiently learnable when both equivalence queries and membership queries are available: we provide a learning algorithm whose query complexity is linear in the description size of the target concept, but only logarithmic in the total number of attributes. Interestingly, similar properties are derived for tree-structured CP-nets in the presence of arbitrary examples. Our learning algorithms are shown to be quasi-optimal by deriving lower bounds on the VC-dimension of CP-nets. In a nutshell, our results reveal that active queries are required for efficiently learning CP-nets in large multi-attribute domains. (C) 2010 Elsevier B.V. All rights reserved.
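To illustrate why dominance testing on swap examples takes linear time, as mentioned above, here is a small Python sketch using an assumed dict-based CP-net representation (not taken from the paper): for two outcomes that differ in exactly one variable, the preferred outcome is decided by that variable's conditional preference table under the parents' shared assignment.

```python
# Minimal CP-net sketch over binary variables.  Each variable maps to
# (list of parents, conditional preference table), where the CPT maps a
# tuple of parent values to the preferred value of the variable.

def swap_preference(cpnet, o1, o2):
    """For a swap pair (outcomes differing in exactly one variable),
    return the preferred outcome, or None if the pair is not a swap.
    Runs in time linear in the number of variables."""
    diff = [v for v in cpnet if o1[v] != o2[v]]
    if len(diff) != 1:
        return None
    v = diff[0]
    parents, cpt = cpnet[v]
    context = tuple(o1[p] for p in parents)    # parents agree on o1 and o2
    preferred_value = cpt[context]
    return o1 if o1[v] == preferred_value else o2

# Example: A = 1 is preferred unconditionally; the preferred value of B
# depends on A.
cpnet = {
    "A": ([], {(): 1}),
    "B": (["A"], {(1,): 1, (0,): 0}),
}
o1 = {"A": 1, "B": 0}
o2 = {"A": 1, "B": 1}
print(swap_preference(cpnet, o1, o2))   # -> {'A': 1, 'B': 1}
```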
Let F be a class of functions obtained by replacing some inputs of a Boolean function of a fixed type with constants. The problem considered in this paper, called attribute-efficient learning, is to identify "efficiently" a Boolean function g in F by asking for the value of g at chosen inputs, where "efficiency" is measured in terms of the number of essential variables. We study the query complexity of attribute-efficient learning for three function classes that are, respectively, obtained from disjunction, parity, and threshold functions. In many cases, we obtain almost optimal upper and lower bounds on the number of queries. (C) 2000 Elsevier Science B.V. All rights reserved.
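As a concrete instance of one of the three classes, here is a hedged Python sketch of identifying a monotone disjunction (an OR with some inputs fixed to constants) with membership queries: each essential variable is located by binary search over the remaining variables, using roughly k*log2(n) queries for k essential variables. The oracle interface and bookkeeping are assumptions; the paper's exact bounds are not reproduced here.

```python
def indicator(n, ones):
    """0/1 vector of length n with 1s exactly on the positions in `ones`."""
    x = [0] * n
    for i in ones:
        x[i] = 1
    return x

def learn_disjunction(query, n):
    """Return the set of essential variables of a hidden monotone
    disjunction, using about k*log2(n) membership queries."""
    if query(indicator(n, [])) == 1:
        return "constant 1"                   # a constant 1 was substituted
    relevant, remaining = set(), list(range(n))
    while remaining and query(indicator(n, remaining)) == 1:
        # Binary search inside `remaining` for one essential variable.
        pool = remaining
        while len(pool) > 1:
            half = pool[: len(pool) // 2]
            pool = half if query(indicator(n, half)) == 1 else pool[len(half):]
        v = pool[0]
        relevant.add(v)
        remaining.remove(v)
    return relevant

# Usage with a simulated oracle: hidden disjunction over variables {3, 17}.
hidden = {3, 17}
oracle = lambda x: int(any(x[i] for i in hidden))
print(learn_disjunction(oracle, n=32))   # -> {3, 17}
```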
A method of combining learning algorithms is described that preserves attribute-efficiency. It yields learning algorithms that require a number of examples that is polynomial in the number of relevant variables and logarithmic in the number of irrelevant ones. The algorithms are simple to implement and realizable on networks with a number of nodes linear in the total number of variables. They include generalizations of Littlestone's Winnow algorithm, and are, therefore, good candidates for experimentation on domains having very large numbers of attributes but where nonlinear hypotheses are sought.
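Since the algorithms described include generalizations of Littlestone's Winnow, here is a minimal Python sketch of the basic Winnow update for a monotone disjunction. The textbook parameter choices (threshold n, promotion/demotion factor 2) are an assumption, not this paper's construction; the point is that the mistake bound is polynomial in the number of relevant variables and only logarithmic in the number of irrelevant ones.

```python
# Minimal sketch of Littlestone's Winnow for a monotone disjunction
# over n Boolean attributes.  Textbook parameters: threshold n,
# multiplicative factor 2.

def winnow(stream, n):
    w = [1.0] * n                 # one weight per attribute
    mistakes = 0
    for x, y in stream:           # x: 0/1 vector, y: 0/1 label
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= n else 0
        if pred == y:
            continue
        mistakes += 1
        if y == 1:                # false negative: promote active attributes
            w = [wi * 2 if xi else wi for wi, xi in zip(w, x)]
        else:                     # false positive: demote active attributes
            w = [wi / 2 if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes

# For a target disjunction on k relevant attributes, the mistake bound
# is O(k log n): polynomial in k, logarithmic in the number of
# irrelevant attributes.
```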