The generalization performance of stochastic optimization occupies a central place in machine learning. In this paper, we investigate excess risk performance and work towards improved learning rates for two popular approache...
Machine learning methods have shown promise in learning chaotic dynamical systems, enabling model-free short-term prediction and attractor reconstruction. However, when applied to large-scale, spatiotemporally chaotic...
Stimuli are represented in the brain by the collective population responses of sensory neurons, and an object presented under varying conditions gives rise to a collection of neural population responses called an 'object manifold'. Changes in the object representation along a hierarchical sensory system are associated with changes in the geometry of those manifolds, and recent theoretical progress connects this geometry with 'classification capacity', a quantitative measure of the ability to support object classification. Deep neural networks trained on object classification tasks are a natural testbed for the applicability of this relation. We show how classification capacity improves along the hierarchies of deep neural networks with different architectures. We demonstrate that changes in the geometry of the associated object manifolds underlie this improved capacity, and shed light on the functional roles different levels in the hierarchy play to achieve it, through orchestrated reduction of manifolds' radius, dimensionality and inter-manifold correlations.
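The geometric quantities named above (manifold radius, dimensionality, and inter-manifold correlation) can be probed with simple empirical proxies. The sketch below is illustrative only: it assumes you have already collected, for each object, a NumPy array of one layer's activations over many presentations of that object, and it uses the participation ratio as a rough dimensionality estimate and centroid cosine similarity as a correlation proxy, rather than the mean-field capacity measure used in the paper.

```python
import numpy as np

def manifold_geometry(activations_by_object):
    """Rough per-layer proxies for object-manifold geometry.

    activations_by_object: list of arrays, one per object, each of shape
        (n_samples, n_features) -- activations of one layer for many
        presentations (e.g. augmented views) of the same object.
    Returns mean relative radius, mean participation-ratio dimensionality,
    and mean absolute centroid correlation across object pairs.
    """
    centroids, radii, dims = [], [], []
    for acts in activations_by_object:
        center = acts.mean(axis=0)
        deltas = acts - center                                # points relative to the manifold center
        radii.append(np.linalg.norm(deltas, axis=1).mean()
                     / (np.linalg.norm(center) + 1e-12))      # radius relative to centroid norm
        eig = np.clip(np.linalg.eigvalsh(np.cov(deltas, rowvar=False)), 0.0, None)
        dims.append(eig.sum() ** 2 / (np.sum(eig ** 2) + 1e-12))  # participation ratio
        centroids.append(center)
    C = np.stack(centroids)
    C /= np.linalg.norm(C, axis=1, keepdims=True) + 1e-12
    corr = np.abs(C @ C.T)                                    # pairwise centroid cosine similarities
    off_diag = corr[~np.eye(len(C), dtype=bool)]
    return float(np.mean(radii)), float(np.mean(dims)), float(off_diag.mean())
```

Computing these three numbers on the activations of successive layers gives a crude picture of whether radius, dimensionality, and inter-manifold correlations shrink along the hierarchy, in the spirit of the analysis described in the abstract.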
In a differentially private sequential learning setting, agents introduce endogenous noise into their actions to maintain privacy. Applying this approach to a standard sequential learning model leads to different outc...
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks than joint-training methods relying on task-specific supervision. ...
To enhance the exploitation ability of the artificial bee colony (ABC) algorithm, we propose a modified multi-strategy ABC variant in which a superior information learning strategy is employed. In this strategy, the individuals can ...
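Since the abstract above is truncated, the sketch below shows only the canonical ABC neighbor-search step that such multi-strategy variants typically build on (perturbing one coordinate of a food source relative to a randomly chosen peer). It is not the paper's superior-information learning strategy; the function name and bound handling are illustrative assumptions.

```python
import numpy as np

def abc_candidate(foods, i, rng, lower, upper):
    """Standard ABC neighbor search: v_ij = x_ij + phi * (x_ij - x_kj).

    foods: (SN, D) array of current food sources (candidate solutions).
    i: index of the food source being perturbed.
    Returns a candidate differing from foods[i] in a single dimension j.
    """
    SN, D = foods.shape
    k = rng.choice([s for s in range(SN) if s != i])    # random peer, k != i
    j = rng.integers(D)                                 # random dimension to perturb
    phi = rng.uniform(-1.0, 1.0)                        # uniform step in [-1, 1]
    candidate = foods[i].copy()
    candidate[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])
    return np.clip(candidate, lower, upper)             # keep within the search bounds

# In the standard algorithm the candidate replaces foods[i] only if its
# fitness is better (greedy selection); modified variants mainly change
# how the guiding information (the peer term) is chosen.
```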
To address the challenge of detecting weak targets obscured by strong sea clutter and noise in complex maritime environments, this paper presents a robust sea clutter suppression method. It combines a complex-adaptive...
We study the problem of learning binary classifiers from positive and unlabeled data, in the case that the distribution of unlabeled data is shifted. We call this problem learning from positive and imperfect unlabeled data (PIU learning). In the absence of covariate shifts, i.e., with perfect unlabeled data, Denis (1998) reduced this problem to learning under Massart noise. This reduction, however, fails even under small covariate shifts. Our main results on PIU learning are: the characterization of the sample complexity of PIU learning and a computationally and sample-efficient algorithm achieving misclassification error ε. We illustrate the importance of PIU learning by proving that our main result for PIU learning implies new algorithms for the following problems:
1. Learning from smooth distributions, where we give algorithms that learn interesting concept classes from only positive samples under smooth feature distributions, which circumvents existing impossibility results and contributes to recent advances in learning algorithms under smoothness (Haghtalab, Roughgarden, and Shetty, J. ACM’24; Chandrasekaran, Klivans, Kontonis, Meka, and Stavropoulos, COLT’24).
2. Learning with a list of unlabeled distributions, where we design new algorithms that apply to a broad class of concept classes under the assumption that we are given a list of unlabeled distributions, one of which – unknown to the learner – is O(1)-close to the true distribution of features.
3. Estimation in the presence of unknown truncation, where we give the first polynomial sample and time algorithm for estimating the parameters of an exponential family distribution from samples truncated to an unknown set S* that is approximable by polynomials in L1-norm. This improves the algorithm by Lee, Mehrotra, and Zampetakis (FOCS’24) that requires approximation in L2-norm – a significantly stronger condition.
4. Detecting truncation, where we present the first algorithm for detecting whether given sample
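For orientation, the classical PU setting without covariate shift is often handled with the Elkan-Noto rescaling trick: train a classifier to separate labeled positives from unlabeled points, estimate the label frequency c = P(labeled | positive) on held-out positives, and divide the scores by c. The sketch below implements that baseline under those assumptions; it is not the paper's PIU algorithm, which additionally has to cope with a shifted unlabeled distribution.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def elkan_noto_pu(X_pos, X_unl, holdout_frac=0.2, seed=0):
    """Classical PU learning (Elkan & Noto, 2008) via score rescaling.

    X_pos: array of labeled positive examples, shape (n_pos, d).
    X_unl: array of unlabeled examples, shape (n_unl, d).
    Returns a function mapping new points to estimates of P(y = 1 | x).
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X_pos))
    n_hold = max(1, int(holdout_frac * len(X_pos)))
    hold, train_pos = X_pos[idx[:n_hold]], X_pos[idx[n_hold:]]

    # "Non-traditional" classifier: labeled-positive (s = 1) vs. unlabeled (s = 0).
    X = np.vstack([train_pos, X_unl])
    s = np.concatenate([np.ones(len(train_pos)), np.zeros(len(X_unl))])
    g = LogisticRegression(max_iter=1000).fit(X, s)

    c = g.predict_proba(hold)[:, 1].mean()              # estimate of c = P(s=1 | y=1)
    return lambda X_new: np.clip(g.predict_proba(X_new)[:, 1] / c, 0.0, 1.0)
```

Both the Denis (1998)-style reduction mentioned in the abstract and this rescaling lean on the unlabeled sample following the true feature distribution, which is exactly the assumption the paper relaxes.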
The number of electrified powertrains is ever increasing in the push towards a more sustainable future; thus, it is essential that unwanted failures are prevented and reliable operation is secured. Monitoring the internal...
Approaches for teaching learning agents via human demonstrations have been widely studied and successfully applied to multiple domains. However, the majority of imitation learning work utilizes only behavioral informa...