A novel cluster-based traffic offloading and user association (UA) algorithm alongside a multi-agent deep reinforcement learning (DRL) based base station (BS) activation mechanism is proposed in this paper. Our design...
Iris biometrics allow contactless authentication, which has made them one of the most widely deployed human recognition mechanisms in recent years. The susceptibility of iris identification systems to attack remains a challenging concern due to ...
Trackers based on the space-time memory network locate the target object in the search image by employing contextual information from multiple memory frames and their corresponding foreground-background features. I...
Reversible Data Hiding in Encrypted Images (RDHEI) has drawn increasing attention in multimedia cloud computing scenarios. It embeds a secret message into the encrypted carrier while preserving the confidentiality of the ...
The compressed code of Absolute Moment Block Truncation Coding (AMBTC) consists of quantized values (QVs) and bitmaps. The QVs exhibit greater predictability, and the bitmaps themselves carry more randomness. While ex...
Deep Learning (DL) models have demonstrated remarkable proficiency in image classification and recognition tasks, at times surpassing human capabilities. This performance can be attributed largely to the utilization of extensive datasets. Nevertheless, DL models have huge data requirements, and widening their learning capability from limited samples remains a challenge given the intrinsic constraints of small datasets. The combination of limited labeled datasets, privacy constraints, poor generalization performance, and costly annotations further compounds the difficulty of achieving robust model performance. To address this critical issue, our study conducts a meticulous examination of established methodologies, Data Augmentation and Transfer Learning, which offer promising solutions to data scarcity. Data Augmentation amplifies the size of small datasets through a diverse array of strategies, including geometric transformations, kernel filter manipulations, neural style transfer, random erasing, Generative Adversarial Networks, feature-space augmentation, and adversarial and meta-learning training paradigms. Transfer Learning, in turn, leverages pre-trained models to transfer knowledge between models or to retrain models on analogous datasets. Through our comprehensive investigation, we provide insights into how the synergistic application of these two techniques can significantly enhance the performance of classification tasks by effectively magnifying scarce datasets. This augmentation in data availability not only addresses the immediate challenges posed by limited datasets but also unlocks the full potential of working with Big Data in ...
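Two of the augmentation strategies this abstract names, geometric transformations and random erasing, can be sketched minimally with NumPy alone. The `augment` function below is purely illustrative (its name and the specific erasing region are assumptions, not taken from the paper); in practice these operations are usually applied on the fly via a library such as torchvision or Albumentations.

```python
import numpy as np

def augment(image, rng):
    """Produce simple augmented variants of a single (H, W) image array."""
    variants = [image]
    variants.append(np.fliplr(image))   # geometric: horizontal flip
    variants.append(np.rot90(image))    # geometric: 90-degree rotation
    # random erasing: zero out a randomly placed patch (here H/4 x W/4)
    erased = image.copy()
    h, w = image.shape
    y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
    erased[y:y + h // 4, x:x + w // 4] = 0
    variants.append(erased)
    return variants

rng = np.random.default_rng(0)
img = np.arange(64, dtype=float).reshape(8, 8)
aug = augment(img, rng)
print(len(aug))  # 4: original plus three augmented variants
```

Each source image thus yields several training samples, which is the basic mechanism by which augmentation "magnifies" a scarce dataset.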
As one of the most representative recommendation solutions, traditional collaborative filtering (CF) models typically have limitations in dealing with large-scale, sparse data when capturing complex relationships between ...
Even though various features have been investigated in the detection of figurative language, oxymoron features have not been considered in the classification of sarcastic content. The main objective of this work is to...
For point cloud registration, this article proposes a novel centralized random sample consensus (C-RANSAC) registration method with fast convergence and high accuracy. In our algorithm, the novel...