In the era of data-driven decision-making, the ability to rapidly and accurately analyze vast amounts of textual data is essential. This paper examines the role of text pre-processing techniques in enhancing the effec...
This study investigates the application of advanced fine-tuned Large Language Models (LLMs) for Turkish Sentiment Analysis (SA), focusing on e-commerce product reviews. Our research utilizes four open-source Turkish S...
In modern real-time operating systems, complex task loads are often modeled as directed acyclic graphs (DAG) and executed in parallel on multiprocessor systems. The topological constraints present in DAG tasks prevent...
The COVID-19 pandemic has had a profound impact on human society. It has highlighted the need for faster diagnostic methods. Research has shown that combining semantic segmentation with traditional medical approaches ...
Peer-to-peer (P2P) energy trading, Smart Grids (SG), and electric vehicle energy management are integral components of the Internet of Energy (IoE) field. The integration of Software-Defined Networks (SDNs) and Blockc...
With the prevalence of the pre-training-fine-tuning paradigm, how to efficiently adapt pre-trained models to downstream tasks has become an intriguing issue. Parameter-Efficient Fine-Tuning (PEFT) methods have been proposed for low-cost adaptation. Although PEFT has demonstrated its effectiveness and been widely applied, the underlying principles remain unclear. In this paper, we adopt the PAC-Bayesian generalization error bound, viewing pre-training as a shift of the prior distribution that leads to a tighter bound on the generalization error. We validate this shift from the perspectives of oscillations in the loss landscape and quasi-sparsity in the gradient distribution. Based on this, we propose a gradient-based sparse fine-tuning algorithm, named Sparse Increment Fine-Tuning (SIFT), and validate its effectiveness on a range of tasks including the GLUE benchmark and instruction tuning. The code is accessible at https://***/song-wx/SIFT. Copyright 2024 by the author(s)
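The abstract does not spell out SIFT's exact parameter-selection rule. A minimal sketch of the general idea it names (updating only the largest-magnitude gradient components, exploiting gradient quasi-sparsity) might look like the following; the `sparsity` ratio, the toy quadratic loss, and the function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sparse_increment_step(w, grad, lr=0.1, sparsity=0.9):
    """One sparse update: keep only the top-(1 - sparsity) fraction of
    gradient entries by magnitude, zero the rest, then apply SGD.
    (Assumed selection rule for illustration only.)"""
    k = max(1, int(round(w.size * (1 - sparsity))))
    # indices of the k largest-magnitude gradient components
    idx = np.argpartition(np.abs(grad.ravel()), -k)[-k:]
    mask = np.zeros(w.size, dtype=bool)
    mask[idx] = True
    sparse_grad = np.where(mask.reshape(w.shape), grad, 0.0)
    return w - lr * sparse_grad

# Toy quadratic loss L(w) = 0.5 * ||w||^2, so grad = w.
w = np.array([3.0, -0.1, 0.05, -2.0, 0.2])
w_new = sparse_increment_step(w, grad=w, lr=0.5, sparsity=0.6)
# Only the two largest-|grad| coordinates (3.0 and -2.0) move.
```

In a real fine-tuning run the dense gradient would come from backpropagation, and the mask could be fixed once or re-selected per step; either variant preserves the low memory footprint of touching only a small parameter subset.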
In recent years, the proliferation of deep learning (DL) techniques has given rise to a significant challenge in the form of deepfake videos, posing a grave threat to the authenticity of media content. With the rapid advancement of DL technology, the creation of convincingly realistic deepfake videos has become increasingly prevalent, raising serious concerns about the potential misuse of such content. Deepfakes have the potential to undermine trust in visual media, with implications for fields as diverse as journalism, entertainment, and security. This study presents an innovative solution by harnessing blockchain-based federated learning (FL) to address this issue, focusing on preserving data source anonymity. The approach combines the strengths of SegCaps and convolutional neural network (CNN) methods for improved image feature extraction, followed by capsule network (CN) training to enhance generalization. A novel data normalization technique is introduced to tackle data heterogeneity stemming from diverse global data sources. Moreover, transfer learning (TL) and preprocessing methods are deployed to elevate DL performance. These efforts culminate in collaborative global model training facilitated by blockchain and FL while maintaining the utmost confidentiality of data sources. The effectiveness of our methodology is rigorously tested and validated through extensive experiments. These experiments reveal a substantial improvement in accuracy, with an impressive average increase of 6.6% compared to six benchmark models. Furthermore, our approach demonstrates a 5.1% enhancement in the area under the curve (AUC) metric, underscoring its ability to outperform existing detection methods. These results substantiate the effectiveness of our proposed solution in countering the proliferation of deepfake content. In conclusion, our innovative approach represents a promising avenue for advancing deepfake detection. By leveraging existing data resources and the power of FL
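The abstract describes collaborative global model training via FL without giving the aggregation rule. A minimal sketch of the standard federated-averaging step (sample-count-weighted averaging of client weights) conveys the core mechanism; the blockchain coordination layer, the function name, and the toy weight vectors are assumptions for illustration:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model weights by sample-count-weighted averaging
    (standard FedAvg; the paper's blockchain coordination and anonymity
    machinery are omitted in this sketch)."""
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * np.asarray(w, dtype=float)
    return agg

# Two hypothetical clients: the second holds 3x more training samples,
# so its weights dominate the global average.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes = [1, 3]
global_w = fedavg(clients, sizes)  # -> [2.5, 3.5]
```

In the paper's setting, each round of this aggregation would be recorded on-chain so that no client ever reveals raw data, only model updates.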
This paper presents LibMTL, an open-source Python library built on PyTorch, which provides a unified, comprehensive, reproducible, and extensible implementation framework for Multi-Task Learning (MTL). LibMTL consider...
In the wake of developments in the field of Natural Language Processing, Question Answering (QA) software has penetrated our daily lives. Due to the data-driven programming paradigm, QA software inevitably contains bu...
Dialogue policy learning trains an agent to select dialogue actions, and is frequently implemented via deep reinforcement learning (DRL). Model-based reinforcement learning methods build a world model to generate simulated data to alleviat...