Network addressing is a traditional problem in communications networks. IP addressing that can fully reflect the characteristics of Wireless Sensor Networks will provide good support for the design of ...
As an emerging learning paradigm, Federated Learning (FL) enables data owners to collaboratively train a model while keeping their data local. However, classic FL methods are susceptible to model poisoning attacks and Byzantine failures. Although several defense methods have been proposed to mitigate these concerns, it remains challenging to limit the adverse effects of malicious participants while still allowing every credible node to contribute to the learning process. To this end, a Fair and Robust FL method, named FRFL, is proposed to defend against model poisoning attacks from malicious nodes. FRFL can learn a high-quality model even if some nodes are malicious. In particular, we first classify each participant into one of three categories: training node, validation node, or blockchain node. Among these, blockchain nodes replace the central server of classic FL methods while enabling secure aggregation. Then, a fairness-aware role rotation method is proposed that periodically alters the sets of training and validation nodes in order to exploit the valuable information contained in the local datasets of credible nodes. Finally, a decentralized and adaptive aggregation mechanism, cooperating with the blockchain nodes, is designed to detect and discard malicious nodes and produce a high-quality model. The results show the effectiveness of FRFL in enhancing model performance while defending against malicious nodes.
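The validation-based filtering behind such an aggregation step can be illustrated with a minimal sketch. The flat weight vectors, the mse_loss stand-in, and the median-based acceptance rule below are illustrative assumptions, not the paper's actual mechanism.

import numpy as np

def mse_loss(w, x, y):
    # Toy linear-regression loss; stands in for a validation node's real model loss.
    return float(np.mean((x @ w - y) ** 2))

def aggregate(w_global, updates, val_sets, tolerance=0.0):
    # Each validation node scores every candidate update on its own local data.
    # An update is kept only if the median validation node sees no loss increase
    # beyond `tolerance`; the surviving updates are then averaged.
    base = np.array([mse_loss(w_global, x, y) for x, y in val_sets])
    kept = []
    for u in updates:
        cand = np.array([mse_loss(w_global + u, x, y) for x, y in val_sets])
        if np.median(cand - base) <= tolerance:
            kept.append(u)
    return w_global if not kept else w_global + np.mean(kept, axis=0)

# Toy run: three honest updates plus one sign-flipped (poisoned) update.
rng = np.random.default_rng(0)
w_true, w_global = np.array([1.0, -2.0]), np.zeros(2)
val_sets = []
for _ in range(3):
    X = rng.normal(size=(32, 2))
    val_sets.append((X, X @ w_true))
honest = [0.1 * (w_true - w_global) + rng.normal(scale=0.01, size=2) for _ in range(3)]
poisoned = [-(w_true - w_global)]
w_new = aggregate(w_global, honest + poisoned, val_sets)  # the poisoned update is discarded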
Connected Autonomous Vehicle (CAV) driving, as a data-driven intelligent driving technology within the Internet of Vehicles (IoV), presents significant challenges to the efficiency and security of real-time data management. The combination of Web3.0 and edge content caching holds promise for providing low-latency data access for CAVs’ real-time applications. Web3.0 enables the reliable pre-migration of frequently requested content from content providers to edge nodes. However, identifying optimal edge node peers for joint content caching and replacement remains challenging due to the dynamic nature of traffic flow in the IoV. To address these challenges, this article introduces GAMA-Cache, an innovative edge content caching methodology that leverages Graph Attention Networks (GAT) and Multi-Agent Reinforcement Learning (MARL). GAMA-Cache formulates the cooperative edge content caching problem as a constrained Markov decision process. It employs a MARL technique predicated on cooperation effectiveness to learn optimal caching decisions, with GAT augmenting the information extracted from adjacent nodes. A distinct collaborator selection mechanism is also developed to streamline communication between agents, filtering out those with minimal correlation from the vector input to the policy network. Experimental results demonstrate that, in terms of service latency and delivery failure, GAMA-Cache outperforms other state-of-the-art MARL solutions for edge content caching in the IoV.
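As a rough illustration of the collaborator selection idea (not the paper's implementation), one could filter neighbouring edge nodes by the correlation of their content-request patterns before assembling the policy-network input. The cosine score, threshold, and top_k values below are assumptions made for the sketch.

import numpy as np

def cosine(a, b):
    # Correlation proxy between two content-request histograms.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_collaborators(ego, neighbors, threshold=0.3, top_k=3):
    # Keep only the neighbours whose request patterns correlate with the ego node's,
    # so weakly related peers never reach the policy network.
    ranked = sorted(range(len(neighbors)), key=lambda i: -cosine(ego, neighbors[i]))
    return [i for i in ranked if cosine(ego, neighbors[i]) >= threshold][:top_k]

def policy_input(ego, neighbors, kept, top_k=3):
    # Fixed-size observation: ego features, selected collaborators, zero padding.
    pads = [np.zeros_like(ego)] * (top_k - len(kept))
    return np.concatenate([ego] + [neighbors[i] for i in kept] + pads)

# Toy usage with 4 neighbouring edge nodes and 5 content items.
rng = np.random.default_rng(1)
ego = rng.random(5)
neighbors = [rng.random(5) for _ in range(4)]
obs = policy_input(ego, neighbors, select_collaborators(ego, neighbors))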
Large Language Models (LLMs) have shown impressive In-Context Learning (ICL) ability in code generation. LLMs take a prompt context consisting of a few demonstration examples and a new requirement as input, and output new programs without any parameter update. Existing studies have found that the performance of ICL-based code generation heavily depends on the quality of the demonstration examples, which has motivated research on selecting demonstration examples: given a new requirement, a few demonstration examples are selected from a candidate pool, and LLMs are expected to learn the pattern hidden in these selected examples. Existing approaches mostly rely on heuristics or random selection. However, the distribution of randomly selected examples usually varies greatly, making the performance of LLMs less robust, while the heuristics retrieve examples by considering only the textual similarity of requirements, leading to sub-optimal performance. To fill this gap, we propose a Large language model-Aware selection approach for In-context-Learning-based code generation, named LAIL. LAIL uses LLMs themselves to select examples: it requires the LLM itself to label a candidate example as a positive or negative example for a requirement. Positive examples help LLMs generate correct programs, while negative examples are trivial and should be ignored. Based on the labeled positive and negative data, LAIL trains a model-aware retriever to learn the preference of LLMs and select the demonstration examples that LLMs need. During inference, given a new requirement, LAIL uses the trained retriever to select a few examples and feeds them into LLMs to generate the desired programs. We apply LAIL to four widely used LLMs and evaluate it on five code generation datasets. Extensive experiments demonstrate that LAIL outperforms the state-of-the-art (SOTA) baselines by 11.58%, 3.33%, and 5.07% on CodeGen-Multi-16B, and by 1.32%, 2.29%, and 1.20% on CodeLlama-3...
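A condensed sketch of the two LAIL stages follows. Here llm_generate, passes_tests, and embed are hypothetical stand-ins for the LLM call, the dataset's unit tests, and the trained model-aware retriever's encoder, so this illustrates the idea rather than the authors' code.

import numpy as np

def label_candidates(requirement, candidates, llm_generate, passes_tests):
    # Stage 1: a candidate is labeled positive if prompting the LLM with it yields a
    # program that passes the requirement's tests, and negative otherwise.
    labeled = []
    for cand in candidates:
        prompt = f"{cand['requirement']}\n{cand['solution']}\n\n{requirement}"
        program = llm_generate(prompt)
        labeled.append((cand, passes_tests(requirement, program)))
    return labeled

def select_examples(requirement, candidates, embed, k=4):
    # Stage 2 (inference): score candidates with the trained retriever's encoder and
    # keep the top-k as demonstration examples for the final prompt.
    q = embed(requirement)
    scores = [float(np.dot(q, embed(c["requirement"]))) for c in candidates]
    top = np.argsort(scores)[::-1][:k]
    return [candidates[i] for i in top]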