ISBN: (Print) 0780374908
Due to the enormous growth of the World Wide Web in recent years, crawling specific topical portions quickly, without having to explore all Web pages, has become a new challenge for resource discovery. A new idea is to predict a URL's degree of relevance to the topic from related properties of the URL, and then crawl the URLs with high probability. In this paper, we further study topical resources and introduce some new properties that make relevance prediction more effective. We also improve the evaluation algorithm and add two rules to adjust the weights of the factors dynamically, which leads to better prediction precision. These improvements raise system performance, yielding a higher topic harvest rate and lower sensitivity to the various kinds of initial URL seeds.
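A minimal sketch of the weighted-factor relevance prediction idea described above, with a simple dynamic weight-adjustment rule. The factor names, the scoring formula, and the update rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: score a candidate URL's topical relevance as a weighted combination of
# per-factor scores, then adjust factor weights after the page is actually fetched.
FACTORS = ["anchor_text", "url_words", "parent_page", "link_context"]  # hypothetical factors

def predict_relevance(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted combination of per-factor relevance scores, each in [0, 1]."""
    total_w = sum(weights[f] for f in FACTORS)
    return sum(weights[f] * scores[f] for f in FACTORS) / total_w

def adjust_weights(weights: dict[str, float], scores: dict[str, float],
                   actual_relevance: float, lr: float = 0.05) -> dict[str, float]:
    """Reward factors whose score agreed with the fetched page's measured relevance."""
    new_w = {}
    for f in FACTORS:
        agreement = 1.0 - abs(scores[f] - actual_relevance)
        new_w[f] = max(0.01, weights[f] + lr * (agreement - 0.5))
    return new_w

weights = {f: 0.25 for f in FACTORS}
scores = {"anchor_text": 0.9, "url_words": 0.4, "parent_page": 0.7, "link_context": 0.6}
print(predict_relevance(scores, weights))                 # predicted relevance before crawling
weights = adjust_weights(weights, scores, actual_relevance=0.8)
```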
Genetic algorithms are highly parallel, adaptive search methods based on the processes of Darwinian evolution. This paper combines genetic algorithms with simulated annealing algorithms into a new kind of random search a...
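An illustrative sketch of one common way to hybridize the two methods mentioned above: a genetic algorithm whose survivor selection uses a simulated-annealing acceptance test. The encoding, operators, and cooling schedule are assumptions, not the paper's exact formulation.

```python
# Hybrid GA + SA sketch: crossover/mutation produce children, and an SA-style
# Metropolis criterion decides whether a worse child may still replace its parent.
import math, random

def hybrid_ga_sa(fitness, dim=10, pop_size=30, generations=200, temp=1.0, cooling=0.98):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            cut = random.randrange(1, dim)
            child = a[:cut] + b[cut:]                     # one-point crossover
            i = random.randrange(dim)
            child[i] += random.gauss(0, 0.1)              # Gaussian mutation
            parent = min(a, b, key=fitness)
            delta = fitness(child) - fitness(parent)
            # Always keep improvements; keep worse children with annealed probability.
            if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
                children.append(child)
            else:
                children.append(parent)
        pop = children
        temp *= cooling                                   # cool the temperature
        best = min(pop + [best], key=fitness)
    return best

sphere = lambda x: sum(v * v for v in x)                  # toy minimization objective
print(sphere(hybrid_ga_sa(sphere)))
```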
Hardware description language (HDL) code design is a critical component of the chip design process, requiring substantial engineering and time resources. Recent advancements in large language models (LLMs), such as the GPT series, have shown promise in automating HDL code generation. However, current LLM-based approaches face significant challenges in meeting real-world hardware design requirements, particularly in handling complex designs and ensuring code correctness. Our evaluations reveal that the functional correctness rate of LLM-generated HDL code decreases significantly as design complexity increases. In this paper, we propose the AutoSilicon framework, which aims to scale up the hardware design capability of LLMs. AutoSilicon incorporates an agent system that 1) decomposes large-scale, complex code design tasks into smaller, simpler tasks; 2) provides a compilation and simulation environment that enables the LLM to compile and test each piece of code it generates; and 3) introduces a series of optimization strategies. Experimental results demonstrate that AutoSilicon can scale hardware designs to projects with code equivalent to over 10,000 tokens. In terms of design quality, it further improves the syntax correctness rate and functional correctness rate compared with approaches that do not employ any extensions. For example, compared to directly generating HDL code with GPT-4-turbo, AutoSilicon improves the syntax correctness rate by an average of 35.8% and the functional correctness rate by an average of 35.6%.
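A conceptual sketch of the agent loop described above (decompose, generate, compile, repair), not AutoSilicon's actual implementation. The functions llm_generate, compile_hdl, design_module, and design_project are hypothetical stubs, and the compiler invocation assumes a local iverilog install.

```python
# Sketch: split a design spec into sub-modules, ask an LLM for each module's HDL,
# then compile it and feed compiler errors back to the LLM for repair.
import subprocess, tempfile, pathlib

def llm_generate(prompt: str) -> str:
    raise NotImplementedError("call your LLM API of choice here")

def compile_hdl(code: str) -> tuple[bool, str]:
    """Write the module to a temp file and compile it with a Verilog compiler."""
    src = pathlib.Path(tempfile.mkdtemp()) / "module.v"
    src.write_text(code)
    proc = subprocess.run(["iverilog", "-o", str(src.with_suffix(".out")), str(src)],
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stderr

def design_module(spec: str, max_repairs: int = 3) -> str:
    code = llm_generate(f"Write synthesizable Verilog for: {spec}")
    for _ in range(max_repairs):
        ok, errors = compile_hdl(code)
        if ok:
            return code
        code = llm_generate(f"Fix these compile errors:\n{errors}\n\nCode:\n{code}")
    return code

def design_project(spec: str, decompose) -> dict[str, str]:
    """Top-level flow: split the spec into sub-module specs and design each one."""
    return {name: design_module(sub_spec) for name, sub_spec in decompose(spec).items()}
```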
Connected Autonomous Vehicle (CAV) driving, as a data-driven intelligent driving technology within the Internet of Vehicles (IoV), presents significant challenges to the efficiency and security of real-time data management. The combination of Web3.0 and edge content caching holds promise for providing low-latency data access for CAVs' real-time applications. Web3.0 enables the reliable pre-migration of frequently requested content from content providers to edge nodes. However, identifying optimal edge node peers for joint content caching and replacement remains challenging due to the dynamic nature of traffic flow in IoV. To address these challenges, this article introduces GAMA-Cache, an innovative edge content caching methodology that leverages Graph Attention Networks (GAT) and Multi-Agent Reinforcement Learning (MARL). GAMA-Cache formulates the cooperative edge content caching problem as a constrained Markov decision process. It employs a MARL technique predicated on cooperation effectiveness to learn caching decisions, with a GAT augmenting the information extracted from adjacent nodes. A distinct collaborator selection mechanism is also developed to streamline communication between agents, filtering agents with minimal correlations out of the vector input to the policy network. Experimental results demonstrate that, in terms of service latency and delivery failure, GAMA-Cache outperforms other state-of-the-art MARL solutions for edge content caching in IoV.
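An illustrative sketch of the collaborator-selection and attention-aggregation ideas mentioned above: keep only neighbor nodes whose state vectors correlate strongly with the local node, then combine their features with one graph-attention step. The correlation threshold, feature contents, and attention form are assumptions, not GAMA-Cache's code.

```python
# Sketch: filter weakly correlated neighbors, then aggregate the rest with attention.
import numpy as np

def select_collaborators(local: np.ndarray, neighbors: dict[int, np.ndarray],
                         min_corr: float = 0.3) -> dict[int, np.ndarray]:
    """Drop neighbors whose Pearson correlation with the local state is low."""
    kept = {}
    for nid, vec in neighbors.items():
        if np.corrcoef(local, vec)[0, 1] >= min_corr:
            kept[nid] = vec
    return kept

def gat_aggregate(local: np.ndarray, collaborators: dict[int, np.ndarray],
                  W: np.ndarray, a: np.ndarray) -> np.ndarray:
    """One attention head: score each collaborator, softmax, weighted sum."""
    h_local = W @ local
    feats = [W @ v for v in collaborators.values()]
    scores = np.array([a @ np.concatenate([h_local, h]) for h in feats])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return np.tanh(sum(w * h for w, h in zip(alpha, feats)) + h_local)

rng = np.random.default_rng(0)
local = rng.normal(size=8)
neighbors = {i: rng.normal(size=8) for i in range(5)}
W, a = rng.normal(size=(8, 8)), rng.normal(size=16)
kept = select_collaborators(local, neighbors)
if kept:
    print(gat_aggregate(local, kept, W, a))
```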
The rapid advancements in big data and the Internet of Things (IoT) have significantly accelerated the digital transformation of medical institutions, leading to the widespread adoption of Digital Twin Healthcare (DTH). The Cloud DTH Platform (CDTH) serves as a cloud-based framework that integrates DTH models, healthcare resources, patient data, and medical services. By leveraging real-time data from medical devices, the CDTH platform enables intelligent healthcare services such as disease prediction and medical resource optimization. However, the platform functions as a system of systems (SoS) comprising interconnected yet independent healthcare services. This complexity is further compounded by the integration of both black-box AI models and domain-specific mechanistic models, which poses challenges to ensuring the interpretability and trustworthiness of DTH models. To address these challenges, we propose a Model-Based Systems Engineering (MBSE)-driven DTH modeling methodology derived from systematic requirement and functional analyses. To implement this methodology effectively, we introduce a DTH model development approach based on the X language, along with a comprehensive toolchain designed to streamline the development process. Together, the methodology and toolchain form a robust framework that enables engineers to efficiently develop interpretable and trustworthy DTH models for the CDTH platform. By integrating domain-specific mechanistic models with AI algorithms, the framework enhances model transparency and reliability. Finally, we validate our approach through a case study on elderly patient care, demonstrating its effectiveness in supporting the development of DTH models that meet healthcare and interpretability requirements.
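A toy sketch of the hybrid-modeling idea described above: pair a transparent mechanistic model with a data-driven residual correction so the learned part only explains what the mechanism cannot. The heart-rate example, feature names, and regressor choice are illustrative assumptions, not the CDTH platform's models or the X language toolchain.

```python
# Sketch: interpretable mechanistic baseline + learned residual on top of it.
from dataclasses import dataclass
from sklearn.linear_model import Ridge
import numpy as np

def mechanistic_hr(age: np.ndarray, activity: np.ndarray) -> np.ndarray:
    """Interpretable baseline: max heart rate declines with age; resting HR plus
    a fraction of the reserve proportional to the activity level."""
    hr_max = 220.0 - age
    return 70.0 + activity * (hr_max - 70.0)

@dataclass
class HybridDTHModel:
    residual: Ridge

    def fit(self, age, activity, hr_observed):
        baseline = mechanistic_hr(age, activity)
        X = np.column_stack([age, activity])
        self.residual.fit(X, hr_observed - baseline)      # learn only the residual
        return self

    def predict(self, age, activity):
        X = np.column_stack([age, activity])
        return mechanistic_hr(age, activity) + self.residual.predict(X)

rng = np.random.default_rng(1)
age = rng.uniform(60, 90, 200)
activity = rng.uniform(0, 1, 200)
hr = mechanistic_hr(age, activity) + 0.2 * activity * age + rng.normal(0, 3, 200)
model = HybridDTHModel(Ridge(alpha=1.0)).fit(age, activity, hr)
print(model.predict(np.array([75.0]), np.array([0.5])))
```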
Explainable Fake News Detection (EFND) is a new challenge that aims to verify news authenticity and provide clear explanations for its decisions. Traditional EFND methods often treat classification and explanation as separate tasks, ignoring the fact that explanation content can help enhance fake news detection. To bridge this gap, we present a new solution: the End-to-end Explainable Fake News Detection Network (\(EExpFND\)). Our model includes an evidence-claim variational causal inference component, which not only utilizes explanation content to improve fake news detection but also employs a variational approach to address the distributional bias between the ground-truth explanations in the training set and the predicted explanations in the test set. Additionally, we incorporate a masked attention network to capture the nuanced relationships between evidence and claims. Comprehensive tests across two public datasets show that \(EExpFND\) sets a new benchmark in performance. The code is available at https://***/r/EExpFND-F5C6.
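A minimal numpy sketch of masked attention between claim tokens and evidence tokens, in the spirit of the masked attention network mentioned above. The masking rule (drop low-similarity evidence tokens) and the dimensions are assumptions for illustration, not the \(EExpFND\) architecture.

```python
# Sketch: scaled dot-product attention from claim tokens to evidence tokens,
# with a binary mask blocking claim-evidence pairs that should not interact.
import numpy as np

def masked_attention(claim: np.ndarray, evidence: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """claim: (m, d), evidence: (n, d), mask: (m, n) with 1 = attend, 0 = block."""
    d = claim.shape[1]
    scores = claim @ evidence.T / np.sqrt(d)              # scaled dot-product scores
    scores = np.where(mask.astype(bool), scores, -1e9)    # block masked pairs
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ evidence                              # evidence-aware claim states

rng = np.random.default_rng(0)
claim, evidence = rng.normal(size=(4, 16)), rng.normal(size=(10, 16))
# Example mask: each claim token attends only to evidence tokens with positive
# cosine similarity to it.
sim = claim @ evidence.T / (np.linalg.norm(claim, axis=1, keepdims=True)
                            * np.linalg.norm(evidence, axis=1))
mask = (sim > 0.0).astype(float)
print(masked_attention(claim, evidence, mask).shape)       # (4, 16)
```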
The advancement of the Internet of Medical Things (IoMT) has led to the emergence of various health and emotion care services, e.g., health monitoring. To cater to the increasing computational requirements of IoMT services, Mobile Edge Computing (MEC) has emerged as an indispensable technology in smart health. Benefiting from cost-effective deployment, unmanned aerial vehicles (UAVs) equipped with MEC servers in Non-Orthogonal Multiple Access (NOMA) systems have emerged as a promising solution for providing smart health services in proximity to medical devices (MDs). However, the escalating number of MDs and the limited communication resources of UAVs give rise to a significant increase in transmission latency. Moreover, due to the limited communication range of UAVs, the geographically distributed MDs lead to workload imbalance among UAVs, which deteriorates the service response delay. To this end, this paper proposes a UAV-enabled Distributed computation Offloading and Power control method with Multi-Agent learning, named DOPMA, for NOMA-based IoMT environments. Specifically, this paper introduces computation and transmission queue models to analyze the dynamic characteristics of task execution latency and energy consumption. Moreover, a reward function based on a credit assignment scheme is designed, considering both system-level rewards and rewards tailored to each MD, and an improved multi-agent deep deterministic policy gradient algorithm is developed to derive offloading and power control decisions independently. Extensive simulations demonstrate that the proposed method outperforms existing schemes, achieving a \(7.1\%\) reduction in energy consumption and a \(16\%\) decrease in average delay.
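A hedged sketch of a credit-assignment reward in the spirit described above: each medical device's reward blends a shared system-level term with a local term for its own latency and energy. The weights and reward shaping are illustrative assumptions, not the DOPMA formulation.

```python
# Sketch: per-device reward = -(blend of system-wide cost and the device's own cost).
import numpy as np

def per_device_rewards(delays: np.ndarray, energies: np.ndarray,
                       w_sys: float = 0.5, w_delay: float = 0.7,
                       w_energy: float = 0.3) -> np.ndarray:
    """delays, energies: per-MD values for one time slot (lower is better)."""
    local_cost = w_delay * delays + w_energy * energies
    system_cost = local_cost.mean()                        # shared system-level signal
    # Credit assignment: mix the global cost with each device's own contribution.
    return -(w_sys * system_cost + (1.0 - w_sys) * local_cost)

delays = np.array([0.12, 0.30, 0.08, 0.22])    # seconds
energies = np.array([0.9, 1.4, 0.6, 1.1])      # joules
print(per_device_rewards(delays, energies))
```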