This paper discusses a rough set approach for evaluating solutions of scheduling problems. Algorithms for solving scheduling problems are planners, and the scheduling problems are modelled as constraint satisfaction problems. Conventional approaches to the analysis of algorithms often focus on time and representational complexity and assume an identical cost for all operations. The proposed rough set approach augments conventional analyses in two ways: 1) it permits the consideration of different costs arising from different operations; and 2) it allows one to define a new utility measure for complexity analysis.
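As a minimal sketch of the cost-weighted view of complexity the abstract describes, the following compares a unit-cost operation count with a per-operation-cost total; the operation names and cost values are illustrative assumptions, not taken from the paper.

```python
# Sketch: complexity accounting where each operation type carries its own cost,
# instead of the conventional unit-cost assumption.
# Operation names and costs are illustrative, not from the paper.
from collections import Counter

OP_COSTS = {"constraint_check": 3.0, "variable_assign": 1.0, "backtrack": 5.0}

def weighted_cost(trace: list[str]) -> float:
    """Total cost of an execution trace under per-operation costs."""
    counts = Counter(trace)
    return sum(OP_COSTS[op] * n for op, n in counts.items())

trace = ["variable_assign", "constraint_check", "constraint_check", "backtrack"]
print(weighted_cost(trace))  # 12.0 under weighted costs
print(len(trace))            # 4 under the conventional unit-cost count
```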
Hardware description language (HDL) code design is a critical component of the chip design process, requiring substantial engineering and time resources. Recent advancements in large language models (LLMs), such as the GPT series, have shown promise in automating HDL code generation. However, current LLM-based approaches face significant challenges in meeting real-world hardware design requirements, particularly in handling complex designs and ensuring code correctness. Our evaluations reveal that the functional correctness rate of LLM-generated HDL code decreases significantly as design complexity increases. In this paper, we propose the AutoSilicon framework, which aims to scale up the hardware design capability of LLMs. AutoSilicon incorporates an agent system that 1) decomposes large-scale, complex code design tasks into smaller, simpler tasks; 2) provides a compilation and simulation environment that enables the LLM to compile and test each piece of code it generates; and 3) introduces a series of optimization strategies. Experimental results demonstrate that AutoSilicon can scale hardware designs to projects with code equivalent to over 10,000 tokens. In terms of design quality, it improves the syntax and functional correctness rates compared with approaches that do not employ any extensions. For example, compared to directly generating HDL code using GPT-4-turbo, AutoSilicon enhances the syntax correctness rate by an average of 35.8% and improves functional correctness by an average of 35.6%.
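A schematic sketch of the decompose–generate–test loop the abstract outlines; every function name here is a placeholder, not AutoSilicon's actual interface, and it assumes Icarus Verilog (`iverilog`) is installed for the compile check (the simulation step is elided).

```python
# Placeholder agent loop: decompose a large HDL design task into subtasks,
# let an LLM generate each module, and gate acceptance on a compile check.
# Assumes Icarus Verilog (iverilog) is on PATH; simulation is elided.
import pathlib
import subprocess
import tempfile

def compiles(hdl_code: str) -> bool:
    """Return True if the Verilog snippet compiles under iverilog."""
    with tempfile.TemporaryDirectory() as d:
        src = pathlib.Path(d) / "module.v"
        src.write_text(hdl_code)
        out = pathlib.Path(d) / "a.out"
        r = subprocess.run(["iverilog", "-o", str(out), str(src)],
                           capture_output=True)
        return r.returncode == 0

def design(spec: str, decompose, generate, max_retries: int = 3) -> list[str]:
    """decompose: spec -> list of subtask strings; generate: subtask -> HDL code."""
    modules = []
    for subtask in decompose(spec):        # 1) split the design into small tasks
        for _ in range(max_retries):
            code = generate(subtask)       # 2) the LLM writes one module
            if compiles(code):             # 3) accept only code that compiles
                modules.append(code)
                break
    return modules
```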
ISBN (digital): 9781461301233
ISBN (print): 9780387950921; 9781461265221
Communication and concurrency are essential in understanding complex dynamic systems, and many theories have been developed to deal with them, such as Petri nets, CSP and ACP. Among them, CCS (Calculus of Communicating Systems) is one of the most important and mathematically developed models of communication and concurrency. Various behavior equivalences between agents, such as (strong and weak) bisimilarity, observation congruence, trace equivalence, testing equivalence and failure equivalence, are central notions in process calculus. In real applications of process calculus, specification and implementation are described as two agents, correctness of programs is treated as a certain behavior equivalence between specification and implementation, and the proof of correctness of programs is then the task of establishing some behavior equivalence between them. The goal of this book is to provide suitable and useful concepts and tools for understanding and analyzing approximate correctness of programs in concurrent systems. Throughout this book the focus is on the framework of process calculus, and the main idea is to construct natural and reasonable topological structures that suitably reveal the mechanism of approximate computation in process calculus, and to work out various relationships among processes that are compatible with these topological structures.
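To make the central notion of behavior equivalence concrete, here is a small illustrative check (not from the book) for strong bisimilarity on a finite labeled transition system, computed as a greatest-fixpoint refinement of the full relation; real CCS agents may be infinite-state, so this is only a finite-state sketch.

```python
# Strong bisimilarity on a finite labeled transition system (LTS), computed by
# refining the full relation until it is a bisimulation (greatest fixpoint).
def bisimilar(states, trans, p, q):
    """trans: iterable of (source, label, target) triples."""
    succ = {}
    for s, l, t in trans:
        succ.setdefault(s, set()).add((l, t))
    rel = {(a, b) for a in states for b in states}
    changed = True
    while changed:
        changed = False
        for a, b in list(rel):
            # every move of a must be matched by b, and vice versa
            fwd = all(any(lb == la and (ta, tb) in rel
                          for lb, tb in succ.get(b, ()))
                      for la, ta in succ.get(a, ()))
            bwd = all(any(la == lb and (ta, tb) in rel
                          for la, ta in succ.get(a, ()))
                      for lb, tb in succ.get(b, ()))
            if not (fwd and bwd):
                rel.discard((a, b))
                changed = True
    return (p, q) in rel

# Two vending-machine agents with the same branching behavior are bisimilar.
states = {"P0", "P1", "Q0", "Q1"}
trans = [("P0", "coin", "P1"), ("P1", "coffee", "P0"),
         ("Q0", "coin", "Q1"), ("Q1", "coffee", "Q0")]
print(bisimilar(states, trans, "P0", "Q0"))  # True
```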
Connected Autonomous Vehicle (CAV) driving, as a data-driven intelligent driving technology within the Internet of Vehicles (IoV), presents significant challenges to the efficiency and security of real-time data management. The combination of Web3.0 and edge content caching holds promise for providing low-latency data access for CAVs' real-time applications. Web3.0 enables the reliable pre-migration of frequently requested content from content providers to edge nodes. However, identifying optimal edge node peers for joint content caching and replacement remains challenging due to the dynamic nature of traffic flow in IoV. To address these challenges, this article introduces GAMA-Cache, an innovative edge content caching methodology leveraging Graph Attention Networks (GAT) and Multi-Agent Reinforcement Learning (MARL). GAMA-Cache formulates the cooperative edge content caching problem as a constrained Markov decision process. It employs a MARL technique predicated on cooperation effectiveness to learn optimal caching decisions, with GAT augmenting the information extracted from adjacent nodes. A distinct collaborator selection mechanism is also developed to streamline communication between agents, filtering out those with minimal correlations from the vector input to the policy network. Experimental results demonstrate that, in terms of service latency and delivery failure, GAMA-Cache outperforms other state-of-the-art MARL solutions for edge content caching in IoV.
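A rough sketch of the attention-based neighbor aggregation with collaborator filtering that the abstract describes; the shapes, the residual combination, and the correlation threshold are assumptions for illustration, not GAMA-Cache's implementation.

```python
# Attention-weighted aggregation over neighboring edge nodes, GAT-style, with a
# correlation-based collaborator filter applied before the policy-network input.
import numpy as np

def gat_aggregate(h_self, h_neigh, W, a, corr, corr_threshold=0.2):
    """h_self: (d,); h_neigh: (k, d); corr: (k,) agent correlations."""
    keep = corr >= corr_threshold          # collaborator selection: drop weak links
    h_kept = h_neigh[keep]
    z_self = W @ h_self
    if len(h_kept) == 0:
        return z_self
    z_n = h_kept @ W.T
    scores = np.array([a @ np.concatenate([z_self, z]) for z in z_n])
    scores = np.where(scores > 0, scores, 0.2 * scores)  # LeakyReLU, as in GAT
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                    # attention over surviving neighbors
    return z_self + (alpha[:, None] * z_n).sum(axis=0)   # residual combination

rng = np.random.default_rng(0)
d = 4
out = gat_aggregate(rng.normal(size=d), rng.normal(size=(3, d)),
                    rng.normal(size=(d, d)), rng.normal(size=2 * d),
                    corr=np.array([0.5, 0.1, 0.9]))
print(out.shape)  # (4,)
```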
ISBN (digital): 9783319041728
ISBN (print): 9783319041711
This book is organized into thirteen chapters that range over the relevant approaches and tools in data integration, modeling, analysis and knowledge discovery for signaling pathways. Since the book is also addressed to students, the contributors present the main results and techniques in an easily accessible and understandable way, together with many references and examples. Chapter 1 presents an introduction to signaling pathways, including motivations, background knowledge and relevant data mining techniques for pathway data analysis. Chapter 2 presents a variety of data sources and data analyses related to signaling pathways, including data integration and relevant data mining applications. Chapter 3 presents a framework to measure the inconsistency between heterogeneous biological databases; a GO-based (Gene Ontology) strategy is proposed to associate different data sources. Chapter 4 presents identification of positive regulation of kinase pathways in terms of association rule mining; the results derived from this project could be used to predict essential relationships and enable a comprehensive understanding of kinase pathway interaction. Chapter 5 presents graphical model-based methods to identify regulatory networks of protein kinases. Chapter 6 introduces a framework using negative association rule mining to discover featured inhibitory regulation patterns and the relationships between the involved regulation factors; it is necessary not only to detect the objects that exhibit a positive regulatory role in a kinase pathway but also to discover those objects that inhibit the regulation. Chapter 7 presents methods to model ncRNA secondary structure data in terms of stems, loops and marked labels, and illustrates how to find matched structure patterns for a given query. Chapter 8 shows an interval-based distance metric for computing the distance between conserved RNA secondary structures. Chapter 9 presents a framework to explore structura…
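As one concrete illustration of the interval-based idea mentioned for Chapter 8, here is a toy distance between two stem-interval sets of RNA secondary structures; the representation and the overlap-based formula are assumptions for this sketch, and the book's metric is more elaborate.

```python
# Toy interval-based distance between two RNA secondary structures, each given
# as a list of stem intervals (start, end) over base positions.
def interval_distance(stems_a, stems_b):
    def overlap(x, y):
        return max(0, min(x[1], y[1]) - max(x[0], y[0]))
    def span(stems):
        return sum(end - start for start, end in stems)
    matched = sum(max((overlap(a, b) for b in stems_b), default=0)
                  for a in stems_a)
    total = span(stems_a) + span(stems_b)
    return 1 - (2 * matched / total if total else 0)  # 0 = identical coverage

# Similar structures (slightly shifted stems) yield a small distance.
print(round(interval_distance([(1, 10), (20, 30)], [(2, 11), (22, 28)]), 3))  # 0.176
```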
The rapid advancements in big data and the Internet of Things (IoT) have significantly accelerated the digital transformation of medical institutions, leading to the widespread adoption of Digital Twin Healthcare (DTH). The Cloud DTH Platform (CDTH) serves as a cloud-based framework that integrates DTH models, healthcare resources, patient data, and medical services. By leveraging real-time data from medical devices, the CDTH platform enables intelligent healthcare services such as disease prediction and medical resource optimization. However, the platform functions as a system of systems (SoS), comprising interconnected yet independent healthcare services. This complexity is further compounded by the integration of both black-box AI models and domain-specific mechanistic models, which poses challenges for ensuring the interpretability and trustworthiness of DTH models. To address these challenges, we propose a Model-Based Systems Engineering (MBSE)-driven DTH modeling methodology derived from systematic requirement and functional analyses. To implement this methodology effectively, we introduce a DTH model development approach using the X language, along with a comprehensive toolchain designed to streamline the development process. Together, the methodology and toolchain form a robust framework that enables engineers to efficiently develop interpretable and trustworthy DTH models for the CDTH platform. By integrating domain-specific mechanistic models with AI algorithms, the framework enhances model transparency and reliability. Finally, we validate our approach through a case study involving elderly patient care, demonstrating its effectiveness in supporting the development of DTH models that meet healthcare and interpretability requirements.
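A minimal sketch of the mechanistic-plus-AI combination the abstract advocates, assuming a hypothetical heart-rate example: the domain model stays inspectable while a learned term corrects only its residual. None of the names or model forms come from the paper or the X language.

```python
# Hypothetical hybrid predictor: an inspectable mechanistic model plus a learned
# residual correction, illustrating the mechanistic + AI combination. Names and
# model forms are invented for this sketch.
import numpy as np

def mechanistic_heart_rate(activity: np.ndarray) -> np.ndarray:
    """Domain model: resting rate plus a linear activity response (illustrative)."""
    return 60.0 + 40.0 * activity

class HybridModel:
    """Fits only the residual, so the mechanistic part stays interpretable."""
    def __init__(self):
        self.w = 0.0

    def fit(self, x: np.ndarray, y: np.ndarray) -> None:
        r = y - mechanistic_heart_rate(x)     # learn what the domain model misses
        self.w = float(x @ r) / float(x @ x)  # one-parameter least squares

    def predict(self, x: np.ndarray) -> np.ndarray:
        return mechanistic_heart_rate(x) + self.w * x

x = np.linspace(0.0, 1.0, 50)
y = 60.0 + 45.0 * x + np.random.default_rng(1).normal(0.0, 1.0, 50)
m = HybridModel()
m.fit(x, y)
print(round(m.w, 2))  # close to 5: the learned correction on top of the domain model
```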
ISBN (digital): 9781461552611
ISBN (print): 9780792386506; 9781461373995
This volume contains contributions from world-leading experts from both the academic and industrial communities. The first part of the volume consists of invited papers by international authors describing possibilistic logic in decision analysis, fuzzy dynamic programming in optimization, linguistic modifiers for word computation, and theoretical treatments and applications of fuzzy reasoning. The second part is composed of eleven contributions from Chinese authors focusing on some of the key issues in the field: stable adaptive fuzzy control systems, partial evaluations and fuzzy reasoning, fuzzy wavelet neural networks, analysis and applications of genetic algorithms, partial repeatability, rough set reduction for data enriching, limits of agents in process calculus, medium logic and its evolution, and factor space canes. These contributions are not only theoretically sound and well-formulated, but are also coupled with applicability implications and/or implementation treatments. The domains of application realized or implied are: decision analysis, word computation, databases and knowledge discovery, power systems, control systems, and multi-destinational routing. Furthermore, the articles contain material that is an outgrowth of recently conducted research, addressing fundamental and important issues of fuzzy logic and soft computing.
Explainable Fake News Detection (EFND) is a new challenge that aims to verify news authenticity and provide clear explanations for its decisions. Traditional EFND methods often treat the tasks of classification and explanation as separate, ignoring the fact that explanation content can help enhance fake news detection. To bridge this gap, we present a new solution: the End-to-end Explainable Fake News Detection Network (EExpFND). Our model includes an evidence-claim variational causal inference component, which not only utilizes explanation content to improve fake news detection but also employs a variational approach to address the distributional bias between the ground-truth explanations in the training set and the predicted explanations in the test set. Additionally, we incorporate a masked attention network to model the nuanced relationships between evidence and claims. Our comprehensive tests across two public datasets show that EExpFND sets a new benchmark in performance. The code is available at https://***/r/EExpFND-F5C6.
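A small sketch of masked attention between claim and evidence tokens, in the spirit of the masked attention network described above; the dimensions and masking rule are assumptions, not the paper's implementation.

```python
# Masked attention from claim tokens (queries) to evidence tokens (keys/values);
# masked pairs receive a large negative score before the softmax.
import numpy as np

def masked_attention(Q, K, V, mask):
    """Q: (m, d) claim; K, V: (n, d) evidence; mask: (m, n), 1 = may attend."""
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    scores = np.where(mask.astype(bool), scores, -1e9)  # block masked pairs
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)                # row-wise softmax
    return w @ V                                        # evidence-aware claim reps

rng = np.random.default_rng(0)
m, n, d = 3, 5, 8
mask = (rng.random((m, n)) > 0.3).astype(float)  # e.g. keep only relevant evidence
out = masked_attention(rng.normal(size=(m, d)), rng.normal(size=(n, d)),
                       rng.normal(size=(n, d)), mask)
print(out.shape)  # (3, 8)
```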