ISBN:
(Print) 9783031547751; 9783031547768
Blockchains suffer from scalability limitations, both in terms of latency and throughput. Various approaches to alleviate this have been proposed, most prominent of which are payment and state channels, sidechains, commit-chains, rollups, and sharding. This work puts forth a novel commit-chain protocol, Bitcoin Clique. It is the first trustless commit-chain that is compatible with all major blockchains, including (an upcoming version of) Bitcoin. Clique enables a pool of users to pay each other off-chain, i.e., without interacting with the blockchain, thus sidestepping its bottlenecks. A user can directly send its coins to any other user in the Clique: in contrast to payment channels, a user's funds are not tied to a specific counterparty, avoiding the need for multi-hop payments. An untrusted operator facilitates payments by verifiably recording them. Furthermore, a novel technique of independent interest is used at the core of Bitcoin Clique. It builds on Adaptor Signatures and allows the extraction of the witness only after two signatures are published on the blockchain.
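As a point of reference for the primitive named above, the sketch below implements a textbook Schnorr-style adaptor signature in a toy group: a pre-signature becomes a valid signature only once the holder of a secret witness t completes it, and publishing the completed signature reveals t. This illustrates only the standard building block, not the paper's two-signature extraction variant; the group parameters are insecure toy values chosen for readability.

```python
# Toy Schnorr adaptor signature (illustrative only; NOT secure parameters,
# and NOT the Bitcoin Clique construction itself).
import hashlib
import secrets

p, q, g = 2039, 1019, 4           # tiny Schnorr group: g generates the order-q subgroup

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def pre_sign(x, X, msg, T):
    """Pre-signature: verifiable 'up to' the witness t behind adaptor point T."""
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)
    c = H(R * T % p, X, msg)      # challenge binds the combined nonce R*T
    return R, (r + c * x) % q     # s' is NOT yet a valid Schnorr signature

def adapt(s_pre, t):
    """Holder of the witness t completes the signature."""
    return (s_pre + t) % q

def verify(X, msg, R, T, s):
    c = H(R * T % p, X, msg)
    return pow(g, s, p) == R * T * pow(X, c, p) % p

def extract(s, s_pre):
    """Anyone seeing both the published signature and the pre-signature
    learns the witness: t = s - s'."""
    return (s - s_pre) % q

# demo
t = secrets.randbelow(q - 1) + 1   # secret witness
T = pow(g, t, p)                   # public adaptor point
x, X = keygen()
R, s_pre = pre_sign(x, X, b"pay 1 coin", T)
s = adapt(s_pre, t)
assert verify(X, b"pay 1 coin", R, T, s)
assert extract(s, s_pre) == t
```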
ISBN:
(Print) 9783031610592; 9783031610608
The operation of ground support vehicles on busy airport taxiways and runways is a growing concern, especially when the driver does not have an adequate field of view (FoV). Airport ground vehicle accidents due to vision obstruction have resulted in loss of revenue, injuries, and fatalities. The research presented in this paper examines and quantifies vision obstruction in airport ground support vehicles using an early design methodology based on digital human modeling (DHM). The methodology is demonstrated through two case studies based on actual airport accidents influenced by vision obstruction, involving pushback tractors and an aircraft refueling truck. Each study comprises three-dimensional (3D) computer-aided design (CAD) models of the vehicles and four DHM manikins with varying anthropometries. Obstruction-causing vehicle elements are identified, and the CAD models are then redesigned to improve the driver's forward FoV. Results from this study show that DHM can facilitate a proactive design approach: potential hazards in ground vehicle designs can be pinpointed and changes retrofitted via digital models during early-phase design, reducing the risk of driver vision obstruction.
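To make "quantifying vision obstruction" concrete, here is a minimal sight-line calculation of the kind a DHM tool performs internally; the eye heights and hood dimensions are hypothetical stand-ins, not values from the paper's CAD models or manikins.

```python
# Given a manikin's eye height and an obstructing edge (e.g. the cowl of a
# pushback tractor), find the nearest ground point the driver can see,
# by similar triangles along the sight line grazing the obstruction.
def nearest_visible_ground_point(eye_height_m: float,
                                 edge_height_m: float,
                                 edge_distance_m: float) -> float:
    """Distance ahead of the eye at which the ground first becomes visible."""
    if edge_height_m >= eye_height_m:
        return float("inf")       # edge at/above eye level: ground never visible
    return edge_distance_m * eye_height_m / (eye_height_m - edge_height_m)

# Two illustrative seated eye heights (small vs large anthropometry):
for eye in (1.05, 1.35):          # metres above ground
    d = nearest_visible_ground_point(eye, edge_height_m=0.9, edge_distance_m=1.8)
    print(f"eye {eye:.2f} m -> ground visible from {d:.1f} m ahead")
```

The same geometry run over a range of manikin anthropometries is what exposes which vehicle elements blind the shortest drivers first.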
ISBN:
(Print) 9789819771837; 9789819771844
With the development of the Internet of Things, research on edge computing has surged. The essence of edge computing is to bring processing closer to data sources, aiming to minimize latency and enhance efficiency. However, resource constraints, network bandwidth limitations, and dynamic demands in edge computing present optimization challenges. Traditional reinforcement learning methods require manual feature engineering and cannot automatically learn high-level features, making them unsuitable for high-dimensional states and complex decision-making. To address these challenges, this paper investigates the offloading problem in edge networks, developing a model based on Multi-Agent Deep Q-Learning (MA-DQN). It introduces a self-learning offloading strategy in which each user acts independently, observes its local environment, and offloads optimally without knowing other users' conditions. Simulation results demonstrate that the proposed approach minimizes system utility, approaching the optimal solution.
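The per-user structure described above can be sketched as follows: each agent trains its own DQN over a purely local observation and chooses between local execution and offloading. The observation layout, network size, and toy reward below are illustrative assumptions, not the paper's MA-DQN formulation.

```python
import random
from collections import deque

import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 3, 2   # [task size, queue length, channel gain] -> {local, offload}

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, obs):
        return self.net(obs)

class Agent:
    """One independent learner; it never sees other users' states."""
    def __init__(self):
        self.q = QNet()
        self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=10_000)

    def act(self, obs, eps=0.1):
        if random.random() < eps:
            return random.randrange(N_ACTIONS)   # epsilon-greedy exploration
        with torch.no_grad():
            return int(self.q(torch.tensor(obs)).argmax())

    def train_step(self, batch_size=32, gamma=0.95):
        if len(self.buffer) < batch_size:
            return
        obs, act, rew, nxt = zip(*random.sample(self.buffer, batch_size))
        obs, nxt = torch.tensor(obs), torch.tensor(nxt)
        act, rew = torch.tensor(act).unsqueeze(1), torch.tensor(rew)
        target = rew + gamma * self.q(nxt).max(1).values.detach()
        pred = self.q(obs).gather(1, act).squeeze(1)
        loss = nn.functional.mse_loss(pred, target)
        self.opt.zero_grad(); loss.backward(); self.opt.step()

# demo: four independent users interacting with a toy environment
agents = [Agent() for _ in range(4)]
for _ in range(200):
    for ag in agents:
        s = [random.random() for _ in range(OBS_DIM)]
        a = ag.act(s)
        r = -s[0] if a == 0 else -0.5 * s[0] - s[2]   # toy delay-based cost
        s2 = [random.random() for _ in range(OBS_DIM)]
        ag.buffer.append((s, a, r, s2))
        ag.train_step()
```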
ISBN:
(Print) 9783031585555; 9783031585531
We propose a new supervised manifold visualisation method, slipmap, that finds local explanations for complex black-box supervised learning methods and creates a two-dimensional embedding of the data items such that data items with similar local explanations are embedded nearby. This work extends and improves our earlier algorithm and addresses its shortcomings: poor scalability, inability to make predictions, and a tendency to find patterns in noise. We present our visualisation problem and provide an efficient GPU-optimised library to solve it. We experimentally verify that slipmap is fast and robust to noise, provides explanations that are on par with or better than those of other local explanation methods, and is usable in practice.
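The notion of "similar local explanations" can be made concrete with a toy alternating scheme: fit a small set of local linear models to a black-box's predictions and reassign each point to the model that explains it best. This is not the slipmap objective or its GPU library; it only illustrates the underlying idea of grouping points by local explanation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 1))
y_black_box = np.where(X[:, 0] < 0, -2 * X[:, 0], 3 * X[:, 0])  # stand-in black box

k = 2
assign = rng.integers(0, k, size=len(X))
for _ in range(20):
    models = []
    for j in range(k):                       # (1) refit each local linear model
        mask = assign == j
        if mask.sum() < 2:                   # degenerate cluster: flat placeholder
            models.append(np.zeros(2)); continue
        A = np.c_[X[mask], np.ones(mask.sum())]
        coef, *_ = np.linalg.lstsq(A, y_black_box[mask], rcond=None)
        models.append(coef)
    preds = np.stack([np.c_[X, np.ones(len(X))] @ m for m in models])
    assign = np.argmin((preds - y_black_box) ** 2, axis=0)   # (2) reassign points

for j, m in enumerate(models):
    print(f"local model {j}: y = {m[0]:+.2f}*x {m[1]:+.2f}")
```

On this piecewise-linear toy data the two local models recover the two slopes; an embedding would then place points sharing a local model near each other.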
ISBN:
(Print) 9783031556005; 9783031556012
We prove that in any n-vertex complete graph there is a collection P of (1 + o(1))n paths that strongly separates any pair of distinct edges e, f, meaning that there is a path in P which contains e but not f. Furthermore, for certain classes of n-vertex αn-regular graphs we find a collection of (√(3α + 1) − 1 + o(1))n paths that strongly separates any pair of edges. Both results are best-possible up to the o(1) term.
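For readability, the separation property and the two bounds can be restated formally; the phrasing below follows the abstract (with the regularity parameter read as αn and standard amsthm environments assumed), not the paper's exact statements.

```latex
\begin{definition}
A family $\mathcal{P}$ of paths in a graph $G$ \emph{strongly separates} the
edges of $G$ if for every ordered pair of distinct edges $e, f \in E(G)$
there is a path $P \in \mathcal{P}$ with $e \in P$ and $f \notin P$.
\end{definition}

\begin{theorem}
$K_n$ admits a strongly separating family of $(1 + o(1))\,n$ paths, and every
$\alpha n$-regular $n$-vertex graph in the classes considered admits one of
size $\bigl(\sqrt{3\alpha + 1} - 1 + o(1)\bigr)\,n$; both bounds are tight up
to the $o(1)$ term.
\end{theorem}
```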
ISBN:
(Digital) 9783031537677
ISBN:
(Print) 9783031537660; 9783031537677
Electron microscopy (EM) images exhibit anisotropic axial resolution due to characteristics inherent to the imaging modality, presenting challenges in analysis and downstream tasks. Recently proposed deep-learning-based isotropic reconstruction methods have addressed this issue; however, training the deep neural networks requires either isotropic ground truth volumes, prior knowledge of the degradation process, or the point spread function (PSF). Moreover, these methods struggle to generate realistic volumes when confronted with high scaling factors (e.g., ×8, ×10). In this paper, we propose a diffusion-model-based framework that overcomes the limitations of requiring reference data or prior knowledge about the degradation process. Our approach utilizes 2D diffusion models to consistently reconstruct 3D volumes and is well suited for highly downsampled data. Extensive experiments conducted on two public datasets demonstrate the robustness and superiority of leveraging the generative prior compared to supervised learning methods. Additionally, we demonstrate our method's feasibility for self-supervised reconstruction, which can restore a single anisotropic volume without any training data. The source code is available on GitHub: https://***/hvcl/diffusion-em-recon.
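The reference-free setting can be sketched schematically: model the axial degradation as z-downsampling and alternate a denoising prior with a data-consistency step that enforces agreement with the observed anisotropic volume. In the toy loop below a Gaussian filter stands in for the 2D diffusion prior purely so the code runs; it is not the paper's model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def A(vol, f):                       # degradation: average f consecutive z-slices
    z = vol.shape[0] // f * f
    return vol[:z].reshape(-1, f, *vol.shape[1:]).mean(axis=1)

def A_T(obs, f):                     # crude transpose: repeat each slice f times
    return np.repeat(obs, f, axis=0)

def reconstruct(obs, f, iters=50, step=0.5):
    x = A_T(obs, f)                  # initialise by nearest-neighbour z-upsampling
    for _ in range(iters):
        x = gaussian_filter(x, sigma=0.7)        # prior step (diffusion stand-in)
        x = x - step * A_T(A(x, f) - obs, f)     # data-consistency step
    return x

iso = np.random.rand(64, 32, 32)     # pretend ground-truth isotropic volume
aniso = A(iso, f=8)                  # observed volume, 8x downsampled in z
recon = reconstruct(aniso, f=8)
print(aniso.shape, "->", recon.shape)
```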
ISBN:
(Digital) 9783031696510
ISBN:
(Print) 9783031696503; 9783031696510
In this paper, we investigate different approaches for generating synthetic microdata from open-source aggregated data. Specifically, we focus on macro-to-micro data synthesis. We explore the potential of the Gaussian copula framework to estimate joint distributions from aggregated data. Our generated synthetic data is intended for educational and software-testing use cases. We propose three scenarios for achieving realistic and high-quality synthetic microdata: (1) zero knowledge, (2) internal knowledge, and (3) external knowledge. The three scenarios assume different knowledge of the underlying properties of the real microdata, i.e., standard deviations and covariances. Our evaluation includes matching tests to assess the privacy of the synthetic datasets. Our results indicate that macro-to-micro synthesis achieves better privacy preservation than other methods, demonstrating both the potential and the challenges of synthetic data generation in maintaining data privacy while providing useful data for analysis.
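A minimal version of the copula step might look as follows, assuming the published aggregates provide marginal parameters and a correlation matrix; all numeric values and the two variables are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

corr = np.array([[1.0, 0.6],          # assumed correlation between age and income
                 [0.6, 1.0]])
n = 1000

# 1. sample from the Gaussian copula: correlated normals -> correlated uniforms
z = rng.multivariate_normal(np.zeros(2), corr, size=n)
u = stats.norm.cdf(z)

# 2. push the uniforms through marginal inverse CDFs fitted to the aggregates
age    = stats.norm.ppf(u[:, 0], loc=41.0, scale=12.0)      # mean/std from macro data
income = stats.lognorm.ppf(u[:, 1], s=0.5, scale=35_000)    # skewed marginal

synthetic = np.column_stack([age, income])
print(np.corrcoef(synthetic.T)[0, 1])  # dependence survives the transforms (~0.6)
```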
ISBN:
(Print) 9783031585555; 9783031585531
State-of-the-art approaches for multi-target prediction, such as Regressor Chains, can exploit interdependencies among the targets and model the outputs jointly by flowing predictions from the first output to the last. While these models are very useful in applications where targets are highly interdependent and should be modeled jointly, they are unable to answer queries when targets are not only mutually dependent but also have joint constraints over the output. In addition, existing models are unsuitable when certain target values are fixed or manually imputed prior to inference, since the flow of predictions cannot cascade backward from an already-imputed output. Here we present a solution to this problem: a backward inference algorithm for Regressor Chains based on Metropolis-Hastings sampling. We evaluate the proposed approach on several metrics using both synthetic and real-world data. We show that our approach notably reduces errors compared to traditional marginal inference methods that overlook joint modeling. Furthermore, we show that the proposed method can provide useful insights into a problem in conservation science: predicting the distribution of potential natural vegetation.
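A toy version of backward inference in a two-target chain y1 → y2 can be written directly: clamp y2 and sample plausible y1 values with Metropolis-Hastings against the chain's noise model. The linear stage models and Gaussian noise scales below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def chain1(x):      return 2.0 * x + 1.0          # stage 1: y1 | x
def chain2(x, y1):  return 0.5 * x - 1.5 * y1     # stage 2: y2 | x, y1
SIGMA1, SIGMA2 = 1.0, 0.5

def log_post(y1, x, y2_fixed):
    """log p(y1 | x) + log p(y2_fixed | x, y1), up to a constant."""
    lp1 = -0.5 * ((y1 - chain1(x)) / SIGMA1) ** 2
    lp2 = -0.5 * ((y2_fixed - chain2(x, y1)) / SIGMA2) ** 2
    return lp1 + lp2

def mh_backward(x, y2_fixed, n=5000, prop_scale=0.5):
    y1 = chain1(x)                                # start at the forward prediction
    samples = []
    for _ in range(n):
        cand = y1 + rng.normal(0, prop_scale)
        if np.log(rng.random()) < log_post(cand, x, y2_fixed) - log_post(y1, x, y2_fixed):
            y1 = cand
        samples.append(y1)
    return np.array(samples[n // 2:])             # drop burn-in

s = mh_backward(x=1.0, y2_fixed=-4.0)
print(f"y1 | x=1.0, y2=-4.0:  mean={s.mean():.2f}  sd={s.std():.2f}")
```

Since the clamped y2 here is exactly consistent with the forward prediction, the posterior over y1 concentrates near it but with reduced variance, showing how information flows backward through the chain.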
ISBN:
(Print) 9789819723898; 9789819723904
With the continuous development of intelligence and network connectivity, the smart cockpit is gradually transforming into a multifunctional value space. Smart devices are heterogeneous, massive in number, complex, and contextually dynamic, which makes it difficult for the system to provide accurate services. Introducing knowledge graphs into smart cockpit situations can meet users' needs in specific scenarios while delivering experiences that exceed expectations. This paper constructs a smart cockpit situation model with context, service, and user as the core elements, not only refining the context dimension but also incorporating context into the definition of service. Firstly, we analyze the elements that constitute the smart cockpit situation model and explore the connections between them. Secondly, a top-down approach is used to construct the smart cockpit situation ontology, with the smart cockpit situation model as a guide. Finally, the smart cockpit situation model is instantiated to build a knowledge graph for fitness scenarios. The results show that coverage relationships between scenarios can be inferred from the coverage relationships between their contexts. Furthermore, using a family travel scenario as an example, we verify that context can improve the accuracy of the service. The situation knowledge graph constructed in this paper can not only comprehensively describe smart cockpit scene data but also allow services to adapt to dynamic changes in contextual data.
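The coverage inference mentioned in the results can be illustrated with a few triples: one scenario covers another when its context set is a superset of the other's. The scenario and context names below are invented examples, not the paper's ontology.

```python
# Minimal triple-based sketch of inferring scenario coverage from contexts.
triples = [
    ("FamilyTravel", "hasContext", "InVehicle"),
    ("FamilyTravel", "hasContext", "MultiplePassengers"),
    ("FamilyTravel", "hasContext", "LongTrip"),
    ("Commute",      "hasContext", "InVehicle"),
]

def contexts(scenario):
    return {o for s, p, o in triples if s == scenario and p == "hasContext"}

def covers(a, b):
    """Scenario a covers scenario b if a's contexts include all of b's."""
    return contexts(a) >= contexts(b)

print(covers("FamilyTravel", "Commute"))   # True: inferred, never stored explicitly
```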
ISBN:
(Print) 9783031575396; 9783031575402
With the rise of social media users, the rapid transmission of news without sufficient verification has become a common problem. The proliferation of fake news across various social media platforms poses enormous harm to society and affects the credibility of the news industry. Therefore, it is critical to develop effective automated algorithms to detect deceptive articles. We show that existing deep-learning models for fake news detection have limited generalizability when confronted with a variety of news sources. Current deep learning models frequently fail to generalize adequately across different datasets, resulting in inferior performance. In this paper, we investigate the performance of numerous deep learning models on multiple fake news datasets, each with distinct characteristics. Our goal is to assess these models' performance within the same dataset and across other datasets in the fake news domain, acquiring useful insights into the models' robustness and generalizability. We carried out an extensive set of experiments with five deep-learning models and seven datasets. The models are tested both within and across domains, i.e., on the same domain on which they were trained and on other domains not seen during training. Our results show that these models do not generalize across datasets and domains: they exhibit high accuracy (around 99%) when tested on the dataset they were trained on but experience a significant drop in performance (around 30%) when evaluated on different datasets.
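The within- vs cross-domain protocol can be sketched in a few lines: train on one dataset, evaluate on its own held-out split and on a dataset never seen in training. A TF-IDF plus logistic-regression pipeline stands in for the paper's deep models, and the caller is assumed to supply two labeled text datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def evaluate_transfer(texts_a, labels_a, texts_b, labels_b):
    """Train on dataset A; report in-domain vs cross-domain accuracy."""
    Xtr, Xte, ytr, yte = train_test_split(texts_a, labels_a, test_size=0.2,
                                          random_state=0)
    model = make_pipeline(TfidfVectorizer(max_features=50_000),
                          LogisticRegression(max_iter=1000))
    model.fit(Xtr, ytr)
    in_domain = accuracy_score(yte, model.predict(Xte))
    cross     = accuracy_score(labels_b, model.predict(texts_b))
    return in_domain, cross   # expect a large gap, mirroring the abstract

# usage (hypothetical data): evaluate_transfer(texts_a, labels_a, texts_b, labels_b)
```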