ISBN: (Print) 9798350386066; 9798350386059
Although micro-mobility has become a popular and indispensable mode of transportation in recent years, it has also introduced a large number of traffic accidents. Timely tracking and predicting maneuvers holds the potential to prevent accidents through prompt warnings and interventions. However, the open and simple structure of micro-mobility vehicles makes it hard to install sophisticated infrastructure for maneuver prediction. In this paper, we argue that micro-mobility body dynamics provide sufficient information for maneuver prediction. Our preliminary study suggests that micro-mobility body dynamic patterns appear beforehand and exhibit a correlation with steering maneuvers. We accordingly present RideGuard, which leverages the built-in Inertial Measurement Unit (IMU) on smartphones to predict steering maneuvers. Through a dual-stream CNN deep learning architecture, RideGuard effectively captures complex patterns and feature relationships from the time and frequency domains. Our extensive real-traffic experiments involving 20 participants demonstrate the superiority of RideGuard: employing a 3s detection window, RideGuard attains a minimum of 94% precision in maneuver prediction with a 5s prediction time gap. The low cost and rapid response of RideGuard enable feasible deployment and promote safer riding practices. Additionally, we open-source our well-labeled dataset to facilitate further research.
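To make the dual-stream idea concrete, the following is a minimal sketch (not the authors' implementation) of a CNN that consumes a 3 s smartphone IMU window in the time domain and its FFT magnitude in the frequency domain; layer sizes, channel counts, the 100 Hz sampling rate, and the three-class output are illustrative assumptions.

```python
# Minimal dual-stream CNN sketch for IMU-based maneuver prediction.
# Only the dual-stream (time + frequency) idea comes from the abstract;
# all architectural details here are assumptions.
import torch
import torch.nn as nn


class DualStreamCNN(nn.Module):
    def __init__(self, in_channels=6, num_classes=3):
        super().__init__()

        def stream():
            return nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )

        self.time_stream = stream()   # raw accelerometer/gyroscope samples
        self.freq_stream = stream()   # FFT magnitudes of the same window
        self.classifier = nn.Linear(64 * 2, num_classes)  # e.g. left/right/straight

    def forward(self, x_time):
        # Frequency-domain view derived from the same 3 s window.
        x_freq = torch.fft.rfft(x_time, dim=-1).abs()
        # Resample so both streams share the same convolutional backbone shape.
        x_freq = nn.functional.interpolate(x_freq, size=x_time.shape[-1])
        t = self.time_stream(x_time).flatten(1)
        f = self.freq_stream(x_freq).flatten(1)
        return self.classifier(torch.cat([t, f], dim=1))


# Example: batch of 8 windows, 6 IMU axes, 300 samples (~3 s at an assumed 100 Hz).
logits = DualStreamCNN()(torch.randn(8, 6, 300))
```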
ISBN: (Print) 9798350386066; 9798350386059
Graph neural networks (GNNs) have demonstrated effectiveness across diverse application domains by leveraging graph information to uncover intrinsic correlations alongside feature representation. This enables GNNs to explore richer information than conventional neural networks, resulting in enhanced predictive performance. However, the integration of graph-structured data into the learning process poses two challenges. First, the sparsity and irregularity of the graph representation result in inefficient and expensive memory accesses on throughput-oriented accelerators such as GPUs. Second, as GNN training involves interleaved graph operations to extract topological information and neural operations to update node or edge embeddings, the joint optimization of these two operations on accelerators is challenging due to their distinct resource requirements. Among GNN graph operations, graph attention, which helps focus GNN training on highly correlated nodes, is critical to training performance and model accuracy. However, our profiling of representative GNNs reveals that irregular memory access during graph attention accounts for the dominant overhead in GNN training. To address this issue, this paper proposes a more efficient graph attention method, MEGA, to accelerate GNN training. MEGA converts the original graph representation into one that regularizes memory access patterns for graph attention. Specifically, during preprocessing, MEGA traverses a graph to derive a schedule for graph attention and uses the schedule to reorganize the graph representation for optimized memory access. MEGA explores several techniques to balance memory access efficiency and preserve the original graph properties to avoid loss of model accuracy. Experimental results with representative GNNs and graph datasets show that MEGA consistently outperforms conventional graph attention methods with up to 3x speedup.
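As a rough illustration of the preprocessing idea (regularizing memory access for graph attention), the sketch below groups a graph's edges by destination node into CSR-style contiguous slices, so that the per-node attention softmax reads sequential memory. This is a generic reorganization under our own assumptions, not MEGA's actual scheduling algorithm.

```python
# Illustrative edge reorganization for memory-friendly graph attention.
import numpy as np


def reorder_edges_for_attention(src, dst, num_nodes):
    """Return edge arrays grouped by destination plus CSR-style offsets."""
    order = np.argsort(dst, kind="stable")          # group all edges of each target node
    src_sorted, dst_sorted = src[order], dst[order]
    # offsets[v]..offsets[v+1] indexes every incoming edge of node v, so the
    # attention reduction over v's neighbors reads one contiguous slice.
    offsets = np.zeros(num_nodes + 1, dtype=np.int64)
    np.add.at(offsets, dst_sorted + 1, 1)
    offsets = np.cumsum(offsets)
    return src_sorted, dst_sorted, offsets, order


# Tiny example: per-node neighbor gathers become contiguous reads.
src = np.array([0, 2, 1, 3, 0, 2])
dst = np.array([1, 1, 0, 2, 2, 0])
s, d, off, perm = reorder_edges_for_attention(src, dst, num_nodes=4)
for v in range(4):
    neighbors = s[off[v]:off[v + 1]]                # contiguous slice per node
```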
ISBN: (Print) 9798350385939; 9798350385922
With the increasing penetration of renewable sources, Distributed Energy Resources (DERs) are emerging as crucial components of modern power systems. In a distribution system integrated with distributed generation (DG) units, efficient power management is crucial to minimize energy losses and maximize the effective utilization of electrical energy. Power losses in distribution systems tend to be high due to the low X/R ratio and high AT&C losses. This paper presents the minimization of power losses in a radial distribution system integrated with DGs. The load flow analysis is carried out using the Forward-Backward Sweep (FBS) method. The optimal power levels of the DGs for power loss reduction are obtained using the Particle Swarm Optimization (PSO) algorithm. The proposed method proves efficient on the standard IEEE 15-bus radial distribution system.
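The following is a minimal PSO sketch for choosing DG injection levels that minimize losses. In the paper's setting the objective would run the FBS load flow on the IEEE 15-bus feeder; here `total_loss` is only a placeholder stand-in, and the swarm size, inertia, and acceleration constants are illustrative assumptions.

```python
# Minimal PSO sketch for DG power-level selection (placeholder objective).
import numpy as np


def total_loss(dg_power_kw):
    # Placeholder: replace with an FBS load-flow loss evaluation (kW losses).
    return np.sum((dg_power_kw - np.array([400.0, 250.0, 300.0])) ** 2)


def pso(objective, dims=3, particles=20, iters=100, p_max=1000.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, p_max, (particles, dims))       # DG outputs in kW
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                             # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, p_max)                    # respect DG capacity limits
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()


best_levels, best_loss = pso(total_loss)
```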
ISBN: (Print) 9798350386066; 9798350386059
Containers are widely deployed in clouds. There are two common container architectures: operating system-level (OS-level) containers and virtual machine-level (VM-level) containers; typical examples are runc and Kata, respectively. It is well known that VM-level containers provide better isolation than OS-level containers, but at a higher overhead. Although there are quantitative analyses of the performance gap between these two container architectures, they rarely discuss the performance gap under the constrained resources provisioned to containers. Since high-density deployment of containers is in high demand in the cloud, each container is provisioned with limited resources specified by the cgroup mechanism. In this paper, we provide an in-depth analysis of the storage and network (two key aspects) performance differences between runc and Kata under varying resource constraints. We identify configuration implications that are crucial to performance and find that some of them are not exposed by the Kata interfaces. Based on that, we propose a profiling tool to automatically offer configuration suggestions for optimizing container performance. Our evaluation shows that the auto-generated configuration can improve the performance of MySQL by up to 107% in the TPC-C benchmark compared with the default Kata setup.
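As a hedged sketch of the general idea behind an automatic configuration adviser, the code below reads the cgroup v2 limits provisioned to a container and translates them into a suggested guest-VM sizing for a VM-level runtime. The cgroup file names (`cpu.max`, `memory.max`) are standard; the cgroup path, the headroom rule, and the mapping onto actual Kata configuration options are assumptions, not the paper's tool.

```python
# Sketch: derive VM sizing hints from cgroup v2 limits (illustrative only).
import math
from pathlib import Path


def read_cgroup_limits(cgroup_dir):
    cg = Path(cgroup_dir)
    quota, period = (cg / "cpu.max").read_text().split()    # e.g. "200000 100000"
    mem = (cg / "memory.max").read_text().strip()            # bytes, or "max"
    cpus = None if quota == "max" else float(quota) / float(period)
    mem_mib = None if mem == "max" else int(mem) // (1024 * 1024)
    return cpus, mem_mib


def suggest_vm_sizing(cpus, mem_mib):
    # Round the CPU quota up to whole vCPUs and add memory headroom for the
    # guest kernel, so the workload is not throttled inside the VM.
    suggestion = {}
    if cpus is not None:
        suggestion["vcpus"] = max(1, math.ceil(cpus))
    if mem_mib is not None:
        suggestion["memory_mib"] = mem_mib + 128              # illustrative headroom
    return suggestion


limits = read_cgroup_limits("/sys/fs/cgroup/mycontainer")     # hypothetical path
print(suggest_vm_sizing(*limits))
```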
ISBN: (Print) 9798350386066; 9798350386059
Hyperledger Fabric introduced an innovative architecture called execute-order-validate (EOV) that enables concurrent processing of transactions. However, the architecture suffers from issues such as excessive invalid transactions and serialization limitations in scenarios with high transaction conflicts, which restrict its applicability in real-time and high-performance settings. To address these limitations, we propose ParFabric to enhance the EOV architecture. First, we analyze four essential characteristics required of the transaction reordering algorithm within this architecture. We propose a heuristic dynamic reordering algorithm to reduce the number of invalid transactions. This is achieved through real-time identification and early abort of transactions based on weighted pre-ordering and the construction of a transaction conflict graph. Second, leveraging the transaction conflict graph, we introduce a novel optimal block packing strategy based on transaction dependencies. This strategy replaces the total transaction order with a partial order, enabling parallel validation and commit at the block level, thereby increasing system throughput while reducing transaction latency. Experimental results indicate that ParFabric demonstrates excellent performance in terms of vertical scaling of peers. Additionally, at the same infrastructure cost, ParFabric provides 2.2x and 1.6x higher throughput than FabricPlusPlus and FabricSharp, respectively, in high-conflict scenarios.
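To illustrate the general approach (not ParFabric's weighted heuristic), the sketch below builds a read-write conflict graph over the transactions in a block and then greedily keeps an order that serializes conflicts, aborting early any transaction that would read a key already written within the block. The transaction format and the "fewest outgoing conflicts first" weight are simplifying assumptions.

```python
# Simplified conflict-graph construction and greedy reordering/early abort.
from collections import defaultdict


def build_conflict_graph(txs):
    """txs: list of (tx_id, read_set, write_set). Edge u->v if u writes a key v reads."""
    edges = defaultdict(set)
    for u_id, _, u_writes in txs:
        for v_id, v_reads, _ in txs:
            if u_id != v_id and u_writes & v_reads:
                edges[u_id].add(v_id)
    return edges


def schedule(txs, edges):
    """Commit a tx only if no already-committed tx in the block wrote its reads."""
    committed_writes, order, aborted = set(), [], []
    # Process transactions with fewer outgoing conflicts first (a simple weight proxy).
    for tx_id, reads, writes in sorted(txs, key=lambda t: len(edges[t[0]])):
        if reads & committed_writes:
            aborted.append(tx_id)          # would read a stale version: abort early
        else:
            order.append(tx_id)
            committed_writes |= writes
    return order, aborted


txs = [("t1", {"a"}, {"b"}), ("t2", {"b"}, {"c"}), ("t3", {"c"}, {"a"})]
order, aborted = schedule(txs, build_conflict_graph(txs))
```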
ISBN: (Print) 9781728116518
The proceedings contain 84 papers. The topics discussed include: on preventing symbolic execution attacks by low cost obfuscation; code clone tracer (CCT): a tracking tool for analyzing human and social factors in creating and reusing code clones; connecting personal health records together with EHR using tangle; a multi-client web-based interactive HCI for interactive supercomputing; toward sustainable communities with a community currency: a study in car sharing; extraction of useful features from neural network for facial expression recognition; and extracting related concepts from Wikipedia by using a graph database system.
ISBN: (Print) 9783031086786
The proceedings contain 12 papers. The special focus in this conference is on Formal Techniques for Distributed Objects, Components, and Systems. The topics include: Monitoring Hyperproperties with Circuits; Encodability Criteria for Quantum Based Systems; LTL Under Reductions with Weaker Conditions Than Stutter Invariance; Computing Race Variants in Message-Passing Concurrent Programming with Selective Receives; Process Algebra Can Save Lives: Static Analysis of XACML Access Control Policies Using mCRL2; The Reversible Temporal Process Language; Branch-Well-Structured Transition Systems and Extensions; Offline and Online Monitoring of Scattered Uncertain Logs Using Uncertain Linear Dynamical Systems; Co-engineering Safety-Security Using Statistical Model Checking; Fault-Tolerant Multiparty Session Types; and Effective Reductions of Mealy Machines.
ISBN: (Print) 9798350339864
The cloud computing paradigm uses a Virtual Machine (VM)-based resource provisioning strategy. Allocating a VM to an appropriate Physical Machine (PM) is challenging. Efficient VM allocation reduces the number of under-utilized PMs in the cloud. We developed a new algorithm that considers the compatibility of VMs on a PM before allocation. We performed a basic experiment and present the results in this article.
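The abstract does not define "compatibility", so the following is only a hedged sketch of what compatibility-aware placement could look like: a first-fit pass that, besides CPU and memory capacity, applies a compatibility predicate between the incoming VM and the VMs already hosted on a PM. The predicate used here (avoid co-locating two I/O-heavy VMs) is purely an illustrative assumption.

```python
# Sketch: first-fit VM placement with a compatibility check (illustrative).
def compatible(vm, hosted):
    # Assumed rule: do not co-locate two I/O-heavy VMs on the same PM.
    return not (vm["io_heavy"] and any(v["io_heavy"] for v in hosted))


def place_vms(vms, pms):
    """pms: list of dicts with remaining 'cpu'/'mem' capacity and a 'hosted' list."""
    placement = {}
    for vm in vms:
        for i, pm in enumerate(pms):
            fits = vm["cpu"] <= pm["cpu"] and vm["mem"] <= pm["mem"]
            if fits and compatible(vm, pm["hosted"]):
                pm["cpu"] -= vm["cpu"]
                pm["mem"] -= vm["mem"]
                pm["hosted"].append(vm)
                placement[vm["id"]] = i
                break
    return placement


pms = [{"cpu": 8, "mem": 32, "hosted": []}, {"cpu": 8, "mem": 32, "hosted": []}]
vms = [{"id": "v1", "cpu": 4, "mem": 8, "io_heavy": True},
       {"id": "v2", "cpu": 2, "mem": 4, "io_heavy": True}]
print(place_vms(vms, pms))   # v2 lands on the second PM despite free capacity on the first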
ISBN: (Print) 9798350369458; 9798350369441
The widespread utilization of Internet of Things (IoT) devices has resulted in an exponential increase in data at the Internet's edges. This trend, combined with the rapid growth of machine learning (ML) applications, necessitates the execution of learning tasks across the entire spectrum of computing resources - from the device, to the edge, to the cloud. This paper investigates the execution of machine learning algorithms within the edge-cloud continuum, focusing on their implications from a distributed computing perspective. We explore the integration of traditional ML algorithms, leveraging edge computing benefits such as low-latency processing and privacy preservation, along with cloud computing capabilities offering virtually limitless computational and storage resources. Our analysis offers insights into optimizing the execution of machine learning applications by decomposing them into smaller components and distributing these across processing nodes in edge-cloud architectures. By utilizing the Apache Spark framework, we define an efficient task allocation solution for distributing ML tasks across edge and cloud layers. Experiments on a clustering application in an edge-cloud setup confirm the effectiveness of our solution compared to highly centralized alternatives, in which cloud resources are extensively used for handling large volumes of data from IoT devices.
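The sketch below illustrates the decomposition idea in Spark, not the paper's exact allocation policy: RDD partitions stand in for edge nodes that pre-aggregate raw IoT readings into compact local summaries, and the driver (standing in for the cloud) runs the final clustering with Spark MLlib. The cluster count, micro-batch size, and summarization rule are illustrative assumptions.

```python
# Sketch of an edge/cloud split for a clustering application on Apache Spark.
from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("edge-cloud-clustering").getOrCreate()

# Raw sensor readings; each RDD partition models one edge node's local data.
raw = spark.sparkContext.parallelize(
    [(float(i % 7), float(i % 11)) for i in range(10_000)], numSlices=8)


def edge_summarize(rows):
    """Edge-side task: emit coarse local centroids instead of shipping raw data."""
    rows = list(rows)
    for g in range(0, len(rows), 100):               # illustrative micro-batches
        chunk = rows[g:g + 100]
        n = len(chunk)
        yield (sum(x for x, _ in chunk) / n, sum(y for _, y in chunk) / n)


summaries = raw.mapPartitions(edge_summarize)        # runs on the "edge" executors

# Cloud-side task: final KMeans over the much smaller summary set.
df = spark.createDataFrame(
    [(Vectors.dense(x, y),) for x, y in summaries.collect()], ["features"])
model = KMeans(k=3, seed=1).fit(df)
print(model.clusterCenters())
```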
ISBN: (Print) 9798350381993; 9798350382006
Performance is a crucial indicator in blockchain system evaluation. However, it is difficult to measure and comprehensively evaluate the performance of blockchain systems, particularly due to their complexity and diversity. Based on a comprehensive analysis of common blockchain systems and existing performance testing tools, this paper proposes a novel, efficient, and user-friendly performance testing tool for blockchain systems named TrustedBench. The tool supports all-round analysis and testing, providing a powerful means of understanding and analyzing the performance of blockchain systems. The architecture design, test execution process, and important performance indicators of blockchain systems are introduced, and their calculation methods are further expounded. Finally, the overall results from the implementation of TrustedBench are presented and interpreted, revealing the operation and performance characteristics of several different blockchain products. With this tool, users can gain a more comprehensive understanding of the performance of blockchain systems. Furthermore, TrustedBench effectively enhances the efficiency of the blockchain platform in many real-world scenarios.
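As a minimal sketch of how a blockchain benchmarking tool typically derives its headline indicators, the code below computes throughput (TPS), average and tail latency, and success rate from per-transaction submit/commit timestamps. The record format here is an illustrative assumption, not TrustedBench's actual log schema or calculation method.

```python
# Sketch: benchmark metric calculation from per-transaction timestamps.
import statistics


def summarize(records):
    """records: list of dicts with 'submit_ts', 'commit_ts' (seconds) and 'ok' (bool)."""
    ok = [r for r in records if r["ok"] and r["commit_ts"] is not None]
    latencies = [r["commit_ts"] - r["submit_ts"] for r in ok]
    span = max(r["commit_ts"] for r in ok) - min(r["submit_ts"] for r in ok)
    return {
        "tps": len(ok) / span if span > 0 else float("nan"),
        "avg_latency_s": statistics.mean(latencies),
        "p95_latency_s": statistics.quantiles(latencies, n=20)[-1],
        "success_rate": len(ok) / len(records),
    }


# Synthetic example run: 1000 transactions, fixed 0.5 s confirmation delay.
records = [{"submit_ts": i * 0.01, "commit_ts": i * 0.01 + 0.5, "ok": True}
           for i in range(1000)]
print(summarize(records))
```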