ISBN (digital): 9798350373363
ISBN (print): 9798350373370
Current electric grids were designed a century ago for simple and limited needs. An electric grid helps the power generation company (PGC) supply electricity to homes and industrial consumers. Traditional electric grids have one-way interactions that make it difficult to cater to ever-changing and growing demand. Smart grids (SGs) introduced the bidirectional flow of electricity and the exchange of information between consumers and PGCs. An SG combines networks of communication, controls, computers, and automation, allowing integration with emerging technologies. Blockchain (BC) is a digital distributed ledger (DDL) that ensures the security, reliability, and immutability of data. The integration of BC and SGs is an emerging idea that could also provide a platform for community-based electricity trading. This study reviews community-based energy trading with BC-based SGs and discusses the first community-based peer-to-peer (P2P) solar energy trading project, Quartierstrom, run by the Swiss Federal Office of Energy (SFOE), Switzerland. The findings of this study contribute a valuable reference to the body of knowledge and deepen the comprehension of P2P energy trading for both new and experienced researchers, enabling them to identify new opportunities and challenges.
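To make the distributed-ledger idea concrete, the minimal Python sketch below records P2P energy trades in a hash-linked, append-only chain. The block layout, field names (seller, buyer, kwh, price_chf), and hashing scheme are illustrative assumptions for exposition, not the Quartierstrom implementation.

    # Minimal append-only ledger sketch for P2P energy trades.
    # All field names (seller, buyer, kwh, price_chf) are illustrative.
    import hashlib
    import json
    import time

    def block_hash(block):
        # Hash the canonical JSON form of a block (order-stable keys).
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    class TradeLedger:
        def __init__(self):
            genesis = {"index": 0, "prev": "0" * 64, "trade": None, "ts": 0.0}
            self.chain = [genesis]

        def record_trade(self, seller, buyer, kwh, price_chf):
            prev = self.chain[-1]
            block = {
                "index": prev["index"] + 1,
                "prev": block_hash(prev),   # links block to its predecessor
                "trade": {"seller": seller, "buyer": buyer,
                          "kwh": kwh, "price_chf": price_chf},
                "ts": time.time(),
            }
            self.chain.append(block)

        def verify(self):
            # Immutability check: each block must cite its predecessor's hash.
            return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                       for i in range(1, len(self.chain)))

    ledger = TradeLedger()
    ledger.record_trade("prosumer_A", "consumer_B", kwh=2.5, price_chf=0.45)
    assert ledger.verify()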
ISBN (print): 9781728162515
In distributed storage, dynamic loads often impose a heavy burden, causing problems such as hot spots and unbalanced regions. In this paper, we focus on the problems of dynamic, unbalanced loads in the distributed database HBase. First, we propose a new intelligent load-balancing framework that supports the collection and analysis of dynamic load information. Then, two efficient load-balancing strategies are proposed to solve the hot-spot and unbalanced-region problems. Finally, we evaluate the strategies on HBase using real-world big data on distributed clusters. Compared to existing balancing strategies, the proposed load-balancing strategies make more effective use of resources and reduce response time under large dynamic loads.
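As a rough illustration of a hot-spot mitigation strategy of the kind described, the sketch below greedily moves regions off the most loaded server until loads fall within a tolerance. The load metric, tolerance, and data layout are illustrative assumptions, not the paper's actual strategies.

    # Hypothetical greedy hot-spot mitigation sketch: repeatedly move a
    # region off the most loaded server until loads are within tolerance.
    # Load numbers and the tolerance are illustrative assumptions.

    def balance(servers, tolerance=1.2, max_moves=100):
        """servers: dict server -> {region: request_rate}. Mutated in place."""
        def load(s):
            return sum(servers[s].values())
        for _ in range(max_moves):
            hot = max(servers, key=load)
            cold = min(servers, key=load)
            gap = load(hot) - load(cold)
            if load(hot) <= tolerance * max(load(cold), 1.0):
                break
            # Move the largest region that still shrinks the hot/cold gap;
            # any region smaller than the gap strictly reduces it.
            candidates = [r for r, rate in servers[hot].items() if rate < gap]
            if not candidates:
                break
            region = max(candidates, key=servers[hot].get)
            servers[cold][region] = servers[hot].pop(region)

    servers = {"rs1": {"r1": 900, "r2": 50}, "rs2": {"r3": 60}}
    balance(servers)   # r2 migrates from rs1 to rs2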
ISBN (print): 9781728187808
The emerging resource-sharing, container-based virtualization is prevalent in IT, as it offers a much lighter deployment in cloud environments than VM-based virtualization. Distributed data-processing workloads executing in parallel take advantage of the resource sharing, fast delivery, and excellent portability of containerization, but also suffer from resource competition and performance interference. Especially for memory virtualization, data-processing frameworks allocate physical memory (i.e., RAM) and swap space to applications as specified by users, without considering the cache characteristics and parallelism of those applications, which degrades performance and significantly protracts latency, made worse by over-provisioning. We design an efficient memory allocation scheme (RITA) for containerized parallel systems to improve data-processing latency. RITA monitors the memory usage and cache characteristics of applications and dynamically re-allocates memory resources. We implement RITA in a real-world system, and it can easily migrate to other container-based virtualization environments. Our experimental results show that RITA provides remarkable latency improvements for memory-intensive distributed data-processing workloads.
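A minimal sketch of a RITA-style control loop is given below, assuming two hypothetical hooks: container_stats for monitoring and set_memory_limit for enforcement. Neither is a real container-runtime API; the weighting heuristic is likewise an assumption. The loop shifts RAM toward containers whose working sets miss cache most.

    # Sketch of a cache-aware memory reallocation loop. The monitoring hook
    # (container_stats) and enforcement hook (set_memory_limit) are
    # hypothetical placeholders, not a real container-runtime API.
    import time

    def reallocate(containers, total_ram_bytes, container_stats, set_memory_limit):
        """Shift RAM toward containers whose working sets miss cache most."""
        stats = {c: container_stats(c) for c in containers}
        # Weight each container by miss rate * working-set size: containers
        # that re-read data which fell out of RAM benefit most from more RAM.
        weights = {c: s["cache_miss_rate"] * s["working_set_bytes"]
                   for c, s in stats.items()}
        total_w = sum(weights.values()) or 1.0
        for c in containers:
            set_memory_limit(c, int(weights[c] / total_w * total_ram_bytes))

    def control_loop(containers, total_ram_bytes, stats_fn, limit_fn, period_s=5):
        # Daemon-style loop: re-balance memory every period_s seconds.
        while True:
            reallocate(containers, total_ram_bytes, stats_fn, limit_fn)
            time.sleep(period_s)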
ISBN (print): 9781728190747
Empirical Dynamic Modeling (EDM) is a nonlinear time series causal inference framework. The latest implementation of EDM, cppEDM, has only been used for small datasets due to computational cost. With the growth of data collection capabilities, there is a great need to identify causal relationships in large datasets. We present mpEDM, a parallel distributed implementation of EDM optimized for modern GPU-centric supercomputers. We improve the original algorithm to reduce redundant computation and optimize the implementation to fully utilize hardware resources such as GPUs and SIMD units. As a use case, we run mpEDM on the AI Bridging Cloud Infrastructure (ABCI) using datasets of an entire animal brain sampled at single-neuron resolution to identify dynamical causation patterns across the brain. mpEDM is 1,530x faster than cppEDM, and a dataset containing 101,729 neurons was analyzed in 199 seconds on 512 nodes. This is the largest EDM causal inference achieved to date.
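For readers unfamiliar with EDM, the NumPy sketch below shows simplex projection, the delay-embedding plus k-nearest-neighbor kernel at EDM's core; the neighbor search is the part such systems parallelize across GPUs. This is a single-threaded illustration under standard EDM conventions, not mpEDM's implementation.

    # Single-threaded NumPy sketch of simplex projection, the k-NN kernel
    # at the core of EDM; mpEDM parallelizes this search across GPUs.
    import numpy as np

    def delay_embed(x, E, tau=1):
        # Rows are lag vectors [x_t, x_{t-tau}, ..., x_{t-(E-1)tau}].
        n = len(x) - (E - 1) * tau
        return np.column_stack([x[(E - 1 - j) * tau:(E - 1 - j) * tau + n]
                                for j in range(E)])

    def simplex_predict(library, target, E=3, tau=1, Tp=1):
        """Predict target's Tp-ahead values from nearest library neighbors."""
        emb = delay_embed(library, E, tau)
        L = emb[:-Tp]              # library lag vectors
        future = emb[Tp:, 0]       # their Tp-ahead observed values
        Q = delay_embed(target, E, tau)
        preds = np.empty(len(Q))
        for i, q in enumerate(Q):
            d = np.linalg.norm(L - q, axis=1)
            nn = np.argsort(d)[:E + 1]                  # E+1 nearest neighbors
            w = np.exp(-d[nn] / max(d[nn][0], 1e-12))   # distance-based weights
            preds[i] = np.sum(w * future[nn]) / np.sum(w)
        return preds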
Edge computing provides a feasible technical solution to the resource and energy supply constraints of deep learning applications on terminal devices. By migrating part of a deep learning application's computation from the terminal to a cloud-edge server, the computing pressure on the terminal device can be greatly relieved and its energy consumption reduced to a certain extent. However, during computation migration, the context-switching overhead of deep learning applications results in long delays. This paper studies the problem of minimizing the completion time of deep learning applications under edge computing. We model a deep learning application as a task graph and consider constraints such as task-switching delay, energy consumption, and task dependencies. Without changing the structure of the task graph, we transform the problem into a task-scheduling-based application completion time minimization problem. We design a parallel scheduling algorithm based on greedy search: accounting for context-switching overhead and task data dependencies, it allocates tasks to multiple processors for parallel processing by calculating the expected completion time of each task, thus reducing application completion time. Experimental results verify that the greedy scheduling algorithm optimizes the completion time of different deep learning applications compared with baseline algorithms.
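The sketch below illustrates a greedy list scheduler of the kind described: each task is placed on the processor minimizing its expected finish time, charging a context-switch penalty whenever a predecessor's output lives on another processor. The task graph, runtimes, and penalty value are illustrative assumptions, not the paper's experiment setup.

    # Greedy list-scheduling sketch for a deep learning task graph: each
    # task goes to the processor where its expected finish time (including
    # a context-switch penalty when a predecessor ran elsewhere) is least.

    def greedy_schedule(tasks, deps, runtime, procs, switch_delay):
        """tasks in topological order; deps: task -> list of predecessors."""
        finish = {}                      # task -> finish time
        placed = {}                      # task -> processor
        free = {p: 0.0 for p in procs}   # processor -> time it becomes free
        for t in tasks:
            best = None
            for p in procs:
                # A task may start only after its inputs arrive; data made
                # on another processor pays the switch/transfer penalty.
                ready = max([finish[d] + (switch_delay if placed[d] != p else 0.0)
                             for d in deps.get(t, [])] or [0.0])
                eft = max(ready, free[p]) + runtime[t]
                if best is None or eft < best[0]:
                    best = (eft, p)
            finish[t], placed[t] = best
            free[placed[t]] = finish[t]
        return placed, max(finish.values())

    deps = {"conv2": ["conv1"], "pool": ["conv1"], "fc": ["conv2", "pool"]}
    runtime = {"conv1": 4.0, "conv2": 3.0, "pool": 1.0, "fc": 2.0}
    placed, makespan = greedy_schedule(["conv1", "conv2", "pool", "fc"],
                                       deps, runtime, ["edge", "device"], 0.5)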
ISBN (print): 9781510642768; 9781510642751
In cloud computing, the resource allocation of a cloud platform faces not only single-node resource requests but also complex multi-node requests. In particular, users who need to run parallel or distributed tasks have very strict delay and bandwidth requirements for communication between nodes in the cloud cluster. Existing cloud platforms often allocate resources one virtual machine at a time, ignoring, or finding it difficult to guarantee, the link resources between nodes; that is, cloud clusters face a multi-resource allocation problem. Therefore, this paper proposes a new cloud resource description method and improves the particle swarm-based cloud resource allocation method. Simulation results show that the proposed method can effectively allocate cloud resources, improve the average revenue and resource utilization of cloud resources, reduce resource cost by at least 10% compared with traditional methods, and achieve shorter task execution times (within 30 ms).
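A generic particle swarm sketch for placing a multi-node request onto hosts is shown below. The encoding (one dimension per requested VM, rounded to a host index) and the cost model (penalizing capacity violations, rewarding co-location to reduce inter-node links) are illustrative assumptions, not the paper's formulation.

    # Generic particle swarm sketch for placing a multi-node request onto
    # hosts. Encoding and cost model are illustrative, not the paper's.
    import numpy as np

    rng = np.random.default_rng(0)
    hosts_cpu = np.array([16.0, 8.0, 8.0, 4.0])   # per-host free CPU cores
    request = np.array([4.0, 4.0, 2.0])           # cores per requested VM

    def cost(position):
        hosts = np.clip(np.rint(position), 0, len(hosts_cpu) - 1).astype(int)
        used = np.zeros_like(hosts_cpu)
        np.add.at(used, hosts, request)
        overload = np.maximum(used - hosts_cpu, 0).sum()  # capacity violations
        spread = len(set(hosts))          # distinct hosts -> inter-node links
        return 100.0 * overload + spread

    n, dim, iters = 20, len(request), 50
    x = rng.uniform(0, len(hosts_cpu) - 1, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 0, len(hosts_cpu) - 1)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    print("placement:", np.rint(g).astype(int), "cost:", cost(g))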
ISBN (digital): 9798350368741
ISBN (print): 9798350368758
Multivariate time series anomaly detection (MTAD) poses a challenge due to temporal and feature dependencies. The key to better detection performance lies in accurately capturing the dependencies between variables within the sliding window and effectively leveraging them. Existing studies rely on domain knowledge to pre-set the window size, and they overlook the strength of dependencies when calculating their direction from variable similarity. This paper proposes GSLTE, a graph structure learning method for MTAD. GSLTE employs the Fast Fourier Transform to iteratively segment the whole series, selecting the dominant Fourier frequency as the window size for each subsequence within the minimum interval. GSLTE quantifies the direction and strength of dependencies using variable-lag transfer entropy, computed via the Dynamic Time Warping method, to learn asymmetric links between variables. Extensive experiments show that GNN-based MTAD methods applying GSLTE further improve anomaly detection performance and outperform state-of-the-art competitors.
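The dominant-frequency idea can be illustrated in a few lines of NumPy: pick the window size as the period of the strongest non-DC Fourier component. GSLTE's iterative segmentation and minimum-interval constraint are omitted from this sketch, and the test signal is an assumed example.

    # Sketch of the dominant-Fourier-frequency idea for window selection:
    # take the period of the strongest non-DC component as the window size.
    import numpy as np

    def dominant_window(x):
        x = np.asarray(x, dtype=float)
        spectrum = np.abs(np.fft.rfft(x - x.mean()))   # drop the DC offset
        freqs = np.fft.rfftfreq(len(x), d=1.0)
        k = 1 + np.argmax(spectrum[1:])                # strongest non-DC bin
        return max(2, int(round(1.0 / freqs[k])))      # period in samples

    t = np.arange(1000)
    x = (np.sin(2 * np.pi * t / 24)
         + 0.1 * np.random.default_rng(0).standard_normal(1000))
    print(dominant_window(x))   # ~24 for a signal with period 24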
ISBN (digital): 9798350368741
ISBN (print): 9798350368758
Self-supervised time series anomaly detection (TSAD) achieves remarkable performance improvements by extracting high-level data semantics through proxy tasks. Nonetheless, most existing self-supervised TSAD techniques rely on manual or neural transformations when designing proxy tasks, overlooking the intrinsic temporal patterns of time series. This paper proposes LTPAD, a local temporal pattern learning-based time series anomaly detection method. LTPAD first generates sub-sequences. Pairwise sub-sequences naturally manifest proximity relationships along the time axis, and such correlations can be used to construct supervision and train neural networks to learn temporal patterns. The time interval between two sub-sequences serves as the label for each sub-sequence pair. By classifying these labeled data pairs, our model captures the local temporal patterns of time series, thereby modeling temporal pattern-aware "normality". Anomaly scores for test data are obtained by evaluating their conformity to the learned patterns shared by the training data. Extensive experiments show that LTPAD significantly outperforms state-of-the-art competitors.
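The proxy-task construction described above can be sketched as follows: cut the series into sub-sequences and label each pair by the number of strides between them, yielding a classification dataset. Window length, stride, and the set of gap classes are illustrative choices, not the paper's settings.

    # Sketch of the proxy-task data generation: cut the series into
    # sub-sequences and label each pair by the time interval between them,
    # turning temporal-pattern learning into a classification problem.
    import numpy as np

    def make_pairs(x, win=32, stride=16, max_gap_class=4):
        subs = [x[i:i + win] for i in range(0, len(x) - win + 1, stride)]
        pairs, labels = [], []
        for i in range(len(subs)):
            for j in range(i, min(i + max_gap_class + 1, len(subs))):
                pairs.append(np.stack([subs[i], subs[j]]))
                labels.append(j - i)   # class = gap in strides (0..max_gap)
        return np.stack(pairs), np.array(labels)

    x = np.sin(np.linspace(0, 50, 1024))
    pairs, labels = make_pairs(x)
    print(pairs.shape, np.bincount(labels))   # (N, 2, 32) pairs, class counts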
Presents the introductory welcome message from the conference proceedings. May include the conference officers' congratulations to all involved with the conference event and publication of the proceedings record.
In JointCloud computing, multi-party participation introduces complexity and uncertainty. All participants in JointCloud computing require both continuous supervision and appropriate privacy protection. Traditional supervision methods usually adopt a centralized information-interaction mode, which suffers from defects such as collusion of interests, single points of failure, and privacy disclosure. Building a decentralized supervision mechanism has become a new research direction. In this paper, we propose PPSS, a privacy-preserving supervision scheme based on blockchain, which decentralizes the supervision of participants in JointCloud computing and combines "double encryption" and "threshold encryption" techniques to provide privacy protection. While making full use of the decentralization of the blockchain, a committee is established to carry out the analysis and decision-making tasks of supervision and privacy protection. Experimental results indicate that PPSS can balance performance and security through reasonable configuration of the committee.
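To illustrate the threshold idea behind committee-held secrets, the sketch below implements textbook Shamir (t, n) secret sharing over a prime field: any t committee members can jointly recover a key, while fewer learn nothing. This is a generic construction for exposition, not the PPSS scheme itself.

    # Textbook Shamir (t, n) secret sharing over a prime field. Generic
    # sketch of the threshold idea, not the PPSS construction itself.
    import secrets

    P = 2**127 - 1   # a Mersenne prime, large enough for a short key

    def split(secret, t, n):
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        # Share i is the degree-(t-1) polynomial evaluated at x = i;
        # the secret is the value at x = 0 (the constant term).
        return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    key = secrets.randbelow(P)
    shares = split(key, t=3, n=5)
    assert recover(shares[:3]) == key   # any 3 of 5 members suffice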