High penetration of renewable resources into traditional Medium Voltage (MV) or secondary distribution grids may pose severe challenges due to bidirectional power flow and low-inertia generation interconnections to weak...
ISBN:
(Print) 9781665440714; 9781665411615
Along with high-performance computer systems, the Application Programming Interface (API) used is crucial for developing efficient solutions for modern parallel and distributed computing. Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL) are two popular APIs that allow General-Purpose Graphics Processing Units (GPGPUs, GPUs for short) to accelerate processing in applications where they are supported. This paper presents a comparative study of OpenCL and CUDA and their impact on parallel and distributed computing. Mandelbrot set generation (representing computation on complex numbers), the Marching Squares algorithm (representing embarrassingly parallel computation), and the Bitonic Sorting algorithm (representing distributed computing) are implemented using OpenCL (version 2.x) and CUDA (version 9.x) and run on a Linux-based High Performance Computing (HPC) system. The HPC system uses an Intel i7-9700K processor and an Nvidia GTX 1070 GPU card. Experimental results from 25 different tests using Mandelbrot set generation, the Marching Squares algorithm, and the Bitonic Sorting algorithm are analyzed. According to the experimental results, CUDA performs better than OpenCL (up to 7.34x speedup). However, in most cases, OpenCL performs at an acceptable rate (CUDA's speedup is less than 2x).
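The escape-time iteration at the heart of Mandelbrot set generation makes the workload's parallelism concrete: every pixel is computed independently, which is why it maps naturally onto CUDA or OpenCL work-items. Below is a minimal serial Python sketch of the per-pixel computation; it is illustrative only, not the paper's GPU implementation, and all names and parameters are assumptions:

```python
# Serial sketch of the per-pixel Mandelbrot "kernel". Each pixel's iteration
# count depends only on its own coordinate, so on a GPU each work-item would
# run mandelbrot_escape() for one pixel with no inter-thread communication.

def mandelbrot_escape(c: complex, max_iter: int = 100) -> int:
    """Return the iteration at which |z| exceeds 2, or max_iter if bounded."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

def render(width: int, height: int, max_iter: int = 100) -> list:
    """Map each pixel into the region [-2, 1] x [-1.5i, 1.5i] and iterate."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            c = complex(-2.0 + 3.0 * x / width, -1.5 + 3.0 * y / height)
            row.append(mandelbrot_escape(c, max_iter))
        image.append(row)
    return image
```

The nested loops over pixels are exactly what a GPU launch replaces with a 2D grid of threads.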
ISBN:
(Print) 9781728166773
The proceedings contain 71 papers. The topics discussed include: implementing a comprehensive networks-on-chip generator with optimal configurations; ECS2: a fast erasure coding library for GPU-accelerated storage systems with parallel & direct IO; autoscaling high-throughput workloads on container orchestrators; Grade10: a framework for performance characterization of distributed graph processing; data life aware model updating strategy for stream-based online deep learning; energy optimization and analysis with EAR; parallel particle advection bake-off for scientific visualization workloads; HAN: a hierarchical AutotuNed collective communication framework; and evaluating worksharing tasks on distributed environments.
ISBN:
(Print) 9783030845216
The proceedings contain 130 papers. The special focus in this conference is on Intelligent Computing. The topics include: A Comparable Study on Dimensionality Reduction Methods for Endmember Extraction; Hyperspectral Image Classification with Locally Linear Embedding, 2D Spatial Filtering, and SVM; A Hierarchical Retrieval Method Based on Hash Table for Audio Fingerprinting; Automatic Extraction of Document Information Based on OCR and Image Registration Technology; Using Simplified Slime Mould Algorithm for Wireless Sensor Network Coverage Problem; Super-Large Medical Image Storage and Display Technology Based on Concentrated Points of Interest; Person Re-identification Based on Hash; A Robust and Automatic Recognition Method of Pointer Instruments in Power System; Partial Distillation of Deep Feature for Unsupervised Image Anomaly Detection and Segmentation; An Evolutionary Neuron Model with Dendritic Computation for Classification and Prediction; Speech Recognition Method for Home Service Robots Based on CLSTM-HMM Hybrid Acoustic Model; Serialized Local Feature Representation Learning for Infrared-Visible Person Re-identification; A Novel Decision Mechanism for Image Edge Detection; Rapid Earthquake Assessment from Satellite Imagery Using RPN and Yolo v3; Attention-Based Deep Multi-scale Network for Plant Leaf Recognition; Short Video Users' Personal Privacy Leakage and Protection Measures; An Efficient Video Steganography Method Based on HEVC; Analysis on the Application of Blockchain Technology in Ideological and Political Education in Universities; Parallel Security Video Streaming in Cloud Server Environment; An Efficient Video Steganography Scheme for Data Protection in H.265/HEVC; An Improved Genetic Algorithm for Distributed Job Shop Scheduling Problem; A Robust Lossless Steganography Method Based on H.264/AVC; Detection of Pointing Position by Omnidirectional Camera.
ISBN:
(Digital) 9781728194790
ISBN:
(Print) 9781728194790
In an era of the widespread introduction of information technology, the broadest possibilities are opening up for implementing additional services for prosumers in the electricity market. The traditional electric power system of Russia allows customers to receive electricity only from large power plants, which leads to centralization and monopoly. Currently, the concept of distributed generation is gaining popularity. The idea is that small electric power generators are distributed throughout the local power network. The introduction of such an electrical network will improve the reliability of the system. However, such a network needs effective management and an electricity metering system. This paper proposes the use of blockchain technology in a distributed network as a solution to these problems. The proposed double-chain blockchain model will provide for electric energy trading within the local electricity market, without third-party intermediaries, with a sufficient level of security and automation for all participants.
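The hash-linking that underpins any such blockchain model can be sketched briefly. The Python below is a hypothetical illustration of append-only, hash-chained blocks and chain verification; it is not the paper's double-chain design, whose specifics (consensus, metering integration, the second chain) are not given in the abstract:

```python
import hashlib
import json

# Toy hash-chained ledger: each block commits to the hash of its predecessor,
# so tampering with any earlier payload breaks verification downstream.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, payload: dict) -> dict:
    """Append a block whose 'prev' field commits to the current chain tip."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    block = {"prev": prev, "payload": payload}
    chain.append(block)
    return block

def verify(chain: list) -> bool:
    """Check that every block's 'prev' matches its predecessor's hash."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )
```

A trade record (e.g. kWh sold and price) would go in the payload; the immutability argument of the abstract rests on exactly this chaining.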
ISBN:
(Digital) 9798350381993
ISBN:
(Print) 9798350382006
Recently there have been many studies on backdoor attacks, which inject poisoned samples into the training set in order to embed backdoors into the model. Existing attacks with multiple poisoned samples usually select a random subset of clean samples from which to generate the poisoned samples. The Filtering-and-Updating Strategy (FUS) has shown that the poisoning efficiency of individual poisoned samples is inconsistent and that random selection is not optimal. However, FUS does not fully consider the selection of multiple poisoned samples, and some issues remain. In this paper, we formulate the selection of multiple types of poisoned samples as a multi-objective optimization problem and propose a Multiple Poisoned Samples Selection Strategy (MPS) to solve it. Unlike FUS, we consider the potential of clean samples that are not selected to become efficient poisoned samples. Specifically, we use a weight-based contribution approach to calculate the contribution of each sample (clean or poisoned) during the training process along multiple dimensions. Finally, following a greedy approach, we iteratively retain the subset of samples with the largest contribution in each dimension. We evaluate the effectiveness of MPS on various attack methods, including BadNet, Blended, ISSBA, and WaNet, as well as on benchmark datasets. The experimental results on CIFAR-10 and GTSRB show that MPS can increase the attack strength by 1.45% to 18.34% compared to RSS and by 0.43% to 10.84% compared to FUS in multiple poisoned samples attacks, thereby enhancing the stealthiness of the attack. Meanwhile, MPS is suitable for black-box settings, meaning that poisoned samples selected in one setting can be applied to other settings.
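The greedy retention step described above can be illustrated schematically. The sketch below is a hypothetical Python rendering of "keep, under a budget, the samples with the largest contribution in each dimension"; the paper's weight-based contribution computation is not specified in the abstract, so the per-dimension scores are assumed as given:

```python
# Greedy multi-dimensional selection sketch (illustrative, not the paper's MPS
# implementation): cycle over dimensions, each time adding the not-yet-selected
# sample with the highest contribution in that dimension, until the budget is met.

def greedy_select(scores: dict, budget: int) -> set:
    """scores maps sample_id -> list of per-dimension contribution values."""
    selected: set = set()
    if not scores or budget <= 0:
        return selected
    dims = len(next(iter(scores.values())))
    while len(selected) < budget:
        for d in range(dims):
            if len(selected) >= budget:
                break
            remaining = [s for s in scores if s not in selected]
            if not remaining:
                return selected
            # Top contributor along dimension d among unselected samples.
            selected.add(max(remaining, key=lambda s: scores[s][d]))
    return selected
```

Round-robin over dimensions is one simple way to balance the multiple objectives; the actual trade-off strategy of MPS may differ.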
ISBN:
(Print) 9781728170022
In the Bitcoin system, the transaction history of an address can be useful in many scenarios, such as balance calculation and behavior analysis. However, it is non-trivial for a common user who runs a light node to fetch historical transactions, since a light node only stores block headers without any transaction details. It usually has to query a full node that stores the complete data. Validation of these query results is critical and mainly involves two aspects: correctness and completeness. The former can be implemented easily via a Merkle branch, while the latter is quite difficult in the Bitcoin protocol. To enable completeness validation, a strawman design has been proposed that simply includes a BF (Bloom filter) in each header. However, since the size of a BF is on the order of kilobytes, light nodes in the strawman design suffer from an incremental storage burden. What's worse, an entire block must be transmitted when the BF cannot work, resulting in large network overhead. In this paper, we propose LVQ, the first lightweight verifiable query approach that reduces the storage requirement and network overhead at the same time. Specifically, by storing only the hash of the BF in headers, LVQ keeps the data stored by light nodes small. Besides, LVQ introduces a novel BMT (BF integrated Merkle Tree) structure for lightweight queries, which can eliminate the communication costs of query results by merging multiple successive BFs. Furthermore, when the BF cannot work, a lightweight proof by SMT (Sorted Merkle Tree) is exploited to further reduce the network overhead. The security analysis confirms LVQ's ability to enable both correctness and completeness validation. In addition, the experimental results demonstrate that it is lightweight.
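The core trick of storing only the hash of the BF in headers can be illustrated with a toy Bloom filter. The Python below is an assumption-laden sketch, not LVQ's actual BMT/SMT construction: a header would keep only the 32-byte commitment, and a full node answering a query must supply a filter whose hash matches it.

```python
import hashlib

# Toy Bloom filter plus a hash commitment. Parameters (m bits, k hash
# functions) are illustrative; real deployments size them for a target
# false-positive rate.

class BloomFilter:
    def __init__(self, m: int = 256, k: int = 3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item: bytes):
        # Derive k bit positions by hashing the item with a counter prefix.
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item: bytes) -> bool:
        # May return a false positive, never a false negative.
        return all(self.bits[p] for p in self._positions(item))

def bf_commitment(bf: BloomFilter) -> bytes:
    """The 32-byte value a header would store instead of the full filter."""
    return hashlib.sha256(bytes(bf.bits)).digest()
```

A light node storing only `bf_commitment(bf)` can verify that a filter sent by a full node is the committed one before trusting its membership answers.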
ISBN:
(Print) 9781665435253
Compared with the traditional two-level or three-level voltage source converter, the modular multilevel converter (MMC) has many advantages in applications such as long-distance high-capacity transmission, asynchronous grid interconnection, grid connection of distributed generation and wind power, passive island power supply, and drilling platforms. It has broad application prospects in many fields, such as power supply in isolated areas and the capacity expansion and reconstruction of urban power grids. This paper studies the control strategy of a flexible HVDC system based on MMC in a cloud computing environment.
Although cloud storage technology can provide users with convenient storage services, the large amount of duplicate data in the cloud brings a huge storage burden and a risk of privacy leakage. To improve the utilization of cloud storage resources and protect data confidentiality, randomized message-locked encryption (R-MLE) can be used to remove redundant data in the cloud. However, the theoretical basis of R-MLE-based deduplication schemes is bilinear mapping, so the computational cost of finding duplicate fingerprint-tags is relatively large. To improve deduplication efficiency, we proposed a secure deduplication scheme based on an autoencoder model in our previous research, using the model to generate abstract-tags for the data and using the similarity of the abstract-tags to quickly filter out fingerprint-tags with high repeatability, which greatly reduces the number of fingerprint-tag comparisons. On this basis, this paper further proposes a secure deduplication method based on k-means clustering. First, the abstract-tags in cloud storage are clustered. Then, the distance between an abstract-tag uploaded by a user and each centroid is calculated, and the abstract-tags of the closest cluster are selected. Finally, duplicate data detection is performed only on the fingerprint-tags corresponding to these abstract-tags. In this way, the filtering of fingerprint-tags can be further accelerated. Experiments show that our method achieves higher performance than the secure deduplication method based on the autoencoder model.
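The filtering step of the proposed method amounts to a nearest-centroid lookup. The Python below is an illustrative sketch under the assumption that abstract-tags are numeric vectors and cluster centroids have already been computed; it is not the paper's implementation, and all names are hypothetical:

```python
# Nearest-centroid filtering sketch: an uploaded abstract-tag is compared only
# against the fingerprint-tags filed under its closest cluster, instead of
# against every fingerprint-tag in cloud storage.

def squared_dist(a, b) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_cluster(tag, centroids) -> int:
    """Index of the centroid closest to the uploaded abstract-tag."""
    return min(range(len(centroids)), key=lambda i: squared_dist(tag, centroids[i]))

def candidate_fingerprints(tag, centroids, clusters) -> list:
    """clusters[i] holds the fingerprint-tags filed under centroid i; only
    this one cluster's fingerprints undergo full duplicate detection."""
    return clusters[nearest_cluster(tag, centroids)]
```

With k balanced clusters, each upload checks roughly 1/k of the fingerprint-tags, which is the speedup mechanism the abstract describes.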
ISBN:
(Digital) 9781728166773
ISBN:
(Print) 9781728166773
Distributed digital infrastructures for computation and analytics are now evolving towards an interconnected ecosystem allowing complex applications to be executed from IoT Edge devices to the HPC Cloud (a.k.a. the Computing Continuum, the Digital Continuum, or the Transcontinuum). Understanding end-to-end performance in such a complex continuum is challenging. This breaks down to reconciling many, typically contradicting application requirements and constraints with low-level infrastructure design choices. One important challenge is to accurately reproduce relevant behaviors of a given application workflow and representative settings of the physical infrastructure underlying this complex continuum. In this paper we introduce a rigorous methodology for such a process and validate it through E2Clab. It is the first platform to support the complete analysis cycle of an application on the Computing Continuum: (i) the configuration of the experimental environment, libraries, and frameworks; (ii) the mapping between the application parts and machines on the Edge, Fog, and Cloud; (iii) the deployment of the application on the infrastructure; (iv) the automated execution; and (v) the gathering of experiment metrics. We illustrate its usage with a real-life application deployed on the Grid'5000 testbed, showing that our framework allows one to understand and improve performance by correlating it to the parameter settings, the resource usage, and the specifics of the underlying infrastructure.
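The five-phase cycle can be pictured as a small pipeline. The Python sketch below is purely illustrative and is not E2Clab's actual configuration format or API; every name in it is hypothetical:

```python
# Schematic of the analysis cycle: configure an environment, map application
# parts to Edge/Fog/Cloud machines, "deploy", execute, and gather metrics.

def run_cycle(environment: dict, mapping: dict, run) -> dict:
    """environment: layer name -> machine; mapping: application part -> layer;
    run: callable standing in for automated execution, returning metrics."""
    assert set(mapping.values()) <= {"edge", "fog", "cloud"}
    # Phase (ii)/(iii): resolve each application part to a concrete machine.
    deployment = {part: environment[layer] for part, layer in mapping.items()}
    # Phases (iv)/(v): execute and collect experiment metrics.
    metrics = run(deployment)
    return {"deployment": deployment, "metrics": metrics}
```

The value of such an explicit mapping is reproducibility: rerunning with the same environment and mapping yields a comparable deployment, which is the premise of the methodology above.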