For high-dimensional, complex data, Kernel Principal Component Analysis (KPCA) is a common feature-extraction method with good generalization performance. However, it often faces significant difficulties i...
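As a rough illustration of the technique named in this abstract, the sketch below implements standard kernel PCA with an RBF kernel; the kernel choice, parameters, and toy data are assumptions, not details from the paper.

```python
# Minimal kernel PCA sketch (assumed RBF kernel and toy data): project
# data onto the leading principal components in feature space via the
# centered kernel matrix.
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    # RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix in feature space
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecompose; the top eigenvectors yield the nonlinear projections
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Projection of sample i onto component j is sqrt(lambda_j) * a_ij
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 1e-12))

X = np.random.default_rng(0).normal(size=(100, 5))
Z = kernel_pca(X, n_components=2)   # (100, 2) nonlinear features
```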
Cement poles are an important component in building construction, and their quality directly affects the success of an entire project. To address this problem, this paper adopts deep reinforcement learning ...
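For orientation only, the snippet below shows the tabular Q-learning update at the core of reinforcement learning methods like the one this abstract names; the states, actions, and rewards are hypothetical placeholders, not the paper's formulation.

```python
# Tabular Q-learning update (deep RL replaces the table with a network).
# States/actions here are a hypothetical discretization, not the paper's.
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma_rl = 0.1, 0.9          # learning rate, discount factor

def q_update(s, a, r, s_next):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += alpha * (r + gamma_rl * Q[s_next].max() - Q[s, a])

q_update(s=0, a=1, r=1.0, s_next=2)
```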
Most power companies are in a critical period of digital transformation, and information systems are an important carrier of that transformation. The success of digitalization depends on whether the support...
ISBN (digital): 9781665488792
ISBN (print): 9781665488792
Distributed and central control are two complementary paradigms for establishing self-adaptation in software systems. Both approaches have their individual benefits and drawbacks, which lead to significant trade-offs regarding certain software qualities when designing such systems. The significance of these trade-offs increases the more complex the target system becomes. In this paper, we present our work-in-progress towards an integrated control approach, which aims at providing the best of both control paradigms. We present the basic concepts of this multi-paradigm approach and outline its inherent support for complex system hierarchies. Further, we illustrate the vision of our approach using application scenarios from the smart energy grid as an example for self-adaptive systems of systems.
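To make the trade-off concrete, here is an illustrative-only sketch in which local controllers adapt autonomously (distributed paradigm) while a central coordinator intervenes when a global constraint is violated (central paradigm); all names and thresholds are hypothetical, not the authors' design.

```python
# Illustrative sketch of combining distributed and central control:
# nodes adapt locally unless a global constraint forces central action.

class LocalController:
    def __init__(self, name, load):
        self.name, self.load = name, load

    def adapt(self):
        # Distributed paradigm: each node reduces its own load slightly
        self.load = max(0.0, self.load - 0.1)

class CentralCoordinator:
    def __init__(self, nodes, global_cap):
        self.nodes, self.global_cap = nodes, global_cap

    def adapt(self):
        total = sum(n.load for n in self.nodes)
        if total > self.global_cap:
            # Central paradigm: enforce a system-wide rebalancing
            for n in self.nodes:
                n.load *= self.global_cap / total
        else:
            # Otherwise defer to local, decentralized adaptation
            for n in self.nodes:
                n.adapt()

grid = [LocalController(f"feeder{i}", load=1.0 + 0.2 * i) for i in range(4)]
CentralCoordinator(grid, global_cap=4.0).adapt()
```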
With the large-scale integration of distributed photovoltaics, controllable loads, and energy storage devices into the low-voltage distribution network, the requirements for the transmission quality and processing efficiency o...
This paper presents a method using grid-oriented multi-objective particle swarm optimization (GOMPSO) for optimally placing and sizing distributed generators (DG), with the objective of reducing the loss of act...
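As a hedged sketch of the core optimizer behind the abstract's method, the code below runs a minimal single-objective PSO; the real GOMPSO is multi-objective with a grid archive, and the loss function here is a hypothetical surrogate for network active-power loss.

```python
# Minimal single-objective PSO over hypothetical DG sizes in per-unit.
import numpy as np

rng = np.random.default_rng(1)

def power_loss(dg_sizes):
    # Hypothetical surrogate for active-power loss, not a power-flow model
    return np.sum((dg_sizes - 0.6) ** 2)

n_particles, dims, iters = 20, 3, 100
x = rng.uniform(0, 1, (n_particles, dims))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([power_loss(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dims))
    # Inertia + cognitive pull to personal best + social pull to global best
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)
    f = np.array([power_loss(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
```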
In distributed neural network training with multiple machines and devices, communication limitations often create efficiency bottlenecks due to the frequent exchange of model parameters and gradient information betwee...
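One common remedy for the communication bottleneck this abstract describes is gradient sparsification before exchange; the sketch below shows a top-k variant with illustrative shapes and ratios, not necessarily the paper's approach.

```python
# Top-k gradient sparsification: transmit only the k largest-magnitude
# entries as (indices, values) pairs to cut communication volume.
import numpy as np

def topk_sparsify(grad, k_ratio=0.01):
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def densify(idx, vals, shape):
    # Receiver reconstructs a sparse gradient tensor from the pair
    out = np.zeros(int(np.prod(shape)))
    out[idx] = vals
    return out.reshape(shape)

g = np.random.default_rng(2).normal(size=(256, 128))
idx, vals = topk_sparsify(g)                 # ~1% of entries transmitted
g_hat = densify(idx, vals, g.shape)
```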
ISBN (print): 9783031396977; 9783031396984
We describe the engineering of the distributed-memory multilevel graph partitioner dKaMinPar. It scales to (at least) 8192 cores while achieving partitioning quality comparable to widely used sequential and shared-memory graph partitioners. In comparison, previous distributed graph partitioners scale only in more restricted scenarios and often induce a considerable quality penalty compared to non-distributed partitioners. When partitioning into a large number of blocks, they even produce infeasible solutions that violate the balancing constraint. dKaMinPar achieves its robustness by a scalable distributed implementation of the deep-multilevel scheme for graph partitioning. Crucially, this includes new algorithms for balancing during refinement and coarsening.
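For readers unfamiliar with the terms, the toy snippet below computes the two quantities a partitioner like dKaMinPar must trade off, the edge cut and the balance constraint; the graph and partition are toy inputs, not dKaMinPar's data structures.

```python
# Edge cut: number of edges crossing blocks. Balance: every block must
# hold at most (1 + eps) * (n / k) nodes.
from collections import Counter

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # toy undirected graph
part = {0: 0, 1: 0, 2: 1, 3: 1}                    # node -> block id

def edge_cut(edges, part):
    return sum(1 for u, v in edges if part[u] != part[v])

def is_balanced(part, k, eps=0.03):
    sizes = Counter(part.values())
    cap = (1 + eps) * (len(part) / k)              # per-block capacity
    return all(sizes[b] <= cap for b in range(k))

print(edge_cut(edges, part), is_balanced(part, k=2))
```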
Blasting vibration is one of the main negative effects of blasting construction. It is influenced by many external factors and has a complex nonlinear relationship with them. How to predict and reduce the blasting vib...
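As background for the prediction problem in this abstract, the sketch below fits the classical scaled-distance law PPV = K * (Q^(1/3) / R)^alpha by log-linear regression; the data is synthetic, and the paper's own model is a more complex nonlinear predictor.

```python
# Fit peak particle velocity (PPV) to charge mass Q and distance R via
# log(PPV) = log(K) + alpha * log(Q^(1/3) / R), a linear least-squares fit.
import numpy as np

rng = np.random.default_rng(3)
Q = rng.uniform(50, 500, 40)        # charge mass per delay (kg), synthetic
R = rng.uniform(20, 200, 40)        # distance to monitoring point (m)
ppv = 120 * (Q ** (1/3) / R) ** 1.6 * rng.lognormal(0, 0.1, 40)

sd = np.log(Q ** (1/3) / R)         # log scaled distance
A = np.vstack([np.ones_like(sd), sd]).T
(logK, alpha), *_ = np.linalg.lstsq(A, np.log(ppv), rcond=None)
K = np.exp(logK)                    # recovered site constants
```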
ISBN (print): 9781665416474
Traditionally, distributed machine learning takes the guise of (i) different nodes training the same model (as in federated learning), or (ii) one model being split among multiple nodes (as in distributed stochastic gradient descent). In this work, we highlight how fog- and IoT-based scenarios often require combining both approaches, and we present a framework for flexible parallel learning (FPL), achieving both data and model parallelism. Further, we investigate how different ways of distributing and parallelizing learning tasks across the participating nodes result in different computation, communication, and energy costs. Our experiments, carried out using state-of-the-art deep-network architectures and large-scale datasets, confirm that FPL allows for an excellent trade-off among computational (hence energy) cost, communication overhead, and learning performance.
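To illustrate the combination the abstract highlights, the toy sketch below shards a batch across node groups (data parallelism) and splits layers across nodes within a group (model parallelism), all in a single NumPy process; the real framework distributes this across devices.

```python
# Combining data and model parallelism in one toy forward pass.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(64, 8))                     # toy batch
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))

def forward_model_parallel(x, W1, W2):
    # Model parallelism: "node A" holds layer 1, "node B" holds layer 2
    h = np.maximum(x @ W1, 0)                    # node A: layer 1 + ReLU
    return h @ W2                                # node B: layer 2

# Data parallelism: each shard would live on a different node group
shards = np.array_split(X, 2)
outputs = [forward_model_parallel(s, W1, W2) for s in shards]
y = np.concatenate(outputs)                      # gather results
```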