Here, we propose the use of the majorization-based indicator for quantum computation complexity introduced in Vallejos et al. (Phys. Rev. A 104:012602, 2021) as a tool to benchmark the complexity within reach of quantum processors, taking into account hardware and noise constraints. By considering specific qubit systems and native gate sets of currently available technologies, we numerically simulate the operation of various quantum processors in the presence of typical types of noise. We characterize their complexity for different native gate sets, qubit connectivities, and increasing numbers of gates. We identify and assess quantum complexity by comparing the performance of each device against benchmark lines provided by randomized Clifford circuits and Haar-random pure states. In this way, we are able to specify, for each processor, the number of native quantum gates that is necessary, on average, to achieve those levels of complexity. Moving toward real implementations, our results validate the use of the majorization-based indicator in the presence of noise. We determine how much noise a quantum processor can tolerate while maintaining high levels of complexity. Our benchmarking procedure can thus be used to set target noise levels for quantum processors, taking into account their physical constraints.
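To make the benchmarking idea concrete, here is a minimal sketch of a Lorenz-curve-style majorization diagnostic, assuming (as in the cited approach) that the indicator is built from cumulative sums of the descending-sorted measurement probabilities of a circuit's output state; the function names and the Haar-averaging benchmark below are illustrative, not the paper's code:

```python
import numpy as np

def lorenz_curve(state: np.ndarray) -> np.ndarray:
    """Cumulative sums of the descending-sorted measurement
    probabilities of a pure state (its Lorenz curve)."""
    probs = np.sort(np.abs(state) ** 2)[::-1]
    return np.cumsum(probs)

def haar_random_state(n_qubits: int, rng: np.random.Generator) -> np.ndarray:
    """Sample a Haar-random pure state on n_qubits qubits."""
    dim = 2 ** n_qubits
    z = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return z / np.linalg.norm(z)

# Benchmark line: the average Lorenz curve over Haar-random states.
rng = np.random.default_rng(0)
n_qubits, n_samples = 5, 1000
haar_avg = np.mean(
    [lorenz_curve(haar_random_state(n_qubits, rng)) for _ in range(n_samples)],
    axis=0,
)
# A processor's output states approach Haar-like complexity when their
# Lorenz curves approach haar_avg from above (less majorized = more complex).
```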
The fundamental challenge of data analytics scheduling is the heterogeneity of both data analytics jobs and resources. Although many scheduling solutions have been developed to improve the efficiency of data analytics frameworks (e.g., Spark), they either (1) focus on the scheduling of a single type of resource, without considering the coordination between different resources; or (2) schedule multiple resources by factoring in limited information about analytics jobs, without considering the heterogeneity of resources. This paper presents Stargazer, a novel, efficient system that tackles diverse data analytics jobs on heterogeneous clusters by inferring the completion times of their decomposed tasks. Specifically, Stargazer adopts a deep learning model, which takes into consideration multiple key factors of diverse data analytics jobs and heterogeneous resources, to accurately infer the completion time of different tasks. A prototype of Stargazer is fully implemented in the Spark framework. Extensive experiments show that Stargazer can reduce the average job completion time by 21% and improve average performance by 20%, while incurring little overhead.
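The abstract does not describe Stargazer's model architecture; as a rough illustration of the interface such a predictor could have, here is a sketch assuming a simple multilayer perceptron over concatenated job and node features (the class name, feature layout, and hidden sizes are all hypothetical):

```python
import torch
import torch.nn as nn

class CompletionTimeModel(nn.Module):
    """Hypothetical task-completion-time predictor: an MLP over per-task
    job features (input bytes, shuffle bytes, stage type, ...) concatenated
    with node features (cores, memory, disk/network bandwidth, ...)."""
    def __init__(self, n_job_feats: int, n_node_feats: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_job_feats + n_node_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted task completion time (seconds)
        )

    def forward(self, job_feats: torch.Tensor, node_feats: torch.Tensor):
        return self.net(torch.cat([job_feats, node_feats], dim=-1))
```

A scheduler built on such a predictor would then place each task on the node with the smallest predicted completion time.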
The design and analysis of approximation algorithms for NP-hard problems is perhaps the most active research area in the theory of combinatorial algorithms. In this article, we study the notion of a combinatorial dominance guarantee as a way of assessing the performance of a given approximation algorithm. An f(n) dominance bound is a guarantee that the heuristic always returns a solution not worse than at least f(n) solutions. We give tight analyses of many heuristics, and establish novel and interesting dominance guarantees even for certain inapproximable problems and heuristic search algorithms. For example, we show that the maximal matching heuristic for VERTEX COVER offers a combinatorial dominance guarantee of 2^n - (1.839 + o(1))^n. We also give inapproximability results for most of the problems we discuss.
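For reference, the maximal matching heuristic mentioned above is simple to state and implement; a minimal sketch of the standard textbook version (not code from the article):

```python
def maximal_matching_vertex_cover(edges):
    """2-approximation for VERTEX COVER: scan the edges, and whenever
    an edge is still uncovered, add both of its endpoints to the cover.
    The chosen edges form a maximal matching."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Example: on the path 1-2-3-4 the heuristic returns {1, 2, 3, 4},
# twice the size of the optimal cover {2, 3}.
print(maximal_matching_vertex_cover([(1, 2), (2, 3), (3, 4)]))
```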
A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented for reducing both the computational complexity and the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it requires an exhaustive search over all combinations of allowed phase weighting factors, whose complexity increases exponentially with the number of subblocks. In this paper, we work around this potential computational intractability; the proposed PSO scheme exploits heuristics to search for a near-optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR. Copyright (C) 2008 Jyh-Horng Wen et al.
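A minimal sketch of the idea, assuming QPSK symbols, an interleaved subblock partition, and continuous phase factors (PTS implementations typically restrict phases to a small discrete set, and every parameter value below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_signal(subblocks, phases):
    """Sum the IFFTs of the frequency-domain subblocks, each rotated
    by its phase weighting factor."""
    time_blocks = np.fft.ifft(subblocks, axis=1)
    return (np.exp(1j * phases)[:, None] * time_blocks).sum(axis=0)

# Toy OFDM symbol: N QPSK subcarriers split into V interleaved subblocks.
N, V = 256, 4
X = rng.choice(np.array([1, -1, 1j, -1j]), size=N)
subblocks = np.zeros((V, N), dtype=complex)
for v in range(V):
    subblocks[v, v::V] = X[v::V]

# Basic PSO over the V phase factors.
n_particles, n_iters, w, c1, c2 = 20, 50, 0.7, 1.5, 1.5
pos = rng.uniform(0, 2 * np.pi, (n_particles, V))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([papr_db(pts_signal(subblocks, p)) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = (pos + vel) % (2 * np.pi)
    vals = np.array([papr_db(pts_signal(subblocks, p)) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"PAPR: {papr_db(np.fft.ifft(X)):.2f} dB -> {pbest_val.min():.2f} dB after PSO-PTS")
```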
In the large-scale deployment of federated learning (FL) systems, the heterogeneity of clients, such as mobile phones and Internet of Things (IoT) devices with different configurations, constitutes a significant problem regarding fairness, training performance, and accuracy. Such system heterogeneity creates a bottleneck: an inevitable trade-off between model complexity and data accessibility. To avoid this situation and to achieve resource-adaptive FL, we introduce CrossHeteroFL to deal with heterogeneous clients equipped with different computational and communication capabilities. Our solution enables the training of local models of varying computational complexity while still producing a single global inference model. We demonstrate several CrossHeteroFL training scenarios and conduct an extensive empirical evaluation covering four levels of computational complexity across three model architectures on two datasets. The proposed mechanism gives the system nontrivial access to a model fit that is scattered among clients. Moreover, the proposed method generalizes soft handover-based solutions by adjusting the model width according to clients' capabilities and using a tiered balance of data-source overviews to assess clients' interests accurately. The evaluation results indicate that our method solves the challenges identified in previous studies and yields higher top-1 accuracy and consistent performance under heterogeneous client conditions.
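The width-adjustment idea belongs to the HeteroFL family; here is a rough sketch of width scaling and per-coordinate aggregation, assuming dense layers stored as (out, in) weight matrices (the function names and slicing scheme are illustrative assumptions, not CrossHeteroFL's exact mechanism):

```python
import numpy as np

def slice_weights(global_weights, ratio):
    """Extract a narrower sub-model for a weak client by keeping the
    first `ratio` fraction of hidden units in every layer; the input
    columns of the first layer and output rows of the last layer are
    kept whole so all sub-models share one interface."""
    last = len(global_weights) - 1
    sub = []
    for i, W in enumerate(global_weights):
        rows = W.shape[0] if i == last else max(1, int(W.shape[0] * ratio))
        cols = W.shape[1] if i == 0 else max(1, int(W.shape[1] * ratio))
        sub.append(W[:rows, :cols].copy())
    return sub

def aggregate(global_weights, client_updates):
    """Per-coordinate federated averaging: each parameter is averaged
    over the clients whose sub-model actually contains it."""
    new = []
    for i, W in enumerate(global_weights):
        acc, cnt = np.zeros_like(W), np.zeros(W.shape)
        for upd in client_updates:
            r, c = upd[i].shape
            acc[:r, :c] += upd[i]
            cnt[:r, :c] += 1
        new.append(np.where(cnt > 0, acc / np.maximum(cnt, 1), W))
    return new
```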
An adaptive parameterized interpolator for image compression based on hierarchical grid interpolation is developed and investigated. To optimize the interpolator parameters, an approach is proposed based on minimizing the entropy of the quantized post-interpolation residuals, which is used as an estimate of the compressed data volume. A recursive procedure for calculating the parameters of the developed interpolator is proposed, and theoretical estimates of its computational complexity are derived. The developed interpolator is investigated experimentally as part of a hierarchical image compression method and compared with averaging interpolators and an adaptive interpolator based on minimizing the sum of the absolute values of the interpolation errors. The developed interpolator is shown to have an advantage over the prototypes in terms of compressed data size for various compression errors.
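The optimization criterion is easy to state; a minimal sketch of the entropy estimate, assuming uniform quantization of the residuals (the paper's recursive parameter-fitting procedure is not reproduced here):

```python
import numpy as np

def residual_entropy(residuals, q_step):
    """Empirical entropy (bits per sample) of uniformly quantized
    post-interpolation residuals: an estimate of the compressed size."""
    q = np.round(residuals / q_step).astype(int)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Parameter selection then amounts to minimizing this estimate, e.g.:
# best = min(params, key=lambda a: residual_entropy(img - interpolate(img, a), q_step))
```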
The versatile video coding (VVC) standard offers improved coding efficiency compared to the high efficiency video coding (HEVC) standard in multimedia signal coding. However, this increased efficiency comes at the cost of increased coding complexity. This work proposes an efficient coding unit partitioning algorithm based on an extreme learning machine (ELM), which can reduce the coding complexity while ensuring coding efficiency. First, the coding unit size decision is modeled as a classification problem. Second, an ELM classifier is trained to predict the coding unit size. In experiments, the proposed approach is verified on the VVC reference model. The results show that the proposed method reduces coding complexity significantly while maintaining good image quality.
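An ELM has a particularly compact implementation: the input weights are random and only the output weights are solved for, in closed form. A minimal sketch (the block features and class labels named in the comments are assumptions, not the paper's exact feature set):

```python
import numpy as np

class ELMClassifier:
    """Single-hidden-layer extreme learning machine: random input
    weights, output weights solved in closed form by least squares."""
    def __init__(self, n_hidden=128, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # random hidden-layer outputs
        T = np.eye(n_classes)[y]           # one-hot targets
        self.beta = np.linalg.pinv(H) @ T  # closed-form least-squares solve
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

# For the CU-size decision, X would hold per-block features (e.g. texture
# variance, gradients, QP) and y the partition class (e.g. split / no-split).
```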
Speech processing is an important field of digital signal processing that deals with the analysis of speech signals. It is used in areas such as emotion recognition, virtual assistants, and voice identification. Among these applications, emotion recognition is one of the critical areas because it is used to recognize people's actual emotions and to address physiological issues. Several researchers have combined signal processing and machine learning techniques to identify human emotions, but they fail to do so with low computational complexity and high accuracy. This paper introduces an intelligent computational technique called the cat swarm optimized spiking neural network (CSSPNN). Initially, the emotional speech signal is collected from the Toronto emotional speech set (TESS) dataset and processed with a wavelet approach to extract features. The derived features are then examined by the proposed classifier, CSSPNN, which recognizes human emotions owing to its effective training and learning process. Finally, the proficiency of the system is determined through experimental results and discussion. The proposed system recognizes speech emotions with up to 99.3% accuracy, outperforming recurrent neural networks (RNNs), deep neural networks (DNNs) and deep shallow neural networks (DSNNs).
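The wavelet feature-extraction step can be sketched as follows, assuming per-subband energy and entropy features from a discrete wavelet decomposition (the abstract does not specify the wavelet family or the exact feature set, so "db4" and the features below are assumptions):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(frame, wavelet="db4", level=4):
    """Per-subband energy and entropy from a discrete wavelet
    decomposition of a speech frame."""
    feats = []
    for coeffs in pywt.wavedec(frame, wavelet, level=level):
        energy = float(np.sum(coeffs ** 2))
        p = coeffs ** 2 / (energy + 1e-12)
        entropy = float(-np.sum(p * np.log2(p + 1e-12)))
        feats.extend([energy, entropy])
    return np.array(feats)  # fed to the classifier (CSSPNN in the paper)
```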
In this paper we study the implementation of a variant of the classic Gauss-Jordan (GJ) method, recently introduced by Huard [8], on a shared-memory MIMD computer. Two parallel versions are derived by dividing the sequential Huard method into noninterfering tasks. Taking into consideration the computational as well as the communication complexity, we present a parallel scheduling algorithm for each task graph. Next, in an attempt to reduce the communication cost, we introduce block versions and follow a similar approach in their study.
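For orientation, here is the classic GJ baseline that Huard's variant reorganizes; this sketch is the standard sequential method with partial pivoting, not Huard's reordering or the parallel task decomposition studied in the paper:

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Classic Gauss-Jordan elimination with partial pivoting on the
    augmented matrix [A | b]; each pivot step eliminates the pivot
    column both above and below the diagonal."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))  # partial pivoting
        M[[k, p]] = M[[p, k]]                # swap pivot row into place
        M[k] /= M[k, k]                      # normalize pivot row
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]       # eliminate above and below
    return M[:, -1]  # A is reduced to the identity; last column is x
```

In the parallel versions, the independent row updates within each pivot step are the natural candidates for the noninterfering tasks the paper schedules.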
A new localization algorithm for large-scale unmanned aerial vehicle (UAV) swarms is proposed in this paper. The localization algorithm is based on a spring particle model (LASPM) and simulates the dynamic process of a physical spring-particle system. The UAVs form a special mobile wireless sensor network in which each UAV acts as a highly dynamic mobile sensor node. Only a few mobile sensor nodes, the anchor nodes, are equipped with GPS localization devices; the other nodes are blind nodes. The mobile sensor nodes are modeled as particles with mass, connected to neighboring nodes by virtual springs, and the spring forces drive the particles toward their equilibrium positions. The blind nodes' positions can then be inferred with the LASPM algorithm. The computational and communication complexity does not increase with the network size. The proposed algorithm not only reduces the computational complexity but also maintains localization accuracy. The simulation results show that the algorithm is effective.
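A minimal sketch of the spring-relaxation idea, assuming each measured inter-node distance sets a virtual spring's rest length and anchors stay fixed (the spring constant, iteration count, and function name are illustrative; the paper's LASPM additionally models particle masses and dynamics):

```python
import numpy as np

def spring_relax(pos, edges, rest_len, is_anchor, k=0.1, n_iters=500):
    """Iteratively move blind nodes along the net virtual-spring force.
    pos:       (n, 2) initial position guesses
    edges:     list of (i, j) neighbor pairs
    rest_len:  measured distance for each edge (spring rest length)
    is_anchor: boolean mask; anchor (GPS) nodes never move."""
    pos = pos.astype(float).copy()
    for _ in range(n_iters):
        force = np.zeros_like(pos)
        for (i, j), L in zip(edges, rest_len):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-12
            f = k * (dist - L) * d / dist  # Hooke's law along the edge
            force[i] += f
            force[j] -= f
        force[is_anchor] = 0.0             # anchors stay at GPS positions
        pos += force
    return pos
```

Because each node only needs its neighbors' positions and distances, per-node computation and communication stay constant as the swarm grows, matching the scalability claim above.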