This paper presents a detailed analysis of the computational complexity of Multiple Hypothesis Tracking (MHT). The result proves that the computational complexity of MHT is dominated by the number of hypotheses. The effects of track merging and pruning are also analyzed. Certain common design parameters of MHT, such as thresholds, are also discussed in detail. The results of this paper provide guidance for selecting parameters in an MHT tracker and predicting its performance. Among the design parameters discussed in this paper, track merging appears to be the most important means of controlling the computational complexity of MHT. Thresholds for track deletion are also critical. If not all measurements are allowed to initiate new tracks, the number of new tracks can also be used to tune the computational requirements of MHT, but its effect is not as significant as that of the thresholds. (C) 1999 Elsevier Science Ltd. All rights reserved.
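As a concrete illustration of the two controls the abstract highlights, here is a minimal Python sketch of hypothesis merging and threshold-based pruning. The data layout (a log-likelihood paired with a set of track IDs) and both parameter names are assumptions made for illustration, not the paper's formulation.

```python
def prune_hypotheses(hypotheses, log_lik_threshold, max_hypotheses):
    """Keep hypotheses above the likelihood threshold, then cap the count."""
    kept = [h for h in hypotheses if h[0] >= log_lik_threshold]
    kept.sort(key=lambda h: h[0], reverse=True)
    return kept[:max_hypotheses]

def merge_hypotheses(hypotheses):
    """Merge hypotheses that share the same track set, keeping the best score."""
    best = {}
    for log_lik, tracks in hypotheses:
        key = frozenset(tracks)
        if key not in best or log_lik > best[key]:
            best[key] = log_lik
    return [(ll, set(ts)) for ts, ll in best.items()]

# Hypothetical usage: merge duplicates, then prune per scan.
hyps = [(-1.2, {"T1", "T2"}), (-3.5, {"T1"}), (-1.4, {"T1", "T2"})]
hyps = prune_hypotheses(merge_hypotheses(hyps), log_lik_threshold=-3.0,
                        max_hypotheses=100)
```

Capping the surviving hypotheses per scan directly bounds the per-scan work, which is the mechanism the paper's analysis quantifies.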
ISBN (Print): 9780769548937
The paper presents a summary of the author's research subjects from 1973 through 2012. Additional explanation of these subjects and related references is omitted because of space limitations; it will be given at the presentation.
Evaluating the computational complexity of decoders is a very important aspect of Error Control Coding. However, most evaluations have been performed based on hardware implementations. In this paper, different decoding algorithms for the binary Turbo codes used in LTE standards are investigated. Based on the different mathematical operations in the various equations, the computational complexity is derived in terms of the number of binary logical operations. This work is important because it demonstrates the computational complexity breakdown at the binary logic level, since access to hardware implementations for research purposes is not always available. Also, in contrast to comparing different mathematical operations, comparing binary logic operations provides a common baseline for a fair comparative analysis of computational complexity. Using the decoding method with the fewest binary logical operations significantly reduces the computational complexity, which in turn leads to a more energy-efficient, power-saving implementation. Results demonstrate the variation in computational complexity when using different algorithms for Turbo decoding, as well as with the incorporation of Sign Difference Ratio (SDR) and regression-based extrinsic information scaling and stopping mechanisms. When considering the conventional decoding mechanisms and streams of 16 bits in length, Method 3 uses 0.0065% more operations in total than Method 1. Furthermore, Method 2 uses only 0.0035% of the total logical complexity required by Method 1. This computational complexity analysis at the binary logic level can be further applied to other error-correcting codes adopted in different communication standards.
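To make the counting approach concrete, the sketch below tallies binary logical operations from a per-primitive cost table. The gate costs and the operation profiles are illustrative placeholders, not the paper's figures.

```python
# Hypothetical binary-logic costs per arithmetic primitive on 16-bit words,
# in the spirit of the abstract's binary-logic-level breakdown.
OP_COST = {
    "add": 5 * 16,       # e.g. a ripple-carry adder, ~5 gates per bit
    "compare": 16,       # bitwise comparison of two 16-bit words
    "max": 16 + 5 * 16,  # compare, then select
}

def total_logic_ops(op_counts):
    """Sum binary logical operations for a profile like {'add': 120, 'max': 40}."""
    return sum(OP_COST[name] * n for name, n in op_counts.items())

# Compare two decoding variants by their primitive-operation profiles
# (counts are made up for illustration).
profile_a = {"add": 1200, "max": 300, "compare": 150}
profile_b = {"add": 1210, "max": 300, "compare": 150}
print(total_logic_ops(profile_a), total_logic_ops(profile_b))
```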
Since most studies on estimating an angle-of-arrival (AOA) with an antenna array have considered a single array configuration, they are not suitable for simultaneously estimating the AOAs of multiple signals with various frequencies. In this paper, we introduce a cascade AOA estimation technique consisting of CAPON and Beamspace Multiple Signal Classification (MUSIC), based on a Combined Array Antenna (CAA) with a Uniform Rectangular Frame Array (URFA) and a Uniform Circular Array (UCA), to address this problem. In addition, we provide a computational complexity analysis showing the low computational complexity of this technique compared with the conventional technique. (C) 2021 The Author(s). Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences.
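As a rough illustration of the cascade's first stage, the sketch below evaluates the standard CAPON spatial spectrum P(theta) = 1 / (a(theta)^H R^-1 a(theta)) over a set of steering vectors. The UCA geometry, element count, and identity covariance are placeholder assumptions; the paper's combined URFA/UCA manifold and the Beamspace MUSIC stage are not reproduced.

```python
import numpy as np

def capon_spectrum(R, steering_vectors):
    """CAPON spectrum 1 / (a^H R^-1 a) for each candidate angle.

    R: (M, M) sample covariance of the array snapshots;
    steering_vectors: (M, K) matrix, one column per candidate angle.
    """
    R_inv = np.linalg.pinv(R)
    denom = np.einsum("mk,mn,nk->k", steering_vectors.conj(), R_inv,
                      steering_vectors).real
    return 1.0 / denom

# Example: uniform circular array of M elements, radius r in wavelengths.
M, r = 8, 0.5
angles = np.deg2rad(np.arange(0, 360))
elem = 2 * np.pi * np.arange(M) / M
A = np.exp(2j * np.pi * r * np.cos(angles[None, :] - elem[:, None]))
R = np.eye(M)  # placeholder covariance; use snapshot data in practice
P = capon_spectrum(R, A)
```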
This study addresses the challenge of forecasting short-term demand in Emergency Medical Services (EMS) using machine learning techniques, which is essential for improving resource allocation, optimizing response times, and enhancing overall system efficiency. In emergency situations, the swift allocation of medical personnel and vehicles is crucial for ensuring faster response times and increasing survival rates. To tackle this, the study explores nine machine learning algorithms, including Gaussian Mixture Models, Naive Bayes, K-Nearest-Neighbors, Support Vector Machine, Random Forest, Extremely Randomized Trees, Gradient Boosting, Adaptive Boosting, and Bagging, to provide accurate forecasts. Model performance is analyzed using two variable selection methods: filter and wrapper methods. The best-performing algorithms, Gradient Boosting and AdaBoost, undergo further analysis after hyperparameter fine-tuning. Principal results show that Gradient Boosting and AdaBoost outperformed the other algorithms, with Random Search and Genetic Algorithm methods enhancing model performance. The study also demonstrates how the combination of different methods influences both performance and computational complexity. In this context, the developed models not only achieve high levels of accuracy in predicting short-term EMS demand but also serve as versatile tools. They distinguish priority calls and vehicle types when predicting call volume per hour from different districts and the expected number of dispatches at each base per shift, respectively. These models can be used as standalone tools or integrated with other optimization approaches, providing valuable inputs for optimizing staff scheduling, location, and relocation of emergency vehicles, thus contributing to the efficiency and performance improvement of EMS systems.
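A hedged sketch of the reported best-performing setup, Gradient Boosting tuned by random search, follows. The synthetic features, target, and search ranges are placeholders for the study's EMS data and tuning grids, which are not given in the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))     # stand-ins for hour, weekday, district, ...
y = rng.poisson(lam=3, size=500)  # synthetic hourly call counts

# Illustrative search space; the study's actual ranges are not reported here.
param_dist = {
    "n_estimators": [100, 200, 400],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 4],
}
search = RandomizedSearchCV(GradientBoostingRegressor(random_state=0),
                            param_dist, n_iter=10, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)
```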
Given a task that requires some skills and a social network of individuals with different skills, the TEAM FORMATION problem asks to find a team of individuals that together can perform the task, while minimizing communication costs. Since the problem is NP-hard, we identify the source of intractability by analyzing its parameterized complexity with respect to parameters such as the total number of skills k, the team size l, the communication cost budget b, and the maximum vertex degree Δ. We show that the computational complexity strongly depends on the communication cost measure: when using the weight of a minimum spanning tree of the subgraph formed by the selected team, we obtain fixed-parameter tractability, for example with respect to the parameter k. In contrast, when using the diameter as the measure, the problem is intractable with respect to any single parameter; however, combining Δ with either b or l yields fixed-parameter tractability. (C) 2017 Elsevier B.V. All rights reserved.
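The MST-based cost measure is easy to make concrete: the sketch below computes the communication cost of a candidate team as the minimum-spanning-tree weight of the induced subgraph, using plain Prim's algorithm. The adjacency-dict graph representation is an assumption for illustration.

```python
import heapq

def mst_cost(graph, team):
    """MST weight of the subgraph induced by `team` (dict-of-dicts graph),
    or None if that subgraph is disconnected."""
    team = set(team)
    start = next(iter(team))
    visited, total = {start}, 0.0
    frontier = [(w, v) for v, w in graph[start].items() if v in team]
    heapq.heapify(frontier)
    while frontier and len(visited) < len(team):
        w, v = heapq.heappop(frontier)
        if v in visited:
            continue
        visited.add(v)
        total += w
        for u, wu in graph[v].items():
            if u in team and u not in visited:
                heapq.heappush(frontier, (wu, u))
    return total if len(visited) == len(team) else None

g = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 2}, "c": {"a": 4, "b": 2}}
print(mst_cost(g, {"a", "b", "c"}))  # 3.0
```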
In this paper, we present a significant improvement of the Quick Hypervolume algorithm, one of the state-of-the-art algorithms for calculating the exact hypervolume of the space dominated by a set of d-dimensional points. This value is often used as a quality indicator in multiobjective evolutionary algorithms and other multiobjective metaheuristics, and the efficiency of calculating this indicator is of crucial importance, especially in the case of large sets or many-dimensional objective spaces. We use a divide-and-conquer scheme similar to that of the original Quick Hypervolume algorithm, but our algorithm splits the problem into smaller sub-problems in a different way. Through both theoretical analysis and a computational study we show that our approach improves the computational complexity of the algorithm and its practical running times. (c) 2017 Elsevier Ltd. All rights reserved.
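For reference, the quantity being computed can be written down directly by inclusion-exclusion over the points' dominated boxes, as in the sketch below (assuming minimization against a reference point no better than any solution in every objective). This exponential-time formula only specifies the hypervolume indicator; it is not the Quick Hypervolume divide-and-conquer scheme itself.

```python
from itertools import combinations

def hypervolume(points, ref):
    """Hypervolume dominated by `points` w.r.t. `ref` (minimization)."""
    total = 0.0
    for k in range(1, len(points) + 1):
        for subset in combinations(points, k):
            # Intersection of the boxes [p, ref] is [componentwise max, ref].
            corner = [max(p[d] for p in subset) for d in range(len(ref))]
            vol = 1.0
            for c, r in zip(corner, ref):
                vol *= max(r - c, 0.0)
            total += vol if k % 2 == 1 else -vol
    return total

print(hypervolume([(1, 3), (2, 2), (3, 1)], ref=(4, 4)))  # 6.0
```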
This paper investigates the center selection of multi-output radial basis function (RBF) networks, and a multi-output fast recursive algorithm (MFRA) is proposed. This method not only reveals the significance of each candidate center based on the reduction in the trace of the error covariance matrix, but also estimates the network weights simultaneously using a back-substitution approach. The main contribution is that the center selection procedure and the weight estimation are performed within a well-defined regression context, leading to a significantly reduced computational complexity. The efficiency of the algorithm is confirmed by a computational complexity analysis, and simulation results demonstrate its effectiveness. (C) 2010 Elsevier B.V. All rights reserved.
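The structure of the selection loop can be sketched as a greedy forward search over candidate centers, as below. This naive version refits the least-squares weights from scratch at every step, standing in for the paper's fast recursive update; the Gaussian kernel and its width are placeholder assumptions.

```python
import numpy as np

def select_centers(X, Y, n_centers, width=1.0):
    """Greedy center selection. X: (N, d) inputs, Y: (N, q) targets."""
    # Candidate design matrix: one Gaussian basis function per training point.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))
    chosen = []
    for _ in range(n_centers):
        best, best_err = None, np.inf
        for j in range(X.shape[0]):
            if j in chosen:
                continue
            P = Phi[:, chosen + [j]]
            W, *_ = np.linalg.lstsq(P, Y, rcond=None)
            err = ((Y - P @ W) ** 2).sum()  # summed squared residual
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    return chosen

X, Y = np.random.rand(50, 2), np.random.rand(50, 3)
print(select_centers(X, Y, n_centers=5))
```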
The authors apply two approaches to reduce the computation time of the residual complexity similarity metric employed in image registration applications aimed at hardware-based implementations with low-complexity transforms. First, the similarity metric is computed in image sub-blocks, which are subsequently combined into a global metric value. Second, the discrete cosine transform (DCT) needed in the computation of the similarity measure is replaced with multiplier-free low-complexity approximate transforms. The authors propose a new low-complexity transform requiring only 18 additions per 8 x 8 block and compare it to the rounded DCT, the signed DCT, the Hadamard transform and the Walsh-Hadamard transform. A detailed computational complexity analysis reveals that block-wise processing alone reduces the computational cost by a factor of 8-9 for the original DCT composed of multiplications and additions, and by up to approximately 4.90 when the proposed DCT is utilised, with the computation then performed using additions only. Results obtained from computer-simulated and realistic X-ray images demonstrate that block-wise processing and approximate transforms yield successful image registration, making the residual complexity similarity measure available to hardware-accelerated fast image registration applications.
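The first speed-up, block-wise processing with a multiplier-free transform, is sketched below. The 8 x 8 Hadamard matrix (one of the transforms the authors compare against) stands in for their proposed 18-addition transform, which is not reproduced here.

```python
import numpy as np
from scipy.linalg import hadamard

H8 = hadamard(8)  # entries +/-1, so H8 @ B @ H8 needs additions only

def blockwise_transform(img):
    """Apply the (unnormalized) 8x8 Hadamard transform to each 8x8 block.
    Assumes image dimensions are multiples of 8."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            out[i:i+8, j:j+8] = H8 @ img[i:i+8, j:j+8] @ H8
    return out

img = np.random.rand(64, 64)
coeffs = blockwise_transform(img)
```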
In this paper, novel knowledge-aided space-time adaptive processing (KA-STAP) algorithms using sparse representation/recovery (SR) techniques that exploit spatio-temporal sparsity are proposed to suppress clutter for airborne pulsed Doppler radar. The proposed algorithms are not simple combinations of KA and SR techniques. Unlike the existing sparsity-based STAP algorithms, they reduce the dimension of the sparse signal by using prior knowledge, resulting in a lower computational complexity. Different from the KA parametric covariance estimation (KAPE) scheme, they estimate the covariance matrix using SR techniques, which avoids the complex selection of the Doppler shift and the covariance matrix taper. The details of the selection of potential clutter array manifold vectors according to prior knowledge are discussed and compared with the KAPE scheme. Moreover, the implementation issues and the computational complexity analysis for the proposed algorithms are also considered. Simulation results show that our proposed algorithms obtain better performance and lower complexity compared with the sparsity-based STAP algorithms and outperform the KAPE scheme in the presence of errors in prior knowledge.
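A hedged sketch of the covariance-reconstruction idea follows: prior knowledge restricts the space-time steering dictionary to likely clutter directions, coefficients are recovered from the snapshots, and the clutter covariance is rebuilt as R = Phi diag(|alpha|^2) Phi^H + sigma^2 I. Plain least squares stands in for the paper's SR solver, and all dimensions and names are placeholders.

```python
import numpy as np

def estimate_covariance(Phi, snapshots, noise_power=1.0):
    """Phi: (NM, K) reduced space-time dictionary; snapshots: (NM, L) data."""
    # Recover coefficients (placeholder for a sparse-recovery solver).
    alpha, *_ = np.linalg.lstsq(Phi, snapshots, rcond=None)
    power = (np.abs(alpha) ** 2).mean(axis=1)  # per-atom clutter power
    R = (Phi * power) @ Phi.conj().T           # sum_k power_k * phi_k phi_k^H
    return R + noise_power * np.eye(Phi.shape[0])

# Synthetic usage with made-up dimensions.
NM, K, L = 32, 12, 20
Phi = np.exp(2j * np.pi * np.random.rand(NM, K))
snaps = Phi @ (np.random.randn(K, L) + 1j * np.random.randn(K, L))
R = estimate_covariance(Phi, snaps)
```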