This brief proposes an adaptation of Generalized Predictive Control (GPC) for ramp-reference tracking. The second-order difference operation and the plant model are used to obtain an augmented model that contains two embedded integrators and whose output is the tracking error. Unlike other GPC-based tracking algorithms, the proposed approach does not require information about the reference parameters, and the GPC prediction horizon is composed of the predicted errors instead of the expected plant outputs. Thus, the optimization function and the receding-horizon strategy used in conventional GPC can be applied to obtain the control law. Simulation and experimental results show that the proposed approach successfully tracks constant and ramp references. The proposed method applies to single-input single-output plants; however, the mathematical background presented in this brief can be used in the development of new GPC strategies.
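A minimal sketch of the double-difference augmentation described above, assuming a discrete-time state-space plant x(k+1) = A x(k) + B u(k), y(k) = C x(k); the construction and variable names are illustrative, not the brief's notation. Applying the second difference to the state and writing the recursion for the tracking error e = y - r (whose second difference vanishes for a ramp reference) yields an augmented model whose output is the error itself:

```python
import numpy as np

def augment_double_integrator(A, B, C):
    """Augmented model z(k+1) = Aa z(k) + Ba v(k), e(k) = Ca z(k), with
    z = [dd_x; d_e; e] and v = dd_u (dd = second difference, d = first).
    For a ramp reference dd_r = 0, so no reference parameters are needed."""
    n = A.shape[0]
    Aa = np.block([
        [A,     np.zeros((n, 1)), np.zeros((n, 1))],
        [C @ A, np.ones((1, 1)),  np.zeros((1, 1))],
        [C @ A, np.ones((1, 1)),  np.ones((1, 1))],
    ])
    Ba = np.vstack([B, C @ B, C @ B])
    Ca = np.hstack([np.zeros((1, n + 1)), np.ones((1, 1))])
    return Aa, Ba, Ca

# example: first-order SISO plant
Aa, Ba, Ca = augment_double_integrator(np.array([[0.9]]),
                                       np.array([[0.1]]),
                                       np.array([[1.0]]))
```

The usual GPC machinery, stacking predictions of e over the horizon with this model, minimizing the quadratic cost, and applying only the first control increment, then carries over unchanged.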
The adaptive filtering algorithm based on the maximum correntropy criterion (MCC) is very effective in suppressing non-Gaussian noise and has therefore attracted widespread attention. Some work has been done on the convergence and steady-state performance analysis of the MCC algorithm, but its transient performance analysis remains an open problem. To provide a comprehensive theoretical foundation for the MCC algorithm, we propose a method for transient performance analysis based on the moment generating function (MGF). Since this method can efficiently calculate the expected value of the exponential term in the iterative update equation, it avoids the discrepancies introduced by approximations such as Taylor expansions in the analysis process. To date, there is no precedent for using this method to analyze the transient performance of the MCC algorithm. In addition, the steady-state performance and stability conditions of the MCC algorithm are discussed based on this method. Finally, the proposed analytical method is applied to the system identification problem, and the results show that the theoretical analysis agrees well with Monte Carlo simulation results.
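For context, the iterative update whose exponential term the MGF method evaluates is the standard Gaussian-kernel MCC stochastic-gradient update; a minimal system-identification sketch follows, with the step size mu, kernel bandwidth sigma, and noise model chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.standard_normal(8)        # unknown system to identify
mu, sigma = 0.01, 1.0                  # step size, Gaussian kernel bandwidth
w = np.zeros(8)

for k in range(5000):
    x = rng.standard_normal(8)         # input regressor
    # impulsive (non-Gaussian) noise: occasional large outliers
    v = rng.standard_normal() + (rng.random() < 0.05) * 20 * rng.standard_normal()
    d = w_true @ x + v                 # desired signal
    e = d - w @ x                      # a priori error
    # MCC update: the kernel exp(-e^2 / (2 sigma^2)) gates the correction,
    # shrinking steps driven by outlier errors
    w = w + mu * np.exp(-e**2 / (2 * sigma**2)) * e * x

print("misalignment (dB):",
      10 * np.log10(np.sum((w - w_true)**2) / np.sum(w_true**2)))
```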
We examine an out-of-sequence measurement (OoSM) update algorithm by Bar-Shalom. We demonstrate that "Algorithm C" is mathematically equivalent to initializing a track with the OoSM, predicting that track to the current time without process noise, and then using the OoSM track as a pseudomeasurement to update the system track. We argue that a more reasonable approximation includes process noise in the OoSM prediction. Like Algorithm C, the proposed algorithm uses only current track information.
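A minimal sketch of the proposed variant on a constant-velocity track with a position-only OoSM. The one-point initialization, the process-noise level q, and the naive fusion step (which ignores cross-correlation between the two tracks) are simplifying assumptions for illustration, not the paper's full development.

```python
import numpy as np

dt = 2.0                       # lag from the OoSM time to the current time
F = np.array([[1.0, dt], [0.0, 1.0]])                  # CV transition
q = 0.5
Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])  # process noise

# current system track (position, velocity)
x = np.array([10.0, 1.0])
P = np.diag([4.0, 1.0])

# 1) initialize a track from the OoSM (position z, variance R);
#    velocity is unknown, so give it a large variance (one-point init)
z, R = 7.5, 1.0
xo = np.array([z, 0.0])
Po = np.diag([R, 100.0])

# 2) predict the OoSM track to the current time WITH process noise
xo = F @ xo
Po = F @ Po @ F.T + Q          # Algorithm C would omit Q here

# 3) fuse the predicted OoSM track into the system track as a
#    pseudomeasurement of the full state (H = I)
K = P @ np.linalg.inv(P + Po)
x = x + K @ (xo - x)
P = (np.eye(2) - K) @ P
print(x, P)
```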
In this correspondence, we present an asymptotic performance analysis of a subspace-based method proposed by Liu and Xu for blind signature waveform estimation in synchronous code-division multiple-access systems. In particular, we derive asymptotic expressions (i.e., as the number of samples becomes large) for the covariance of the estimated channel parameters. We also derive an algorithm-independent bound on the covariance of the estimated parameters. This bound can be used as a benchmark for the theoretically predicted algorithmic performance. Some insight into the achievable performance of this algorithm is obtained by numerical evaluation of the bound for two cases of interest, and the results are compared with those obtained by numerical evaluation of the theoretically predicted performance. Monte Carlo simulations are used to verify the theoretical analysis.
Recent developments in signal processing have led to the introduction of a novel set of algorithms known as the quaternion-valued second-order Volterra (QSOV) least mean square family, alongside an advanced algorithm termed the widely nonlinear QSOV recursive least squares (WNL-QSOV-RLS). These methods are adept at handling three- and four-dimensional signals. Nonetheless, their effectiveness is often challenged in real-world conditions where the error sensor is subjected to impulsive or non-Gaussian noise; under these circumstances, algorithms relying on the MSE criterion tend to underperform or even diverge. In response to these limitations, this brief introduces the General Barron cost function (GBF), leading to a novel adaptive filter: the widely nonlinear quaternion-valued second-order Volterra adaptive filter based on the recursive GBF (WNL-QSOV-RGBF). This brief includes an extensive steady-state analysis of the WNL-QSOV-RGBF algorithm. Furthermore, the efficacy of the proposed algorithm and its family is rigorously evaluated through simulation-based system identification and wind prediction tests, which demonstrate the superior performance of the WNL-QSOV-RGBF algorithm in environments plagued by impulsive noise, compared to existing methods. Complementing the theoretical analysis, this brief also presents a corroborative simulation study to validate the performance metrics.
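The GBF is not spelled out in this listing; assuming it builds on Barron's general robust loss, the sketch below shows that loss and its influence function (its derivative), whose redescending shape for negative shape parameters is what suppresses impulsive error samples in an adaptive update:

```python
import numpy as np

def barron_loss(e, alpha, c):
    """Barron's general robust loss (assumed here as the basis of the GBF).
    Valid for alpha not in {0, 2}; alpha = 2 and alpha = 0 are the L2- and
    Cauchy-type limits, and alpha -> -inf gives the Welsch/correntropy loss."""
    a = abs(alpha - 2.0)
    return (a / alpha) * (((e / c) ** 2 / a + 1.0) ** (alpha / 2.0) - 1.0)

def barron_influence(e, alpha, c):
    """Derivative of the loss w.r.t. the error: redescending for alpha < 0,
    so weight-update steps driven by large outliers shrink toward zero."""
    a = abs(alpha - 2.0)
    return (e / c**2) * ((e / c) ** 2 / a + 1.0) ** (alpha / 2.0 - 1.0)

e = np.array([0.5, 1.0, 5.0, 50.0])
print(barron_loss(e, alpha=-2.0, c=1.0))       # saturating loss
print(barron_influence(e, alpha=-2.0, c=1.0))  # decays for large outliers
```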
This letter presents a time-reliable machine learning model for accurate and rapid base station placement in urban areas. Data from several real-world cities are used to train the machine learning model to predict parameters such as path loss values. The model is developed using supervised training and multiple regression. Parameters such as coverage dimensioning, frequency bandwidth, building density and height, propagation model tuning, and path loss values are used to predict the locations of base stations. Once the predicted base station position is obtained, the average percentage error is calculated against real-world base station data, showing an accuracy of 84%. This indicates that our model is a reliable tool for predicting future base station locations. The model will help the telecommunications industry reduce the time and cost of base station planning in urban areas whose landscapes change frequently.
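A sketch of the kind of supervised multiple-regression pipeline the letter describes. The feature names, the random-forest regressor, and the synthetic training data are illustrative assumptions; the letter's exact features and model are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# hypothetical per-site features: distance (m), frequency (MHz),
# mean building height (m), building density (buildings / km^2)
X = np.column_stack([
    rng.uniform(50, 2000, n),
    rng.uniform(700, 3500, n),
    rng.uniform(5, 60, n),
    rng.uniform(100, 5000, n),
])
# synthetic path loss (dB) roughly following a log-distance model plus clutter
y = (28 + 36 * np.log10(X[:, 0]) + 20 * np.log10(X[:, 1])
     + 0.1 * X[:, 2] + rng.normal(0, 3, n))

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
pred = model.predict(Xte)
mape = np.mean(np.abs((pred - yte) / yte)) * 100
print(f"mean absolute percentage error: {mape:.1f}%")
```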
We introduce a novel feature set, which we call HDRMAX features, that when included into Video Quality Assessment (VQA) algorithms designed for Standard Dynamic Range (SDR) videos, sensitizes them to distortions of High Dynamic Range (HDR) videos that are inadequately accounted for by these algorithms. While these features are not specific to HDR, and also augment the equality prediction performances of VQA models on SDR content, they are especially effective on HDR. HDRMAX features modify powerful priors drawn from Natural Video Statistics (NVS) models by enhancing their measurability where they visually impact the brightest and darkest local portions of videos, thereby capturing distortions that are often poorly accounted for by existing VQA models. As a demonstration of the efficacy of our approach, we show that, while current state-of-the-art VQA models perform poorly on 10-bit HDR databases, their performances are greatly improved by the inclusion of HDRMAX features when tested on HDR and 10-bit distorted videos.
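A rough sketch of the idea: rescale each local patch to [-1, 1], pass it through an expansive nonlinearity that stretches the extremes (where HDR distortions concentrate), and compute NVS-style statistics on the result. The specific nonlinearity, patch size, and statistic below are illustrative stand-ins for the paper's HDRMAX processing, not its actual definition.

```python
import numpy as np

def hdrmax_style_features(frame, patch=31, k=4.0):
    """Expansive local nonlinearity + a simple NVS statistic (illustrative)."""
    h, w = frame.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = frame[i:i + patch, j:j + patch].astype(np.float64)
            lo, hi = p.min(), p.max()
            if hi - lo < 1e-6:
                continue
            p = 2 * (p - lo) / (hi - lo) - 1          # rescale to [-1, 1]
            # expansive nonlinearity: stretches values near +/-1, i.e. the
            # brightest and darkest pixels of the patch
            p = np.sign(p) * (np.exp(k * np.abs(p)) - 1) / (np.exp(k) - 1)
            feats.append(p.std())                     # NVS-style statistic
    return np.asarray(feats)

frame = np.random.default_rng(0).integers(0, 1024, (128, 128))  # 10-bit frame
print(hdrmax_style_features(frame)[:5])
```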
CALF is a CORDIC processor implementation of the adaptive lattice filter. Unlike previously reported results, which simply use the CORDIC processor to implement the lattice algorithm formulated for conventional multiply-and-accumulate arithmetic units, this correspondence reports a novel adaptive normalized lattice algorithm that directly updates the rotation angles rather than the reflection coefficients. This angle-updating lattice algorithm is thus derived specifically for implementation with a CORDIC processor. Moreover, we further modify this algorithm into a sign-sign algorithm to take advantage of the fact that CORDIC processors rotate by only a finite number of distinct angles. Computer simulation results show good convergence properties even at rather coarsely quantized angles.
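An illustrative single-stage sketch of the sign-sign angle update: a rotation of the (forward, backward) error pair is adapted by nudging the angle one CORDIC elementary angle, arctan(2^-i), in the direction given by the product of the output error signs. The stage structure and step schedule are simplified assumptions, not the CALF algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000
x = rng.standard_normal(N)
# one lattice-like stage: the inputs are a signal and a correlated companion
f_in = x[1:]
b_in = 0.8 * x[:-1] + 0.2 * rng.standard_normal(N - 1)

theta = 0.0
i = 6                                   # CORDIC elementary-angle index
step = np.arctan(2.0 ** (-i))           # quantized angle increment
for n in range(N - 1):
    c, s = np.cos(theta), np.sin(theta)
    f_out = c * f_in[n] - s * b_in[n]   # rotated (forward) output
    b_out = s * f_in[n] + c * b_in[n]   # rotated (backward) output
    # d(f_out^2)/d(theta) = -2 * f_out * b_out, so descend by moving theta
    # one elementary angle in the direction sign(f_out) * sign(b_out)
    theta += step * np.sign(f_out) * np.sign(b_out)

print("converged angle:", theta)
```

With a fixed elementary angle the adapted angle hovers within one quantization step of the optimum, which is consistent with the reported good convergence at coarsely quantized angles.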
The affine projection sign algorithm (APSA) has garnered significant attention in adaptive filtering due to its exceptional robustness and reduced computational demands. Nevertheless, the inherent use of a fixed projection order in APSA can compromise filtering accuracy and convergence speed. To address this issue, we introduce an innovative strategy for dynamically updating the projection order, resulting in an enhanced version of APSA called the evolving order based APSA (E-APSA). This evolving strategy compares the instantaneous power of output error to a threshold determined by the steady-state mean-square error of APSA, thereby enabling variable projection orders. Furthermore, we provide computational complexity and convergence analyses for E-APSA. Simulation results demonstrate that, compared to other related algorithms, E-APSA offers a significantly faster convergence rate while maintaining competitive steady-state misalignment.
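A sketch of the evolving-order idea wrapped around a standard APSA update. The fixed threshold shown here is an illustrative stand-in for the brief's threshold derived from the steady-state mean-square error of APSA, and the grow/shrink rule is likewise a simplification.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N = 16, 8000
w_true = rng.standard_normal(L)
w = np.zeros(L)
mu, delta = 0.05, 1e-6
P, P_min, P_max = 8, 1, 8               # projection order and its range
threshold = 0.05                         # stand-in for the steady-state-MSE bound

x = rng.standard_normal(N)
for n in range(L + P_max, N):
    # regressor matrix of the P most recent input vectors (as columns)
    X = np.column_stack([x[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
    noise = rng.standard_normal(P) + (rng.random(P) < 0.05) * 20 * rng.standard_normal(P)
    d = X.T @ w_true + noise
    e = d - X.T @ w
    # APSA update: normalized sign of the error vector
    g = X @ np.sign(e)
    w = w + mu * g / (np.linalg.norm(g) + delta)
    # evolving order: large instantaneous error power -> keep a high order
    # for fast convergence; small error power -> shrink the order to cut
    # steady-state misalignment and complexity
    P = min(P + 1, P_max) if e[0] ** 2 > threshold else max(P - 1, P_min)

print("misalignment (dB):",
      10 * np.log10(np.sum((w - w_true)**2) / np.sum(w_true**2)))
```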
This letter proposes a robust polynomial-decomposition-based linear prediction coding algorithm, PDLPC, for formant estimation, which can effectively eliminate merger peaks. PDLPC first combines LPC with statistical analysis to obtain screening criteria for peaks and suspicious merger peaks; it then uses the Cauchy integral formula to count the poles located in the fan-shaped region near a suspicious merger peak, determining whether a merger peak has occurred; finally, it uses polynomial division to separate those merger peaks. Evaluations on Primi and VTR show that PDLPC is effective in separating merger peaks and is competitive with some existing formant estimation algorithms.
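A sketch of the pole-counting step: since the LPC spectral peaks come from the roots of the prediction polynomial A(z), the number of roots inside a fan-shaped region follows from the argument principle, N = (1 / 2*pi*i) * the contour integral of A'(z)/A(z) dz around the sector. The numerical contour discretization and the toy polynomial below are illustrative, not the letter's implementation.

```python
import numpy as np

def count_roots_in_sector(coeffs, r1, r2, phi1, phi2, n=2000):
    """Count roots of the polynomial (descending coeffs) inside the sector
    r1 < |z| < r2, phi1 < arg z < phi2, via the Cauchy argument principle."""
    A = np.poly1d(coeffs)
    dA = A.deriv()
    t = np.linspace(0.0, 1.0, n)
    # closed contour: radial edge out, outer arc, radial edge in, inner arc
    segs = [
        (r1 + (r2 - r1) * t) * np.exp(1j * phi1),
        r2 * np.exp(1j * (phi1 + (phi2 - phi1) * t)),
        (r2 + (r1 - r2) * t) * np.exp(1j * phi2),
        r1 * np.exp(1j * (phi2 + (phi1 - phi2) * t)),
    ]
    total = 0.0 + 0.0j
    for z in segs:
        total += np.trapz(dA(z) / A(z), z)   # numerical contour integral
    return int(round((total / (2j * np.pi)).real))

# sanity check against direct root finding on a toy "LPC" polynomial
coeffs = np.poly([0.9 * np.exp(1j * 0.5), 0.9 * np.exp(-1j * 0.5),
                  0.7 * np.exp(1j * 1.6), 0.7 * np.exp(-1j * 1.6)]).real
print(count_roots_in_sector(coeffs, 0.8, 1.0, 0.3, 0.7))   # expect 1
print(sum(1 for r in np.roots(coeffs)
          if 0.8 < abs(r) < 1.0 and 0.3 < np.angle(r) < 0.7))
```

Two or more poles counted in the sector around a single spectral peak would indicate a merger peak, which the letter then separates by polynomial division.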