The paper presents a new algorithm for the active cancellation of vibroacoustic noise radiated from the compressor installed under each passenger's seat in an autonomous bus. The compressor serves a heating, ventilation, and air-conditioning (HVAC) system that provides individual air conditioning for each passenger. The sound radiated from the compressor is a high-frequency annoyance caused by vibroacoustic radiation from the shell vibration of the compressor. The dominant frequency components of this noise are harmonics of the rotation frequency of the reciprocating compressor, concentrated in the range between 200 and 600 Hz. Such noise is not only distinctly perceptible but also contributes to passenger discomfort and degrades the perceived quality of the vehicle. The aim of this paper is to attenuate the vibroacoustic noise of the HVAC system by developing an active noise control (ANC) system. The widely recognized filtered-X least mean squares (FXLMS) algorithm has been successfully applied to active noise control of reciprocating compressors; however, its performance was found lacking outside the peak frequencies of compressor operation noise. To address this, the conjugate gradient algorithm was employed, which offers a lower residual error and a faster convergence rate than the FXLMS algorithm. As a consequence, the conjugate-gradient-based ANC algorithm achieved enhanced noise reduction not only at the peak frequencies corresponding to the compressor operation frequency but also in the frequency ranges outside these peaks.
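The advantage claimed over FXLMS can be illustrated on the underlying optimal-filter problem: the Wiener solution satisfies the normal equations R w = p, which the conjugate gradient iteration solves with lower residual error and faster convergence than an LMS-style gradient step. A minimal sketch on synthetic data (the matrices below are toy stand-ins, not the paper's ANC setup):

```python
import numpy as np

def conjugate_gradient(R, p, tol=1e-12, max_iter=200):
    """Solve R w = p for the optimal filter weights with the
    Hestenes-Stiefel conjugate gradient iteration (R must be SPD)."""
    w = np.zeros_like(p)
    r = p - R @ w            # initial residual
    d = r.copy()             # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Rd = R @ d
        alpha = rs / (d @ Rd)        # exact line search along d
        w += alpha * d
        r -= alpha * Rd
        rs_new = r @ r
        if rs_new < tol:             # squared residual norm
            break
        d = r + (rs_new / rs) * d    # R-conjugate direction update
        rs = rs_new
    return w

# Synthetic SPD "autocorrelation" matrix and cross-correlation vector
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
R = A.T @ A + 1e-3 * np.eye(8)
p = rng.standard_normal(8)
w = conjugate_gradient(R, p)
```

In an actual ANC loop the reference signal would additionally be filtered through a secondary-path estimate, as in FXLMS.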
ISBN (Print): 9781612848334
The saddle point algorithm (SPA) is an iterative method for linear programming (LP) whose convergence rate slows down as it approaches the saddle point. To remedy this deficiency, the saddle point conjugate gradient algorithm (SPCGA) is proposed. The principle of the new algorithm is explained, and results for LP problems computed with it are presented.
ISBN (Print): 9781467376822
Content-based filtering is one of the most preferred methods to combat Short Message Service (SMS) spam. Memory usage and classification time are essential in SMS spam filtering, especially when working with limited resources. Therefore, a suitable feature selection metric and a proper filtering technique should be used. In this paper, we investigate whether an Artificial Neural Network trained with the scaled conjugate gradient method (ANN-SCG) is suitable for content-based SMS spam filtering using a small set of features selected by the Gini Index (GI) metric. The performance of ANN-SCG is evaluated in terms of true positive rate against false positive rate, Matthews Correlation Coefficient (MCC), and classification time. The evaluation results show that ANN-SCG filters SMS spam successfully with only one hundred features and a short classification time of around six microseconds, reducing both memory size and filtering time. Additional testing on unseen SMS messages validates ANN-SCG with the one hundred features, again confirming its efficiency for SMS spam filtering with an accuracy of 99.1%.
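The idea of training a classifier by conjugate gradient can be sketched with SciPy's general-purpose CG optimizer (a Polak-Ribiere variant, not Moller's scaled conjugate gradient used in the paper); the data below are synthetic stand-ins for the GI-selected SMS features:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in for GI-selected SMS features (hypothetical data)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
y = (X @ true_w + 0.1 * rng.standard_normal(200) > 0).astype(float)

def loss(w):
    """Numerically stable mean logistic loss."""
    z = X @ w
    return np.mean(np.logaddexp(0.0, z) - y * z)

def grad(w):
    z = X @ w
    return X.T @ (1.0 / (1.0 + np.exp(-z)) - y) / len(y)

# Train the linear classifier by conjugate gradient
res = minimize(loss, np.zeros(5), jac=grad, method="CG")
train_acc = np.mean((X @ res.x > 0) == y)
```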
ISBN (Print): 9783319265353; 9783319265346
In this paper, conjugate gradient algorithms for complex-valued feedforward neural networks are proposed. Since these algorithms yielded better training results for the real-valued case, an extension to the complex-valued case is a natural option to enhance the performance of the complex backpropagation algorithm. The full deduction of the classical variants of the conjugate gradient algorithm is presented, and the resulting training methods are exemplified on synthetic and real-world applications. The experimental results show a significant improvement over the complex gradient descent algorithm.
The main objective of this paper is to address the backward problem in the distributed-order time-space fractional diffusion equation (DTSFDE) with Neumann boundary conditions using final data. We begin by employing the Finite Difference Method (FDM) combined with matrix transformation techniques to compute the direct problem of the DTSFDE. Subsequently, using the Tikhonov regularization method, the inverse problem is transformed into a variational problem. With the help of the derived sensitivity and adjoint problems, the conjugate gradient algorithm is employed to find an approximate solution for the initial data. Finally, through numerical examples in one and two dimensions, we demonstrate the effectiveness and stability of the method, further verifying its reliability in practical applications.
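The Tikhonov-regularized solve at the core of such schemes can be sketched as conjugate gradient applied matrix-free to the regularized normal equations; the dense random operator below is a placeholder for a discretized forward map, not the DTSFDE solver:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 20))   # placeholder forward operator
b = rng.standard_normal(60)         # placeholder final-time data
lam = 0.1                           # Tikhonov regularization parameter

# Matrix-free SPD operator for (A^T A + lam I) x = A^T b
op = LinearOperator((20, 20),
                    matvec=lambda v: A.T @ (A @ v) + lam * v,
                    dtype=float)
x, info = cg(op, A.T @ b)           # conjugate gradient solve

# Dense reference solution for comparison
ref = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ b)
```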
Mismatches in signal and array geometry seriously degrade the performance of an adaptive beamformer. In this paper, we propose two methods for robust adaptive beamforming based on the conjugate gradient (CG) algorithm. The proposed beamformers offer a significant improvement in computational complexity while matching the performance of the best current robust beamformers. The first method belongs to the diagonal loading technique. We derive a diagonal loading CGLS algorithm (CG applied to the normal equations) and propose a simple method to choose the loading level based on a coarse estimate of the desired signal power. This parameter-free method effectively reduces signal self-cancellation at high signal-to-noise ratio. The second method belongs to the regularization technique. Since the CG algorithm has a regularizing effect, with the iteration number acting as the regularization parameter, the stopping criterion plays an important role in the robustness. We develop three fast stopping criteria for the CG iteration, which reduce the stopping complexity from O(N) or O(N²) to O(1). The first two are fast versions of existing methods; the third, based on fast Ritz value estimation, is new and outperforms the others.
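The regularizing effect of early-stopped CG can be sketched with CGLS plus a discrepancy-principle stopping rule that costs O(1) per iteration beyond the residual norm already at hand; this is a generic illustration, not the paper's Ritz-value criterion:

```python
import numpy as np

def cgls_discrepancy(A, b, noise_level, tau=1.1, max_iter=500):
    """CGLS (CG on the normal equations) stopped by the discrepancy
    principle: a cheap test per iteration on the data residual.  The
    iteration count itself acts as the regularization parameter."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                 # data-space residual
    s = A.T @ r                   # normal-equations residual
    d = s.copy()
    gamma = s @ s
    for k in range(1, max_iter + 1):
        q = A @ d
        alpha = gamma / (q @ q)
        x += alpha * d
        r -= alpha * q
        if np.linalg.norm(r) <= tau * noise_level:   # stopping test
            break
        s = A.T @ r
        gamma_new = s @ s
        d = s + (gamma_new / gamma) * d
        gamma = gamma_new
    return x, k

# Mildly ill-posed synthetic problem with additive noise
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
V, _ = np.linalg.qr(rng.standard_normal((40, 40)))
A = U @ np.diag(np.logspace(0, -4, 40)) @ V.T
x_true = V @ np.ones(40)
noise = 1e-2 * rng.standard_normal(40)
b = A @ x_true + noise
x_reg, iters = cgls_discrepancy(A, b, np.linalg.norm(noise))
```

Stopping early keeps the noise-amplifying small-singular-value directions out of the solution, which is why the choice of criterion matters for robustness.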
Kernel methods have been successfully applied to nonlinear problems in machine learning and signal processing, and various kernel-based algorithms have been proposed over the last two decades. In this paper, we investigate kernel conjugate gradient (KCG) algorithms in both batch and online modes. By expressing the solution vector of the CG algorithm as a linear combination of the input vectors and applying the kernel trick, we develop the KCG algorithm for batch mode. Because the CG algorithm is iterative in nature, it can greatly reduce computation through reduced-rank processing, which also provides robustness against overlearning. The online KCG algorithm is also derived; it converges as fast as the kernel recursive least squares (KRLS) algorithm, but its computational cost is only a quarter of that of KRLS. Another attractive feature of the online KCG algorithm compared with other kernel adaptive algorithms is that it requires no user-defined parameters. To control the growth of the data size in online applications, a simple sparsification criterion based on the angles among elements in the reproducing kernel Hilbert space is proposed. The angle criterion is equivalent to the coherence criterion but does not require the kernel to be unit norm. Finally, numerical experiments illustrate the effectiveness of the proposed algorithms.
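A batch KCG step reduces, via the kernel trick, to running CG on the Gram system for the expansion coefficients. The sketch below uses a Gaussian kernel and an explicit ridge term on toy data, a simplified stand-in for the reduced-rank regularization described in the abstract:

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

# Toy 1-D regression data (stand-in for a real signal)
rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)

lam = 1e-2
K = gauss_kernel(X, X)
M = K + lam * np.eye(80)      # SPD Gram system: (K + lam I) alpha = y

# Plain CG on the Gram system -- each step needs only kernel matvecs
alpha = np.zeros(80)
r = y - M @ alpha
d = r.copy()
rs = r @ r
for _ in range(200):
    Md = M @ d
    a = rs / (d @ Md)
    alpha += a * d
    r -= a * Md
    rs_new = r @ r
    if rs_new < 1e-16:
        break
    d = r + (rs_new / rs) * d
    rs = rs_new

pred = K @ alpha              # in-sample predictions
```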
We suggest a revised form of a classic measure function to be employed in the optimization model of the nonnegative matrix factorization problem. More precisely, using sparse matrix approximations, a revision term is embedded in the model to penalize ill-conditioning in the computational trajectory toward the factorization elements. Then, as an extension of the Euclidean norm, we employ the ellipsoid norm to obtain adaptive formulas for the Dai-Liao parameter in a least-squares framework. In essence, the parametric choices are obtained by pushing the Dai-Liao direction toward the direction of a well-functioning three-term conjugate gradient algorithm. In our scheme, the well-known BFGS and DFP quasi-Newton updating formulas are used to characterize the positive definite matrix factor of the ellipsoid norm. To assess how effective the model revisions and algorithmic modifications are, we conduct classic computational tests and evaluate the outputs; as reported, the results support the value of our analytical efforts.
ISBN (Print): 9783319486741; 9783319486734
The nonlinear conjugate gradient (CG) algorithm is one of the most effective line search algorithms for optimization problems due to its simplicity and low memory requirements, particularly for large-scale problems. However, results on new conjugacy conditions remain very limited. In this paper, we propose a new conjugacy condition and two CG formulas. Global convergence is established for these algorithms, and numerical results are reported for benchmark problems.
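For reference, the classical nonlinear CG baseline that such new conjugacy conditions compete with is available in SciPy (a Polak-Ribiere variant); here it is run on the standard Rosenbrock benchmark, not on the paper's test set:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Nonlinear conjugate gradient on the 10-D Rosenbrock function;
# the global minimum is at the all-ones vector.
x0 = np.full(10, 1.5)
res = minimize(rosen, x0, jac=rosen_der, method="CG",
               options={"maxiter": 5000})
```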
One of the most powerful iterative schemes for solving symmetric, positive definite linear systems is the conjugate gradient algorithm of Hestenes and Stiefel [J. Res. Nat. Bur. Standards, 49 (1952), pp. 409-435], especially when it is combined with preconditioning (cf. [P. Concus, G. H. Golub, and D. P. O'Leary, in Proceedings of the Symposium on Sparse Matrix Computations, Argonne National Laboratory, 1975, Academic, New York, 1976]). In many applications, the solution of a sequence of equations with the same coefficient matrix is required. We propose an approach that combines the conjugate gradient method with Chebyshev filtering polynomials, applied only to part of the spectrum of the coefficient matrix, as preconditioners that target specific convergence properties of the conjugate gradient method. We show that our preconditioner places a large number of eigenvalues near one and does not degrade the distribution of the smallest ones. This procedure enables us to construct a lower-dimensional Krylov basis that is very rich with respect to the smallest eigenvalues and associated eigenvectors. A major benefit of our method is that this information can then be exploited in a straightforward way to solve sequences of systems with little extra work. We illustrate the performance of our method through numerical experiments on a set of linear systems.
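The general shape of preconditioned CG is worth recalling; the sketch below uses a simple Jacobi (diagonal) preconditioner on a synthetic SPD system, as a stand-in for the Chebyshev filtering polynomials of the paper:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient (Hestenes-Stiefel) with a
    diagonal (Jacobi) preconditioner M = diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r               # apply preconditioner M^{-1} r
    d = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ad = A @ d
        alpha = rz / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        d = z + (rz_new / rz) * d    # M^{-1}-weighted direction update
        rz = rz_new
    return x

# Synthetic SPD system with a strongly varying diagonal
rng = np.random.default_rng(5)
B = rng.standard_normal((30, 30))
A = B @ B.T + np.diag(np.linspace(1, 100, 30))
b = rng.standard_normal(30)
x = pcg(A, b, 1.0 / np.diag(A))
```

A polynomial preconditioner as in the paper would replace the diagonal scaling with an evaluation of a Chebyshev polynomial in A, clustering part of the spectrum near one.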