The affinity propagation clustering (APC) algorithm is popular for fingerprint database clustering because it does not require the number of clusters to be specified in advance. However, its clustering performance depends heavily on the chosen fingerprint similarity metric, with distance-based metrics being the most commonly used, and there is little comprehensive research on how such metrics affect APC's clustering performance on fingerprint databases. This paper investigates the impact of various distance-based fingerprint similarity metrics on the clustering performance of the APC algorithm and identifies, for a given fingerprint database, the metric that yields the best clustering performance. The analysis is conducted across five experimentally generated online fingerprint databases, using seven distance-based metrics: Euclidean, squared Euclidean, Manhattan, Spearman, cosine, Canberra, and Chebyshev distances. Using the silhouette score as the performance metric, the simulation results indicate that structural characteristics of the fingerprint database, such as the distribution of fingerprint vectors, play a key role in selecting the best similarity metric. Nevertheless, Euclidean and Manhattan distances are generally preferable choices across most fingerprint databases, regardless of their structural characteristics. It is recommended that other factors, such as computational cost and the presence or absence of outliers, be considered alongside the structural characteristics of the fingerprint database when choosing the similarity metric for maximum clustering performance.
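As a rough illustration of this kind of comparison (not the paper's code), the sketch below runs scikit-learn's AffinityPropagation with precomputed negative-distance similarities over the seven metrics and scores each clustering with the silhouette score; the fingerprint matrix X is a synthetic stand-in for an RSSI fingerprint database.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import rankdata
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(-70.0, 10.0, size=(200, 8))   # stand-in RSSI fingerprints

# Distance matrices for the six metrics supported directly by cdist
dists = {m: cdist(X, X, metric=m) for m in
         ["euclidean", "sqeuclidean", "cityblock", "cosine",
          "canberra", "chebyshev"]}
# Spearman distance = correlation distance on rank-transformed rows
R = rankdata(X, axis=1)
dists["spearman"] = cdist(R, R, metric="correlation")

for name, D in dists.items():
    # APC expects similarities, so negate the distances
    ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(-D)
    if len(set(ap.labels_)) > 1:             # silhouette needs >= 2 clusters
        print(f"{name:11s} silhouette = {silhouette_score(X, ap.labels_):.3f}")
```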
In recent years, with the development of brain science, the value of electroencephalography (EEG) data has become prominent. However, because EEG involves real-time transmission of large amounts of data, there is an urgent need for efficient and lightweight EEG compression algorithms. Existing EEG compression methods have many shortcomings, such as a limited compression ratio (CR), poor reconstructed signal quality, and excessively large model size, and thus cannot meet the needs of portable, wearable EEG detection devices. In this letter, a method for EEG signal compression based on 2-D rhythm feature maps is proposed. Rhythm features are extracted with the discrete wavelet transform (DWT), and the signal is compressed and reconstructed using encoding and reconstruction channels based on an autoencoder network. At the output of the encoding channel, entropy coding further compresses the data volume; after comparing several coding algorithms, JPEG2000 is selected as the locally optimal choice. In addition, based on grouped convolution and dilated (atrous) convolution kernels, a lightweight structure is designed to simplify the proposed network and greatly reduce the number of model parameters. Experiments show that, compared with similar algorithms, the percentage root-mean-square distortion and mean squared error (MSE) of the proposed algorithm are 14.76% and 2.95%, respectively, at a relatively high CR (about 16). Only 87.9k parameters are used, making the method well suited to embedded scenarios and wearable devices.
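A hedged sketch of the rhythm-map idea (assumptions, not the letter's implementation): a multilevel DWT splits one EEG channel into rhythm sub-bands, which are zero-padded and stacked into a 2-D map that an autoencoder-style compressor could then consume. The sampling rate, wavelet, and level count are illustrative choices.

```python
import numpy as np
import pywt

fs = 256                                   # assumed sampling rate (Hz)
eeg = np.random.randn(4 * fs)              # stand-in for a 4 s EEG segment

# A 5-level db4 DWT at 256 Hz roughly separates the delta..gamma rhythms
coeffs = pywt.wavedec(eeg, "db4", level=5)

# Zero-pad each sub-band to the longest length and stack into a 2-D map
width = max(len(c) for c in coeffs)
feature_map = np.zeros((len(coeffs), width))
for i, c in enumerate(coeffs):
    feature_map[i, :len(c)] = c
print(feature_map.shape)                   # the 2-D "rhythm feature map"
```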
Despite the impressive technological strides made over the years, human lives still depend very much on the natural environment. Fortunately, technology can now be used to help address critical environmental concerns in air quality, soil condition, and weather events. In all of these areas and many others, signal processing is supporting the ability to provide immediate and long-term observations and insights.
In recent years, correntropy-based algorithms, including the maximum correntropy criterion (MCC), generalized MCC (GMCC), and kernel MCC (KMCC), and hyperbolic cosine function-based algorithms, such as the hyperbolic cosine adaptive filter (HCAF), logarithmic HCAF (LHCAF), and least lncosh (Llncosh) algorithm, have been widely utilized in adaptive filtering due to their robustness to non-Gaussian/impulsive background noise. However, such algorithms suffer from high steady-state misalignment. To minimize the steady-state misalignment while keeping comparable computational complexity, a new robust norm based on an exponential hyperbolic cosine function (EHCF) is introduced, and a corresponding EHCF-based adaptive filter, the exponential hyperbolic cosine adaptive filter (EHCAF), is developed in this letter. The computational complexity and a bound on the learning rate for stability of the proposed algorithm are also studied. A set of simulation studies is carried out for a system identification scenario to assess the performance of the proposed algorithm. Further, the EHCAF algorithm is extended to the filtered-x EHCAF (Fx-EHCAF) algorithm for robust room equalization.
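The letter's exact EHCF is not reproduced here, but a saturating exponential hyperbolic cosine cost of the assumed form J(e) = 1 - exp(1 - cosh(lam*e)) has the qualitative behavior described: it is quadratic near e = 0 and its influence function vanishes for large errors, which is what yields robustness to impulsive noise. A minimal system identification sketch under that assumed cost:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.8, 0.1])   # hypothetical unknown system
w = np.zeros(4)
mu, lam = 0.05, 2.0

for _ in range(20000):
    x = rng.normal(size=4)
    d = w_true @ x + 0.05 * rng.standard_t(df=1.5)   # impulsive noise
    e = d - w @ x
    z = np.clip(lam * e, -30.0, 30.0)      # avoid overflow in sinh/cosh
    # dJ/de for J(e) = 1 - exp(1 - cosh(lam*e)): bounded influence function
    g = lam * np.sinh(z) * np.exp(1.0 - np.cosh(z))
    w += mu * g * x                        # stochastic-gradient update

print(np.round(w, 3))                      # approaches w_true despite outliers
```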
This paper proposes a novel beam-domain channel estimation (CE) algorithm via sparse Bayesian learning (SBL) using a complex t-prior for massive multi-user multiple-input multiple-output (MIMO) systems. Due to the sidelobe leakage and insufficient observation resolution resulting from physical constraints, the equivalent channel after digital beamforming at the receiver has a structure with many small but non-zero elements, which cannot be modeled strictly as a sparse signal. To fully capture this pseudo-sparse structure characterized by the signal strength variations among elements, we design a novel SBL algorithm that incorporates a complex t-distribution using a hierarchical Bayesian model. By exploiting the high adaptability of this heavy-tailed prior, the signal strength can be learned efficiently, accounting for elements with non-zero but small values; this is verified by a regularization analysis based on an equivalent optimization problem. The efficacy of the proposed CE algorithm is confirmed by numerical simulations, which show that the proposed method not only significantly outperforms the state-of-the-art (SotA) sparse signal recovery (SSR)-based algorithms but also achieves the performance of a genie-aided scheme over a wide signal-to-noise ratio (SNR) range in both sub-6 GHz and millimeter-wave (mmWave) wireless communication scenarios.
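For orientation, the sketch below shows only the generic evidence-maximization (ARD-style) loop that such SBL channel estimators build on, for y = A h + n with per-element prior variances; the paper's complex t-prior hierarchy and beam-domain model are richer than this, and all dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K = 40, 64, 6                        # observations, beams, active taps
A = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)
h = np.zeros(N, complex)
idx = rng.choice(N, K, replace=False)
h[idx] = rng.normal(size=K) + 1j * rng.normal(size=K)
sigma2 = 1e-2                              # noise variance (assumed known here)
y = A @ h + np.sqrt(sigma2 / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))

gamma = np.ones(N)                         # per-element prior variances
for _ in range(50):
    # posterior of h given the current hyperparameters
    Sigma = np.linalg.inv(A.conj().T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.conj().T @ y / sigma2
    gamma = np.abs(mu) ** 2 + np.real(np.diag(Sigma))   # EM hyperparameter update

print(np.linalg.norm(mu - h) / np.linalg.norm(h))       # relative estimation error
```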
Currently, adaptive filtering algorithms are widely applied to frequency estimation in power systems. However, research on diffusion tasks remains insufficient. Existing diffusion adaptive frequency estimation algorithms exhibit certain limitations in handling input noise and lack robustness against impulsive noise. Moreover, traditional adaptive filtering algorithms designed on the strictly linear (SL) model fail to effectively address frequency estimation in unbalanced three-phase power systems. To address these issues, this letter proposes an improved diffusion augmented complex maximum total correntropy (DAMTCC) algorithm based on the widely linear (WL) model. The proposed algorithm not only significantly enhances the capability to handle input noise but also demonstrates superior robustness to impulsive noise. Furthermore, it resolves the critical challenge of frequency estimation in unbalanced three-phase power systems, offering an efficient and reliable solution for diffusion power system frequency estimation. Finally, we analyze the stability of the algorithm, and computer simulations verify its excellent performance.
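A rough sketch of the adapt-then-combine diffusion pattern underlying such estimators: each node adapts a widely linear one-step predictor s(k+1) ≈ a s(k) + b s*(k) with a correntropy-style robust weight, nodes then average their coefficients, and the frequency is recovered from the WL coefficients via sin(phi) = sqrt(Im(a)^2 - |b|^2). The actual DAMTCC update in the letter differs; all parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f = 1000.0, 50.0
phi = 2 * np.pi * f / fs
n = np.arange(4001)
# unbalanced three-phase system -> complex voltage with a conjugate component
s = np.exp(1j * phi * n) + 0.3 * np.exp(-1j * phi * n)

nodes, mu, sig2 = 4, 0.05, 1.0
C = np.full((nodes, nodes), 1.0 / nodes)   # combine (averaging) matrix
a = np.zeros(nodes, complex)
b = np.zeros(nodes, complex)
for k in range(len(n) - 1):
    for i in range(nodes):                 # adapt step at each node
        x = s[k] + 0.01 * (rng.normal() + 1j * rng.normal())
        e = s[k + 1] - (a[i] * x + b[i] * np.conj(x))
        w = np.exp(-abs(e) ** 2 / sig2)    # correntropy-style robust weight
        a[i] += mu * w * e * np.conj(x)
        b[i] += mu * w * e * x
    a, b = C @ a, C @ b                    # combine step

val = np.imag(a[0]) ** 2 - np.abs(b[0]) ** 2
f_hat = fs / (2 * np.pi) * np.arcsin(np.sqrt(np.clip(val, 0.0, 1.0)))
print(f_hat)                               # close to 50 Hz
```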
Three-dimensional (3D) point clouds are important data representations in visualization applications. The rapidly growing utility and popularity of point cloud processing strongly motivate a plethora of research activities on large-scale point cloud processing and feature extraction. In this work, we investigate point cloud resampling based on hypergraph signal processing (HGSP). We develop a novel method to extract sharp object features and reduce the data size of the point cloud representation. By directly estimating the hypergraph spectrum based on hypergraph stationary processing, we design a spectral kernel-based filter to capture high-dimensional interactions among point signal nodes and to better preserve object surface outlines. Experimental results validate the effectiveness of hypergraphs in representing point clouds and demonstrate the robustness of the proposed algorithm under noise.
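As a point of reference, the pairwise-graph analogue of such feature-preserving resampling can be sketched in a few lines: score each point by a Laplacian-style high-pass response on a k-nearest-neighbor graph (its distance from the local neighborhood mean) and keep the highest-scoring points. The paper's hypergraph spectral kernel generalizes this pairwise construction; the point cloud and parameters here are stand-ins.

```python
import numpy as np
from scipy.spatial import cKDTree

def resample(points, keep_ratio=0.2, k=8):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)   # k neighbors plus the point itself
    neigh_mean = points[idx[:, 1:]].mean(axis=1)
    # distance to the neighborhood mean ~ high-pass (Laplacian-like) response,
    # large near edges and corners, small on flat surface regions
    score = np.linalg.norm(points - neigh_mean, axis=1)
    keep = np.argsort(score)[::-1][: int(keep_ratio * len(points))]
    return points[keep]

pts = np.random.rand(1000, 3)              # stand-in point cloud
print(resample(pts).shape)                 # (200, 3)
```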
The robust estimation of the covariance matrix is a frequent task in practical applications in which, more often than not, some data samples are outliers. Several methods can be used to robustly estimate a covariance matrix from corrupted data, a representative example being the minimum covariance determinant (MCD) method. In this paper we present a maximum conditional likelihood interpretation of MCD that provides a new motivation for, as well as further insights into, this method. To perform at its best, MCD requires information on the number of outliers in the data, which usually is not available. We propose two new methods for covariance matrix estimation from data with outliers that do not suffer from this problem: TEST (multiple-hypothesis testing method), which uses the false discovery rate (FDR) to test a set of model hypotheses and hence estimate the number of outliers and their locations, and LIKE (penalized likelihood method), which solves the outlier estimation problem using a generalized information criterion (GIC) to penalize the complexity of a high-dimensional data model. We show by means of numerical simulations that the performances of TEST and LIKE are similar to one another, as well as to that of the oracle MCD (which uses the true number of outliers), and significantly better than that of MCD using an upper bound on the outlier number.
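The tuning burden that TEST and LIKE remove is easy to see with scikit-learn's MinCovDet, whose support_fraction encodes an assumed bound on the outlier count; the data below are synthetic.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(4)
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=300)
X[:30] += rng.normal(0.0, 10.0, size=(30, 2))    # contaminate 10% of samples

print(EmpiricalCovariance().fit(X).covariance_)  # inflated by the outliers
# support_fraction = 0.9 assumes at most 10% outliers: exactly the
# information that usually is not available in practice
print(MinCovDet(support_fraction=0.9, random_state=0).fit(X).covariance_)
```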
Nonnegative CANDECOMP/PARAFAC (NCP) tensor decomposition is a powerful tool for multiway signal processing. The alternating direction method of multipliers (ADMM) has become increasingly popular for solving tensor decomposition problems in the block coordinate descent framework. However, the ADMM-based NCP algorithm suffers from rank deficiency and slow convergence for some large-scale and highly sparse tensor data. The proximal algorithm is a preferred way to enhance optimization algorithms and improve their convergence properties. In this study, we propose a novel NCP algorithm using the alternating direction proximal method of multipliers (ADPMM), which builds on the proximal algorithm. The proposed NCP algorithm guarantees convergence and overcomes the rank deficiency. Moreover, we implement the proposed NCP using an inexact scheme that alternately optimizes the subproblems; each subproblem is optimized with a finite number of inner iterations, yielding fast computation. Our NCP algorithm is a hybrid of alternating optimization and ADPMM and is named ***. Experimental results on synthetic and real-world tensors demonstrate the effectiveness and efficiency of the proposed algorithm.
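For intuition, each factor update inside an ADMM-based NCP solver reduces to a nonnegative least-squares subproblem of the form min_B (1/2)||X - B A^T||_F^2 subject to B >= 0. A minimal ADMM sketch of that subproblem is below; the ADPMM of this study adds a proximal term and inexact inner loops on top of this basic splitting, and the names here are illustrative.

```python
import numpy as np

def nnls_admm(X, A, rho=1.0, iters=100):
    """min_B 0.5*||X - B A^T||_F^2  s.t.  B >= 0, via a B/Z splitting."""
    r = A.shape[1]
    AtA, XA = A.T @ A, X @ A
    L = np.linalg.cholesky(AtA + rho * np.eye(r))   # factor once, reuse
    Z = np.zeros((X.shape[0], r))
    U = np.zeros_like(Z)
    for _ in range(iters):
        rhs = XA + rho * (Z - U)
        B = np.linalg.solve(L.T, np.linalg.solve(L, rhs.T)).T
        Z = np.maximum(B + U, 0.0)         # projection onto the nonnegative orthant
        U += B - Z                         # dual update
    return Z

A = np.random.rand(30, 5)
X = np.random.rand(40, 5) @ A.T            # consistent nonnegative data
print(np.linalg.norm(nnls_admm(X, A) @ A.T - X))   # near-zero residual
```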
Traditional real-time object detection networks deployed in autonomous aerial vehicles (AAVs) struggle to extract features from small objects in complex backgrounds with occlusions and overlapping objects. To address this challenge, we propose FM-RTDETR, a real-time object detection algorithm optimized for small object detection. We redesign the encoder of RT-DETRv2 by integrating the Feature Aggregation and Diffusion Network (FADN), improving the algorithm's ability to capture contextual information. Subsequently, we introduce the Parallel Atrous Mamba Feature Fusion Module (PAMFFM), which combines shallow and deep semantic information to better capture small object features. Furthermore, we propose the Cross-stage Enhanced Feature Fusion Module (CEFFM), which merges features for small objects to provide richer and more detailed information. Finally, we propose STIoU Loss, which incorporates a penalty term to adjust the scaling of the loss function, improving detection granularity for small objects. FM-RTDETR achieves AP(50) scores of 54.0% and 56.3% on the VisDrone2019-DET and AI-TOD datasets, respectively. Compared with other state-of-the-art methods, our method shows great potential for small object detection from drones.
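The abstract does not give STIoU's exact formulation, so the sketch below is only a generic IoU-style loss with a purely illustrative scale-aware weight that upweights small targets, to make the idea of a penalty term adjusting the loss scaling concrete.

```python
import numpy as np

def scaled_iou_loss(box_p, box_t, eps=1e-7):
    """Boxes as (x1, y1, x2, y2); returns a scale-weighted 1 - IoU."""
    ix1, iy1 = np.maximum(box_p[:2], box_t[:2])
    ix2, iy2 = np.minimum(box_p[2:], box_t[2:])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_t = (box_t[2] - box_t[0]) * (box_t[3] - box_t[1])
    iou = inter / (area_p + area_t - inter + eps)
    scale_w = 1.0 / (np.sqrt(area_t) + 1.0)   # illustrative small-object weight
    return scale_w * (1.0 - iou)

print(scaled_iou_loss(np.array([0, 0, 10, 10.0]), np.array([2, 2, 12, 12.0])))
```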