A prototype-based method is developed to discriminate different types of clutter (ground clutter, sea clutter, and insects) from weather echoes using polarimetric measurements and their textures. This method employs a clustering algorithm to generate data groups from the training dataset, each of which is modeled as a weighted Gaussian distribution called a "prototype." Two classification algorithms are proposed based on the prototypes, namely the maximum prototype likelihood classifier (MPLC) and the Bayesian classifier (BC). In the MPLC, the probability of a data point with respect to each prototype is estimated to retrieve the final class label under the maximum likelihood criterion. The BC models the probability density function as a Gaussian mixture composed of the prototypes. The class label is obtained under the maximum a posteriori criterion. The two algorithms are applied to S-band dual-polarization CP-2 weather radar data in Southeast Queensland, Australia. The classification results for the test dataset are compared with the NCAR fuzzy-logic particle identification algorithm. Generally good agreement is found for weather echo and ground clutter; however, the confusion matrix indicates that the techniques tend to differ from each other in the recognition of insects.
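The MPLC/BC distinction can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes diagonal-covariance Gaussian prototypes fitted with a plain k-means pass, with MPLC scoring each class by its single best prototype and BC by the full mixture likelihood.

```python
import numpy as np

def fit_prototypes(X, n_proto=2, seed=0):
    """Cluster one class's training data into prototypes, each a
    weighted diagonal Gaussian stored as (weight, mean, variance)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_proto, replace=False)]
    for _ in range(20):  # Lloyd's algorithm to form the data groups
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_proto):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(0)
    protos = []
    for k in range(n_proto):
        if not (labels == k).any():
            continue
        Xk = X[labels == k]
        protos.append(((labels == k).mean(), Xk.mean(0), Xk.var(0) + 1e-6))
    return protos

def log_gauss(x, mean, var):
    # log density of a diagonal Gaussian at x
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var).sum()

def classify(x, class_protos, mode="mplc"):
    scores = []
    for protos in class_protos:
        ll = [np.log(w) + log_gauss(x, m, v) for w, m, v in protos]
        # MPLC: best single prototype; BC: full Gaussian-mixture likelihood
        scores.append(max(ll) if mode == "mplc" else np.logaddexp.reduce(ll))
    return int(np.argmax(scores))
```

With equal class priors, the BC's maximum a posteriori decision reduces to comparing the per-class mixture likelihoods, which is what the `bc` branch does.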
Community detection is a fundamental task in complex network analysis, aiming to partition networks into multiple densely connected subgraphs. While community detection has been widely studied, existing methods often lack interpretability, making it challenging to explain key aspects such as node assignments, community boundaries, and inter-community relationships. In this paper, the explainable community detection issue is addressed, in which each community is characterized using a central node and its corresponding radius. The central node represents the most representative node in the community, while the radius defines its influence scope. Such an explainable community detection issue is formulated as an optimization problem in which the objective is to maximize the central node's coverage and accuracy in explaining its associated community. To solve this problem, two algorithms are developed: a naive algorithm and a fast approximate algorithm that incorporates heuristic strategies to improve computational efficiency. Experimental results on 9 real-world networks demonstrate that the proposed methods can effectively interpret community structures with high accuracy and efficiency. More precisely, the objective function values achieved by the identified pairs of center and radius exceed 0.7 on most communities and the running time is generally no more than 10 s on a network with approximately one thousand nodes. The source code of the proposed methods can be found at: https://***/xuannnn523/CCTS.
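The naive algorithm can be sketched as an exhaustive search over (center, radius) pairs. This is a minimal sketch under stated assumptions: the graph is an adjacency dict, the radius is a hop count, and coverage and accuracy are combined with an F1-style harmonic mean (the paper's exact objective may differ).

```python
from collections import deque

def ball(adj, center, radius):
    """Nodes within `radius` hops of `center` (BFS over an adjacency dict)."""
    dist = {center: 0}
    q = deque([center])
    while q:
        u = q.popleft()
        if dist[u] == radius:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return set(dist)

def explain_community(adj, community, max_radius=3):
    """Naive search: try every (center, radius) pair inside the community
    and keep the one maximizing the F1 combination of coverage and accuracy."""
    community = set(community)
    best = (0.0, None, None)
    for c in community:
        for r in range(max_radius + 1):
            b = ball(adj, c, r)
            inter = len(b & community)
            coverage = inter / len(community)  # fraction of community reached
            accuracy = inter / len(b)          # fraction of ball inside community
            f1 = 2 * coverage * accuracy / (coverage + accuracy)
            if f1 > best[0]:
                best = (f1, c, r)
    return best  # (score, center, radius)
```

The fast approximate variant described in the abstract would prune this double loop with heuristics; the exhaustive version above shows the objective being optimized.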
Dysarthric speech recognition (DSR) presents a formidable challenge due to inherent inter-speaker variability, leading to severe performance degradation when applying DSR models to new dysarthric speakers. Traditional speaker adaptation methodologies typically involve fine-tuning models for each speaker, but this strategy is cost-prohibitive and inconvenient for disabled users, requiring substantial data collection. To address this issue, we introduce a prototype-based approach that markedly improves DSR performance for unseen dysarthric speakers without additional fine-tuning. Our method employs a feature extractor trained with HuBERT to produce per-word prototypes that encapsulate the characteristics of previously unseen speakers. These prototypes serve as the basis for classification. Additionally, we incorporate supervised contrastive learning to refine feature extraction. By enhancing representation quality, we further improve DSR performance, enabling effective personalized DSR.
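The prototype idea itself is simple to illustrate. The sketch below is hypothetical and stands in for the paper's pipeline: real per-word prototypes would average HuBERT-derived utterance embeddings, whereas here toy vectors are used, with a nearest-prototype decision under cosine similarity.

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """Average the embeddings of each word's enrollment utterances to
    obtain one prototype vector per word."""
    protos = {}
    for w in set(labels):
        vecs = [e for e, l in zip(embeddings, labels) if l == w]
        protos[w] = np.mean(vecs, axis=0)
    return protos

def classify(embedding, protos):
    """Nearest-prototype decision under cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(protos, key=lambda w: cos(embedding, protos[w]))
```

Because classification only compares a test embedding against enrollment prototypes, a new speaker can be supported from a few utterances without fine-tuning the feature extractor.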
ISBN (print): 9783642147692
Crosslingual text classification consists of exploiting labeled documents in a source language to classify documents in a different target language. In addition to the evident translation problem, this task also faces some difficulties caused by the cultural discrepancies manifested in both languages by means of different topic distributions. Such discrepancies make the classifier unreliable for the categorization task. In order to tackle this problem we propose to improve the classification performance by using information embedded in the target dataset itself. The central idea of the proposed approach is that similar documents must belong to the same category. Therefore, it classifies the documents by considering not only their own content but also information about the categories assigned to other similar documents from the same target dataset. Experimental results using three different languages demonstrate the appropriateness of the proposed approach.
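One simple way to realize "similar documents must belong to the same category" is a label-propagation-style refinement; this is an assumed stand-in for the paper's method, not its actual algorithm. It blends each target document's own class probabilities (from a base crosslingual classifier) with the similarity-weighted probabilities of its neighbours in the target dataset.

```python
import numpy as np

def refine_labels(base_probs, sim, alpha=0.5, n_iter=10):
    """Iteratively blend each document's own class probabilities with the
    similarity-weighted average of its neighbours' current probabilities,
    so similar target documents drift toward the same category."""
    probs = base_probs.copy()
    # row-normalize the similarity matrix; zero diagonal means no self-vote
    W = sim.copy()
    np.fill_diagonal(W, 0.0)
    W /= W.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        probs = alpha * base_probs + (1 - alpha) * W @ probs
        probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1)
```

A document whose own content score is ambiguous but whose near-duplicates are confidently labeled gets pulled to the neighbours' category, which is the behaviour the abstract describes.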
The paper discusses a prototype-based method for generating controllable human body models. Users are assisted in modifying an existing model by controlling the provided parameters. The prototypes from th...