High-fidelity kinship face synthesis has many potential applications, such as kinship verification, missing child identification, and social media analysis. However, it is challenging to synthesize high-quality descendant faces with genetic relations due to the lack of large-scale, high-quality annotated kinship data. This paper proposes the RFG (Region-level Facial Gene) extraction framework to address this issue. We propose to use an IGE (Image-based Gene Encoder), an LGE (Latent-based Gene Encoder), and a Gene Decoder to learn the RFGs of a given face image and the relationships between RFGs and the latent space of StyleGAN2. Because cycle-like losses are designed to measure the $\mathcal{L}_{2}$ distances between the output of the Gene Decoder and that of the image encoder, and between the outputs of the LGE and the IGE, only face images are required to train our framework, i.e., no paired kinship face data are needed. Based upon the proposed RFGs, a crossover and mutation module is further designed to inherit the facial parts of parents, and a Gene Pool is used to introduce variation into the mutation of RFGs, significantly increasing the diversity of descendant faces. Qualitative, quantitative, and subjective experiments on the FIW, TSKinFace, and FF databases clearly show that the quality and diversity of kinship faces generated by our approach are much better than those of existing state-of-the-art methods.
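The crossover and mutation described above can be sketched at a high level. This is a rough illustration under assumed shapes and names (`N_REGIONS`, `GENE_DIM`, the pool construction, and the per-region inheritance rule are all assumptions, not the paper's actual design): each facial region's gene is inherited whole from one parent, and mutation occasionally swaps a region's gene for one drawn from a Gene Pool to add diversity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_REGIONS, GENE_DIM = 6, 16   # assumed: e.g. eyes, brows, nose, mouth, ...
POOL_SIZE = 100

# Hypothetical Gene Pool: RFGs extracted from unrelated face images.
gene_pool = rng.normal(size=(POOL_SIZE, N_REGIONS, GENE_DIM))

def crossover(father, mother):
    """Per-region crossover: each region's gene is inherited whole
    from one parent, chosen uniformly at random."""
    mask = rng.integers(0, 2, size=(N_REGIONS, 1))  # 0 -> father, 1 -> mother
    return np.where(mask == 0, father, mother)

def mutate(child, p_mut=0.2):
    """With probability p_mut per region, replace the gene with one
    drawn from the Gene Pool to increase descendant diversity."""
    out = child.copy()
    for r in range(N_REGIONS):
        if rng.random() < p_mut:
            out[r] = gene_pool[rng.integers(POOL_SIZE), r]
    return out

father = rng.normal(size=(N_REGIONS, GENE_DIM))
mother = rng.normal(size=(N_REGIONS, GENE_DIM))
child = mutate(crossover(father, mother))
```

In the full framework, the resulting child RFGs would be mapped by the LGE into the StyleGAN2 latent space to synthesize the descendant face; the sketch covers only the gene-level recombination step.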
Traffic Engineering (TE) is an efficient technique to balance network flows and thus improve the performance of a hybrid Software Defined Network (SDN). Previous TE solutions mainly leverage heuristic algorithms to c...
Existing methods for modeling recommendation systems based on knowledge graphs include embedding-based, path-based, and propagation-based methods. The embedding-based approach is flexible but better suited to intra-graph applications, the path-based approach can model complex relationships but has a high computational cost, and the propagation-based approach considers global information but may introduce noise. This study proposes a simple and efficient model, called SEKGAT, which combines the ideas of the path-based and propagation-based approaches for personalized recommendation: it aggregates user preferences through a graph attention mechanism and fuses multiple feature representations on the knowledge graph into item features through pooling aggregators. Experimental results for the CTR prediction and Top-K recommendation tasks on three real-world datasets show that our approach is competitive.
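The user-conditioned neighbor aggregation described above can be illustrated with a minimal single-head attention step. This is an assumed simplification (the scoring function, the mean-style pooling, and all names here are illustrative, not SEKGAT's actual formulation): neighbors in the knowledge graph that align better with the user's preference vector receive higher attention weights, and the weighted summary is pooled into the item feature.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # assumed embedding dimension

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_aggregate(item_vec, neighbor_vecs, user_vec):
    """User-conditioned attention over an item's knowledge-graph neighbors:
    score each neighbor by its dot product with the user preference vector,
    normalize with softmax, then pool the weighted summary into the item."""
    scores = neighbor_vecs @ user_vec      # relevance of each neighbor to the user
    weights = softmax(scores)              # attention distribution over neighbors
    agg = weights @ neighbor_vecs          # weighted neighbor summary
    return (item_vec + agg) / 2            # simple mean-style pooling aggregator

item = rng.normal(size=D)
neigh = rng.normal(size=(5, D))            # 5 knowledge-graph neighbors
user = rng.normal(size=D)
fused = attention_aggregate(item, neigh, user)
```

A real model would learn the scoring and pooling parameters end-to-end and stack several such propagation hops; this sketch shows only the shape of one aggregation step.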
Medical report generation is crucial for clinical diagnosis and patient management, summarizing diagnoses and recommendations based on medical imaging. However, existing work often overlooks the clinical pipeline invol...
Attributed graph clustering, aiming to discover the underlying graph structure and partition the graph nodes into several disjoint categories, is a basic task in graph data analysis. Although recent efforts over graph...
Stable consumer electronic systems can assist traffic better. Good traffic consumer electronic systems require collaborative work between traffic algorithms and hardware. However, performance of popular traffic algori...
Adversarial training, represented by projected gradient descent (PGD) adversarial training, effectively improves adversarial robustness within the upper bound of perturbation to the deep neural network (DNN) by appl...
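The PGD attack underlying this kind of adversarial training can be sketched on a toy differentiable model (a hand-written logistic classifier here, so the input gradient is analytic; the model, step sizes, and names are illustrative assumptions): start from a random point in the perturbation ball, repeatedly take signed gradient-ascent steps on the loss, and project back into the L-infinity ball of radius eps around the clean input.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=10)   # toy linear model: logit = w @ x
x0 = rng.normal(size=10)  # clean input
y = 1.0                   # true label in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(x):
    """Gradient of the cross-entropy loss w.r.t. the input, for the toy model."""
    return (sigmoid(w @ x) - y) * w

def pgd_attack(x, eps=0.3, alpha=0.05, steps=10):
    """Projected gradient descent attack: ascend the loss with signed
    gradient steps, projecting back into the eps-ball after each step."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss_wrt_x(x_adv))  # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)                 # projection
    return x_adv

x_adv = pgd_attack(x0)
# PGD adversarial training would then minimize the loss on x_adv instead of x0.
```

For a DNN the same loop runs with autodiff gradients; the projection step is what keeps the perturbation within the stated upper bound.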
Authors:
Liu, Lin; Zhou, Jian-Tao; Xing, Hai-Feng; Guo, Xiao-Yong
College of Computer Science (Ecological Big Data Engineering Research Center of the Ministry of Education; Cloud Computing and Service Software Engineering Laboratory of Inner Mongolia Autonomous Region; National and Local Joint Engineering Research Center of Intelligent Information Processing Technology for Mongolian; Social Computing and Data Processing Key Laboratory of Inner Mongolia Autonomous Region; Big Data Analysis Technology Engineering Research Center of Inner Mongolia Autonomous Region), Inner Mongolia University, Hohhot, Inner Mongolia, China
College of Computer Science and Technology, Inner Mongolia Normal University, Hohhot, Inner Mongolia, China
College of Computer Information and Management, Inner Mongolia University of Finance and Economics, Hohhot, Inner Mongolia, China
Nowadays, best-effort service cannot guarantee the quality of service (QoS) for all kinds of services. QoS routing is an important method to guarantee QoS requirements. It involves path selection for flows based ...
Fine-tuning pre-trained language models, such as BERT, has shown enormous success across various NLP tasks. Though simple and effective, the fine-tuning process has been found to be unstable, often leading to unexpectedly poor performance. To increase stability and generalizability, most existing works resort to preserving the parameters or representations of pre-trained models during fine-tuning. Nevertheless, very little work explores mining the reliable part of pre-learned information that can help to stabilize fine-tuning. To address this challenge, we introduce a novel solution in which we fine-tune BERT with stabilized cross-layer mutual information. Our method aims to preserve the reliable behaviors of cross-layer information propagation, rather than the information itself, of the pre-trained model, and therefore circumvents domain conflicts between the pre-training and target tasks. We conduct extensive experiments with popular pre-trained BERT variants on NLP datasets, demonstrating the universal effectiveness and robustness of our method.
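The distinction drawn above, regularizing the *behavior* of cross-layer information flow rather than the representations themselves, can be illustrated with a crude stand-in. The sketch below is emphatically not the paper's mutual-information objective: it uses cosine similarity between consecutive layers' pooled representations as an assumed proxy for the propagation pattern, and penalizes drift in that pattern relative to the pre-trained model. All names are hypothetical.

```python
import numpy as np

def layer_similarity_profile(layer_reps):
    """Cosine similarity between consecutive layers' pooled representations:
    a crude proxy for how information propagates across layers."""
    sims = []
    for a, b in zip(layer_reps[:-1], layer_reps[1:]):
        sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)))
    return np.array(sims)

def propagation_drift_penalty(pretrained_reps, finetuned_reps):
    """Penalize drift in the cross-layer propagation pattern, not in the
    representations themselves: the fine-tuned layers may move freely as
    long as their layer-to-layer relationships stay stable."""
    p = layer_similarity_profile(pretrained_reps)
    q = layer_similarity_profile(finetuned_reps)
    return float(np.mean((p - q) ** 2))

# Toy stack of 4 "layer" representations (parallel vectors, scaled).
reps = [np.ones(4) * i for i in range(1, 5)]
```

Note that scaling every layer leaves the penalty at zero, which is the point: the regularizer is invariant to changes that preserve the propagation pattern, unlike a plain distance on the representations.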