ISBN (digital): 9798350354478
ISBN (print): 9798350354485
Face Image Quality Assessment (FIQA) techniques have seen steady improvements over recent years, but their performance still deteriorates if the input face samples are not properly aligned. This alignment sensitivity comes from the fact that most FIQA techniques are trained or designed using a specific face alignment procedure. If the alignment technique changes, the performance of most existing FIQA techniques quickly becomes suboptimal. To address this problem, we present in this paper a novel knowledge distillation approach, termed AI-KD, that can extend any existing FIQA technique, improving its robustness to alignment variations and, in turn, its performance with different alignment procedures. To validate the proposed distillation approach, we conduct comprehensive experiments on 6 face datasets with 4 recent face recognition models and in comparison to 7 state-of-the-art FIQA techniques. Our results show that AI-KD consistently improves the performance of the initial FIQA techniques not only with misaligned samples, but also with properly aligned facial images. Furthermore, it leads to a new state-of-the-art when used with a competitive initial FIQA approach. The code for AI-KD is made publicly available from: https://***/LSIbabnikz/AI-KD.
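As a rough illustration of the alignment-aware distillation idea described above, the following Python sketch trains a student quality regressor on randomly perturbed alignments to reproduce the scores a frozen teacher FIQA model assigns to the properly aligned crops. The teacher and student modules, the rotation-based perturbation, and the MSE objective are illustrative assumptions, not the exact AI-KD formulation from the paper.

    # Sketch: distill alignment-robust quality scores from a frozen teacher FIQA model.
    import torch
    import torch.nn as nn
    import torchvision.transforms.functional as TF

    def random_misalign(batch):
        # Hypothetical alignment perturbation: a small random rotation per sample.
        return torch.stack([TF.rotate(img, float((torch.rand(1) - 0.5) * 20.0))
                            for img in batch])

    def distill_step(teacher, student, aligned_batch, optimizer):
        with torch.no_grad():
            target_q = teacher(aligned_batch)             # teacher scores on aligned faces
        pred_q = student(random_misalign(aligned_batch))  # student sees misaligned faces
        loss = nn.functional.mse_loss(pred_q, target_q)   # match the teacher's quality scores
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In practice, the student would also see properly aligned samples, so that its scores stay consistent with the original FIQA technique on well-aligned inputs.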
Deep generative models have recently presented impressive results in generating realistic face images of random synthetic identities. To generate multiple samples of a certain synthetic identity, previous works proposed to disentangle the latent space of GANs by incorporating additional supervision or regularization, enabling the manipulation of certain attributes. Others proposed to disentangle specific factors in the latent spaces of unconditional pretrained GANs to control their output, which also requires supervision by attribute classifiers. Moreover, these attributes are entangled in the GAN's latent space, making it difficult to manipulate them without affecting the identity information. We propose in this work a framework, ExFaceGAN, to disentangle identity information in the latent spaces of pretrained GANs, enabling the generation of multiple samples of any synthetic identity. Given a reference latent code of any synthetic image and the latent space of a pretrained GAN, our ExFaceGAN learns an identity directional boundary that disentangles the latent space into two sub-spaces, containing latent codes of samples that are either identity-similar or identity-dissimilar to the reference image. By sampling from each side of the boundary, our ExFaceGAN can generate multiple samples of a synthetic identity without the need for a dedicated architecture or supervision from attribute classifiers. We demonstrate the generalizability and effectiveness of ExFaceGAN by integrating it into the learned latent spaces of three state-of-the-art GAN approaches. As an example of the practical benefit of our ExFaceGAN, we empirically show that data generated by ExFaceGAN can be successfully used to train face recognition models (https://***/fdbtrs/ExFaceGAN).
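The identity directional boundary described above can be pictured as a linear separator in the generator's latent space; the sketch below fits such a boundary with a linear SVM and then samples new codes on the identity-similar side of it. The labelling of codes as similar or dissimilar to the reference (e.g. via a face recognition model) and the offset-based sampling are simplifying assumptions, not the exact ExFaceGAN procedure.

    # Sketch: fit a linear identity boundary in a GAN latent space and sample from one side.
    import numpy as np
    from sklearn.svm import LinearSVC

    def fit_identity_boundary(latents, identity_labels):
        # latents: (N, d) latent codes; identity_labels: 1 = identity-similar to the
        # reference image, 0 = identity-dissimilar (labels assumed to come from an FR model).
        clf = LinearSVC(C=1.0).fit(latents, identity_labels)
        normal = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # unit normal of the boundary
        return normal  # points towards the identity-similar half-space (label 1)

    def sample_same_identity(reference_latent, normal, n=8, max_shift=3.0):
        # Shift the reference code by random positive offsets along the boundary normal
        # so that the sampled codes stay on the identity-similar side.
        shifts = np.random.uniform(0.0, max_shift, size=(n, 1))
        return reference_latent[None, :] + shifts * normal[None, :]

Each sampled code would then be fed through the pretrained generator to obtain additional images of the same synthetic identity.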
Synthetic data is emerging as a substitute for authentic data to solve ethical and legal challenges in handling authentic face data. The current models can create real-looking face images of people who do not exist. H...
Face recognition (FR) systems continue to spread in our daily lives with an increasing demand for higher explainability and interpretability of FR systems that are mainly based on deep learning. While bias across demo...
Estimating the importance of the considered features is a key issue in any knowledge exploration process, and it can be carried out with a variety of approaches. In the research reported in this study, the primary aim was the development of a methodology for creating attribute rankings. Based on the properties of the greedy algorithm for inducing decision rules, a new application of this algorithm has been proposed. Instead of constructing a single ordering of features, attributes were weighted multiple times. The input datasets were discretised with several algorithms representing supervised and unsupervised discretisation approaches. Each resulting discrete data variant was exploited to construct a ranking of attributes. The effectiveness of the obtained rankings was confirmed through a rule filtering process governed by the weighted attributes. The methodology was applied to the stylometric task of authorship attribution. The experimental outcomes demonstrate the value of the proposed research method, as it generally led to improved predictions while relying on noticeably smaller sets of attributes and decision rules.
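A simple way to picture the multi-discretisation weighting described above is sketched below: attributes are credited each time they appear in a rule induced from a discretised variant of the data, and the accumulated weights yield the final ranking. The data structures and the inverse-rule-length weighting are illustrative assumptions rather than the paper's exact greedy induction scheme.

    # Sketch: rank attributes by their occurrence in rules induced from several
    # discretised variants of the same dataset.
    from collections import defaultdict

    def rank_attributes(rule_sets):
        # rule_sets: one list of rules per discretisation variant; each rule is a
        # list of (attribute, value) conditions.
        weights = defaultdict(float)
        for rules in rule_sets:
            for rule in rules:
                for attribute, _ in rule:
                    weights[attribute] += 1.0 / len(rule)  # shorter rules contribute more
        return sorted(weights.items(), key=lambda kv: kv[1], reverse=True)

    # Toy example with two discretisation variants over stylometric attributes.
    ranking = rank_attributes([
        [[("word_length", "high"), ("comma_freq", "low")], [("comma_freq", "mid")]],
        [[("word_length", "low")]],
    ])

The resulting weights can then drive rule filtering, keeping only rules built from highly ranked attributes.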
In recent years, advances in deep learning techniques and large-scale identity-labeled datasets have enabled facial recognition algorithms to rapidly gain performance. However, due to privacy issues, ethical concerns, and regulations governing the processing, transmission, and storage of biometric samples, several publicly available face image datasets are being withdrawn by their creators. The reason is that these datasets are mostly crawled from the web, with the possibility that not all users properly consented to the processing of their biometric data. To mitigate this problem, synthetic face images produced by generative approaches have been proposed as a substitute for authentic face images in training and testing face recognition. In this work, we investigate both the relation between synthetic face image data and the authentic data used to train the generator, and the relation between authentic and synthetic data in general, under two aspects, i.e., general image quality and face image quality. The first refers to perceived image quality and the second measures the utility of a face image for automatic face recognition algorithms. To further quantify these relations, we build the analyses on two terms: the dissimilarity in quality values, expressing the general difference between quality distributions, and the dissimilarity in quality diversity, expressing the difference in the diversity of the quality values.
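To make the two analysis terms concrete, the sketch below computes one plausible instantiation: the dissimilarity in quality values as a distance between the two quality-score distributions, and the dissimilarity in quality diversity as the gap between their spreads. The specific choices of the Wasserstein distance and the standard deviation are assumptions for illustration; the paper defines its own formulations.

    # Sketch: compare quality-score distributions of synthetic and authentic data.
    import numpy as np
    from scipy.stats import wasserstein_distance

    def quality_dissimilarities(synthetic_scores, authentic_scores):
        # scores: 1-D arrays of per-image quality values (general or face image quality).
        value_dissimilarity = wasserstein_distance(synthetic_scores, authentic_scores)
        diversity_dissimilarity = abs(np.std(synthetic_scores) - np.std(authentic_scores))
        return value_dissimilarity, diversity_dissimilarity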
Investigating new methods of creating face morphing attacks is essential to foresee novel attacks and help mitigate them. Creating morphing attacks is commonly performed either on the image level or on the representation level. Representation-level morphing has so far been performed based on generative adversarial networks (GANs), where the encoded images are interpolated in the latent space to produce a morphed image based on the interpolated vector. Such a process was constrained by the limited reconstruction fidelity of GAN architectures. Recent advances in diffusion autoencoder models have overcome these GAN limitations, leading to high reconstruction fidelity. This theoretically makes them a perfect candidate for performing representation-level face morphing. This work investigates using diffusion autoencoders to create face morphing attacks by comparing them to a wide range of image-level and representation-level morphs. Our vulnerability analyses on four state-of-the-art face recognition models have shown that such models are highly vulnerable to the created attacks, the MorDIFF, especially when compared to existing representation-level morphs. Detailed detectability analyses are also performed on the MorDIFF, showing that these attacks are as challenging to detect as other morphing attacks created on the image or representation level. The data and morphing scripts are made publicly available.
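Representation-level morphing with a diffusion autoencoder boils down to interpolating the latent representations of the two contributing faces and decoding the blend, as sketched below. The encode/decode interface is a placeholder and the handling of the stochastic diffusion code is omitted; the actual MorDIFF pipeline follows the diffusion autoencoder it builds on.

    # Sketch: representation-level face morphing via latent interpolation.
    import torch

    def morph(diff_autoencoder, face_a, face_b, alpha=0.5):
        with torch.no_grad():
            z_a = diff_autoencoder.encode(face_a)        # latent of contributing subject A
            z_b = diff_autoencoder.encode(face_b)        # latent of contributing subject B
            z_morph = alpha * z_a + (1.0 - alpha) * z_b  # linear interpolation in latent space
            return diff_autoencoder.decode(z_morph)      # reconstruct the morphed face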
State-of-the-art face recognition (FR) systems are based on overparameterized deep neural networks (DNNs) which commonly use face images with 256³ colors. The use of DNNs and the storage of face images as references for comparison are limited in resource-restricted domains, which are constrained in storage and computational capacity. A possible solution is to store the image only as a feature vector, which renders human evaluation of the image impossible and forces the use of a single DNN (vendor) across systems. In this paper, we present a novel study on the possibility and effect of image color quantization on FR performance and storage efficiency. We leverage our conclusions to propose harmonizing the color quantization with the low-bit quantization of FR models. This combination significantly reduces the bits required to represent both the image and the FR model. In an extensive experiment on diverse sets of DNN architectures and color quantization steps, we validate on multiple benchmarks that the proposed methodology can successfully reduce the number of bits required for image pixels and DNN data while maintaining nearly equal recognition rates. The code and pre-trained models are available at https://***/jankolf/ColorQuantization.
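A uniform color quantization of the kind studied here can be sketched in a few lines: reducing each 8-bit channel to 4 bits, for example, shrinks the per-pixel storage from 24 to 12 bits and the number of representable colors from 256³ to 16³. The uniform binning below is one simple variant, used only to illustrate the idea, not necessarily the quantization scheme of the paper.

    # Sketch: uniform per-channel color quantization of an 8-bit RGB face image.
    import numpy as np

    def quantize_colors(image_uint8, bits_per_channel=4):
        levels = 2 ** bits_per_channel
        step = 256 // levels
        # Map every channel value to the centre of its quantization bin.
        return (image_uint8 // step) * step + step // 2

    image = np.random.randint(0, 256, size=(112, 112, 3), dtype=np.uint8)
    quantized = quantize_colors(image, bits_per_channel=4)  # 16^3 = 4096 possible colors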
Recently, significant progress has been made in face presentation attack detection (PAD), which aims to secure face recognition systems against presentation attacks, owing to the availability of several face PAD datasets. However, all available datasets are based on privacy- and legally-sensitive authentic biometric data with a limited number of subjects. To target these legal and technical challenges, this work presents the first synthetic-based face PAD dataset, named SynthASpoof, as a large-scale PAD development dataset. The bona fide samples in SynthASpoof are synthetically generated, and the attack samples are collected by presenting such synthetic data to capture systems in a real attack scenario. The experimental results demonstrate the feasibility of using SynthASpoof for the development of face PAD. Moreover, we boost the performance of such a solution by incorporating the domain generalization tool MixStyle into the PAD solutions. Additionally, we show the viability of using synthetic data as a supplement to enrich the diversity of limited authentic training data and consistently enhance PAD performance. The SynthASpoof dataset, containing 25,000 bona fide and 78,800 attack samples, the implementation, and the pre-trained weights are made publicly available.
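MixStyle, the domain generalization tool mentioned above, regularizes the PAD backbone by mixing the channel-wise feature statistics of different samples in a batch; a simplified version is sketched below. The layer placement and the Beta hyperparameter are assumptions here, and the implementation is simplified relative to the original MixStyle formulation.

    # Sketch: mix channel-wise feature statistics across samples in a batch (MixStyle-like).
    import torch

    def mixstyle(features, alpha=0.1, eps=1e-6):
        # features: (B, C, H, W) activations from an intermediate backbone layer.
        B = features.size(0)
        mu = features.mean(dim=(2, 3), keepdim=True)
        sig = (features.var(dim=(2, 3), keepdim=True) + eps).sqrt()
        mu, sig = mu.detach(), sig.detach()                 # treat the statistics as constants
        normed = (features - mu) / sig
        lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1)).to(features.device)
        perm = torch.randperm(B)
        mu_mix = lam * mu + (1 - lam) * mu[perm]            # blend style statistics of two samples
        sig_mix = lam * sig + (1 - lam) * sig[perm]
        return normed * sig_mix + mu_mix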