Author Affiliations: Vanderbilt Univ, 221 Kirkland Hall, Nashville, TN 37235, USA; Lawrence Livermore Natl Lab, Livermore, CA 94550, USA
Publication: COMPUTER GRAPHICS FORUM
Year/Volume/Issue: 2022, Vol. 41, No. 3
Pages: 85-95
Subject Classification: 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (may confer Engineering or Science degrees)]
Funding: National Science Foundation [IIS-2007444]; U.S. Department of Energy by Lawrence Livermore National Laboratory [DE-AC52-07NA27344, LLNL-CONF-832727]
Topics: CCS Concepts: • Computing methodologies → Artificial intelligence; Model verification and validation
Abstract: Generative adversarial networks (GANs) have witnessed tremendous growth in recent years, demonstrating wide applicability in many domains. However, GANs remain notoriously difficult for people to interpret, particularly modern GANs capable of generating photo-realistic imagery. In this work we contribute a visual analytics approach to GAN interpretability, focusing on the analysis and visualization of GAN disentanglement. Disentanglement concerns the ability to control the content produced by a GAN along a small number of distinct, yet semantic, factors of variation. The goal of our approach is to provide insight into GAN disentanglement above and beyond coarse summaries, instead permitting a deeper analysis of the data distribution modeled by a GAN. Our visualization allows one to assess a single factor of variation in terms of groupings and trends in the data distribution, where our analysis seeks to relate the learned representation space of a GAN with attribute-based semantic scoring of the images it produces. Through use cases, we show that our visualization is effective for assessing disentanglement, allowing one to quickly recognize a factor of variation and its overall quality. In addition, we show how our approach can highlight potential dataset biases learned by GANs.
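The core idea the abstract describes, relating movement along a single latent factor of variation to attribute-based semantic scores of the generated images, can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the generator, the attribute scorer, and the direction vector are hypothetical stand-ins, and only the general traverse-and-score pattern used in this kind of disentanglement analysis is shown.

# Illustrative sketch (not the paper's method): sweep one latent "factor of
# variation" direction in a GAN's latent space and record an attribute score
# for each generated image, producing the direction-vs-attribute data that a
# disentanglement visualization would summarize as groupings and trends.
import torch
import torch.nn as nn

LATENT_DIM = 64
IMAGE_DIM = 3 * 32 * 32  # flattened toy "image"

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained GAN generator G: z -> image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMAGE_DIM), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class ToyAttributeScorer(nn.Module):
    """Stand-in for a semantic attribute classifier (e.g. a 'smiling' score)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMAGE_DIM, 1), nn.Sigmoid())
    def forward(self, img):
        return self.net(img).squeeze(-1)

@torch.no_grad()
def traverse_factor(generator, scorer, direction, n_samples=8, steps=11, scale=3.0):
    """For several base latents z, move along `direction` and score each image.

    Returns traversal magnitudes and an (n_samples, steps) tensor of attribute
    scores; a well-disentangled direction should show a consistent trend.
    """
    direction = direction / direction.norm()       # unit-length edit direction
    z = torch.randn(n_samples, LATENT_DIM)         # base latent codes
    alphas = torch.linspace(-scale, scale, steps)  # traversal magnitudes
    scores = torch.empty(n_samples, steps)
    for j, a in enumerate(alphas):
        imgs = generator(z + a * direction)        # images edited along the factor
        scores[:, j] = scorer(imgs)                # semantic score per image
    return alphas, scores

if __name__ == "__main__":
    torch.manual_seed(0)
    G, S = ToyGenerator(), ToyAttributeScorer()
    d = torch.randn(LATENT_DIM)                    # hypothetical factor direction
    alphas, scores = traverse_factor(G, S, d)
    # Per-step mean score: the raw material for trend/grouping plots.
    print([f"{m:.3f}" for m in scores.mean(dim=0).tolist()])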