Details
ISBN (print): 9798331530044; 9798331530037
Conditional Generative Adversarial Networks (cGANs) are increasingly popular web-based synthesis services accessed through a query API, e.g., a cGAN generates a cat image in response to a "cat" query. However, cGAN-based synthesizers can be stolen through queries issued by adversaries, i.e., model thieves. The prevailing adversarial assumption is that thieves act independently: they query the deployed cGAN (i.e., the victim) and train a stolen cGAN on the images obtained from the victim. A popular anti-theft defense is to throttle the number of queries from any given user. We consider a more realistic adversarial scenario: model thieves collude to query the victim and then train the stolen cGAN together. CLUES is a new collusive model-stealing framework that enables thieves to bypass throttle-based defenses and steal cGANs more efficiently than they could individually. Thieves pool the queried images and train a stolen cGAN in a federated manner. We evaluate CLUES on three image datasets: MNIST, FashionMNIST, and CelebA. We experimentally show how the proposed attack strategies scale with the number of thieves and of queried images, and we assess the impact of a classical noise-based defense, a passive watermarking defense, and a JPEG-based countermeasure. Our evaluation shows that such a collusive stealing strategy comes within roughly 4 units of Fréchet Inception Distance (FID) of the victim model. Our code is readily available to the research community: https://***/records/10224340.
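To make the attack pattern concrete, below is a minimal sketch, not the authors' CLUES implementation, of the collusive scheme the abstract describes: each thief queries the victim under a per-user throttle, fits a local conditional generator to its shard of queried images, and the thieves merge generator weights FedAvg-style. The victim stub, toy architecture, reconstruction loss, and all hyper-parameters (`NUM_THIEVES`, `QUERY_BUDGET`, etc.) are illustrative assumptions; the paper would use a proper adversarial objective.

```python
# Hypothetical sketch of collusive cGAN stealing (not the CLUES code).
import copy
import torch
import torch.nn as nn

NUM_THIEVES, QUERY_BUDGET, NUM_CLASSES, Z_DIM = 4, 256, 10, 64  # assumed values

class CondG(nn.Module):
    """Toy conditional generator: (noise, label) -> flattened 28x28 image."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh())

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def query_victim(labels):
    # Stand-in for the deployed cGAN's query API; returns dummy "images".
    return torch.randn(len(labels), 28 * 28)

def local_update(model, images, labels, steps=5):
    # Each thief fits its generator to its queried images. A simple
    # reconstruction loss is used here for brevity; a real attack would
    # train adversarially against a discriminator.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        z = torch.randn(len(labels), Z_DIM)
        loss = nn.functional.mse_loss(model(z, labels), images)
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fed_avg(states):
    # Average the colluding thieves' generator weights (FedAvg).
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    return avg

global_g = CondG()
for rnd in range(3):                       # collusion rounds
    states = []
    for _ in range(NUM_THIEVES):           # each thief stays under the throttle
        labels = torch.randint(0, NUM_CLASSES, (QUERY_BUDGET,))
        images = query_victim(labels)      # queries charged to this thief only
        local = CondG(); local.load_state_dict(global_g.state_dict())
        states.append(local_update(local, images, labels))
    global_g.load_state_dict(fed_avg(states))  # merge into the stolen cGAN
```

The point of the federated structure is that no single thief ever exceeds the per-user query budget, yet the averaged generator is trained on the union of all thieves' queried images.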