Kernel-Guided Training of Implicit Generative Models with Stability Guarantees
Published on arXiv, 2019
Recommended citation: Mehrjou, Arash. (2019). "Kernel-Guided Training of Implicit Generative Models with Stability Guarantees." arXiv. https://arxiv.org/abs/1910.14428
Modern implicit generative models such as generative adversarial networks (GANs) are known to suffer from instability, lack of interpretability, and difficulty of performance assessment. Viewed as dynamical systems, some of these issues stem from our inability to control their behavior in a meaningful way during training. In this work, we propose a theoretically grounded method to guide the training trajectories of GANs by augmenting the GAN loss function with a kernel-based regularization term that controls local and global discrepancies between the model and true distributions. This control signal allows us to inject prior knowledge into the model. We provide theoretical guarantees on the stability of the resulting dynamical system and demonstrate different aspects of it via a wide range of experiments.
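The abstract does not specify the exact regularizer, but a common kernel-based discrepancy between model and data samples is the maximum mean discrepancy (MMD) with an RBF kernel. The sketch below, with illustrative names (`rbf_kernel`, `mmd2`, `lam`), shows one way such a term could be added to a generator loss; it is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel values between rows of x and rows of y.
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared MMD between samples x (model) and y (data):
    # mean k(x,x) + mean k(y,y) - 2 * mean k(x,y). Zero iff the samples match
    # (in the biased-estimate sense), larger as the distributions diverge.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

# Hypothetical use inside a training step: lam weights the kernel penalty
# against the usual adversarial loss.
#   total_generator_loss = gan_loss + lam * mmd2(fake_samples, real_samples)
```

Because the MMD term compares whole batches of samples through a kernel, it supplies a global, distribution-level training signal alongside the discriminator's pointwise feedback.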