Abstract
Recent studies based on generative adversarial networks (GANs) have shown remarkable success in unpaired image-to-image translation, whose goal is to translate images from a source domain to a target domain. However, these prior studies mainly exploit only the target-domain distribution in their adversarial objectives. We observe that the source domain can also be involved in training and can help the model distribution match the target-domain distribution. In this paper, we present a novel adversarial network for unpaired image-to-image translation that adopts one generator and two discriminators. Under our newly introduced adversarial loss, one discriminator drives the model distribution toward the target-domain distribution, while the other pushes the model distribution away from the source-domain distribution, thereby improving learning efficiency. Experiments show that our proposed GAN loss can replace the vanilla GAN loss used in many state-of-the-art image-to-image translation methods. Moreover, compared with the vanilla GAN loss, our framework yields better translation results.
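As a rough sketch of such a dual-discriminator objective (the notation here is illustrative, not the paper's own: $D_t$ and $D_s$ denote the target and source discriminators, $G$ the generator, $p_{\mathrm{src}}$ and $p_{\mathrm{tgt}}$ the two domain distributions, and $\lambda$ a hypothetical weighting coefficient), the losses might take the form:

\begin{align*}
\mathcal{L}_{D_t} &= -\,\mathbb{E}_{y\sim p_{\mathrm{tgt}}}\!\left[\log D_t(y)\right] - \mathbb{E}_{x\sim p_{\mathrm{src}}}\!\left[\log\!\left(1 - D_t(G(x))\right)\right],\\
\mathcal{L}_{D_s} &= -\,\mathbb{E}_{x\sim p_{\mathrm{src}}}\!\left[\log D_s(x)\right] - \mathbb{E}_{x\sim p_{\mathrm{src}}}\!\left[\log\!\left(1 - D_s(G(x))\right)\right],\\
\mathcal{L}_{G} &= -\,\mathbb{E}_{x\sim p_{\mathrm{src}}}\!\left[\log D_t(G(x))\right] - \lambda\,\mathbb{E}_{x\sim p_{\mathrm{src}}}\!\left[\log\!\left(1 - D_s(G(x))\right)\right].
\end{align*}

In this sketch, $D_t$ and the first term of $\mathcal{L}_G$ form a vanilla adversarial game against the target domain, while the second term of $\mathcal{L}_G$ rewards the generator when $D_s$ judges its outputs as unlike real source images, which is one concrete way the model distribution can be pushed away from the source-domain distribution.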