Since their inception, Generative Adversarial Networks (GANs) have revolutionized the field of generative models thanks to their flexibility and their ability to generate fully synthetic, high-resolution samples of very complex phenomena. However, it is well known that the training process of these models typically converges poorly. Obtaining accurate results with GANs usually requires the implementation of several heuristics, very fine hyper-parameter tuning, and heavy training.
In this talk, we will review how GANs are formulated as a competitive game and how their optima correspond to Nash equilibria. We shall comment on some of the known results about the convergence of GANs and their relation to the minimization of the Jensen-Shannon divergence and to optimal transport problems. Additionally, we will discuss how, when the parameter space is compact, very interesting features arise through the interplay between the GAN flow and the topology of the parameter space. In the case of GANs on tori, we shall show that their dynamics can be understood by decomposing the objective function of the adversarial min-max game into its truncated Fourier series. This viewpoint sheds light on the instability of the training process, showing that the spiral attractors of the GAN flow arise as small perturbations of periodic orbits.
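For reference, the competitive game alluded to above is usually the standard min-max formulation of Goodfellow et al.; at the optimal discriminator the objective reduces, up to a constant, to the Jensen-Shannon divergence, which is the connection mentioned in the abstract. A sketch of that formulation:

```latex
% Standard GAN min-max objective (Goodfellow et al., 2014):
% G is the generator, D the discriminator, p_{data} the data
% distribution, and p_z the latent prior.
\min_{G} \max_{D} \; V(D, G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right].

% For a fixed G, the optimal discriminator D^* satisfies
% D^*(x) = p_{data}(x) / (p_{data}(x) + p_G(x)), where p_G is the
% distribution of G(z). Substituting it back yields the
% Jensen--Shannon divergence, up to constants:
V(D^{*}, G) = -\log 4
  + 2\,\mathrm{JSD}\!\left(p_{\mathrm{data}} \,\|\, p_G\right).
```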