GANs are well covered elsewhere thanks to their popularity, and the basic idea is pretty simple, so I only treat them briefly.
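Since the basic idea fits in a few lines, here is a minimal sketch of the two-player setup: a generator G maps noise to samples, a discriminator D classifies real vs. fake, and the two are trained adversarially. This is a toy sketch assuming PyTorch, made-up MLPs, and a stand-in 2-D data distribution, not any particular published setup.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps latent noise z to a (fake) data sample.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
# Discriminator: outputs a logit for "real vs. fake".
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=128):
    # Stand-in "real" data: a Gaussian blob centered at (2, 2).
    return torch.randn(n, data_dim) + 2.0

for step in range(1000):
    x_real = real_batch()
    z = torch.randn(x_real.size(0), latent_dim)
    x_fake = G(z)

    # Discriminator step: push real logits toward 1, fake logits toward 0.
    # detach() stops gradients from flowing back into G on this step.
    opt_d.zero_grad()
    d_loss = (bce(D(x_real), torch.ones(x_real.size(0), 1))
              + bce(D(x_fake.detach()), torch.zeros(x_real.size(0), 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool D (the non-saturating loss,
    # i.e., maximize log D(G(z)) rather than minimize log(1 - D(G(z)))).
    opt_g.zero_grad()
    g_loss = bce(D(x_fake), torch.ones(x_real.size(0), 1))
    g_loss.backward()
    opt_g.step()
```

In practice this vanilla loop is fragile, which is exactly the training-instability point below.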
GANs were dominant for several years: they are relatively inexpensive to train and sample from, and they can produce high-quality samples (e.g., with BigGAN). But GANs are known to suffer from mode collapse and training instabilities, and they are kind of dead as a SOTA architecture for quality results; everything these days is either an autoregressive transformer or a diffusion model. Still, GANs are probably worth studying a bit: many interesting ideas in generative modeling originated from GANs or relate to them in some way. Plus, perceptual losses may still be a thing.