Generative Adversarial Networks (GANs)

Learn the fundamentals of Generative Adversarial Networks, how they work using a generator and discriminator, and implement a simple GAN to generate synthetic data using PyTorch.

🚀 advanced
⏱️ 60 minutes
👤 SuperML Team


📋 Prerequisites

  • Understanding of neural networks and backpropagation
  • Basic PyTorch familiarity
  • Conceptual understanding of optimization

🎯 What You'll Learn

  • Understand the concept of GANs and how they work
  • Learn the structure of generator and discriminator networks
  • Implement a simple GAN using PyTorch
  • Visualize and interpret generated data

Introduction

Generative Adversarial Networks (GANs) are a class of generative models where two neural networks, a generator and a discriminator, compete in a game-theoretic setting to produce realistic synthetic data.

GANs are widely used for:

✅ Image generation and enhancement.
✅ Data augmentation.
✅ Super-resolution tasks.
✅ Art and creative generation.


How GANs Work

1️⃣ Generator (G): Learns to generate realistic-looking data from random noise.
2️⃣ Discriminator (D): Learns to distinguish between real data and fake data generated by G.

The two networks train simultaneously:

  • G tries to fool D into classifying generated data as real.
  • D tries to correctly classify real vs fake data.

Through this adversarial process, G's outputs become progressively more realistic over time.
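
Formally, this game is captured by the minimax objective from the original GAN paper (Goodfellow et al., 2014), where D maximizes the value function and G minimizes it:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

Intuitively, D is rewarded for assigning high probability to real samples and low probability to fakes, while G is rewarded for pushing D(G(z)) toward 1.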


Challenges with GANs

  • Training instability: The generator and discriminator losses can oscillate or diverge rather than settle into an equilibrium.
  • Mode collapse: The generator may produce only a few distinct outputs instead of covering the full data distribution.
  • Hyperparameter sensitivity: Small changes to learning rates or architecture can make the difference between convergence and failure (one common mitigation is sketched below).
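
As a small illustration of one common mitigation (a sketch, not part of the walkthrough below), one-sided label smoothing softens the discriminator's targets for real samples so that D is discouraged from becoming overconfident; the 0.9 value here is a typical but arbitrary choice:

import torch

batch_size = 100
# Hard targets, as used in the example later in this tutorial:
hard_real_labels = torch.ones(batch_size, 1)
# One-sided label smoothing: soften only the *real* targets to 0.9
smooth_real_labels = torch.full((batch_size, 1), 0.9)
# Fake targets are left at 0 (hence "one-sided")
fake_labels = torch.zeros(batch_size, 1)

Feeding smooth_real_labels to nn.BCELoss in place of the hard ones is a one-line change to the discriminator update shown later.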

Example: Implementing a Simple GAN with PyTorch

We will build a basic GAN that learns a simple 1D target distribution (samples of a sine wave), which keeps the mechanics easy to follow.


1️⃣ Install and Import Libraries

pip install torch matplotlib

import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt

2️⃣ Define Generator and Discriminator

class Generator(nn.Module):
    """Maps a 10-dimensional noise vector to a single scalar sample."""
    def __init__(self):
        super(Generator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(10, 16),   # noise vector -> hidden layer
            nn.ReLU(),
            nn.Linear(16, 1)     # hidden layer -> one 1D sample
        )

    def forward(self, x):
        return self.model(x)

class Discriminator(nn.Module):
    """Outputs the probability that a 1D sample came from the real data."""
    def __init__(self):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(1, 16),    # 1D sample -> hidden layer
            nn.ReLU(),
            nn.Linear(16, 1),    # hidden layer -> single logit
            nn.Sigmoid()         # logit -> probability in (0, 1)
        )

    def forward(self, x):
        return self.model(x)
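
As a quick sanity check (not part of the original walkthrough), you can instantiate both networks and confirm the tensor shapes line up before training; the batch size of 4 here is arbitrary:

G = Generator()
D = Discriminator()

z = torch.randn(4, 10)    # batch of 4 ten-dimensional noise vectors
samples = G(z)            # -> shape (4, 1): one scalar sample each
probs = D(samples)        # -> shape (4, 1): probabilities in (0, 1)
print(samples.shape, probs.shape)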

3️⃣ Training the GAN

G = Generator()
D = Discriminator()

criterion = nn.BCELoss()
optimizer_G = optim.Adam(G.parameters(), lr=0.001)
optimizer_D = optim.Adam(D.parameters(), lr=0.001)

# "Real" data: 100 samples of sin(x) for x in [0, pi], shaped (100, 1)
real_data = torch.sin(torch.linspace(0, torch.pi, 100)).unsqueeze(1)

for epoch in range(2000):
    # --- Train Discriminator: push D(real) toward 1 and D(fake) toward 0 ---
    D.zero_grad()
    real_labels = torch.ones(real_data.size(0), 1)
    output_real = D(real_data)
    loss_real = criterion(output_real, real_labels)

    noise = torch.randn(real_data.size(0), 10)
    fake_data = G(noise)
    fake_labels = torch.zeros(real_data.size(0), 1)
    # detach() blocks gradients from flowing into G during D's update
    output_fake = D(fake_data.detach())
    loss_fake = criterion(output_fake, fake_labels)

    loss_D = loss_real + loss_fake
    loss_D.backward()
    optimizer_D.step()

    # --- Train Generator: reward G when D classifies its fakes as real ---
    G.zero_grad()
    output = D(fake_data)
    loss_G = criterion(output, real_labels)  # real targets for fake data
    loss_G.backward()
    optimizer_G.step()

    if epoch % 500 == 0:
        print(f"Epoch {epoch} - Loss D: {loss_D.item():.4f}, Loss G: {loss_G.item():.4f}")

4️⃣ Visualize Generated Data

noise = torch.randn(100, 10)
# detach() removes the samples from the autograd graph before plotting
generated = G(noise).detach().numpy().flatten()

plt.hist(generated, bins=20, alpha=0.7, label='Generated')
plt.hist(real_data.numpy().flatten(), bins=20, alpha=0.7, label='Real')
plt.legend()
plt.title('GAN Generated Data vs Real Data')
plt.show()
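
If training went well, the generated histogram should roughly track the real one, which is concentrated toward 1.0 (the sine curve spends the most samples near its peak). With networks this small and only 2,000 epochs, expect an approximate match rather than a perfect one.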

Conclusion

✅ GANs enable the generation of synthetic data that closely resembles real data.
✅ They are a powerful tool in creative and scientific applications.
✅ Understanding the adversarial training process is key to building GANs effectively.


What’s Next?

✅ Experiment with DCGANs for image generation.
✅ Learn Wasserstein GANs for stable training.
✅ Explore CycleGANs for image-to-image translation.


Join the SuperML Community to share your GAN projects and collaborate with others in advanced deep learning.


Happy Generating! 🌀
