A Generative Adversarial Network (GAN) is a machine learning architecture consisting of two neural networks, a generator and a discriminator, trained against each other in a game-theoretic minimax framework. The generator learns to create realistic synthetic data while the discriminator learns to distinguish real data from generated data, and this competition drives increasingly sophisticated data generation across domains like images, text, and audio.
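As a rough illustration of the adversarial training loop, here is a minimal sketch in PyTorch (an assumed dependency) that fits a one-dimensional toy distribution; the network sizes, learning rates, and the N(2, 0.5²) target distribution are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise vectors to synthetic scalar samples.
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: estimates the probability that its input is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # toy "real" data: N(2, 0.5^2)
    fake = G(torch.randn(64, latent_dim))   # generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    # detach() keeps this step from updating the generator's weights.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make D label the fakes as real (target 1).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Alternating the two updates is what realizes the game: each network's improvement supplies a harder training signal for the other.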
GPT (Generative Pre-trained Transformer) is a series of large language models developed by OpenAI that use a decoder-only transformer architecture for autoregressive text generation. Starting with GPT-1 in 2018, the series evolved through GPT-2, GPT-3, and GPT-4, demonstrating how scaling model size and training data leads to emergent capabilities in language understanding, reasoning, and code generation.
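To make the autoregressive loop concrete, the sketch below generates text one token at a time by repeatedly scoring the vocabulary given the tokens produced so far. Here next_token_logits is a hypothetical stand-in for a trained transformer's forward pass, and the tiny vocabulary is invented for illustration.

```python
import math
import random

VOCAB = ["<eos>", "the", "cat", "sat", "on", "mat"]

def next_token_logits(context: list[str]) -> list[float]:
    # Hypothetical stand-in for a trained transformer's forward pass:
    # returns one score per vocabulary token given the context so far.
    rng = random.Random(len(context))      # deterministic toy scores
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def softmax(logits: list[float]) -> list[float]:
    # Convert raw scores into a probability distribution over tokens.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt: list[str], max_new_tokens: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = softmax(next_token_logits(tokens))
        token = VOCAB[probs.index(max(probs))]   # greedy decoding: argmax
        if token == "<eos>":                     # stop at end-of-sequence
            break
        tokens.append(token)                     # feed the choice back in
    return tokens

print(generate(["the"]))
```

The key property is the feedback loop: each chosen token becomes part of the context for the next prediction, which is what "autoregressive" means.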
Gradient descent is a fundamental optimization algorithm used to minimize loss functions in machine learning by iteratively adjusting parameters in the direction of the negative gradient, the direction of steepest descent. Each update subtracts the gradient scaled by a learning rate (step size) from the current parameters, and this simple rule and its stochastic variants form the backbone of neural network training and most machine learning optimization.
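A minimal sketch of the update rule, minimizing the one-dimensional quadratic f(w) = (w - 3)**2, whose gradient is f'(w) = 2 * (w - 3); the starting point, learning rate of 0.1, and iteration count are arbitrary illustrative choices.

```python
def grad(w: float) -> float:
    # Gradient of f(w) = (w - 3)**2, derived analytically.
    return 2.0 * (w - 3.0)

w = 0.0    # arbitrary starting point
lr = 0.1   # learning rate (step size)
for _ in range(50):
    w -= lr * grad(w)   # step against the gradient: w <- w - lr * f'(w)

print(w)   # converges toward the minimizer w = 3
```

In neural network training the same rule applies, except the gradient is computed by backpropagation over millions of parameters rather than by hand.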