1. What does the term "GAN" stand for in deep learning?
- Generalized Artificial Neural Networks
- Generative Adversarial Networks
- Graphical Activation Nodes
- Gradient Approximation Network
2. The goal of the Generator in a GAN is to:
- Optimize hyperparameters automatically
- Classify data into categories
- Reduce the computational complexity of models
- Generate data that is indistinguishable from real data
3. What is the primary role of the Discriminator in a GAN?
- To generate synthetic data
- To differentiate between real and generated data
- To increase the accuracy of classification
- To reduce loss in backpropagation
4. GANs are primarily used in which of the following applications?
- Data encryption
- Image synthesis and enhancement
- Statistical regression
- Signal compression
5. Which loss function is commonly used in the training of GANs?
- Mean Squared Error
- Cross-Entropy Loss
- Binary Cross-Entropy Loss
- Hinge Loss
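For question 5: the standard (non-saturating) GAN objective is usually implemented with binary cross-entropy, where the discriminator is trained toward target 1 for real samples and 0 for generated ones. A minimal PyTorch sketch, assuming a generator `G` and a sigmoid-output discriminator `D` are already defined (names and shapes here are illustrative, not from any specific codebase):

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()  # binary cross-entropy; assumes D ends in a Sigmoid

def discriminator_loss(D, G, real, z):
    """Real samples should score 1, generated samples 0."""
    fake = G(z).detach()  # stop gradients from flowing into G
    real_loss = bce(D(real), torch.ones(real.size(0), 1))
    fake_loss = bce(D(fake), torch.zeros(fake.size(0), 1))
    return real_loss + fake_loss

def generator_loss(D, G, z):
    """Non-saturating loss: G tries to make D output 1 on its fakes."""
    fake = G(z)
    return bce(D(fake), torch.ones(fake.size(0), 1))
```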
6. The training of GANs is often described as a:
- Zero-sum game
- Cooperative learning process
- Multi-label classification task
- Optimization problem with fixed weights
7. What is "mode collapse" in GANs?
- The generator produces limited diversity in outputs
- The discriminator fails to learn patterns
- The training process halts prematurely
- The generator stops generating any outputs
8. Conditional GANs (cGANs) allow:
- Training on unlabeled data
- Controlling the output based on input labels
- Generating only text data
- Higher accuracy in classification tasks
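For question 8: a conditional GAN feeds the class label to both networks, most often by embedding it and concatenating it with the noise vector (generator) or with the input features (discriminator). A rough sketch with made-up layer sizes:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, num_classes=10, out_dim=784):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, z, labels):
        # Condition the output by concatenating noise with the label embedding
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x)

# e.g. ask for a batch of samples from class 3
g = ConditionalGenerator()
z = torch.randn(16, 100)
labels = torch.full((16,), 3, dtype=torch.long)
fake = g(z, labels)  # shape: (16, 784)
```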
9. What is the key difference between a Vanilla GAN and a Wasserstein GAN (WGAN)?
- WGAN uses the Wasserstein distance metric for training
- Vanilla GANs require less computational power
- WGAN uses multiple discriminators
- Vanilla GANs do not use a loss function
10. Which activation function is commonly used in the output layer of a GAN's generator?
- ReLU
- Sigmoid
- Tanh
- Softmax
11. Which of the following is a real-world use case of GANs?
- Fraud detection in financial systems
- Generating realistic faces from random noise
- Optimizing supply chain logistics
- Enhancing the performance of linear regression models
12. GAN training is computationally expensive because:
- Both the generator and discriminator are updated simultaneously
- It requires a vast amount of labeled data
- It relies heavily on manual tuning of hyperparameters
- It involves solving a multi-objective optimization problem
13. What is the primary challenge in training GANs?
- Balancing the generator and discriminator performance
- Lack of sufficient training data
- High variance in model predictions
- Over-reliance on GPU hardware
14. What does "latent space" refer to in the context of GANs?
- The feature representation learned by the generator
- The training data distribution
- The hyperparameter space of the model
- The output space of the discriminator
15. StyleGAN is a specialized GAN architecture used for:
- Data compression
- Generating highly realistic images with customizable features
- Optimizing neural network weights
- Creating video content
16. CycleGAN is primarily used for:
- Enhancing audio signals
- Generating text data
- Translating images from one domain to another without paired examples
- Improving video resolution
17. What is the typical input to a GAN's generator?
- A sequence of tokens
- Labeled data
- Pre-trained embeddings
- Random noise vector
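For questions 10, 14, and 17: the generator maps a random noise vector sampled from the latent space to a data sample, and its output layer is commonly Tanh so that values land in [-1, 1], matching data scaled to that range. A minimal, illustrative sketch:

```python
import torch
import torch.nn as nn

latent_dim = 100  # size of the latent (noise) space; a common but arbitrary choice

generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, 28 * 28),
    nn.Tanh(),  # output in [-1, 1], matching images normalized to [-1, 1]
)

z = torch.randn(64, latent_dim)              # random noise: the generator's only input
fake_images = generator(z).view(64, 1, 28, 28)
```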
18. GANs are unsuitable for tasks requiring:
- Supervised classification
- Data augmentation
- Image restoration
- Video frame prediction
19. Pix2Pix GAN requires:
- Paired training data
- Unlabeled data
- Pre-trained weights
- Real-time feedback during training
20. What is the primary advantage of Wasserstein GANs (WGANs)?
- Lower memory usage
- Faster inference speed
- Improved stability during training
- Simplified architecture
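For questions 9 and 20: a WGAN replaces the binary cross-entropy objective with an estimate of the Wasserstein distance; the critic outputs an unbounded score, and its weights are constrained (weight clipping in the original WGAN, a gradient penalty in WGAN-GP) to keep it approximately 1-Lipschitz. A sketch of the original clipping variant, assuming a critic `C` and generator `G` exist:

```python
import torch

def critic_loss(C, G, real, z):
    # Critic maximizes E[C(real)] - E[C(fake)]; we minimize the negative
    fake = G(z).detach()
    return -(C(real).mean() - C(fake).mean())

def generator_loss_wgan(C, G, z):
    # Generator tries to raise the critic's score on its samples
    return -C(G(z)).mean()

def clip_weights(C, clip_value=0.01):
    # Original WGAN: clamp critic weights to enforce a Lipschitz constraint
    for p in C.parameters():
        p.data.clamp_(-clip_value, clip_value)
```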
21. Which optimization algorithm is commonly used in GAN training?
- Gradient Descent
- Adam Optimizer
- Stochastic Gradient Descent
- Newton's Method
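For question 21: in practice each network gets its own optimizer, and Adam with a lowered beta1 (0.5, as popularized by the DCGAN paper) is a common default. The hyperparameters below are typical choices, not canonical ones:

```python
import torch
import torch.nn as nn

# Placeholder networks just so the snippet runs; real models would be larger
G = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
```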
22. What is the key feature of Progressive GANs?
- Incremental training with growing image resolution
- Training with reduced computational cost
- Use of hybrid neural networks
- Faster training on small datasets
23. The discriminator loss in a GAN measures:
- The ability to distinguish between real and fake data
- The quality of generated data
- The performance of the generator
- The variance in training data
24. BigGAN improves on traditional GANs by:
- Reducing model complexity
- Employing unsupervised training techniques
- Using larger batch sizes and higher-capacity models
- Using fewer layers in the generator
25. What does "mode balancing" in GANs aim to achieve?
- Equal representation of all modes in generated data
- Reduction in model complexity
- Improved discriminator loss
- Faster convergence during training
26. DCGAN stands for:
- Deep Convolutional Generative Adversarial Network
- Distributed Cognitive GAN
- Dynamic Convolution GAN
- Differentiable Convolution GAN
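For question 26: a DCGAN generator builds the image with transposed convolutions, batch normalization, and ReLU, ending in Tanh (which also illustrates the batch-normalization point in question 37). A simplified sketch for 64x64 single-channel images; the layer widths are illustrative:

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    def __init__(self, latent_dim=100, feat=64, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector treated as a 1x1 "image" with latent_dim channels
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),           # 4x4
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),           # 8x8
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),               # 32x32
            nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),
            nn.Tanh(),                                         # 64x64 in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

img = DCGANGenerator()(torch.randn(8, 100))   # -> (8, 1, 64, 64)
```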
27. Which dataset is often used for benchmarking GANs?
- MNIST
- ImageNet
- CIFAR-10
- All of the above
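For question 27: the smaller benchmarks are easy to pull with torchvision; note the normalization to [-1, 1], which matches a Tanh generator output. A quick sketch (paths and batch size are arbitrary):

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),   # scale pixels to [-1, 1]
])

mnist = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
loader = torch.utils.data.DataLoader(mnist, batch_size=128, shuffle=True)
```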
28. GANs have been used in deepfake creation because:
- They can generate realistic images and videos
- They require minimal training data
- They rely solely on text data
- They use supervised learning
29. Which of these is NOT a type of GAN?
30. What role does the ReLU activation function play in GANs?
- It helps in non-linear transformations in the generator
- It calculates discriminator loss
- It standardizes inputs
- It minimizes overfitting
31. GANs struggle with generating:
- Realistic images
- Sequential text data
- Complex image transformations
- High-resolution videos
32. What does the term "generator loss" indicate in GAN training?
- The quality of data generated by the generator
- The failure of the discriminator
- Overfitting during training
- Latent space optimization
33. Transfer learning can be used in GANs to:
- Simplify discriminator architecture
- Train from scratch on large datasets
- Enhance performance on related tasks
- Reduce memory requirements
34. Which of the following represents a real-world application of GANs?
- Art generation and style transfer
- Text classification
- Signal processing optimization
- Web scraping
35. Why are spectral normalization techniques used in GANs?
- To increase the generator's capacity
- To stabilize training by controlling discriminator weight updates
- To reduce memory usage during training
- To simplify the network architecture
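For question 35: spectral normalization rescales each weight matrix by its largest singular value, bounding the discriminator's Lipschitz constant so its updates stay well behaved. In PyTorch it is a one-line wrapper around existing layers; a minimal sketch of a spectrally normalized discriminator:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

discriminator = nn.Sequential(
    spectral_norm(nn.Linear(784, 256)),  # weight rescaled by its largest singular value
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Linear(256, 1)),    # no Sigmoid if paired with a hinge/WGAN-style loss
)
```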
36. The inception score is a metric for evaluating GANs based on:
- The quality and diversity of generated images
- Training efficiency
- Model complexity
- Computational cost
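For questions 36 and 39: the Inception Score runs generated images through a pretrained Inception classifier and rewards both confident per-image predictions (quality) and a spread of predicted classes across images (diversity). It is usually written as

IS = exp( E_x [ KL( p(y | x) || p(y) ) ] )

where p(y | x) is the classifier's label distribution for a generated image x and p(y) is the marginal label distribution over all generated images; higher is better.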
37. What is the key benefit of using batch normalization in GANs?
- It accelerates the convergence of the generator
- It reduces the need for labeled data
- It helps prevent overfitting in the discriminator
- It stabilizes training by normalizing layer inputs
38. What type of loss function is typically used for the discriminator in a traditional GAN?
- Mean Squared Error Loss
- Binary Cross-Entropy Loss
- Hinge Loss
- Categorical Cross-Entropy Loss
39. Which of the following is a commonly used evaluation metric for the performance of a GAN?
- F1-score
- Precision
- Inception score
- Mean Squared Error
40. In the context of GANs, what is meant by "adversarial training"?
- The process of training the generator to be adversarial to the discriminator
- The simultaneous training of two models (generator and discriminator) to compete against each other
- Using adversarial attacks to improve model robustness
- Training the models using only unsupervised learning
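For question 40 (and the training-dynamics questions 6, 12, 13, 23, and 32): adversarial training alternates updates, first improving the discriminator on real vs. fake data, then updating the generator against the refreshed discriminator. A condensed sketch of one such step, assuming `G`, `D`, their optimizers, and flattened image batches like those from the loader above are in scope:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()
latent_dim = 100

def train_step(G, D, opt_G, opt_D, real):
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Discriminator step: push real samples toward 1, generated samples toward 0
    opt_D.zero_grad()
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real), ones) + bce(D(fake), zeros)
    d_loss.backward()
    opt_D.step()

    # 2) Generator step: try to get freshly generated samples labeled as real (1)
    opt_G.zero_grad()
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), ones)
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```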