


Classifies game states in an adversarial setting, i.e. workers killed, buildings built, buildings destroyed, etc.

Neural Networks
Computational model based on the structure and function of biological neural networks.

Generative Adversarial Networks
Recently researched (Goodfellow et al., 2014) algorithm to diversify input feature sets, allowing more DL insight into the data rather than generic constraints.

Generative Adversarial Networks implement a minimax game between two completely opposite types of networks: Generative vs. Discriminative. Discriminative networks are well known: they find a concept boundary to discriminate/classify between a set of values, or regress over them, i.e. they learn to predict a label given a feature set. Generative networks, on the other hand, learn to predict the feature set given a label; in essence, they generate a 'fake' feature set that qualifies as that label, such that the Discriminator cannot tell whether it came from the original data distribution or was generated by the Generator network.

This enables stacking the two networks together, feeding the output of the Generator to the Discriminator (which tries to discriminate between data coming from the Generator vs. the original data distribution, using Kullback-Leibler Divergence). Backpropagation takes place over the gradient of the Expectation (E) of the outputs of each of the Discriminator (D) and the Generator (G) networks. Essentially it becomes a game where the Generator keeps trying to fool the Discriminator, and when it succeeds (i.e. the Discriminator fails), we have an optimal feature set!
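
In symbols, this is the value function of the minimax game from Goodfellow et al. (2014); the two expectations below are exactly the ones whose gradients are backpropagated through D and G:

    min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 - D(G(z)))]

Here x is a real feature set, z is random noise fed to the Generator, and D(.) is the Discriminator's estimated probability that its input is real; D ascends this objective while G descends it.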

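As a concrete illustration of the alternating game, below is a minimal training-loop sketch assuming PyTorch; the two-layer Generator and Discriminator, latent_dim, feature_dim, and the random real_batch stand-in are illustrative placeholders, not the networks or data used in this work. The Generator step uses the common non-saturating loss (push D(G(z)) toward 1) rather than minimizing log(1 - D(G(z))) directly.

import torch
import torch.nn as nn

latent_dim, feature_dim, batch = 16, 32, 64

# Generator: maps random noise z to a 'fake' feature set. (Illustrative architecture.)
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                  nn.Linear(64, feature_dim))
# Discriminator: outputs the probability that a feature set is real.
D = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU(),
                  nn.Linear(64, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()  # binary cross-entropy over D's real/fake verdicts

for step in range(1000):
    real_batch = torch.randn(batch, feature_dim)  # placeholder for real feature sets
    fake_batch = G(torch.randn(batch, latent_dim))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0,
    # i.e. ascend E[log D(x)] + E[log(1 - D(G(z)))].
    opt_D.zero_grad()
    loss_D = (bce(D(real_batch), torch.ones(batch, 1)) +
              bce(D(fake_batch.detach()), torch.zeros(batch, 1)))
    loss_D.backward()
    opt_D.step()

    # Generator step: try to fool D by pushing D(G(z)) toward 1.
    opt_G.zero_grad()
    loss_G = bce(D(fake_batch), torch.ones(batch, 1))
    loss_G.backward()
    opt_G.step()

When the Discriminator's verdicts on fake_batch settle near chance (around 0.5), the Generator is producing feature sets the Discriminator fails to distinguish from real ones, matching the "optimal feature set" condition described above.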