Peer-Reviewed Journal Details
Mandatory Fields
Bazrafkan, S.; Javidnia, H.; Corcoran, P.
2018
December
Pattern Recognition Letters
Latent space mapping for generation of object elements with corresponding data annotation
Published
Optional Fields
Generative models; Latent space mapping; Deep neural networks
116
179
186
Deep neural generative models such as Variational Auto-Encoders (VAE) and Generative Adversarial Networks (GAN) give promising results in estimating the data distribution across a range of machine learning applications. Recent results have been especially impressive in image synthesis, where learning the spatial appearance information is a key goal; this enables the generation of intermediate spatial data that corresponds to the original dataset. In the training stage, these models learn to decrease the distance between their output distribution and the actual data distribution, and in the test phase they map a latent space to the data space. Since these models have already learned their latent space mapping, one question is whether there is a function mapping the latent space to any other aspect of the database for the given generator. In this work, it is shown that this mapping is relatively straightforward to learn using small neural network models trained by minimizing the mean square error. As a demonstration of this technique, two example use cases have been implemented: first, the generation of facial images with corresponding landmark data and, second, the generation of low-quality iris images (as would be captured with a smartphone user-facing camera) with a corresponding ground-truth segmentation contour. (C) 2018 Elsevier B.V. All rights reserved.
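The core idea in the abstract — once a generator has learned its latent-to-image mapping, a small auxiliary network can learn a latent-to-annotation mapping by minimizing mean square error — can be sketched as follows. This is a hypothetical, self-contained illustration, not the paper's implementation: the "annotations" here are synthetic targets standing in for landmark or segmentation data, and the architecture, dimensions, and learning rate are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: learn a map from latent vectors z to annotations y
# with a small one-hidden-layer MLP trained by minimizing MSE, mirroring
# the paper's latent-space-to-annotation idea. Targets Y are synthetic
# (a mildly nonlinear function of z) standing in for real landmark data.

rng = np.random.default_rng(0)
latent_dim, hidden, ann_dim, n = 32, 64, 10, 512

Z = rng.normal(size=(n, latent_dim))                 # latent samples
true_map = 0.1 * rng.normal(size=(latent_dim, ann_dim))
Y = np.tanh(Z @ true_map)                            # stand-in annotations

# Small MLP: y_hat = tanh(Z W1 + b1) W2 + b2
W1 = 0.1 * rng.normal(size=(latent_dim, hidden)); b1 = np.zeros(hidden)
W2 = 0.1 * rng.normal(size=(hidden, ann_dim));    b2 = np.zeros(ann_dim)

mse0 = float(np.mean((np.tanh(Z @ W1 + b1) @ W2 + b2 - Y) ** 2))

lr = 0.05
for _ in range(2000):
    H = np.tanh(Z @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - Y                                   # gradient of MSE w.r.t. pred (up to 2/n)
    # Backpropagate through both layers
    gW2 = H.T @ err / n; gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = Z.T @ dH / n;  gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(Z @ W1 + b1) @ W2 + b2 - Y) ** 2))
```

In the paper's use cases, `Z` would be the latent vectors fed to a pre-trained generator, and `Y` the facial landmarks or iris segmentation contours for the generated images; the same MSE-driven training applies.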
10.1016/j.patrec.2018.10.025
Grant Details
Publication Themes