PhD Defence | Learning Image Decomposition for Face Analysis and Synthesis
Given an image of a face, prior knowledge about its interaction with the environment and the light source, and about how the face may change geometrically, can be taken into account. By incorporating such prior knowledge, one can build better, more robust predictive models that generalize well to unseen data. When explicitly modeling a prior is infeasible, the commonalities between face analysis and synthesis tasks can instead be exploited to learn a joint attribute representation conditioned on constraints from those tasks.
Prior knowledge about the face can help in building better discriminative learning models. In this thesis, we show how domain knowledge about the face can be explicitly incorporated into age estimation and deception detection models. A similar idea can be adapted to generative learning models for the task of monocular face image manipulation. Where explicit modeling is absent or infeasible, prior knowledge can instead be modeled implicitly via multi-task learning.