AI-Cafe presents: Face de-identification for privacy protection
(Professor at the Department of Informatics, Aristotle University of Thessaloniki (AUTH), and Director of the Artificial Intelligence and Information Analysis (AIIA) Lab)
Privacy protection is an important issue, particularly in the context of social media and the GDPR. This lecture overviews the face de-identification problem from an engineering perspective. In principle, face de-identification methods apply an affine or non-linear transformation to an input facial image so that the depicted person's identity can no longer be recognized by humans or by automated face analysis tools. Traditional approaches used in the media mainly apply simple image degradations (e.g., pixelation, blurring) or reconstruction-based techniques to the facial image region, achieving sufficient de-identification performance at the expense of degrading image quality. Recently proposed deep learning-based generative methods for face de-identification promise excellent de-identification performance against automated tools, producing visually pleasing images that nevertheless remain of little use to human viewers for identification. Finally, adversarial face de-identification methods generate the minimum additive noise required to disable automated face detection/recognition systems, so that the de-identified images retain maximal utility for human viewers.
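As an illustration of the traditional approach, the following is a minimal NumPy sketch of pixelating a facial region: each small tile inside a (here hand-picked, not detected) face bounding box is replaced by its mean value, destroying fine identity detail while degrading image quality. All names and the toy image are hypothetical, not from the lecture.

```python
import numpy as np

def pixelate_region(image, top, left, height, width, block=8):
    """De-identify a rectangular face region by pixelation:
    replace every block x block tile with its mean value."""
    out = image.astype(float)          # work in float, copy of the input
    region = out[top:top + height, left:left + width]  # view into `out`
    h, w = region.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = region[y:y + block, x:x + block]
            # Broadcasting the (1, 1[, C]) mean fills the whole tile.
            tile[...] = tile.mean(axis=(0, 1), keepdims=True)
    return out.astype(image.dtype)

# Toy example: a synthetic 32x32 grayscale image with a 16x16 "face" box.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
anon = pixelate_region(img, top=8, left=8, height=16, width=16, block=4)
```

Pixels outside the chosen box are untouched, while every 4x4 tile inside it collapses to a single value; in a real pipeline the box would come from a face detector and the `block` size trades de-identification strength against residual image quality.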
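The adversarial idea can be sketched in a few lines. Below, a toy logistic model `p(match) = sigmoid(w.x + b)` stands in for a real face recognizer, and a single fast-gradient-sign-style step adds a perturbation bounded by `epsilon` per pixel that pushes the model away from a "match" decision; the model, the 64-pixel input, and all names are illustrative assumptions, not the lecture's specific method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, epsilon=0.1):
    """One FGSM-style step against a toy logistic 'recognizer':
    move every pixel by +/- epsilon in the direction that increases
    the loss of the 'match' decision."""
    p = sigmoid(x @ w + b)
    grad = (p - 1.0) * w               # d(-log p)/dx for the 'match' label
    return x + epsilon * np.sign(grad)

# Hypothetical 64-pixel flattened face and a toy recognizer (w, b).
rng = np.random.default_rng(1)
w = rng.normal(size=64)
b = 0.0
x = 0.5 * np.sign(w)                   # input the recognizer matches confidently
x_adv = fgsm_perturb(x, w, b, epsilon=0.6)
# The bounded additive noise flips the recognizer's decision while changing
# each pixel by at most epsilon, so the image stays usable for human viewers.
```

This captures the trade-off stated above: the perturbation magnitude is explicitly bounded, so automated recognition fails while human-perceived utility is largely preserved.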

