Improving Shadow Suppression for Illumination Robust Face Recognition
KAUST Department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Visual Computing Center (VCC)
Permanent link to this record: http://hdl.handle.net/10754/626504
Abstract: 2D face analysis techniques, such as face landmarking, face recognition and face verification, are strongly dependent on illumination conditions, which are usually uncontrolled and unpredictable in the real world. The current massive data-driven approach, e.g., deep learning-based face recognition, requires a huge amount of labeled training face data that can hardly cover the infinite lighting variations encountered in real-life applications. An illumination-robust preprocessing method thus remains an interesting but significant challenge for reliable face analysis. In this paper we propose a novel model-driven approach to improve the lighting normalization of face images. Specifically, we build an underlying reflectance model that characterizes the interactions between the skin surface, the lighting source and the camera sensor, and elaborates the formation of face color appearance. The proposed illumination processing pipeline generates a Chromaticity Intrinsic Image (CII) in a log-chromaticity space that is robust to illumination variations. Moreover, as an advantage over most prevailing methods, a photo-realistic color face image is subsequently reconstructed, eliminating a wide variety of shadows while retaining the color information and identity details. Experimental results under different scenarios and on various face databases show the effectiveness of the proposed approach in dealing with lighting variations, including both soft and hard shadows, in face recognition.
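To make the log-chromaticity idea concrete, below is a minimal Python sketch of mapping an RGB image into a 2D log-chromaticity space. This is a generic illustration of the representation named in the abstract, not the paper's full CII pipeline (reflectance modeling, shadow suppression and color reconstruction are additional steps); the function name, geometric-mean normalization and projection basis are assumptions chosen for illustration.

```python
import numpy as np

def log_chromaticity(image_rgb, eps=1e-6):
    """Map an (H, W, 3) RGB image into a 2D log-chromaticity space.

    Generic sketch of the log-chromaticity representation; this is
    not the authors' exact CII construction.
    """
    rgb = image_rgb.astype(np.float64) + eps  # avoid log(0)
    # Normalize by the per-pixel geometric mean so the mapping is
    # symmetric in R, G and B.
    geo_mean = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])
    chi = np.log(rgb / geo_mean[..., None])  # 3 channels, summing to zero
    # The three log-chromaticity channels sum to zero, so project onto
    # an orthonormal basis of the plane {x : x1 + x2 + x3 = 0} to get
    # two independent coordinates.
    U = np.array([[1/np.sqrt(2), -1/np.sqrt(2),  0.0],
                  [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)]])
    return chi @ U.T  # shape (H, W, 2)
```

Under the classic Lambertian-surface, Planckian-illuminant assumption, changes in lighting intensity and color shift pixels along nearly parallel lines in this 2D space, which is what projection-based illumination-invariant images such as the CII exploit.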
Citation: Zhang W, Zhao X, Morvan J-M, Chen L (2019) Improving Shadow Suppression for Illumination Robust Face Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 41: 611–624. Available: http://dx.doi.org/10.1109/TPAMI.2018.2803179.
Sponsors: This work was partially supported by the French Research Agency (Agence Nationale de Recherche, ANR) through the Jemime project under Grant ANR-13-CORD-0004-02 and the Biofence project under Grant ANR-13-INSE-0004-02, by the National Natural Science Foundation of China under Grants 91746111 and 61303121, by the Partner University Fund (PUF) through the 4D Vision project, and by the Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University.