WO2011055164A1 - Method for illumination normalization on a digital image for performing face recognition - Google Patents
- Publication number: WO2011055164A1 (application PCT/IB2009/008066)
- Authority: WIPO (PCT)
Classifications
- G06T5/94
- G06V10/60: Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
- G06V40/161: Human faces; Detection; Localisation; Normalisation
- G06V40/168: Human faces; Feature extraction; Face representation
- G06T2207/30201: Subject of image; Human being; Person; Face
Abstract
A method for performing illumination normalization on a digital image, comprising: a) receiving an input image; b) determining, for each pixel, a first adaptation factor F1(p); c) applying to said input image a first adaptive non linear function using said first adaptation factor F1(p) to get a first modified image (Ila1); d) determining, for each pixel, a second adaptation factor F2(p) using the first modified image (Ila1); e) applying to the resulting image (Ila1) of step c a second adaptive non linear function using said second adaptation factor F2(p) to get a second modified image (Ila2); f) applying to the resulting image (Ila2) of step e a Difference of Gaussians (DoG) filter to get a normalized image (In). A face recognition system and a method for performing face recognition are also provided.
Description
METHOD FOR ILLUMINATION NORMALIZATION ON A DIGITAL IMAGE FOR PERFORMING FACE RECOGNITION
FIELD OF THE INVENTION
[0001]The present invention relates to a method for performing illumination normalization on a digital image, a face recognition system and a method for performing face recognition.
BACKGROUND OF THE INVENTION
[0002] Illumination variations that occur in face images dramatically degrade the performance of face recognition systems. Methods developed to overcome the illumination problem can be divided into three categories: illumination invariant feature extraction, illumination modeling and illumination variation removal.
[0003] The first approach seeks illumination invariant features to represent face images. Examples of such features include edge maps, image intensity derivatives and images convolved with 2D Gabor-like filters.
[0004] However, none of these representations is sufficient by itself to overcome illumination variations because of changes in the illumination direction. Methods in the second category use multiple images of each person under various lighting conditions to learn an appropriate model of illumination variations. Generally, methods belonging to this category can achieve good recognition results. However, they require many images captured under different lighting conditions for each subject; such methods are not suited to many applications, such as video surveillance. The third approach transforms the image into a canonical form in which the illumination variations are erased. Examples of such techniques are Histogram Equalization (HE), Gamma Correction (GC) and recent methods based on retinex theory.
[0005] In retinex theory, an image I(x, y) is modeled as the product of the reflectance R and the illumination L. The problem of obtaining R from an input image I can be solved by estimating L. Several methods have been presented to estimate L, such as Single Scale Retinex (SSR), Multi Scale Retinex (MSR) and the self-quotient image (SQI). However, these methods still cannot estimate L exactly, so large illumination variations are not completely removed.
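For illustration, the single-scale retinex idea mentioned above can be sketched as follows: L is estimated as a Gaussian blur of I, and the reflectance is recovered in the log domain. This is a minimal sketch, not the patent's method; the blur scale is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=15.0):
    # Model I = R * L; estimate the illumination L as a Gaussian blur
    # of I, then recover R in the log domain: log R = log I - log L.
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    illumination = gaussian_filter(img, sigma)  # smooth estimate of L
    return np.log(img) - np.log(illumination)
```

On a uniformly lit flat region the estimated illumination equals the image, so the recovered log-reflectance is zero; strong low-frequency illumination gradients are largely divided out.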
SUMMARY OF THE INVENTION
[0006] It is therefore an object of the present invention to provide a method and a device adapted to remove or reduce illumination variations in digital images, and more particularly to improve the performance of face recognition systems and methods.
[0007] It is another object of the invention to provide a method that is usable in any type of application or any type of environment.
[0008] It is a further object of the invention to provide a face recognition system enabling face recognition in video surveillance applications, under various lighting conditions.
[0009] According to the invention, these aims are achieved by means of a method for performing illumination normalization on a digital image, comprising:
a) receiving an input image;
b) determining, for each pixel, a first adaptation factor F1(p);
c) applying to said input image a first adaptive non linear function using said first adaptation factor F1(p) to get a first modified image (Ila1);
d) determining, for each pixel, a second adaptation factor F2(p) using the first modified image (Ila1);
e) applying to the resulting image (Ila1) of step c a second adaptive non linear function using said second adaptation factor F2(p) to get a second modified image (Ila2);
f) applying to the resulting image (Ila2) of step e a Difference of Gaussians (DoG) filter to get a normalized image (In).
[0010]The method is an improvement of the retina filter to normalize illumination. The proposed method combines two adaptive nonlinear functions and a Difference of Gaussians filter. This can be related to the performance of two layers of the retina: the photoreceptors and the outer plexiform layer.
[0011] The different steps of the disclosed method not only remove illumination variations and noise but also enhance the image edges.
[0012] When used for face recognition, the modified image is further applied to the task of face detection. The resulting recognition performance is considerably enhanced.
[0013] In a preferred embodiment, the method further comprises a first dynamic range rescaling step on the normalized image (In) using a zero-mean normalization to get a rescaled normalized image (Rln).
[0014] In a further embodiment, the method further comprises a truncation step on the rescaled normalized image (Rln) in order to enhance the image contrast and get a truncated rescaled normalized image (TRIn). This step is advantageous because the image data are adjusted around a median value corresponding, for instance, to 0; the truncation threshold level is thus easier to determine.
[0015] The method preferably comprises a second dynamic range rescaling step on the truncated rescaled normalized image (TRIn) to get a final normalized image (Ifn).
[0016] In a preferred embodiment, at least one adaptive nonlinear function, and preferably both operations, is applied in order to perform a light adaptation filter.
[0017] In an aspect of the invention, the first adaptation factor F1(p) depends on the intensity of the input image and the second adaptation factor F2(p) depends on the intensity of the first modified image (Ila1).
[0018] In another aspect of the invention, the first adaptation factor F1(p) for a given pixel depends on the intensity of said pixel and on the mean value of the input image, and the second adaptation factor F2(p) for a given pixel depends on the intensity of said pixel and on the mean value of the first modified image.
[0019] In a further aspect of the invention, the first adaptation factor F1(p) is based on a two dimensional Gaussian low pass filter with a first standard deviation σ1.
[0020] In a still further embodiment, the second adaptation factor F2(p) is based on a two dimensional Gaussian low pass filter with a second standard deviation σ2.
[0021] In a still further embodiment, the σ1 and σ2 values are comprised between 0.3 and 5. In a preferred variant, σ1 is set to 1 and σ2 is set to 3.
[0022] In a still further embodiment, the Difference of Gaussians (DoG) filter uses two low pass filters having standard deviations σph and σh comprised between 0.1 and 5. In a preferred variant, σph is set to 0.5 and σh is set to 4.
[0023] In a further variant, each two dimensional Gaussian low pass filter is preferably replaced by two one dimensional Gaussian low pass filters (one horizontal and one vertical) with the same standard deviation. This reduces the processing time.
[0024] In a preferred embodiment, the input digital image includes a human face representation.
[0025] In another aspect, the invention also provides a face recognition system, comprising:
a) a facial database with data of known individuals;
b) an input for providing a facial image of unknown subject;
c) means for pre-processing said data of step a and/or b using the illumination normalization method previously described;
d) means for representing the reference set of images of individuals as a set of reference feature vectors;
e) means for representing said input facial image as a query feature vector;
f) means for calculating, for each vector in the reference set, its similarity score with said query;
g) means for providing a recognition signal based on the set of similarity scores.
[0026] The image variations of the same face due to illumination are often larger than the variations due to a change of face identity. Therefore, illumination normalization steps that make intra-person faces compact by removing the illumination variations of images improve the recognition performance.
[0027]The invention further provides a method for performing face recognition, comprising:
a) providing a facial database with data of known individuals;
b) providing an input facial image of unknown subject;
c) pre-processing said data of step a and/or b using the illumination normalization method previously described;
d) representing the reference set of images of individuals as a set of reference feature vectors;
e) representing said input facial image as a query feature vector;
f) calculating, for each vector in the reference set, its similarity score with said query;
g) providing a recognition signal based on the set of similarity scores.
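Steps f and g above might be sketched as follows. This is a minimal illustration, assuming cosine similarity as the score; the patent does not fix a particular similarity measure or feature extractor.

```python
import numpy as np

def recognize(query_vec, reference_vecs, labels):
    # Normalize the query and each reference feature vector, then score
    # every reference by cosine similarity with the query (step f).
    q = query_vec / np.linalg.norm(query_vec)
    refs = reference_vecs / np.linalg.norm(reference_vecs, axis=1, keepdims=True)
    scores = refs @ q
    # Recognition signal (step g): identity of the best-scoring reference.
    return labels[int(np.argmax(scores))], scores
```

In practice the recognition signal would also be gated by a threshold on the best score, so that unknown subjects are rejected rather than matched to the closest reference.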
BRIEF DESCRIPTION OF THE DRAWINGS
[0028]The foregoing and other purposes, features, aspects and advantages of the invention will become apparent from the following detailed description of embodiments, given by way of illustration and not limitation with reference to the accompanying drawings, in which:
-Figure 1 presents the main steps of an illumination normalization method according to the invention;
-Figure 2 presents the main steps relating to the light adaptation filter;
-Figure 3 presents the main steps of a known face recognition method;
-Figure 4 presents the main steps of a method for performing face recognition with specific steps involving illumination normalizations.
DETAILED DESCRIPTION OF THE INVENTION
[0029] Basically, the retina is made of three layers: the photoreceptor layer with cones and rods; the Outer Plexiform Layer (OPL) with horizontal, bipolar and amacrine cells; and the Inner Plexiform Layer (IPL) with ganglion cells. The goal is not to precisely model the dynamics of retinal processing, but rather to identify which processing acts on the retinal signal for illumination normalization.
[0030] Light adaptation filter. Rods and cones have quite different properties: rods provide vision at night, under conditions of very low illumination; cones deal with bright signals. Both photoreceptor types, however, are sensitive to light variations and play a crucial role as a light adaptation filter. To exploit this property, an adaptive nonlinear function can be applied to the input signal.
[0031] Photoreceptors act not only as a light adaptation filter but also as a low pass filter. Horizontal cells perform the second low pass filtering. In the OPL, bipolar cells calculate the difference between the photoreceptor and horizontal cell responses; bipolar cells thus act as a band pass filter: they remove high frequency noise and low frequency illumination. To simulate the processes of the OPL, two Gaussian low pass filters with different standard deviations, corresponding to the effects of photoreceptors and horizontal cells, are used.
[0032] Finally, bipolar cells are simulated with a Difference of Gaussians filter (DoG), to enhance the image edges.
[0033] As mentioned above, a model with a nonlinear operation and a DoG filter can be used for illumination variation removal. In this model, two consecutive nonlinear operations are used for a more efficient light adaptation filter, and a truncation is used to enhance the global image contrast.
[0034] Duplex nonlinear operations act as an efficient light adaptation filter; two consecutive adaptive nonlinear functions are therefore applied. The adaptation factor of the first nonlinear function is preferably computed for each pixel by performing a low pass filter on the input image, as follows:

F1(p) = (Iin ∗ G1)(p) + Īin

where p is the current pixel; F1(p) is the adaptation factor at pixel p; Iin is the intensity of the input image; ∗ denotes the convolution operation; Īin is the mean value of the input; and G1 is a 2D Gaussian low pass filter with standard deviation σ1.
[0035] The input image is then processed according to a Naka-Rushton type equation using the adaptation factor F1:

Ila1(p) = (Iin(max) + F1(p)) · Iin(p) / (Iin(p) + F1(p))

[0036] The term Iin(max) + F1(p) is a normalization factor, where Iin(max) is the maximal value of the image intensity. The second nonlinear function works similarly; the light adapted image Ila2 is obtained by:

Ila2(p) = (Ila1(max) + F2(p)) · Ila1(p) / (Ila1(p) + F2(p))

in which F2(p) = (Ila1 ∗ G2)(p) + Īla1.
[0037] An advantage of this light adaptation filter is that the image Ila2 does not change with different low pass filter sizes. In a preferred embodiment, σ1 and σ2 are set to 1 and 3 respectively.
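The duplex light adaptation stage described above can be sketched as follows. This is a minimal Python/NumPy sketch: the exact formulation of the adaptation factor and the boundary handling of the Gaussian filter are assumptions reconstructed from the description, not a definitive implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def light_adaptation(img, sigma):
    img = img.astype(np.float64)
    # Adaptation factor F(p): Gaussian low-pass response at p plus the
    # global mean of the image (assumed from the description above).
    f = gaussian_filter(img, sigma) + img.mean()
    # Naka-Rushton type compression; Iin(max) + F(p) is the
    # normalization factor.
    return (img.max() + f) * img / (img + f + 1e-12)

def light_adaptation_filter(img, sigma1=1.0, sigma2=3.0):
    # Two consecutive adaptive nonlinear stages (steps b to e).
    ila1 = light_adaptation(img, sigma1)   # first modified image Ila1
    return light_adaptation(ila1, sigma2)  # second modified image Ila2
```

Dark pixels are amplified more strongly than bright ones, which compresses the dynamic range of unevenly lit faces while leaving well-lit regions nearly untouched.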
[0038] The image Ila2 is then transmitted to bipolar cells and processed using a difference of Gaussians (DoG) filter, as follows:

Ibip = DoG ∗ Ila2

in which DoG is given by:

DoG(x, y) = (1 / (2π·σph²)) · exp(−(x² + y²) / (2σph²)) − (1 / (2π·σH²)) · exp(−(x² + y²) / (2σH²))

The terms σph and σH correspond to the standard deviations of the low pass filters modeling photoreceptors and horizontal cells. In a preferred embodiment, they are set to 0.5 and 4 respectively. A zero-mean normalization is used in the next step to rescale the dynamic range of the image. The subtraction of the mean μIbip is not necessary because it is close to 0.
[0039] A drawback of the DoG filter is an inherent reduction in overall image contrast. Therefore, a preferred final step is recommended to enhance the image contrast: the extreme values are removed by a truncation with a threshold Th, which is set to 5 in this preferred embodiment.
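The truncation and the second dynamic range rescaling might be sketched as below. The output range [0, 255] is an assumption of this sketch; the text only fixes the threshold Th = 5.

```python
import numpy as np

def truncate_and_rescale(rin, th=5.0):
    # Remove extreme values by truncation at +/-Th to enhance the
    # global contrast of the zero-mean normalized image.
    trin = np.clip(rin, -th, th)
    # Second dynamic range rescaling, here to [0, 255] (assumed
    # output convention for an 8-bit image).
    return 255.0 * (trin + th) / (2.0 * th)
```

Since the rescaled image is centered on 0, clipping at a symmetric threshold is straightforward, which is why the earlier zero-mean step makes the threshold easy to choose.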
[0040] In comparison with other methods, the method of the invention is more robust to illumination variations and is of lower complexity. Only one image per subject is required for the training set, and the techniques used are simple. The calculation requiring the most processing resources is the convolution with a Gaussian kernel. Assuming the size of a normalized image is m × n, and replacing one 2D Gaussian calculation by two independent 1D ones, the computational complexity of the algorithm is O(mnw), where w is the size of the 1D Gaussian kernel and w = 6σ in this example.
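The separability argument can be checked directly: a 2D Gaussian filter equals a 1D Gaussian pass along the columns followed by one along the rows, which is what reduces the per-pixel cost from O(w²) to O(w).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

rng = np.random.default_rng(0)
img = rng.random((32, 32))
sigma = 1.0

out_2d = gaussian_filter(img, sigma)   # one 2D Gaussian filter
out_sep = gaussian_filter1d(           # vertical 1D pass ...
    gaussian_filter1d(img, sigma, axis=0),
    sigma, axis=1)                     # ... then horizontal 1D pass

assert np.allclose(out_2d, out_sep)    # identical results
```

SciPy itself exploits this property internally, so the assertion holds to floating-point precision.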
[0041]As the method is of low complexity, it can be applied as a preprocessing technique to real-time applications such as video surveillance.
[0042] Figure 1 illustrates the main steps of the illumination normalization method. Steps 1 to 9 present an example of a preferred embodiment. Step 1 relates to the acquisition of an image to be processed. Image 10 illustrates an example of such an image. Step 2 relates to a light adaptation filter, detailed in Figure 2. Image 20 illustrates the evolution of the input image due to the light adaptation filter. Steps 3 and 4 correspond to Gaussian low pass filters. Images 30 and 40 illustrate how the processed images may be modified in each corresponding step.
[0043] Step 5 and image 50 illustrate the bipolar cell step. Step 6 and image 60 correspond to a dynamic range rescaling. Image 70 relates to steps 7 (truncation), 8 (rescaling) and 9 (output). In most cases, the results of the calculations performed in these steps are not visually apparent, even when considering the data modification from step to step.
[0044] In Figure 2, the light adaptation filter steps are presented with more details. In step 21, the image 10 is received as an input. Step 22 and image 220 relate to a first adaptation factor. Step 23 and image 230 relate to a first light adapted image. Step 24 and image 240 relate to a second adaptation factor. Step 25 and image 250 relate to a second light adapted image. The resulting image 20 is shown once again in step 2 of Figure 1.
Figure 3 illustrates the main steps of a known face recognition process. Figure 4 presents the main steps of a face recognition process involving illumination normalization. Data relating to an unknown subject and data relating to known individuals are used in input steps 201 and 202. The data related to the facial image to be detected are processed with illumination normalization as previously described. In a preferred embodiment, the data related to the known individuals are also processed with illumination normalization, although in variants these data could be processed with a different approach or not processed at all. For instance, the reference database could be built with images acquired under standard, optimized light conditions, whereas those of the subject to be detected are obtained under various light conditions, requiring further processing to normalize illumination.
The further steps are typical face recognition steps, such as feature extraction 206 and vector transformation of the query 207 and reference 208. Similarity scores are then calculated in step 209. Recognition results are obtained in step 210.
[0045] The proposed algorithm has been tested and evaluated on the Yale B database and the FERET illumination database, using two face recognition methods: PCA-based and Local Binary Pattern (LBP)-based. Experimental results show that the proposed method achieves very high recognition rates even under the most challenging illumination conditions. Moreover, processing resources are optimized.
[0046] Those skilled in the art will also understand that the above-described face recognition approach can be modified, given that many variants of face recognition approaches are well known in the art. Such alterations, modifications and improvements are intended to be within the spirit and scope of the invention.
[0047] Accordingly, the foregoing description is by way of example only and is not intended to be limiting.
Claims
1. A method for performing illumination normalization on a digital image, comprising :
a) receiving an input image ;
b) determining, for each pixel , a first adaptation factor F1(p) ;
c) applying to said input image a first adaptive non linear function using said first adaptation factor F1(p) to get a first modified image (Ila1) ;
d) determining, for each pixel, a second adaptation factor F2(p) using the first modified image (Ila1) ;
e) applying to the resulting image (Ila1) of step c a second adaptive non linear function using said second adaptation factor F2(p) to get a second modified image (Ila2) ;
f) applying to the resulting image (Ila2) of step e a Difference of Gaussians (DoG) filter to get a normalized image (In).
2. A method for performing illumination normalization on a digital image according to claim 1, further comprising a first dynamic range rescaling step on the normalized image (In) using a zero-mean normalization to get a rescaled normalized image (Rln).
3. A method for performing illumination normalization on a digital image according to claim 2, further comprising a truncation step on the rescaled normalized image (Rln) in order to enhance the image contrast and get a truncated rescaled normalized image (TRIn).
4. A method for performing illumination normalization on a digital image according to claim 3, further comprising a second dynamic range rescaling step on the truncated rescaled normalized image (TRIn) to get a final normalized image (Ifn).
5. A method for performing illumination normalization on a digital image according to any one of preceding claims, wherein at least one adaptive non linear function, and preferably both, is applied in order to perform a light adaptation filter.
6. A method for performing illumination normalization on a digital image according to any one of preceding claims, wherein the first adaptation factor F1(p) depends on the intensity of the input image and the second adaptation factor F2(p) depends on the intensity of the first modified image (Ila1).
7. A method for performing illumination normalization on a digital image according to any one of preceding claims, wherein the first adaptation factor F1(p) for a given pixel depends on the intensity of said pixel and on the mean value of the input image, and the second adaptation factor F2(p) for a given pixel depends on the intensity of said pixel and on the mean value of the first modified image.
8. A method for performing illumination normalization on a digital image according to any one of preceding claims, wherein the first adaptation factor F1(p) is based on a two dimensional Gaussian low pass filter with a first standard deviation S1.
9. A method for performing illumination normalization on a digital image according to any one of preceding claims, wherein the second adaptation factor F2(p) is based on a two dimensional Gaussian low pass filter with a second standard deviation S2.
10. A method for performing illumination normalization on a digital image according to claim 8 or 9, wherein the S1 and S2 values are between 0.3 and 5.
11. A method for performing illumination normalization on a digital image according to any one of preceding claims, wherein said Difference of Gaussians (DoG) filter uses two low pass filters having standard deviations Sph and Sh with values between 0.1 and 5.
12. A method for performing illumination normalization on a digital image according to any one of preceding claims, wherein said input digital image includes a human face representation.
13. A face recognition system, comprising:
a) a facial database with data of known individuals;
b) an input for providing a facial image of unknown subject;
c) pre-processing said data of step a and/or b using the illumination normalization method according to any one of claims 1 to 12;
d) means for representing the reference set of images of individuals as a set of reference feature vectors;
e) means for representing said input facial image as a query feature vector;
f) means for calculating, for each vector in the reference set, its similarity score with said query;
g) means for providing a recognition signal based on the set of similarity scores.
14. A method for performing face recognition, comprising:
a) providing a facial database with data of known individuals;
b) providing an input facial image of unknown subject;
c) pre-processing said data of step a and/or b using the illumination normalization method according to any one of claims 1 to 12;
d) representing the reference set of images of individuals as a set of reference feature vectors;
e) representing said input facial image as a query feature vector;
f) calculating, for each vector in the reference set, its similarity score with said query;
g) providing a recognition signal based on the set of similarity scores.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IB2009/008066 WO2011055164A1 (en) | 2009-11-06 | 2009-11-06 | Method for illumination normalization on a digital image for performing face recognition |
EP09835896A EP2497052A1 (en) | 2009-11-06 | 2009-11-06 | Method for illumination normalization on a digital image for performing face recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011055164A1 (en) | 2011-05-12 |
Family
ID=42320094
Country Status (2)
Country | Link |
---|---|
EP (1) | EP2497052A1 (en) |
WO (1) | WO2011055164A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392866B (en) * | 2017-07-07 | 2019-09-17 | 武汉科技大学 | A kind of facial image local grain Enhancement Method of illumination robust |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070065015A1 (en) * | 2005-09-07 | 2007-03-22 | Masashi Nishiyama | Image processing apparatus and method |
- 2009-11-06: WO application PCT/IB2009/008066 filed (active, Application Filing)
- 2009-11-06: EP application EP09835896 filed (not active, Withdrawn)
Non-Patent Citations (2)
Title |
---|
DANIEL J JOBSON ET AL: "A Multiscale Retinex for Bridging the Gap Between Color Images and the Human Observation of Scenes", IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 6, no. 7, 1 July 1997 (1997-07-01), pages 965 - 975, XP011026175, ISSN: 1057-7149 * |
LUIGI CINQUE ET AL: "Retinex Combined with Total Variation for Image Illumination Normalization", 8 September 2009, IMAGE ANALYSIS AND PROCESSING Â ICIAP 2009, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 958 - 964, ISBN: 978-3-642-04145-7, XP019128122 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013084233A1 (en) * | 2011-12-04 | 2013-06-13 | Digital Makeup Ltd | Digital makeup |
US9495582B2 (en) | 2011-12-04 | 2016-11-15 | Digital Makeup Ltd. | Digital makeup |
WO2014059201A1 (en) * | 2012-10-12 | 2014-04-17 | Microsoft Corporation | Illumination sensitive face recognition |
CN104823201A (en) * | 2012-10-12 | 2015-08-05 | 微软技术许可有限责任公司 | Illumination sensitive face recognition |
US9165180B2 (en) | 2012-10-12 | 2015-10-20 | Microsoft Technology Licensing, Llc | Illumination sensitive face recognition |
WO2015122789A1 (en) * | 2014-02-11 | 2015-08-20 | 3Divi Company | Facial recognition and user authentication method |
CN103870820A (en) * | 2014-04-04 | 2014-06-18 | 南京工程学院 | Illumination normalization method for extreme illumination face recognition |
EP3319010A4 (en) * | 2015-06-30 | 2019-02-27 | Yutou Technology (Hangzhou) Co., Ltd. | Face recognition system and face recognition method |
CN110046559A (en) * | 2019-03-28 | 2019-07-23 | 广东工业大学 | A kind of face identification method |
CN112036064A (en) * | 2020-08-18 | 2020-12-04 | 中国人民解放军陆军军医大学第二附属医院 | Human jaw face explosive injury simulation and biomechanical simulation method, system and medium |
Also Published As
Publication number | Publication date |
---|---|
EP2497052A1 (en) | 2012-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011055164A1 (en) | Method for illumination normalization on a digital image for performing face recognition | |
WO2019232831A1 (en) | Method and device for recognizing foreign object debris at airport, computer apparatus, and storage medium | |
CN107392866B (en) | A kind of facial image local grain Enhancement Method of illumination robust | |
Vu et al. | Illumination-robust face recognition using retina modeling | |
CN106897673B (en) | Retinex algorithm and convolutional neural network-based pedestrian re-identification method | |
JP6192271B2 (en) | Image processing apparatus, image processing method, and program | |
KR20180109665A (en) | A method and apparatus of image processing for object detection | |
CN109978848B (en) | Method for detecting hard exudation in fundus image based on multi-light-source color constancy model | |
CN111401145B (en) | Visible light iris recognition method based on deep learning and DS evidence theory | |
CN111612741B (en) | Accurate reference-free image quality evaluation method based on distortion recognition | |
CN109102475B (en) | Image rain removing method and device | |
EP1964028A1 (en) | Method for automatic detection and classification of objects and patterns in low resolution environments | |
CN109360179B (en) | Image fusion method and device and readable storage medium | |
CN104200437A (en) | Image defogging method | |
WO2020029874A1 (en) | Object tracking method and device, electronic device and storage medium | |
CN112446379B (en) | Self-adaptive intelligent processing method for dynamic large scene | |
CN113723309A (en) | Identity recognition method, identity recognition device, equipment and storage medium | |
Asmuni et al. | An improved multiscale retinex algorithm for motion-blurred iris images to minimize the intra-individual variations | |
CN117392733B (en) | Acne grading detection method and device, electronic equipment and storage medium | |
CN106940904A (en) | Attendance checking system based on recognition of face and speech recognition | |
Han et al. | Low contrast image enhancement using convolutional neural network with simple reflection model | |
Widynski et al. | A contrario edge detection with edgelets | |
Singh et al. | Multiscale reflection component based weakly illuminated nighttime image enhancement | |
CN110163489B (en) | Method for evaluating rehabilitation exercise effect | |
CN111914749A (en) | Lane line recognition method and system based on neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09835896 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2009835896 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009835896 Country of ref document: EP |