CN103870820A - Illumination normalization method for extreme illumination face recognition - Google Patents


Info

Publication number
CN103870820A
CN103870820A (application CN201410136046.6A)
Authority
CN
China
Prior art keywords
illumination
normalization
face
recognition
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410136046.6A
Other languages
Chinese (zh)
Inventor
程勇
焦良葆
曹雪虹
陈瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN201410136046.6A priority Critical patent/CN103870820A/en
Publication of CN103870820A publication Critical patent/CN103870820A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an illumination normalization method for face recognition under extreme illumination. Imitating the primate retina, the method uses an improved Naka-Rushton equation to build a retina-based illumination normalization model. When computing the adaptation factor of the Naka-Rushton equation, the factor is estimated with illumination differences taken into account, so the lighting conditions are explicitly considered; the low-frequency and high-frequency information of the face image are then processed with different algorithms, which improves the robustness of the whole system. Experiments verify that the method is barely affected by the choice of training set and is very stable.

Description

Illumination normalization method for face recognition under extreme illumination
Technical field
The present invention relates to illumination normalization, the construction of a retina model, and face recognition technology under extreme lighting conditions.
Background technology
Over the past years, face recognition research has made remarkable progress. Under uncontrolled natural environments, however, it still faces many challenges. To some extent, illumination variation remains an open problem for any effective face recognition system, as it severely degrades recognition performance.
The impact of illumination variation on face recognition has attracted researchers' attention in recent years, and a variety of methods have been proposed. They can be roughly grouped into three classes: 3D illumination models, illumination-invariant feature extraction, and illumination normalization.
3D illumination models fit the face image with a three-dimensional model to eliminate illumination effects. To obtain an effective 3D model, these methods not only require sufficient training samples but also assume that the face is a convex structure, and the algorithms are computationally expensive; these conditions are often hard to meet in practice.
Illumination-invariant extraction derives features that are independent of the lighting conditions, thereby removing the influence of illumination variation. Many such methods have been proposed, including multi-scale retinex (MSR), logarithmic total variation (LTV) and Gradientfaces. However, these methods cannot fully eliminate different lighting effects, and halo artifacts remain.
Illumination normalization pre-processes the lit image with basic image-processing techniques to obtain an illumination-robust image. The main methods include the logarithm transform, histogram equalization, Gamma correction, adaptive histogram equalization and local histogram equalization. Although such methods can mitigate the effect of illumination variation on face recognition to some extent, the recognition rate is still unsatisfactory under complex lighting. Recently, inspired by the human visual system, many methods have been applied to image processing and feature extraction. Meylan et al. proposed an improved Naka-Rushton equation and, on its basis, used two successive nonlinear operations to simulate the human visual system, thereby avoiding halos and improving the overall appearance.
Subsequently, combining these two successive nonlinear operations with a difference of Gaussian filters, an illumination normalization method based on a retina model was proposed. It performs well for face recognition under varying illumination, but further elimination of the effect of illumination variation on face recognition is still urgently needed.
Summary of the invention
To address the unsatisfactory performance of the above methods for face recognition under extreme lighting, the present invention starts from a model of the human retina and proposes a more effective illumination normalization method, so as to improve the accuracy of face recognition under extreme lighting conditions.
To achieve the above goal, the technical solution adopted by the present invention is:
An illumination normalization method for face recognition under extreme illumination:
Analyze the functional layers of the primate retina and consult existing retina models, providing a reference for building a retina-based illumination normalization model;
Use the Naka-Rushton equation as the concrete theoretical basis for building the illumination normalization model;
When computing the adaptation factor in the Naka-Rushton equation, estimate it with illumination differences taken into account;
In the illumination normalization stage, split the face image into low-frequency and high-frequency parts and then process them with different algorithms.
Preferably, the illumination normalization model is built as follows: using the primate retina model, whose functional layers comprise the photoreceptor layer, the outer plexiform layer (OPL) and the inner plexiform layer (IPL), a retina-based illumination normalization model is established, consisting of two successive local light-adaptation compressions and a spatial band-pass filtering.
Preferably, the Naka-Rushton equation computes each pixel by Gaussian filtering over its neighbourhood, defined as:

X_0(p) = ( X(p) * G_H + X̄ ) / 2   (1)

where p is a pixel of the image, X_0(p) is the adaptation factor at pixel p, X(p) is the intensity of the input image, * denotes convolution, G_H is a two-dimensional Gaussian filter, and X̄ is the mean pixel intensity of image X.
Preferably, the adaptation factor is estimated as follows: let I denote the original image, A the low-frequency sub-band of I, and DH, DV and DD the high-frequency sub-bands of I;
Otsu-based illumination classification is performed on the low-frequency sub-band A. Let t be the threshold obtained by Otsu's method; the illumination conditions of A can then be roughly divided into two classes:

M_1 = { A(x, y) | A(x, y) ≥ t }   (2)
M_2 = { A(x, y) | A(x, y) < t }   (3)

where A(x, y) denotes the intensity of A at point (x, y);
The adaptation factor A_0^ρ of A is defined as:

A_0^ρ(x, y) = ( A(x, y) * G_ρ + mean_A(x, y) ) / 2   (4)

mean_A(x, y) = M̄_{A1} if A(x, y) ∈ M_1, and M̄_{A2} if A(x, y) ∈ M_2   (5)

where G_ρ is a Gaussian filter with standard deviation ρ, and M̄_{A1} and M̄_{A2} are the mean values of M_1 and M_2, respectively.
Preferably, for the illumination normalization of face recognition, a visual model is established:
A single-level discrete two-dimensional wavelet transform extracts the low-frequency and high-frequency information of the face image;
The low-frequency information is processed by imitating the retinal information-processing mechanism, and large high-frequency coefficients are truncated by a threshold;
The inverse discrete two-dimensional wavelet transform of the processed low-frequency and high-frequency information yields the illumination normalization result.
Preferably, the low-frequency information is processed as follows:
Step 1, photoreceptor layer: the low-frequency sub-band A is processed as

A_p(x, y) = ( max_A + A_0^{ρ1}(x, y) ) · A(x, y) / ( A(x, y) + A_0^{ρ1}(x, y) )   (6)

where max_A is the maximum value of A, A_0^{ρ1}(x, y) is the adaptation factor of A at point (x, y), ρ1 is the standard deviation of the Gaussian filter, and A_p(x, y) is the corresponding output;
Step 2, outer plexiform layer: A_p is processed as

A_o(x, y) = ( max_{Ap} + A_{0,p}^{ρ2}(x, y) ) · A_p(x, y) / ( A_p(x, y) + A_{0,p}^{ρ2}(x, y) )   (7)

where max_{Ap} is the maximum value of A_p, A_{0,p}^{ρ2}(x, y) is the adaptation factor of A_p at point (x, y), ρ2 is the standard deviation of the Gaussian filter, and A_o(x, y) is the corresponding output.
Step 3, edge detection: spatial band-pass filtering simulates the information processing of the inner plexiform layer, defined as

A_e(x, y) = ( exp(−(x² + y²)/(2ρ_L²)) / (2πρ_L²) − exp(−(x² + y²)/(2ρ_H²)) / (2πρ_H²) ) * A_o   (8)

where ρ_L and ρ_H are the standard deviations of the two Gaussian functions, and A_e represents the edges of the image.
Preferably, the high-frequency information is processed as follows:
Let H denote DH, DV or DD; the truncation of a high-frequency sub-band is

H′(x, y) = t if H(x, y) > t; H(x, y) if |H(x, y)| ≤ t; −t if H(x, y) < −t   (9)

t = median(|H(x, y)|)   (10)

where median(·) returns the median of the matrix |H|; after truncation the high-frequency sub-bands become DH′, DV′ and DD′.
The beneficial effects of the invention are as follows: the illumination normalization method for face recognition under extreme illumination imitates the primate retina and uses an improved Naka-Rushton equation to build a retina-based illumination normalization model; when computing the adaptation factor of the Naka-Rushton equation, the factor is estimated with illumination differences taken into account, so the lighting conditions are explicitly considered; the embodiment then applies different algorithms to the low-frequency and high-frequency information of the face image, improving the robustness of the whole system. Experiments verify that the method is barely affected by the choice of training set and has good stability.
Brief description of the drawings
Fig. 1 compares the results obtained with different methods for one subject of the Yale B face database under different illumination conditions.
Embodiment
The preferred embodiments of the present invention are described in detail below with reference to the drawings.
To improve the accuracy of face recognition under extreme illumination conditions, a new illumination normalization method for face recognition is proposed: imitating the primate retina, an improved Naka-Rushton equation is used to build a retina-based illumination normalization model; taking the lighting conditions into account, the embodiment applies different algorithms to the low-frequency and high-frequency information of the face image, thereby improving the robustness of the whole system.
The core of the embodiment is the estimation of the adaptation factor and the illumination normalization stage. First, the functional layers of the primate retina are analyzed and existing retina models are consulted, providing a reference for building a retina-based illumination normalization model. The improved Naka-Rushton equation serves as the concrete theoretical basis of the model. When computing the adaptation factor of the Naka-Rushton equation under complex lighting, using the mean pixel value of the whole image as part of an adaptation factor applied to the entire pixel domain is inappropriate; to obtain an accurate adaptation factor, it must be estimated with illumination differences taken into account. In the illumination normalization stage, the face image is split into low-frequency and high-frequency parts, which are then processed with different algorithms.
The detailed steps of the embodiment are as follows:
Illumination normalization model: the primate retina model is used, comprising three functional layers: the photoreceptor layer, the outer plexiform layer (OPL) and the inner plexiform layer (IPL). A retina-based illumination normalization model is established, consisting of two successive local light-adaptation compressions and a spatial band-pass filtering.
Naka-Rushton equation: an improved Naka-Rushton equation is used, which computes each pixel mainly by Gaussian filtering over its neighbourhood, defined as:

X_0(p) = ( X(p) * G_H + X̄ ) / 2   (1)

where p is a pixel of the image, X_0(p) is the adaptation factor at pixel p, X(p) is the intensity of the input image, * denotes convolution, G_H is a two-dimensional Gaussian filter, and X̄ is the mean pixel intensity of image X.
Illumination normalization based on the improved equation can avoid halo artifacts and improve the overall appearance, but defects remain. For example, estimating the local illumination intensity by Gaussian-filtering the original image often gives inaccurate results at image edges. In particular, under complex lighting conditions it is inappropriate to use the mean value X̄ of the whole image as part of an adaptation factor applied to the entire pixel domain; this may introduce errors into the subsequent illumination normalization. Therefore, to obtain an accurate adaptation factor, the embodiment performs the illumination normalization mainly on the low-frequency band of the image obtained by a two-dimensional wavelet transform, and the adaptation factor X_0 is estimated with illumination differences taken into account.
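As a hedged illustration (a sketch, not the patent's reference implementation), eq. (1) and the Naka-Rushton compression it feeds can be written in Python with SciPy; the standard deviation `sigma` of the Gaussian G_H and the function names are assumptions of this sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptation_factor_nr(X, sigma=2.0):
    """Eq. (1): X0(p) = (X(p) * G_H + mean(X)) / 2.
    sigma (the std of the Gaussian G_H) is an illustrative assumption."""
    return (gaussian_filter(X, sigma) + X.mean()) / 2.0

def naka_rushton(X, X0):
    """Naka-Rushton compression: output grows with X but saturates at max(X),
    with X0 acting as the semi-saturation level."""
    return (X.max() + X0) * X / (X + X0)
```

The compressed image `naka_rushton(X, adaptation_factor_nr(X))` has the same form as eqs. (6) and (7) below.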
Estimating the adaptation factor: let I denote the original image, A the low-frequency sub-band of I, and DH, DV and DD the high-frequency sub-bands of I. First, Otsu-based illumination classification, which maximizes the separation between classes, is carried out on the low-frequency sub-band A. Let t be the threshold obtained by Otsu's method; the illumination conditions of A can then be roughly divided into two classes:

M_1 = { A(x, y) | A(x, y) ≥ t }   (2)
M_2 = { A(x, y) | A(x, y) < t }   (3)

where A(x, y) denotes the intensity of A at point (x, y).
The adaptation factor A_0^ρ of A is defined as:

A_0^ρ(x, y) = ( A(x, y) * G_ρ + mean_A(x, y) ) / 2   (4)

mean_A(x, y) = M̄_{A1} if A(x, y) ∈ M_1, and M̄_{A2} if A(x, y) ∈ M_2   (5)

where G_ρ is a Gaussian filter with standard deviation ρ, and M̄_{A1} and M̄_{A2} are the mean values of M_1 and M_2, respectively.
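A minimal sketch of eqs. (2)-(5), with Otsu's threshold implemented from scratch so that the class-wise mean replaces the global mean of eq. (1); the filter width `rho`, the bin count and the function names are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def otsu_threshold(A, nbins=256):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(A.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    n = hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2 / (n * n)
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def adaptation_factor_otsu(A, rho=2.0):
    """Eqs. (2)-(5): the mean entering the adaptation factor is the mean of
    the bright class M1 or the dark class M2, chosen per pixel."""
    t = otsu_threshold(A)
    mean_A = np.where(A >= t, A[A >= t].mean(), A[A < t].mean())
    return (gaussian_filter(A, rho) + mean_A) / 2.0
```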
Illumination normalization: a new and effective visual model is established for the illumination normalization of face recognition. First, a single-level discrete two-dimensional wavelet transform extracts the low-frequency and high-frequency information of the face image. Then, taking the lighting conditions into account, the low-frequency information is processed by imitating the retinal information-processing mechanism, and large high-frequency coefficients are truncated by a threshold. Finally, the inverse discrete two-dimensional wavelet transform of the processed low-frequency and high-frequency information yields the illumination normalization result. The low-frequency and high-frequency information is processed in detail as follows:
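The decompose-process-reconstruct pipeline above can be sketched with a hand-rolled single-level Haar (db1) transform; the assignment of the detail sub-bands to "horizontal"/"vertical" follows one common convention, and all helper names are our own (a sketch, not the patent's code):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar (db1) transform of an even-sized image:
    column pairs first, then row pairs."""
    s = np.sqrt(2.0)
    lo = (x[:, 0::2] + x[:, 1::2]) / s
    hi = (x[:, 0::2] - x[:, 1::2]) / s
    A  = (lo[0::2, :] + lo[1::2, :]) / s   # low-frequency sub-band
    DV = (lo[0::2, :] - lo[1::2, :]) / s   # vertical detail
    DH = (hi[0::2, :] + hi[1::2, :]) / s   # horizontal detail
    DD = (hi[0::2, :] - hi[1::2, :]) / s   # diagonal detail
    return A, (DH, DV, DD)

def haar_idwt2(A, DH, DV, DD):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    s = np.sqrt(2.0)
    lo = np.empty((A.shape[0] * 2, A.shape[1]))
    lo[0::2, :], lo[1::2, :] = (A + DV) / s, (A - DV) / s
    hi = np.empty_like(lo)
    hi[0::2, :], hi[1::2, :] = (DH + DD) / s, (DH - DD) / s
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / s, (lo - hi) / s
    return x

def illumination_normalize(img, process_low, process_high):
    """Pipeline: DWT, process low/high sub-bands separately, inverse DWT.
    process_low / process_high are placeholders for the steps below."""
    A, (DH, DV, DD) = haar_dwt2(img)
    return haar_idwt2(process_low(A), *(process_high(H) for H in (DH, DV, DD)))
```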
Low-frequency information processing:
Step 1, photoreceptor layer: the low-frequency sub-band A is processed as

A_p(x, y) = ( max_A + A_0^{ρ1}(x, y) ) · A(x, y) / ( A(x, y) + A_0^{ρ1}(x, y) )   (6)

where max_A is the maximum value of A, A_0^{ρ1}(x, y) is the adaptation factor of A at point (x, y), ρ1 is the standard deviation of the Gaussian filter, and A_p(x, y) is the corresponding output.
Step 2, outer plexiform layer: A_p is processed as

A_o(x, y) = ( max_{Ap} + A_{0,p}^{ρ2}(x, y) ) · A_p(x, y) / ( A_p(x, y) + A_{0,p}^{ρ2}(x, y) )   (7)

where max_{Ap} is the maximum value of A_p, A_{0,p}^{ρ2}(x, y) is the adaptation factor of A_p at point (x, y), ρ2 is the standard deviation of the Gaussian filter, and A_o(x, y) is the corresponding output.
Step 3, edge detection: spatial band-pass filtering simulates the information processing of the inner plexiform layer, defined as

A_e(x, y) = ( exp(−(x² + y²)/(2ρ_L²)) / (2πρ_L²) − exp(−(x² + y²)/(2ρ_H²)) / (2πρ_H²) ) * A_o   (8)

where ρ_L and ρ_H are the standard deviations of the two Gaussian functions, and A_e represents the edges of the image.
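Steps 1-3 can be sketched as follows. This is a simplified illustration: the global mean stands in for the class-wise mean of eq. (5), the difference-of-Gaussians of eq. (8) is realized as the difference of two Gaussian smoothings, and all standard deviations (`rho1`, `rho2`, `rho_L`, `rho_H`) are assumed values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def _compress(X, X0, eps=1e-8):
    """Naka-Rushton-style compression of eqs. (6)-(7);
    eps guards against division by zero (implementation detail)."""
    return (X.max() + X0) * X / (X + X0 + eps)

def process_low(A, rho1=1.0, rho2=3.0, rho_L=1.0, rho_H=4.0):
    """Photoreceptor layer (6), outer plexiform layer (7), then a
    difference-of-Gaussians band-pass for the inner plexiform layer (8)."""
    A0 = (gaussian_filter(A, rho1) + A.mean()) / 2.0     # adaptation factor, cf. eq. (4)
    Ap = _compress(A, A0)                                # eq. (6)
    Ap0 = (gaussian_filter(Ap, rho2) + Ap.mean()) / 2.0
    Ao = _compress(Ap, Ap0)                              # eq. (7)
    # eq. (8): DoG band-pass as the difference of two smoothed copies
    return gaussian_filter(Ao, rho_L) - gaussian_filter(Ao, rho_H)
```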
High-frequency information processing:
For face images under different illumination conditions, many wavelet coefficients of the high-frequency sub-bands are unstable, and if left unprocessed the high-frequency information harms face recognition. Therefore, to improve the illumination robustness of the high-frequency information, large high-frequency coefficients are truncated.
Let H denote DH, DV or DD; the truncation of a high-frequency sub-band is

H′(x, y) = t if H(x, y) > t; H(x, y) if |H(x, y)| ≤ t; −t if H(x, y) < −t   (9)

t = median(|H(x, y)|)   (10)

where median(·) returns the median of the matrix |H|; after truncation the high-frequency sub-bands become DH′, DV′ and DD′.
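Eqs. (9)-(10) amount to clipping each detail sub-band at the median of its absolute values; a minimal sketch:

```python
import numpy as np

def truncate_high(H):
    """Eqs. (9)-(10): clip detail coefficients at t = median(|H|).
    np.clip reproduces exactly the three-branch rule of eq. (9)."""
    t = np.median(np.abs(H))
    return np.clip(H, -t, t)
```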
Experimental verification
To verify the validity of the invention, the extended Yale B illumination face database is selected. It contains 38 subjects, 9 poses and 64 different illuminations, 21888 images in total. The original images are 640 × 480; all images are manually cropped to contain only the face region and resized to 192 × 168. The discrete two-dimensional wavelet transform uses the wavelet function db1. At the recognition stage, the classic PCA method is used for dimensionality reduction, the experiment retaining 90% of the energy, and a nearest-neighbour classifier based on Euclidean distance classifies the face images.
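The recognition stage described above (PCA retaining 90% of the energy, then a Euclidean nearest-neighbour classifier) can be sketched as follows; this is a generic illustration with assumed function names, not the exact experimental code:

```python
import numpy as np

def pca_fit(X, energy=0.90):
    """Fit PCA on row-vector samples, keeping enough components to retain
    `energy` of the variance (the experiment keeps 90%)."""
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2 / (s ** 2).sum()
    k = int(np.searchsorted(np.cumsum(var), energy)) + 1
    return mean, Vt[:k]

def nn_classify(train_feats, train_labels, feat):
    """Nearest-neighbour classification by Euclidean distance."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    return train_labels[int(np.argmin(d))]
```

In use, each normalized face image is flattened into a row of `X`, projected with `(x - mean) @ Vt_k.T`, and classified against the projected training set.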
In Fig. 1, the top row shows the original images of one subject from subsets 1 to 5; the middle row shows the results obtained with Vu's method; the bottom row shows the results obtained with the present invention.
Table 1. Recognition rate (%) when one random image per person from subset 1 is selected as the training set

Method           Subset 1  Subset 2  Subset 3  Subset 4  Subset 5  All subsets
Original image       2.63      2.78      3.06      4.13      3.33         3.25
LTV                 92.06     80.22     92.18     85.99     91.46        88.50
Gradientfaces       99.47     98.36     89.15     77.71     90.25        90.04
Vu                  96.05     90.15     95.65     90.84     94.11        93.26
Embodiment          97.75     96.00     96.33     95.24     95.22        95.86
To assess the performance of the proposed method, the following test is run: for each person, one random image from subset 1 is selected as the training set and the remaining images form the test set. Since the training and test sets are random, 30 runs are performed and their results averaged; Table 1 lists the average recognition rates of the different methods. The data in Table 1 show that the algorithm of the embodiment is barely affected by the choice of training set and has good stability.

Claims (7)

1. An illumination normalization method for face recognition under extreme illumination, characterized in that:
the functional layers of the primate retina are analyzed and existing retina models are consulted, providing a reference for building a retina-based illumination normalization model;
the Naka-Rushton equation provides the concrete theoretical basis for building the illumination normalization model;
when the adaptation factor in the Naka-Rushton equation is computed, it is estimated with illumination differences taken into account;
in the illumination normalization stage, the face image is split into low-frequency and high-frequency parts, which are then processed with different algorithms.
2. The illumination normalization method for face recognition under extreme illumination of claim 1, characterized in that the illumination normalization model is built as follows: using the primate retina model, whose functional layers comprise the photoreceptor layer, the outer plexiform layer (OPL) and the inner plexiform layer (IPL), a retina-based illumination normalization model is established, consisting of two successive local light-adaptation compressions and a spatial band-pass filtering.
3. The illumination normalization method for face recognition under extreme illumination of claim 1, characterized in that the Naka-Rushton equation computes each pixel by Gaussian filtering over its neighbourhood, defined as:

X_0(p) = ( X(p) * G_H + X̄ ) / 2   (1)

where p is a pixel of the image, X_0(p) is the adaptation factor at pixel p, X(p) is the intensity of the input image, * denotes convolution, G_H is a two-dimensional Gaussian filter, and X̄ is the mean pixel intensity of image X.
4. The illumination normalization method for face recognition under extreme illumination of claim 1, characterized in that the adaptation factor is estimated as follows: let I denote the original image, A the low-frequency sub-band of I, and DH, DV and DD the high-frequency sub-bands of I;
Otsu-based illumination classification is performed on the low-frequency sub-band A. Let t be the threshold obtained by Otsu's method; the illumination conditions of A can then be roughly divided into two classes:

M_1 = { A(x, y) | A(x, y) ≥ t }   (2)
M_2 = { A(x, y) | A(x, y) < t }   (3)

where A(x, y) denotes the intensity of A at point (x, y);
The adaptation factor A_0^ρ of A is defined as:

A_0^ρ(x, y) = ( A(x, y) * G_ρ + mean_A(x, y) ) / 2   (4)

mean_A(x, y) = M̄_{A1} if A(x, y) ∈ M_1, and M̄_{A2} if A(x, y) ∈ M_2   (5)

where G_ρ is a Gaussian filter with standard deviation ρ, and M̄_{A1} and M̄_{A2} are the mean values of M_1 and M_2, respectively.
5. The illumination normalization method for face recognition under extreme illumination of any one of claims 1-4, characterized in that a visual model is established for the illumination normalization of face recognition:
a single-level discrete two-dimensional wavelet transform extracts the low-frequency and high-frequency information of the face image;
the low-frequency information is processed by imitating the retinal information-processing mechanism, and large high-frequency coefficients are truncated by a threshold;
the inverse discrete two-dimensional wavelet transform of the processed low-frequency and high-frequency information yields the illumination normalization result.
6. The illumination normalization method for face recognition under extreme illumination of claim 5, characterized in that the low-frequency information is processed as follows:
Step 1, photoreceptor layer: the low-frequency sub-band A is processed as

A_p(x, y) = ( max_A + A_0^{ρ1}(x, y) ) · A(x, y) / ( A(x, y) + A_0^{ρ1}(x, y) )   (6)

where max_A is the maximum value of A, A_0^{ρ1}(x, y) is the adaptation factor of A at point (x, y), ρ1 is the standard deviation of the Gaussian filter, and A_p(x, y) is the corresponding output;
Step 2, outer plexiform layer: A_p is processed as

A_o(x, y) = ( max_{Ap} + A_{0,p}^{ρ2}(x, y) ) · A_p(x, y) / ( A_p(x, y) + A_{0,p}^{ρ2}(x, y) )   (7)

where max_{Ap} is the maximum value of A_p, A_{0,p}^{ρ2}(x, y) is the adaptation factor of A_p at point (x, y), ρ2 is the standard deviation of the Gaussian filter, and A_o(x, y) is the corresponding output;
Step 3, edge detection: spatial band-pass filtering simulates the information processing of the inner plexiform layer, defined as

A_e(x, y) = ( exp(−(x² + y²)/(2ρ_L²)) / (2πρ_L²) − exp(−(x² + y²)/(2ρ_H²)) / (2πρ_H²) ) * A_o   (8)

where ρ_L and ρ_H are the standard deviations of the two Gaussian functions, and A_e represents the edges of the image.
7. The illumination normalization method for face recognition under extreme illumination of claim 5, characterized in that the high-frequency information is processed as follows:
Let H denote DH, DV or DD; the truncation of a high-frequency sub-band is

H′(x, y) = t if H(x, y) > t; H(x, y) if |H(x, y)| ≤ t; −t if H(x, y) < −t   (9)

t = median(|H(x, y)|)   (10)

where median(·) returns the median of the matrix |H|; after truncation the high-frequency sub-bands become DH′, DV′ and DD′.
CN201410136046.6A 2014-04-04 2014-04-04 Illumination normalization method for extreme illumination face recognition Pending CN103870820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410136046.6A CN103870820A (en) 2014-04-04 2014-04-04 Illumination normalization method for extreme illumination face recognition


Publications (1)

Publication Number Publication Date
CN103870820A true CN103870820A (en) 2014-06-18

Family

ID=50909336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410136046.6A Pending CN103870820A (en) 2014-04-04 2014-04-04 Illumination normalization method for extreme illumination face recognition

Country Status (1)

Country Link
CN (1) CN103870820A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011055164A1 (en) * 2009-11-06 2011-05-12 Vesalis Method for illumination normalization on a digital image for performing face recognition
CN103400114A (en) * 2013-07-18 2013-11-20 上海交通大学 Illumination normalization processing system aiming at face recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHOU WEI et al.: "Face recognition based on illumination-normalized block-wise complete LBP features", Computer Engineering and Applications *
CHENG YONG: "Research on illumination invariant extraction algorithms in face recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105118032A (en) * 2015-08-19 2015-12-02 湖南优象科技有限公司 Wide dynamic processing method based on visual system
CN105118032B (en) * 2015-08-19 2018-11-06 湖南优象科技有限公司 A kind of wide method for dynamically processing of view-based access control model system
CN105404855A (en) * 2015-10-29 2016-03-16 深圳怡化电脑股份有限公司 Image processing methods and devices
CN108780508A (en) * 2016-03-11 2018-11-09 高通股份有限公司 System and method for normalized image
CN108780508B (en) * 2016-03-11 2023-04-04 高通股份有限公司 System and method for normalizing images
CN106803240A (en) * 2016-12-30 2017-06-06 大连海事大学 A kind of industrial picture light processing method
CN106803240B (en) * 2016-12-30 2020-07-14 大连海事大学 Industrial image light equalizing processing method
CN109190487A (en) * 2018-08-07 2019-01-11 平安科技(深圳)有限公司 Face Emotion identification method, apparatus, computer equipment and storage medium
CN110046559A (en) * 2019-03-28 2019-07-23 广东工业大学 A kind of face identification method
CN112241745A (en) * 2020-10-29 2021-01-19 东北大学 Characteristic point extraction method based on illumination invariant color space

Similar Documents

Publication Publication Date Title
CN103870820A (en) Illumination normalization method for extreme illumination face recognition
Fan et al. Homomorphic filtering based illumination normalization method for face recognition
Hu et al. Singular value decomposition and local near neighbors for face recognition under varying illumination
CN104834922B (en) Gesture identification method based on hybrid neural networks
Xie et al. Extraction of illumination invariant facial features from a single image using nonsubsampled contourlet transform
Vu et al. Illumination-robust face recognition using retina modeling
Cheng et al. Robust face recognition based on illumination invariant in nonsubsampled contourlet transform domain
CN107392866A (en) A kind of facial image local grain Enhancement Method of illumination robust
CN107292250A (en) A kind of gait recognition method based on deep neural network
CN102750964B (en) Method and device used for controlling background music based on facial expression
CN104637037B (en) A kind of SAR image noise-reduction method based on non-local classification rarefaction representation
CN107239729B (en) Illumination face recognition method based on illumination estimation
CN101833658B (en) Illumination invariant extracting method for complex illumination face recognition
CN103679157A (en) Human face image illumination processing method based on retina model
CN114973307B (en) Finger vein recognition method and system for generating antagonism and cosine ternary loss function
Xie Illumination preprocessing for face images based on empirical mode decomposition
CN107516083A (en) A kind of remote facial image Enhancement Method towards identification
EP2497052A1 (en) Method for illumination normalization on a digital image for performing face recognition
CN106056076B (en) A kind of method of the illumination invariant of determining complex illumination facial image
CN103955893A (en) Image denoising method based on separable total variation model
CN104021387B (en) The facial image illumination processing method of view-based access control model modeling
Zheng et al. Illumination normalization via merging locally enhanced textures for robust face recognition
Xu et al. Raw vs. processed: How to use the raw and processed images for robust face recognition under varying illumination
Cheng et al. Illumination normalization for face recognition under extreme lighting conditions
CN102279925A (en) Chain processing face recognition method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140618