CN103530634A - Face characteristic extraction method - Google Patents

Face characteristic extraction method

Info

Publication number
CN103530634A
Authority
CN
China
Prior art keywords
log
face
image
extraction method
mode decomposition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310469608.4A
Other languages
Chinese (zh)
Inventor
谢晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201310469608.4A priority Critical patent/CN103530634A/en
Publication of CN103530634A publication Critical patent/CN103530634A/en
Pending legal-status Critical Current

Abstract

The invention discloses a face characteristic extraction method, which comprises the following steps: S11, inputting a face photo; S12, performing a log transformation on the face photo to obtain a log-domain image; S13, decomposing the log-domain image by empirical mode decomposition (EMD) to obtain an EMD image; S14, extracting the high-frequency component of the EMD image as an estimate of the reflectance component, thereby obtaining an illumination-invariant face representation and an estimated illumination component. According to the method, the log-space transformation makes the reflectance component and the illumination component of the log-domain image form a linear superposition in log space, and decomposing the log-domain image by EMD extracts the illumination-invariant face representation from it, so the illumination component can be effectively separated from the illumination-invariant component and a better effect is achieved.

Description

Face feature extraction method
Technical field
The present invention relates to the field of face recognition technology, and in particular to a face feature extraction method based on empirical mode decomposition.
Background art
Face recognition technology based on face photos is widely used in fields such as public safety, digital art and games. Illumination variation has always been a key factor affecting face recognition performance. To overcome the impact of illumination variation on face recognition technology, an illumination-invariant representation of the face must be obtained from the input face photo. Studies have shown that illumination variation mainly affects the low-frequency components of a photo and has little effect on the high-frequency components. Therefore, existing mainstream approaches perform a spectral analysis of the photo and extract its high-frequency components as the illumination-invariant representation. The main spectral analysis algorithms adopted in existing work include: the wavelet transform (e.g. T. Zhang, B. Fang, Y. Yuan et al., Multiscale facial structure representation for face recognition under varying illumination, Pattern Recognition, 42 (2009) 251-258, and C. Garcia, G. Zikos, and G. Tziritas, A wavelet-based framework for face recognition, in Proc. Workshop on Advances in Facial Image Analysis and Recognition Technology, ECCV, Freiburg, 1998, pp. 84-92), the Gabor transform (e.g. K. Okada, J. Steffens, T. Maurer et al., The Bochum/USC face recognition system, Face Recognition: From Theory to Applications, Springer, Berlin, 1998, pp. 186-205), weighted Gaussian filtering (e.g. H. Wang, S. Z. Li and Y. Wang, Face Recognition under Varying Lighting Conditions using Self Quotient Image, in Proc. Conf. Automatic Face and Gesture Recognition, Seoul, 2004, pp. 819-824), the discrete cosine transform (e.g. Z. Hafed and M. Levine, Face recognition using the discrete cosine transform, International Journal of Computer Vision, 43 (2001) 167-188), total variation models (e.g. T. Chen, W. Yin, X. Zhou, D. Comaniciu, T. S. Huang, Total variation models for variable lighting face recognition, IEEE Trans. Pattern Anal. Mach. Intell., 28 (2006) 1519-1524), the contourlet transform (e.g. X. Xie, J. Lai, W.-S. Zheng, Extraction of Illumination Invariant Facial Features from a Single Image Using Nonsubsampled Contourlet Transform, Pattern Recognition, 43 (2010) 4177-4189) and the Fourier transform (e.g. J. Lai, P. C. Yuen, and G. Feng, Face recognition using holistic Fourier invariant features, Pattern Recognition, 34 (2001) 95-109).
The main idea of the above spectral analysis algorithms is to decompose the image signal into a linear combination of a set of basis signals (called "bases"). However, the bases used by these methods are designed manually in advance and are independent of the signal to be decomposed.
To address this problem, some studies have proposed decomposing the face image with empirical mode decomposition (e.g. D. Zhang and Y. Y. Tang, Extraction of Illumination-Invariant Features in Face Recognition by Empirical Mode Decomposition, International Conference on Biometrics (ICB), 2009; D. Zhang, J. Pan, Y. Y. Tang, C. Wang, Illumination invariant face recognition based on the new phase features, IEEE International Conference on Systems Man and Cybernetics, 2010, pp. 3909-3914; and Wang Ke, Dang Deyu, Sun Bin, A face recognition method based on bidimensional empirical mode decomposition, Software Guide, 2010, 09(2)). Empirical mode decomposition, abbreviated EMD, was proposed by N. E. Huang et al. in 1998 (N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N. C. Yen, C. C. Tung, and H. H. Liu, The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis, in Proceedings of the Royal Society London A, 1998, pp. 903-1005). EMD is an adaptive signal decomposition algorithm: it computes the bases adaptively from the signal to be decomposed and then performs the decomposition, so it has considerable application potential in signal analysis and signal processing. However, existing methods that use empirical mode decomposition all apply EMD directly to the original face photo, decompose the original photo into a sum of several sub-signals, and then select some of these sub-signals as face features for recognition. Because the illumination imaging model is not a simple superposition of signals, the effect of these existing EMD-based methods is rather limited.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a face feature extraction method that solves the technical problems described above. The face feature extraction method comprises the following steps:
S11, inputting a face photo;
S12, performing a log transformation on the face photo to obtain a log-domain image;
S13, performing empirical mode decomposition on the log-domain image to obtain an empirical mode decomposition image;
S14, extracting the high-frequency component of the empirical mode decomposition image as an estimate of the reflectance component, thereby obtaining an illumination-invariant face representation and an estimated illumination component.
In a preferred embodiment of the present invention, in step S11, the expression of the face photo is obtained according to the Lambertian reflection model:
I(x,y)=R(x,y)L(x,y),
where R is the reflectance component, which is an illumination-invariant representation of the face, and L is the illumination component.
In a preferred embodiment of the present invention, in step S12, the expression of the face photo is log-transformed to obtain the expression of the log-domain image: f = v + u, where v and u are the values of R and L in the log domain, respectively.
In a preferred embodiment of the present invention, in step S13, empirical mode decomposition is applied to the expression of the log-domain image, yielding the expression of the empirical mode decomposition image:
$f = \sum_{k=1}^{K} d_k + r,$
where d_k are the sub-signals obtained by the decomposition; as k increases, d_k varies gradually from high frequency to low frequency, and r is the decomposition residual.
In a preferred embodiment of the present invention, in step S14, the high-frequency component is extracted from the expression of the empirical mode decomposition image, yielding:
the illumination-invariant face representation: $\hat{R} = \exp(\hat{v})$;
the estimated illumination component: $\hat{L} = \exp(\hat{u})$;
where $\hat{v} \approx \sum_{k=1}^{K_0} d_k$, $\hat{u} = f - \hat{v}$, and $K_0 = 1$ or $K_0 = 2$.
Compared with the prior art, the face feature extraction method provided by the invention first applies a log-space transformation to the face photo to obtain a log-domain image, so that the intrinsic components of the log-domain image (the reflectance component and the illumination component) form a linear superposition in log space; it then decomposes the log-domain image with EMD and extracts the illumination-invariant face representation from it. Because EMD decomposes the input signal (i.e. the input face photo) into exactly a linear superposition of several sub-signals (i.e. the reflectance component and the illumination component), the method can effectively separate the illumination component from the illumination-invariant component and achieves a better effect.
The above description is only an overview of the technical solution of the present invention. In order to understand the technical means of the present invention more clearly, so that it can be implemented according to the content of the specification, and to make the above and other objects, features and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a flowchart of the face feature extraction method provided by a preferred embodiment of the present invention;
Fig. 2 shows the receiver operating characteristic (ROC) curves obtained when face recognition is performed with different methods;
Fig. 3 is a schematic diagram of the effect of face feature extraction using the face feature extraction method shown in Fig. 1.
Embodiment
The present invention is further described in detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1, a preferred embodiment of the present invention provides a face feature extraction method comprising the following steps:
S11, inputting a face photo I.
In the present embodiment, the expression of the face photo is obtained according to the Lambertian reflection model:
I(x,y)=R(x,y)L(x,y) (1)
where R is the reflectance component and L is the illumination component. In general, R is an illumination-invariant representation of the face. In the present embodiment, estimates of R and L are to be obtained.
S12, applying a log transformation to the face photo I to obtain a log-domain image.
Applying a log transformation to expression (1) converts the face photo I into log space (the log domain) and yields the expression of the log-domain image:
$f = \log I = \log R + \log L \triangleq v + u,$ (2)
where v and u are the values of R and L in the log domain, respectively, i.e. v = log R and u = log L.
It can be understood that in the log-domain image f = v + u the intrinsic components v and u form a linear superposition; that is, the reflectance component and the illumination component form a linear superposition.
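As a small illustration (not part of the patent text) of why the log transformation is used, the following numpy sketch builds a synthetic reflectance R and a synthetic smooth illumination L, forms the Lambertian image I = R·L, and checks that the log-domain image is their linear superposition; all arrays and parameter values here are assumptions made purely for demonstration.

```python
# Illustrative only: synthetic reflectance R and smooth illumination field L.
import numpy as np

rng = np.random.default_rng(0)
R = rng.uniform(0.2, 1.0, size=(64, 64))         # synthetic reflectance component
L = np.tile(np.linspace(0.1, 1.0, 64), (64, 1))  # synthetic smooth illumination component
I = R * L                                        # Lambertian model: I = R * L
eps = 1e-6                                       # guards against log(0) on dark pixels
f = np.log(I + eps)                              # log-domain image
v, u = np.log(R + eps), np.log(L + eps)
print(np.allclose(f, v + u, atol=1e-3))          # True: f is (approximately) v + u
```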
S13, applying empirical mode decomposition (EMD) to the log-domain image f to obtain an empirical mode decomposition image.
In the present embodiment, EMD is applied to expression (2), and the expression of the EMD image is obtained as:
$f = \sum_{k=1}^{K} d_k + r$ (3)
where d_k are the sub-signals obtained by the decomposition, also called intrinsic mode functions; as k increases, d_k varies gradually from high frequency to low frequency; r is the decomposition residual.
In the present embodiment, the specific EMD algorithm can be found in the reference N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N. C. Yen, C. C. Tung, and H. H. Liu, The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis, in Proceedings of the Royal Society London A, 1998, pp. 903-1005, and is not repeated here.
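As a concrete illustration of step S13, the following Python sketch implements a heavily simplified 1-D sifting procedure and applies it row by row to the log-domain image. This is an assumption made for demonstration only: the patent relies on the general EMD of Huang et al. cited above (and the background also mentions bidimensional EMD variants), so a faithful implementation would follow that reference or a full 2-D EMD rather than this row-wise sketch.

```python
# Simplified, self-contained EMD sketch: sifting with cubic-spline envelopes,
# applied per row of the log-domain image f. Not the patent's exact algorithm.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x):
    """One sifting pass: subtract the mean of the upper and lower envelopes."""
    n = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return None  # too few extrema left: treat x as (part of) the residual r
    idx_up = np.r_[0, maxima, len(x) - 1]   # anchor the envelopes at the end points
    idx_lo = np.r_[0, minima, len(x) - 1]
    upper = CubicSpline(idx_up, x[idx_up])(n)
    lower = CubicSpline(idx_lo, x[idx_lo])(n)
    return x - (upper + lower) / 2.0

def emd_1d(x, max_imfs=4, n_sift=8):
    """Peel off up to max_imfs intrinsic mode functions d_k; return (IMFs, residual r)."""
    imfs, residual = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        h = sift_once(residual)
        if h is None:                    # residual has too few extrema: stop
            break
        for _ in range(n_sift - 1):
            h_next = sift_once(h)
            if h_next is None:
                break
            h = h_next
        imfs.append(h)
        residual = residual - h
    return imfs, residual

def emd_rows(f, max_imfs=4):
    """Apply the 1-D EMD above to every row of the log-domain image f.

    Returns a (max_imfs, H, W) stack of IMF images d_k, ordered from high to
    low frequency, plus an (H, W) residual r, so f == imf_stack.sum(0) + residual.
    """
    H, W = f.shape
    imf_stack = np.zeros((max_imfs, H, W))
    residual = np.zeros((H, W))
    for i in range(H):
        imfs, r = emd_1d(f[i], max_imfs=max_imfs)
        for k, d in enumerate(imfs):
            imf_stack[k, i] = d
        residual[i] = r   # rows that yield fewer IMFs leave the later layers at zero
    return imf_stack, residual
```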
S14, extracting the high-frequency component of the empirical mode decomposition image as an estimate of the reflectance component, thereby obtaining an illumination-invariant face representation and an estimated illumination component.
In the present embodiment, the high-frequency component is extracted from the expression of the empirical mode decomposition image, yielding:
the illumination-invariant face representation:
$\hat{R} = \exp(\hat{v})$ (4)
the estimated illumination component:
$\hat{L} = \exp(\hat{u})$ (5)
where $\hat{v} \approx \sum_{k=1}^{K_0} d_k$, $\hat{u} = f - \hat{v}$, and $K_0 = 1$ or $K_0 = 2$.
It can be understood that $\hat{R}$ is the obtained illumination-invariant face representation and $\hat{L}$ is the estimated illumination component.
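Putting steps S12-S14 together, a minimal sketch under the same assumptions is given below; it reuses the hypothetical emd_rows() helper from the previous sketch, and the choice K0 = 1 or 2 follows formulas (4) and (5).

```python
# Minimal end-to-end sketch of S12-S14; emd_rows() is the hypothetical helper above.
import numpy as np

def illumination_invariant(I, K0=2, eps=1e-6):
    """Return (R_hat, L_hat): illumination-invariant representation and illumination estimate."""
    f = np.log(I.astype(float) + eps)        # S12: log-domain image, f = v + u
    imf_stack, _ = emd_rows(f, max_imfs=4)   # S13: empirical mode decomposition of f
    v_hat = imf_stack[:K0].sum(axis=0)       # S14: first K0 high-frequency IMFs estimate v
    u_hat = f - v_hat                        # the remainder estimates the illumination term
    return np.exp(v_hat), np.exp(u_hat)      # formulas (4) and (5): R_hat and L_hat

# Example usage on a grayscale face photo loaded as a float array `photo`:
#   R_hat, L_hat = illumination_invariant(photo, K0=2)
```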
The face feature extraction method first applies a log-space transformation to the face photo to obtain a log-domain image, so that the intrinsic components of the log-domain image (the reflectance component and the illumination component) form a linear superposition in log space; it then decomposes the log-domain image with EMD and extracts the illumination-invariant face representation from it. Because EMD decomposes the input signal (i.e. the input face photo) into exactly a linear superposition of several sub-signals (i.e. the reflectance component and the illumination component), the method can effectively separate the illumination component from the illumination-invariant component and achieves a better effect.
To evaluate the face feature extraction method proposed by the present invention, the inventors carried out face recognition tests on the Extended Yale B face test database released by Yale University. Referring to Fig. 2, which shows the receiver operating characteristic (ROC) curves obtained when face recognition is performed with different methods, the method of the present invention obtains a better ROC curve (a higher verification rate at the same false acceptance rate): the face recognition performance obtained with the face feature extraction method provided by the invention is clearly better than that of the wavelet transform, Gabor transform, weighted Gaussian filtering, discrete cosine transform, total variation model, contourlet transform and original EMD methods mentioned in the background.
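For readers reproducing such a comparison, the sketch below shows one common way to compute a verification-rate versus false-acceptance-rate curve of the kind plotted in Fig. 2 from similarity scores; the genuine and impostor scores here are synthetic placeholders, not data from the patent's experiments.

```python
# Illustrative ROC computation; the scores below are synthetic, not experimental data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, 500)      # similarity scores of same-identity pairs
impostor = rng.normal(0.4, 0.1, 5000)    # similarity scores of different-identity pairs
scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones(genuine.size), np.zeros(impostor.size)])

far, vr, _ = roc_curve(labels, scores)   # false acceptance rate vs. verification rate
idx = min(np.searchsorted(far, 0.01), len(vr) - 1)
print(f"verification rate at FAR = 1%: {vr[idx]:.3f}")
```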
Referring to Fig. 3, which is a schematic diagram of the effect of face feature extraction using the face feature extraction method of the present invention: Fig. 3a is the input photo, Fig. 3b is the estimated illumination component $\hat{L}$, and Fig. 3c is the estimated illumination-invariant representation $\hat{R}$. It can be seen that applying the face feature extraction method to a face photo achieves a good feature extraction effect.
The above are only embodiments of the present invention and do not limit the present invention in any form. Although the present invention is disclosed above by way of embodiments, they are not intended to limit the present invention. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make minor changes or modifications resulting in equivalent embodiments; any simple modification, equivalent variation or refinement made to the above embodiments according to the technical essence of the present invention, provided it does not depart from the content of the technical solution of the present invention, still falls within the scope of the technical solution of the present invention.

Claims (5)

1. A face feature extraction method, characterized in that the face feature extraction method comprises the following steps:
S11, inputting a face photo;
S12, performing a log transformation on the face photo to obtain a log-domain image;
S13, performing empirical mode decomposition on the log-domain image to obtain an empirical mode decomposition image;
S14, extracting the high-frequency component of the empirical mode decomposition image as an estimate of the reflectance component, thereby obtaining an illumination-invariant face representation and an estimated illumination component.
2. The face feature extraction method according to claim 1, characterized in that, in step S11, the expression of the face photo is obtained according to the Lambertian reflection model:
I(x,y)=R(x,y)L(x,y),
where R is the reflectance component, which is an illumination-invariant representation of the face, and L is the illumination component.
3. The face feature extraction method according to claim 2, characterized in that, in step S12, the expression of the face photo is log-transformed to obtain the expression of the log-domain image: f = v + u, where v and u are the values of R and L in the log domain, respectively.
4. The face feature extraction method according to claim 3, characterized in that, in step S13, empirical mode decomposition is applied to the expression of the log-domain image to obtain the expression of the empirical mode decomposition image: $f = \sum_{k=1}^{K} d_k + r$, where d_k are the sub-signals obtained by the decomposition; as k increases, d_k varies gradually from high frequency to low frequency, and r is the decomposition residual.
5. The face feature extraction method according to claim 4, characterized in that, in step S14, the high-frequency component is extracted from the expression of the empirical mode decomposition image, yielding:
the illumination-invariant face representation: $\hat{R} = \exp(\hat{v})$;
the estimated illumination component: $\hat{L} = \exp(\hat{u})$;
where $\hat{v} \approx \sum_{k=1}^{K_0} d_k$, $\hat{u} = f - \hat{v}$, and $K_0 = 1$ or $K_0 = 2$.
CN201310469608.4A 2013-10-10 2013-10-10 Face characteristic extraction method Pending CN103530634A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310469608.4A CN103530634A (en) 2013-10-10 2013-10-10 Face characteristic extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310469608.4A CN103530634A (en) 2013-10-10 2013-10-10 Face characteristic extraction method

Publications (1)

Publication Number Publication Date
CN103530634A true CN103530634A (en) 2014-01-22

Family

ID=49932631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310469608.4A Pending CN103530634A (en) 2013-10-10 2013-10-10 Face characteristic extraction method

Country Status (1)

Country Link
CN (1) CN103530634A (en)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
N. E. Huang et al.: "The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Nonstationary Time Series Analysis", Proceedings of the Royal Society London A *
Cheng Yong: "Research on Illumination Invariant Extraction Algorithms in Face Recognition", China Doctoral Dissertations Full-text Database (Information Science and Technology) *
Jiang Yongxin et al.: "An Image Enhancement Algorithm Based on Illumination Compensation", Acta Electronica Sinica *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056076A (en) * 2016-05-30 2016-10-26 Nanjing Institute of Technology Method for determining the illumination invariant of a face image under complex illumination
CN106056076B (en) * 2016-05-30 2019-06-14 Nanjing Institute of Technology Method for determining the illumination invariant of a face image under complex illumination
CN106897672A (en) * 2017-01-19 2017-06-27 Chongqing Three Gorges University Face illumination-invariant feature extraction method using logarithmic transformation and Priwitt operators
CN106934340A (en) * 2017-01-19 2017-07-07 Chongqing Three Gorges University Face illumination-invariant feature extraction method using logarithmic transformation and Sobel operators
CN106934341A (en) * 2017-01-19 2017-07-07 Chongqing Three Gorges University Face illumination-invariant feature extraction method using logarithmic transformation and Kirsch operators
CN106934399A (en) * 2017-01-19 2017-07-07 Chongqing Three Gorges University Face illumination-invariant feature extraction method using logarithmic transformation and Laplacian operators
CN106971143A (en) * 2017-02-24 2017-07-21 Chongqing Three Gorges University Face illumination-invariant feature extraction method using logarithmic transformation and smoothing filtering
CN107437061A (en) * 2017-06-27 2017-12-05 Chongqing Three Gorges University Face illumination-invariant feature extraction method using logarithmic transformation and Roberts operators

Similar Documents

Publication Publication Date Title
CN103530634A (en) Face characteristic extraction method
CN108549078B (en) Cross-channel combination and detection method for radar pulse signals
Yuan et al. Switching median and morphological filter for impulse noise removal from digital images
CN107578053B (en) Contour extraction method and device, computer device and readable storage medium
Chen et al. Deep residual learning in modulation recognition of radar signals using higher-order spectral distribution
Hosseini et al. Image sharpness metric based on maxpol convolution kernels
Liu et al. Infrared and visible image fusion based on region of interest detection and nonsubsampled contourlet transform
CN102547477B (en) Video fingerprint method based on contourlet transformation model
Genin et al. Background first-and second-order modeling for point target detection
Nagarathinam et al. Moving shadow detection based on stationary wavelet transform and Zernike moments
Fu et al. A noise-resistant superpixel segmentation algorithm for hyperspectral images
Ruchay et al. Impulsive noise removal from color video with morphological filtering
EP3156943A1 (en) Method and device for clustering patches of a degraded version of an image
CN105608701A (en) Color image segmentation method based on quaternion circular harmonic moment
Hou et al. An image information extraction algorithm for salt and pepper noise on fractional differentials
CN102693530A (en) Synthetic aperture radar (SAR) image despeckle method based on target extraction and speckle reducing anisotropic diffusion (SRAD) algorithm
Elakkiya et al. Feature based object recognition using discrete wavelet transform
CN104299199A (en) Video raindrop detection and removal method based on wavelet transform
Ren et al. Target detection of maritime search and rescue: Saliency accumulation method
Wang et al. Image edge detection algorithm based onwavelet fractional differential theory
Kumar et al. Text detection and localization in low quality video images through image resolution enhancement technique
Kumar et al. Image defencing via signal demixing
CN111382632A (en) Target detection method, terminal device and computer-readable storage medium
Janardhana et al. Image noise removal framework based on morphological component analysis
CN101937511A (en) Rapid image matching method based on stochastic parallel optimization algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140122

RJ01 Rejection of invention patent application after publication