CN101916371B - Method for illuminating/normalizing image and method for identifying image by using same

Info

Publication number
CN101916371B
CN101916371B
Authority
CN
China
Prior art keywords
image
scale part
small scale
illumination
model
Prior art date
Legal status
Active
Application number
CN2010102713221A
Other languages
Chinese (zh)
Other versions
CN101916371A
Inventor
孙艳丰
刘嘉文
王立春
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN2010102713221A
Publication of CN101916371A
Application granted
Publication of CN101916371B
Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a method for illumination normalization of an image and an image recognition method using the same. The illumination normalization method comprises the steps of: dividing the input image into a shadow region and a normally illuminated region; according to the TVQI model or the LTV model, obtaining the small-scale part of the image when λ_L1 takes a value in the range 0 to 0.5, preferably λ_L1 = 0.2; obtaining the small-scale part of the image when λ_L1 takes a value in the range 0.6 to 1, preferably λ_L1 = 0.8; taking the small-scale part obtained with λ_L1 in 0-0.5 (preferably 0.2) as the small-scale part of the shadow region, taking the small-scale part obtained with λ_L1 in 0.6-1 (preferably 0.8) as the small-scale part of the normally illuminated region, and splicing them to obtain the small-scale part v of the whole image. The invention enhances the effectiveness of image recognition under complex illumination conditions without requiring prior knowledge of the light source.

Description

Method for performing illumination normalization on an image, and image recognition method using the same
Technical field
The present invention relates to a method for performing illumination normalization on an image, and also to an image recognition method that uses it, belonging to the field of pattern recognition.
Background technology
In recent years, face recognition research has received wide attention. Illumination, pose, and expression have always been the three key factors affecting face recognition accuracy. Among them the illumination factor, in particular variation of natural ambient light, cannot be controlled artificially, so illumination processing is a step that every face recognition system must perform. Most face recognition systems place certain restrictions on the illumination conditions: they assume that the images to be processed are acquired under essentially uniform illumination and only allow the illumination to vary within a small range. Actual illumination conditions, however, are often non-uniform, and over-bright or over-dark regions as well as shadows caused by oblique lighting, side lighting, or strong highlights can all degrade performance significantly. How to reduce the influence of illumination on face recognition has therefore attracted many researchers.
The TVQI (Total Variation based Quotient Image) model is a face illumination normalization model that combines the TV-L1 scale decomposition model with the self-quotient image (SQI) model. The self-quotient image model, proposed by Wang et al., is an illumination normalization model that uses the smoothed version of the image itself to balance external illumination effects, where the smoothed image is the result of weighted Gaussian filtering. Because a Gaussian filter cannot preserve the edge details in the high-frequency part of the image, the self-quotient image model loses part of the image feature information. To address this problem, Chen et al. proposed the TVQI model, which uses the large-scale image features obtained by TV-L1 decomposition to balance the illumination effects in a face image. TV-L1 is an anisotropic diffusion PDE with a scale-selectivity property that can preserve the edge features of the image. This method not only retains the simplicity and effectiveness of the self-quotient image, but also preserves more facial detail information and thus improves recognition.
A brief introduction to the TV-L1 and TV-L2 models is given below.
The TV-L1 model:
The TV-L1 model is an image processing method, developed from PDE methods, for smoothing and denoising an image. Its basic idea is to define a functional

J_λ[u] = ∫ |∇u| dx + λ ∫ |I − u| dx

and to find the u that minimizes J_λ[u]. Here u is the output of the TV-L1 model, i.e. the smoothed and denoised image; I is the input image, i.e. the original image; λ is a parameter that adjusts the degree of denoising; and ∇ is the gradient operator.

The first term ∫ |∇u| dx of J_λ[u] smooths the image, and the second term λ ∫ |I − u| dx keeps u close to the main features of I. The similarity between u and I can be adjusted through λ.
The self-quotient image (SQI) model is an illumination normalization model. Its basic method is to use a low-pass filter to obtain the low-frequency component of a face image and then perform a pointwise quotient operation with the original image to obtain the self-quotient image, formulated as:

Q(x, y) = I(x, y) / S(x, y) = I(x, y) / (F * I(x, y))

where Q(x, y) is each pixel value of the quotient image, I(x, y) is each pixel value of the original image, S(x, y) is each pixel value of the low-frequency component of the image, and F is a low-pass filter. The self-quotient image Q has a certain illumination invariance and is the result of the self-quotient image model.
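As a rough illustration of the self-quotient idea (not taken from the patent), the sketch below smooths the input with a Gaussian low-pass filter standing in for F and divides pointwise; the sigma value and the small constant guarding against division by zero are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient_image(I, sigma=3.0, eps=1e-6):
    """Self-quotient image Q = I / (F * I), with a Gaussian low-pass filter as F."""
    I = I.astype(np.float64)
    S = gaussian_filter(I, sigma=sigma)  # low-frequency component S = F * I
    return I / (S + eps)                 # pointwise quotient; eps avoids division by zero
```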
The TV-L2 model:
The TV-L2 model follows the same basic idea as the TV-L1 model; only the constraint term differs. The TV-L2 model uses the L2 norm as the constraint, whereas the TV-L1 model uses the L1 norm, so the second term of the functional J_λ[u] differs from that of TV-L1. The functional of the TV-L2 model is expressed as

J_λ[u] = ∫ |∇u| dx + λ ∫ (I − u)² dx

where u is the output of the TV-L2 model, i.e. the smoothed and denoised image, I is the input image, i.e. the original image, λ is a parameter that adjusts the degree of denoising, and ∇ is the gradient operator.
Summary of the invention
The technical problem solved by the present invention is to overcome the influence of illumination in the input image. The invention provides a method for performing illumination normalization on an image, and also an image recognition method that uses it; the method can enhance the effectiveness of image recognition under complex illumination conditions.
The technical solution of the present invention is as follows:
The method for performing illumination normalization on an image provided by the invention comprises the following steps:
dividing the input image into a shadow region and a normally illuminated region;
according to a TVQI model or an LTV model, obtaining the small-scale part of the image when λ_L1 takes any value in the range 0-0.5; preferably, λ_L1 = 0.2;
according to the TVQI model or the LTV model, obtaining the small-scale part of the image when λ_L1 takes any value in the range 0.6-1; preferably, λ_L1 = 0.8;
taking the small-scale part obtained when λ_L1 is in the range 0-0.5 (preferably 0.2) as the small-scale part of the shadow region of the image, taking the small-scale part obtained when λ_L1 is in the range 0.6-1 (preferably 0.8) as the small-scale part of the normally illuminated region of the image, and splicing them to obtain the small-scale part v of the whole image.
The method for performing illumination normalization on an image provided by the invention may further comprise the following steps:
substituting the input image into the TV-L2 model to obtain the large-scale part u_L2 of the image;
performing histogram equalization, preferably block histogram equalization, on the obtained large-scale part û_L2;
fusing the small-scale part v of the whole image with the equalized large-scale part û*_L2.
The method for performing illumination normalization on an image provided by the invention may further comprise the following steps:
performing homomorphic filtering on the equalized large-scale part û*_L2;
fusing the small-scale part v of the whole image with the homomorphically filtered large-scale part û**_L2.
The method for performing illumination normalization on an image provided by the invention may further comprise the following step: normalizing the input image to a 100 × 100 grayscale image I.
When the input image is a grayscale image, its size is adjusted to 100 × 100; when the input image is a color image, it is converted to a grayscale image by the following formula:

gray = (r + g + b) / 3

where r, g, and b denote the values of the red, green, and blue components of the color image, and gray denotes the gray value of the grayscale image.
The method for dividing the input image into a shadow region and a normally illuminated region is:
a) compute the average gray value aver of the input image:

aver = (Σ_{i=1}^{n} Σ_{j=1}^{m} I(i, j)) / (m * n)

where m and n denote the length and width of the image, respectively;
b) compute the flag value flag(x, y) for each pixel:

flag(x, y) = 1 if I(x, y) > aver, 0 if I(x, y) ≤ aver

flag(x, y) = 1 indicates that the pixel belongs to the normally illuminated region, and flag(x, y) = 0 indicates that it belongs to the shadow region.
In the image recognition method provided by the invention, after the image is illumination-normalized, a subspace analysis method is used to project the resulting high-dimensional data into a low-dimensional space for feature extraction, yielding a lower-dimensional vector, which is then classified with a classifier to obtain the recognition result. The preferred classifier is the nearest-neighbor method.
Compared with the prior art, the present invention has the following advantages:
The method for performing illumination normalization on an image provided by the invention requires neither prior knowledge of the light source nor a training set; it can overcome the influence of illumination in the input image and greatly enhance the effectiveness of image recognition under complex illumination conditions.
The present invention is especially suitable for face image recognition. When used for face recognition it can balance the illumination effects in a face image and effectively improve the robustness of face recognition to illumination. The method is very simple and can be applied to any single face image.
Description of drawings
Specific embodiments of the invention are described below with reference to the accompanying drawing.
Fig. 1 is a flowchart of the method for performing illumination normalization on a face image according to the present invention.
Embodiment
In the embodiments, the invention is described in detail taking a face image as the input image. Of course, the invention is also applicable to other images.
Embodiment one:
The illumination normalization of this embodiment is an improvement based on the TVQI model.
As shown in Fig. 1, the method for performing illumination normalization on an image comprises the following steps:
(1) Normalize the input image to a 100 × 100 grayscale image I
If the input image is a grayscale image, its size is adjusted to 100 × 100;
If the input image is a color image, it is converted to a grayscale image by the following formula:

gray = (r + g + b) / 3

where r, g, and b denote the values of the red, green, and blue components of the color image, and gray denotes the gray value of the grayscale image.
(2) Divide the input image into a shadow region and a normally illuminated region
The input image I normalized in step (1) is divided into a shadow region and a normally illuminated region as follows:
a) Compute the average gray value aver of the input image:

aver = (Σ_{i=1}^{n} Σ_{j=1}^{m} I(i, j)) / (m * n)

where m and n denote the length and width of the image, respectively;
b) Compute the flag value flag(x, y) for each pixel:

flag(x, y) = 1 if I(x, y) > aver, 0 if I(x, y) ≤ aver

flag(x, y) = 1 indicates that the pixel belongs to the normally illuminated region, and flag(x, y) = 0 indicates that it belongs to the shadow region.
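A minimal sketch of this shadow/normal-region split, assuming the normalized image is a 2-D NumPy grayscale array:

```python
import numpy as np

def split_regions(I):
    """Return flag(x, y): 1 for the normally illuminated region, 0 for the shadow region."""
    aver = I.mean()                     # average gray value of the whole image
    return (I > aver).astype(np.uint8)  # flag = 1 where I(x, y) > aver, else 0
```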
(3) Obtain the small-scale part of the face image for λ_L1 = 0.2
The normalized input image from step (1) is processed with the TVQI model to obtain the small-scale part v_{λ=0.2} of the face image for λ_L1 = 0.2.
The TVQI model is from "Total variation models for variable lighting face recognition", Chen, Terrence; Yin, Wotao; Zhou, Xiang Sean; Comaniciu, Dorin; Huang, Thomas S., IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 9, pp. 1519-1524, September 2006.
The specific steps are:
a) According to the TVQI model, use the TV-L1 model with λ_L1 = 0.2 to obtain the smoothed and denoised face image u_{λ=0.2}, by the following formula:

u_{λ=0.2} = arg min_u ∫_Ω |∇u| dx + λ_L1 ∫ √((I − u)² + ε) dx

where u is the output of the TV-L1 model, i.e. the smoothed and denoised image; I is the input image, i.e. the original image; λ_L1 is the parameter that adjusts the degree of smoothing and denoising; ε is a small perturbation term; and ∇ is the gradient operator.
The steady-state solution û_{λ=0.2} of u_{λ=0.2} is obtained by numerically solving the Euler-Lagrange equation.
b) Using û_{λ=0.2} according to the self-quotient image model, take the obtained û_{λ=0.2} as the low-frequency component of the image and perform a pointwise quotient operation with the original image I to obtain the small-scale part v_{λ=0.2}, which contains the facial detail information, for λ_L1 = 0.2:

v_{λ=0.2}(x, y) = I(x, y) / û_{λ=0.2}(x, y)

where I(x, y) is each pixel value of the original image, v_{λ=0.2}(x, y) is each pixel value of the TVQI model result, and û_{λ=0.2}(x, y) is each pixel value of the obtained steady-state solution.
(4) Obtain the small-scale part of the face image for λ_L1 = 0.8
The normalized input image from step (1) is processed with the TVQI model to obtain the small-scale part v_{λ=0.8} of the face image for λ_L1 = 0.8. The specific steps are:
a) According to the TVQI model, use the TV-L1 model with λ_L1 = 0.8 to obtain the smoothed and denoised face image u_{λ=0.8}, by the following formula:

u_{λ=0.8} = arg min_u ∫_Ω |∇u| dx + λ_L1 ∫ √((I − u)² + ε) dx

where u is the output of the TV-L1 model, i.e. the smoothed and denoised image; I is the input image, i.e. the original image; λ_L1 is the parameter that adjusts the degree of smoothing and denoising; ε is a small perturbation term; and ∇ is the gradient operator.
The steady-state solution û_{λ=0.8} of u_{λ=0.8} is obtained by numerically solving the Euler-Lagrange equation.
b) Using û_{λ=0.8} according to the self-quotient image model, take the obtained û_{λ=0.8} as the low-frequency component of the image and perform a pointwise quotient operation with the original image I to obtain the small-scale part v_{λ=0.8}, which contains the facial detail information, for λ_L1 = 0.8:

v_{λ=0.8}(x, y) = I(x, y) / û_{λ=0.8}(x, y)

where I(x, y) is each pixel value of the original image, v_{λ=0.8}(x, y) is each pixel value of the TVQI model result, and û_{λ=0.8}(x, y) is each pixel value of the obtained steady-state solution.
(5) Obtain the small-scale part of the whole image by multi-scale splicing
The small-scale part v_{λ=0.2} obtained in step (3) with λ_L1 = 0.2 is used as the small-scale part of the shadow region of the face image, and the small-scale part v_{λ=0.8} obtained in step (4) with λ_L1 = 0.8 is used as the small-scale part of the normally illuminated region of the face image:

v(x, y) = v_{λ=0.8}(x, y) if flag(x, y) == 1; v(x, y) = v_{λ=0.2}(x, y) if flag(x, y) == 0

yielding the small-scale part v of the whole image.
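The splicing of step (5) amounts to a masked selection; the sketch below assumes v02 and v08 are the small-scale parts obtained with λ_L1 = 0.2 and λ_L1 = 0.8, and flag is the map from step (2):

```python
import numpy as np

def splice_small_scale(v02, v08, flag):
    """Use the λ=0.8 result at normally illuminated pixels, the λ=0.2 result at shadow pixels."""
    return np.where(flag == 1, v08, v02)
```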
(6) Obtain the large-scale part u_L2 of the face image using the TV-L2 model
The normalized input image obtained in step (1) is substituted into the TV-L2 model for smoothing and denoising:

u_L2 = arg min_u ∫ |∇u| dx + λ_L2 ∫ ((I − u)² + ε) dx

where u is the smoothed and denoised image, I is the original image, λ_L2 is a parameter, and ε is a small perturbation term added to (I − u)².
The steady-state solution û_L2 of u_L2 is obtained by numerically solving the Euler-Lagrange equation.
(7) Perform block histogram equalization on the large-scale image
The basic idea of histogram equalization is to transform the histogram of the original image into a uniform distribution, which increases the dynamic range of the pixel gray values and thereby enhances the overall contrast of the image.
Block histogram equalization is performed on the large-scale part û_L2 of the face image obtained in step (6) (non-blocked histogram equalization gives poorer results; block histogram equalization works better), enhancing local contrast and balancing the illumination. The specific steps are:
a. Divide the image into blocks:

û_L2^1(x, y) = û_L2(x, y),  1 ≤ x ≤ 50, 1 ≤ y ≤ 50
û_L2^2(x − 50, y) = û_L2(x, y),  51 ≤ x ≤ 100, 1 ≤ y ≤ 50
û_L2^3(x, y − 50) = û_L2(x, y),  1 ≤ x ≤ 50, 51 ≤ y ≤ 100
û_L2^4(x − 50, y − 50) = û_L2(x, y),  51 ≤ x ≤ 100, 51 ≤ y ≤ 100
b. Perform histogram equalization on each block
Histogram equalization is carried out with the cumulative-distribution-function method, as follows:
a) Compute the gray-level density distribution GP:

GP(i) = σ_i / (m * n),  i = 1 … 256

where σ_i is the number of pixels in the image whose gray value is i;
b) Compute the cumulative histogram distribution S:

S(i) = Σ_{k=1}^{i} GP(k),  i = 1 … 256

c) Round the cumulative distribution and establish the gray-level mapping R(·):

R(i) = floor(256 * S(i) + 0.5),  i = 1 … 256

d) According to the established mapping, perform gray-level mapping on the original image to obtain the histogram-equalized result of each block:

û_L2^{j*} = R(û_L2^j),  j = 1, 2, 3, 4
c. Splice the blocks back into a whole image to obtain the block-histogram-equalized result û*_L2 of the whole image:

û*_L2(x, y) = û_L2^{1*}(x, y),  1 ≤ x ≤ 50, 1 ≤ y ≤ 50
û*_L2(x, y) = û_L2^{2*}(x − 50, y),  51 ≤ x ≤ 100, 1 ≤ y ≤ 50
û*_L2(x, y) = û_L2^{3*}(x, y − 50),  1 ≤ x ≤ 50, 51 ≤ y ≤ 100
û*_L2(x, y) = û_L2^{4*}(x − 50, y − 50),  51 ≤ x ≤ 100, 51 ≤ y ≤ 100
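A sketch of the 2 × 2 block histogram equalization for a 100 × 100 image following steps a-d above, assuming 8-bit gray levels; the mapping here scales to 255 (rather than the 256 written above) so the result stays within the 8-bit range.

```python
import numpy as np

def hist_equalize(block, levels=256):
    """Histogram-equalize one block via its cumulative distribution function."""
    block = block.astype(np.uint8)
    hist = np.bincount(block.ravel(), minlength=levels)   # sigma_i: pixel count per gray level
    gp = hist / block.size                                 # gray-level density distribution GP
    s = np.cumsum(gp)                                      # cumulative histogram distribution S
    r = np.floor((levels - 1) * s + 0.5).astype(np.uint8)  # gray-level mapping R
    return r[block]                                        # apply the mapping pixel by pixel

def block_hist_equalize(u):
    """Split a 100x100 image into four 50x50 blocks, equalize each, and splice back."""
    out = np.empty_like(u, dtype=np.uint8)
    for i in (0, 50):
        for j in (0, 50):
            out[i:i + 50, j:j + 50] = hist_equalize(u[i:i + 50, j:j + 50])
    return out
```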
(8) Homomorphic filtering
Homomorphic filtering is performed on the result û*_L2 obtained in step (7). The specific steps are:
a) Take the logarithm of the image:

z(x, y) = ln û*_L2(x, y)

b) Apply the Fourier transform to the image, transforming it from the spatial domain to the frequency domain:

Z(u, v) = F[z(x, y)]

where F[·] denotes the Fourier transform.
c) Filter the image with a Gaussian high-pass filter H(u, v):

S(u, v) = H(u, v) Z(u, v)

d) Apply the inverse Fourier transform, transforming the image back from the frequency domain to the spatial domain:

g(x, y) = F⁻¹[S(u, v)]

where F⁻¹[·] denotes the inverse Fourier transform.
e) Apply the exponential operation to obtain the homomorphically filtered result û**_L2:

û**_L2(x, y) = e^{g(x, y)}
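A sketch of the log → FFT → Gaussian high-pass → inverse FFT → exp pipeline of step (8); the cutoff d0 and the low/high gain values are illustrative assumptions, since the patent only specifies a Gaussian high-pass filter H(u, v).

```python
import numpy as np

def homomorphic_filter(u, d0=30.0, gamma_l=0.5, gamma_h=1.5):
    """Homomorphic filtering with a Gaussian high-pass emphasis filter."""
    z = np.log(u.astype(np.float64) + 1.0)             # a) logarithm (+1 avoids log(0))
    Z = np.fft.fftshift(np.fft.fft2(z))                 # b) Fourier transform, zero frequency centred
    rows, cols = u.shape
    y, x = np.ogrid[:rows, :cols]
    d2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2      # squared distance from the spectrum centre
    H = gamma_l + (gamma_h - gamma_l) * (1.0 - np.exp(-d2 / (2.0 * d0 ** 2)))  # c) Gaussian high-pass
    g = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))  # d) back to the spatial domain
    return np.exp(g) - 1.0                              # e) exponential undoes the logarithm
```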
(9) Multi-scale fusion
The small-scale part v obtained in step (5) and the large-scale part û**_L2 obtained in step (8) are fused by weighted addition:

y(x, y) = α û**_L2(x, y) + β v(x, y)

where the two scales use different weighting coefficients, α = 0.6 and β = 0.4. y is the illumination-normalized face image.
It should be noted that steps (2), (3), (4), and (5) are the necessary steps for performing illumination normalization on a face image, while steps (1), (6), (7), (8), and (9) are further steps included to improve the illumination normalization result.
According to the image recognition method of the present invention, after the image is illumination-normalized, a subspace analysis method is used to project the resulting high-dimensional data into a low-dimensional space for feature extraction, yielding a lower-dimensional vector, which is then classified with a classifier (preferably the nearest-neighbor method) to obtain the recognition result.
Because its dimensionality is high, the illumination-normalized image cannot be applied to face recognition directly, so a subspace analysis method is needed for feature extraction, projecting the high-dimensional data into a lower-dimensional space. The following methods can be used to reduce the high-dimensional data to low-dimensional data:
Principal Component Analysis (PCA), which extracts principal component features;
Independent Component Analysis (ICA), which extracts independent component features;
Linear Discriminant Analysis (LDA), which extracts the optimal Fisher discriminant features.
The v obtained in step (5) or the y obtained in step (9) is projected into a low-dimensional space by the subspace analysis method to obtain a lower-dimensional vector, which is then classified with the nearest-neighbor method to obtain the recognition result.
The nearest-neighbor method is the simplest classifier; besides it, more sophisticated classifiers can be combined with the subspace analysis method, for example Bayesian classifiers, cascade classifiers, and BP neural network classifiers.
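For the recognition stage, a sketch using scikit-learn: PCA plays the role of the subspace analysis step and a 1-nearest-neighbour classifier assigns identities. The number of components and the flattened 100 × 100 input layout are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_recognizer(train_images, train_labels, n_components=50):
    """train_images: (n_samples, 100*100) array of illumination-normalized face images."""
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(train_images)  # project into a low-dimensional subspace
    clf = KNeighborsClassifier(n_neighbors=1)   # nearest-neighbour classifier
    clf.fit(features, train_labels)
    return pca, clf

def recognize(pca, clf, image):
    """Classify one flattened, illumination-normalized face image."""
    return clf.predict(pca.transform(image.reshape(1, -1)))[0]
```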
Embodiment two:
The illumination normalization of this embodiment is an improvement based on the LTV model.
The LTV model is from "Total variation models for variable lighting face recognition", Chen, Terrence; Yin, Wotao; Zhou, Xiang Sean; Comaniciu, Dorin; Huang, Thomas S., IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 9, pp. 1519-1524, September 2006.
The LTV model and the TVQI model come from the same article; the difference between them is that TVQI uses a quotient operation while LTV uses a logarithm operation. Using the LTV model instead of the TVQI model only changes steps (3) and (4).
Only the differences from Embodiment one are described below.
Step 3: The normalized input image from step (1) is processed with the LTV model to obtain the small-scale part v_{λ=0.2} of the face image for λ_L1 = 0.2. The specific steps are:
a. Take the logarithm of the input image:

f(x, y) = ln I(x, y)

b. Use the TV-L1 model with λ_L1 = 0.2 to obtain the smoothed and denoised u_{λ=0.2} from f, by the following formula:

u_{λ=0.2} = arg min_u ∫_Ω |∇u| dx + λ_L1 ∫ √((f − u)² + ε) dx

where u is the output of the TV-L1 model, I is the input image, λ_L1 is the parameter that adjusts the degree of smoothing and denoising, and ∇ is the gradient operator.
The steady-state solution û_{λ=0.2} of u_{λ=0.2} is obtained by numerically solving the Euler-Lagrange equation.
c. Obtain the essential part ρ of the image:

ρ(x, y) = f(x, y) − û_{λ=0.2}(x, y)

d. Apply the exponential operation to the essential part ρ to obtain the small-scale part v_{λ=0.2}, which contains the facial detail information, for λ_L1 = 0.2:

v_{λ=0.2}(x, y) = e^{ρ(x, y)}
Step 4: The normalized input image from step (1) is processed with the LTV model to obtain the small-scale part v_{λ=0.8} of the face image for λ_L1 = 0.8. The specific steps are:
a. Take the logarithm of the input image:

f(x, y) = ln I(x, y)

b. Use the TV-L1 model with λ_L1 = 0.8 to obtain the smoothed and denoised u_{λ=0.8} from f, by the following formula:

u_{λ=0.8} = arg min_u ∫_Ω |∇u| dx + λ_L1 ∫ √((f − u)² + ε) dx

where u is the output of the TV-L1 model, I is the input image, λ_L1 is the parameter that adjusts the degree of smoothing and denoising, and ∇ is the gradient operator.
The steady-state solution û_{λ=0.8} of u_{λ=0.8} is obtained by numerically solving the Euler-Lagrange equation.
c. Obtain the essential part ρ of the image:

ρ(x, y) = f(x, y) − û_{λ=0.8}(x, y)

d. Apply the exponential operation to the essential part ρ to obtain the small-scale part v_{λ=0.8}, which contains the facial detail information, for λ_L1 = 0.8:

v_{λ=0.8}(x, y) = e^{ρ(x, y)}
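A sketch of the LTV-style small-scale extraction of steps 3 and 4: take the logarithm, smooth with a total-variation denoiser, subtract, and exponentiate. The patent solves TV-L1 via the Euler-Lagrange equations; scikit-image's denoise_tv_chambolle (a TV-L2/ROF solver) is substituted here purely as an illustration, and the weight values and the mapping from λ_L1 to the denoiser weight are assumptions.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def ltv_small_scale(I, weight=0.2):
    """Approximate the LTV small-scale part: v = exp(log I - TV_smooth(log I))."""
    f = np.log(I.astype(np.float64) + 1.0)          # logarithm of the image (+1 avoids log(0))
    u_hat = denoise_tv_chambolle(f, weight=weight)  # stand-in for the TV-L1 smoothed component
    rho = f - u_hat                                 # essential (detail) part of the image
    return np.exp(rho)                              # exponential gives the small-scale part v
```

For the two scales of the patent the function would be called twice, e.g. ltv_small_scale(I, 0.2) for the shadow region and ltv_small_scale(I, 0.8) for the normally illuminated region.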
Content not described in detail in this specification belongs to techniques well known to those skilled in the art.
The present invention is not limited to the content set forth in the claims and the above embodiments; any invention created according to the concept of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for performing illumination normalization on an image, characterized by comprising the following steps:
dividing the input image into a shadow region and a normally illuminated region;
according to a TVQI model or an LTV model, obtaining the small-scale part of the image when λ_L1 takes any value in the range 0-0.5;
according to the TVQI model or the LTV model, obtaining the small-scale part of the image when λ_L1 takes any value in the range 0.6-1;
taking the small-scale part obtained when λ_L1 is in the range 0-0.5 as the small-scale part of the shadow region of the image, taking the small-scale part obtained when λ_L1 is in the range 0.6-1 as the small-scale part of the normally illuminated region of the image, and splicing them to obtain the small-scale part v of the whole image.
2. The method for performing illumination normalization on an image according to claim 1, characterized by further comprising the following steps:
substituting the input image into a TV-L2 model to obtain the large-scale part u_L2 of the image;
performing histogram equalization on the steady-state solution û_L2 of the obtained large-scale part u_L2, the steady-state solution being obtained by numerically solving the Euler-Lagrange equation;
fusing the small-scale part v of the whole image with the equalized large-scale part û*_L2.
3. The method for performing illumination normalization on an image according to claim 2, characterized by further comprising the following steps:
performing homomorphic filtering on the equalized large-scale part û*_L2;
fusing the small-scale part v of the whole image with the homomorphically filtered large-scale part û**_L2.
4. The method for performing illumination normalization on an image according to claim 1, characterized by further comprising the following step: normalizing the input image to a 100 × 100 grayscale image I.
5. The method for performing illumination normalization on an image according to claim 4, characterized in that, when the input image is a grayscale image, its size is adjusted to 100 × 100; when the input image is a color image, it is converted to a grayscale image by the following formula:

gray = (r + g + b) / 3

where r, g, and b denote the values of the red, green, and blue components of the color image, and gray denotes the gray value of the grayscale image.
6. The method for performing illumination normalization on an image according to claim 1, characterized in that the input image is divided into a shadow region and a normally illuminated region as follows:
a) compute the average gray value aver of the input image:

aver = (Σ_{i=1}^{n} Σ_{j=1}^{m} I(i, j)) / (m * n)

where m and n denote the length and width of the image, respectively;
b) compute the flag value flag(x, y) for each pixel:

flag(x, y) = 1 if I(x, y) > aver, 0 if I(x, y) ≤ aver

flag(x, y) = 1 indicates that the pixel belongs to the normally illuminated region, and flag(x, y) = 0 indicates that it belongs to the shadow region.
7. The method for performing illumination normalization on an image according to claim 1, characterized in that:
according to the TVQI model or the LTV model, the small-scale part v_{λ=0.2} of the image is obtained for λ_L1 = 0.2;
according to the TVQI model or the LTV model, the small-scale part v_{λ=0.8} of the image is obtained for λ_L1 = 0.8;
the small-scale part v_{λ=0.2} for λ_L1 = 0.2 is taken as the small-scale part of the shadow region of the image, the small-scale part v_{λ=0.8} for λ_L1 = 0.8 is taken as the small-scale part of the normally illuminated region of the image, and they are spliced to obtain the small-scale part v of the whole image.
8. The method for performing illumination normalization on an image according to claim 2, characterized in that block histogram equalization is performed on the steady-state solution û_L2 of the obtained large-scale part u_L2, the steady-state solution being obtained by numerically solving the Euler-Lagrange equation.
9. An image recognition method, characterized in that, based on the illumination normalization result obtained according to any one of the preceding claims, a subspace analysis method is used to project the high-dimensional data into a low-dimensional space for feature extraction, yielding a lower-dimensional vector, which is then classified with a classifier to obtain the recognition result.
10. The image recognition method according to claim 9, characterized in that the image is a face image.
CN2010102713221A 2010-09-01 2010-09-01 Method for illuminating/normalizing image and method for identifying image by using same Active CN101916371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102713221A CN101916371B (en) 2010-09-01 2010-09-01 Method for illuminating/normalizing image and method for identifying image by using same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010102713221A CN101916371B (en) 2010-09-01 2010-09-01 Method for illuminating/normalizing image and method for identifying image by using same

Publications (2)

Publication Number Publication Date
CN101916371A CN101916371A (en) 2010-12-15
CN101916371B true CN101916371B (en) 2012-11-21

Family

ID=43323880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102713221A Active CN101916371B (en) 2010-09-01 2010-09-01 Method for illuminating/normalizing image and method for identifying image by using same

Country Status (1)

Country Link
CN (1) CN101916371B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102385753B (en) * 2011-11-17 2013-10-23 江苏大学 Illumination-classification-based adaptive image segmentation method
CN107292275B (en) * 2017-06-28 2020-04-10 北京飞搜科技有限公司 Frequency domain division human face feature recognition method and system
CN107993193B (en) * 2017-09-21 2021-06-11 沈阳工业大学 Tunnel lining image splicing method based on illumination equalization and surf algorithm improvement
CN107704856A (en) * 2017-09-28 2018-02-16 杭州电子科技大学 Ice core optical characteristics image acquisition and processing method
CN110610525B (en) * 2018-06-15 2023-04-07 中兴通讯股份有限公司 Image processing method and device and computer readable storage medium
CN109448014B (en) * 2018-10-19 2021-04-30 福建师范大学 Image information thinning method based on subgraph
CN109858546B (en) * 2019-01-28 2021-03-30 北京工业大学 Image identification method based on sparse representation
CN110163811A (en) * 2019-04-10 2019-08-23 浙江工业大学 A kind of facial image yin-yang face phenomenon removing method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046847A (en) * 2007-04-29 2007-10-03 中山大学 Human face light alignment method based on secondary multiple light mould
CN101236598A (en) * 2007-12-28 2008-08-06 北京交通大学 Independent component analysis human face recognition method based on multi- scale total variation based quotient image
CN101261678A (en) * 2008-03-18 2008-09-10 中山大学 A method for normalizing face light on feature image with different size

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4207717B2 (en) * 2003-08-26 2009-01-14 株式会社日立製作所 Personal authentication device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046847A (en) * 2007-04-29 2007-10-03 中山大学 Human face light alignment method based on secondary multiple light mould
CN101236598A (en) * 2007-12-28 2008-08-06 北京交通大学 Independent component analysis human face recognition method based on multi- scale total variation based quotient image
CN101261678A (en) * 2008-03-18 2008-09-10 中山大学 A method for normalizing face light on feature image with different size

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JP 2005-71118 A 2005.03.17
Terrence Chen et al. Total Variation Models for Variable Lighting Face Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, vol. 28, no. 9. *
姜琳 et al. Face recognition by extracting multi-scale illumination invariants. Journal of Computer Applications (计算机应用), 2009, vol. 29, no. 9, pp. 2395-2397. *

Also Published As

Publication number Publication date
CN101916371A (en) 2010-12-15

Similar Documents

Publication Publication Date Title
CN101916371B (en) Method for illuminating/normalizing image and method for identifying image by using same
Wang et al. Variational Bayesian method for retinex
Hu et al. Singular value decomposition and local near neighbors for face recognition under varying illumination
US9311564B2 (en) Face age-estimation and methods, systems, and software therefor
Montazer et al. An improved radial basis function neural network for object image retrieval
Kingma et al. Regularized estimation of image statistics by score matching
CN110929836B (en) Neural network training and image processing method and device, electronic equipment and medium
Choi et al. Shadow compensation in 2D images for face recognition
CN111539246B (en) Cross-spectrum face recognition method and device, electronic equipment and storage medium thereof
Wang et al. Energy based competitive learning
CN104794693A (en) Human image optimization method capable of automatically detecting mask in human face key areas
CN111445496B (en) Underwater image recognition tracking system and method
CN105046202A (en) Adaptive face identification illumination processing method
Li et al. Image enhancement algorithm based on depth difference and illumination adjustment
Yi et al. Illumination normalization of face image based on illuminant direction estimation and improved retinex
Gui et al. Adaptive single image dehazing method based on support vector machine
CN103914677A (en) Action recognition method and device
CN105069402A (en) Improved RSC algorithm for face identification
Pu et al. Fractional-order retinex for adaptive contrast enhancement of under-exposed traffic images
Karamizadeh et al. Race classification using gaussian-based weight K-nn algorithm for face recognition.
Leszczyński Image preprocessing for illumination invariant face verification
CN102214292B (en) Illumination processing method for human face images
CN104021387A (en) Face image illumination processing method based on visual modeling
Zhang (Retracted) Computer-aided three-dimensional animation design and enhancement based on spatial variation and convolution algorithm
Zhou et al. An improved algorithm using weighted guided coefficient and union self‐adaptive image enhancement for single image haze removal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant