CN103400114A - Illumination normalization processing system aiming at face recognition - Google Patents

Illumination normalization processing system aiming at face recognition

Info

Publication number
CN103400114A
CN103400114A (application CN201310303786A)
Authority
CN
China
Prior art keywords
illumination
recognition
module
detail pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310303786XA
Other languages
Chinese (zh)
Inventor
周义 (Zhou Yi)
李鸿光 (Li Hongguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN201310303786XA
Publication of CN103400114A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an illumination normalization processing system for face recognition, comprising a space conversion module, a signal decomposition module, a feature identification module, and a feature fusion module. A face image is decomposed by the fast bi-dimensional empirical mode decomposition (FBEMD) method into a group of bi-dimensional intrinsic mode function (BIMF) components arranged from high frequency to low frequency, each BIMF component being a locally narrow-band sub-signal with a single feature. A weight coefficient is computed for the information contained in each BIMF; only the three components with the highest weight coefficients are selected and summed, the face image is rebuilt, and the illumination normalization is complete. This normalization removes the interference of the illumination variable with face recognition and places all face images to be recognized under equal illumination conditions, greatly increasing the recognition success rate and enabling fast, real-time recognition. A face recognition system combined with this processing system can be put directly into engineering practice and applied wherever needed.

Description

Illumination normalization processing system for face recognition
Technical field
The present invention relates to the field of face recognition, and in particular to an illumination normalization processing system for face recognition.
Background art
Face recognition is the identification of people by their facial biometrics, generally by comparison against an existing face database. Face recognition systems have attracted increasing attention and are widely used in fields such as public safety, access control, information security, and intelligent surveillance. Over the past few decades, researchers have proposed many face recognition methods, for example principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), and elastic bunch graph matching (EBGM).
Although these methods are widely used in engineering practice, face recognition systems still face great challenges from uncooperative users and uncontrollable environmental variables such as changes in expression, pose, and illumination. Disturbed by these variables, existing face recognition algorithms often suffer a sharply reduced success rate. Among all the variables, illumination variation — in both the training sample set and the test sample set — has the greatest impact on the recognition success rate; studies have shown that this impact can even exceed inter-personal differences. In recent years, research on face recognition has therefore concentrated on eliminating the effect of illumination variation, that is, on applying illumination normalization to face images before recognition.
These normalization methods fall roughly into three classes: modeling, preprocessing, and feature extraction.
Modeling methods generally build a three-dimensional face model that accounts for nearly all variables. Two problems limit their application in engineering practice: they require a large number of sample face images under different illumination conditions, and they assume the face is a convex object, thereby ignoring cast shadows. Feature extraction methods try to extract facial features unaffected by illumination, yet none of them escapes the interference of the illumination variable entirely. Preprocessing methods use image processing techniques to remove the influence of the illumination variable before recognition; the present invention belongs to this class.
Summary of the invention
In view of the defects in the prior art, the object of the present invention is to provide an illumination normalization processing system for face recognition. After the processing of the present invention, the success rate of face recognition is markedly improved, and a face recognition system combined with the present invention can be put directly into engineering practice.
To achieve the above object, the present invention adopts the following technical solution:
An illumination normalization processing system for face recognition comprises a space conversion module, a signal decomposition module, a feature identification module, and a feature fusion module, wherein:
the space conversion module applies a logarithm operation to the face image, transforming it into log space, and passes the result to the signal decomposition module;
the signal decomposition module decomposes the face image by the FBEMD signal decomposition method into a series of bi-dimensional intrinsic mode functions (BIMFs), arranged from high frequency to low, and passes the result to the feature identification module;
the feature identification module receives the result of the signal decomposition module, computes the image contrast or the number of image extreme points as the weight of the information contained in each BIMF component, and transfers the three BIMFs with the highest weights to the feature fusion module;
the feature fusion module receives the result of the feature identification module, sums the three BIMF components with the highest weights, and completes the illumination normalization.
The system of the present invention uses fast bi-dimensional empirical mode decomposition (FBEMD) to decompose the face image into bi-dimensional intrinsic mode functions (BIMFs) of different spatial scales. The contrast or the number of extreme points in each BIMF is counted to calculate its weight; the three BIMFs with the largest weights are selected and summed to rebuild an illumination-invariant face. After this processing, the influence of the illumination variable is reduced to a minimum and the recognition success rate is greatly improved. The sample sets used in the embodiments come from the Yale B database and the Carnegie Mellon University Pose, Illumination, and Expression (CMU PIE) database, respectively.
Preferably, the space conversion module follows the Retinex theoretical model: a log transform turns the complex multiplicative relation between the two variables in a face image (reflectance and illumination) into a simple additive one. Retinex theory attempts a unified explanation of how an image is formed: an image I(x, y) can be read as the product of a reflectance function R(x, y) and an illuminance function L(x, y):
I(x,y) = R(x,y) L(x,y)   (1)
The character of L(x, y) is determined by the light source, while that of R(x, y) is determined by the surface properties of the object, so R(x, y) is insensitive to illumination variation. In general, L(x, y) varies relatively slowly and corresponds to skin, background, and other large-scale features, whereas R(x, y) varies relatively sharply and corresponds to edges, corners, and other small-scale features.
Taking the logarithm of equation (1) gives:
log I(x,y) = log R(x,y) + log L(x,y)   (2)
This transform converts the illumination–reflectance product into the sum of a low-frequency and a high-frequency component, which is convenient for the processing of the signal decomposition module.
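As a minimal sketch of the space conversion module (assuming a NumPy array image and a small epsilon to guard log(0) — details the patent does not specify), the log transform can be written as:

```python
import numpy as np

def to_log_space(image, eps=1e-6):
    """Map I = R * L into log space, where the reflectance-illumination
    product of equation (1) becomes the sum of equation (2):
    log I = log R + log L. eps guards against log(0) at black pixels."""
    return np.log(image.astype(np.float64) + eps)
```

If a display image is needed after normalization, the result can be mapped back out of log space with `np.exp`.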
Preferably, the signal decomposition module decomposes the face image into a series of locally narrow-band, single-feature BIMF sub-signals, arranged from high frequency to low. In such a decomposition, the detailed information and the illumination variable are likely to end up in different components, which is why separating the BIMF components of the original signal by frequency is meaningful. The decomposition method is fast bi-dimensional empirical mode decomposition, i.e., FBEMD. In the traditional BEMD method, every extracted BIMF component requires repeated iterative computation until a suitable envelope surface is found, which makes the decomposition complicated and time-consuming. Bhuiyan et al. proposed a fast BEMD method based on envelope estimation (Bhuiyan, S.M.A., R.R. Adhami, and J.F. Khan, Fast and adaptive bidimensional empirical mode decomposition using order-statistics filter based envelope estimation. EURASIP J. Adv. Signal Process, 2008: p. 1-18.), which abandons traditional envelope interpolation and greatly improves the computation speed.
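A highly simplified sketch of the single-pass sifting idea follows — not the patent's exact filter design; the window size, the choice of order-statistics filters, and the smoothing step are all assumptions here, in the spirit of Bhuiyan et al.:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def fbemd(image, n_bimfs=4, win=3):
    """One-pass-per-component BEMD sketch: envelopes are estimated with
    order-statistics (max/min) filters plus smoothing instead of the
    iterative envelope-surface interpolation of traditional BEMD."""
    residue = image.astype(np.float64)
    bimfs = []
    for _ in range(n_bimfs):
        upper = uniform_filter(maximum_filter(residue, size=win), size=win)
        lower = uniform_filter(minimum_filter(residue, size=win), size=win)
        mean_env = (upper + lower) / 2.0    # average envelope surface
        bimfs.append(residue - mean_env)    # c_k: high-frequency BIMF
        residue = mean_env                  # r_k = r_{k-1} - c_k
    return bimfs, residue
```

By construction the components and the final residue sum back to the input image, so frequency content is split across scales, not lost.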
Preferably, the feature identification module counts the contrast or the number of extreme points of each BIMF sub-signal, calculates the amount of characteristic information each one contains, and expresses each component's contribution to the whole as a weight, choosing the three BIMFs with the highest weight coefficients as the identification result. In general, the multi-scale detail images obtained by decomposing the face image are considered together: the detail features of each detail image are extracted and merged through the adjustment of weight coefficients. The following fusion framework is therefore proposed:
D = λ_1 d_1 + λ_2 d_2 + ... + λ_n d_n = Σ_{i=1}^{n} λ_i d_i   (3)
where D is the characteristic distance between two images, d_i is the characteristic distance between their i-th level detail images, and λ_i is a distance weight coefficient reflecting the contribution of the i-th level's characteristic distance to the overall value of D. Although high-level detail images contain more fine detail and line features, they contain less facial structure and contour information. From a cognitive point of view, structure and contour information help target identification more, so the characteristic distance between low-level detail images contributes more to the overall characteristic distance and that between high-level detail images contributes less. Hence λ_1 < λ_2 < ... < λ_n and Σ_{i=1}^{n} λ_i = 1.
Suppose two images I and I' require a characteristic distance. The decomposition yields detail images of n levels, I_1, I_2, ..., I_n and I'_1, I'_2, ..., I'_n, and d_i is the characteristic distance between I_i and I'_i. If a measure M of the amount of detailed information can be found, measuring each detail image gives MI_1, MI_2, ..., MI_n and MI'_1, MI'_2, ..., MI'_n, and λ_i can be calculated with the following formula:
λ_i = (1/MI_i + 1/MI'_i) / (Σ_{i=1}^{n} 1/MI_i + Σ_{i=1}^{n} 1/MI'_i)   (4)
As the formula shows, λ_i is computed adaptively from the proportion, over all levels, of the reciprocals of the detailed information contained in the two i-th level detail images of I and I', so it remains effective under all illumination conditions and application occasions.
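Formula (4) is straightforward to compute once the per-level measures are available; a sketch follows, where the two measure vectors are hypothetical placeholders for the MI values of the two images:

```python
import numpy as np

def adaptive_weights(mi, mi_prime):
    """Formula (4): lambda_i is the share of the i-th level's reciprocal
    measures (from both images) in the total over all n levels, so the
    weights adapt to the images and sum to 1."""
    recip = (1.0 / np.asarray(mi, dtype=np.float64)
             + 1.0 / np.asarray(mi_prime, dtype=np.float64))
    return recip / recip.sum()
```

Levels with more measured detail receive smaller weights, matching the ordering λ_1 < λ_2 < ... < λ_n when level 1 carries the most detail.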
On this basis, two measures are proposed to gauge the amount of detail feature information: the number of extreme points and the contrast value.
A detail image contains a large number of boundary curves describing object contour features; the richer these curves, the more finely the essential features of the object are described. In a local region of a grayscale image, a boundary curve consists of contiguously distributed pixel-value maximum or minimum points. Among face detail images, high-level detail images contain more extreme points and low-level detail images fewer. The total number of extreme points EP_k in a detail image can therefore gauge how much detailed information it contains, with the following formula:
EP_k = Σ_{x=1}^{w} Σ_{y=1}^{h} ( |{p_c | p_c > p_i}| + |{p_c | p_c < p_i}| ),  p_i ∈ A_c   (5)
where w and h are the width and height of the detail image I_k, p_c is the pixel at coordinate (x, y), A_c is the n × n local region centered on that pixel, and p_i is any pixel in A_c other than p_c, i = 1, 2, ..., n × n − 1. For a 3 × 3 local window, the region consists of the point p_c and the 8 (= 3 × 3 − 1) points around it, as shown in the following table.
p_1 p_2 p_3
p_4 p_c p_5
p_6 p_7 p_8
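One way to count the extrema of formula (5) with a 3 × 3 window — treating a pixel as extreme when it is strictly above or strictly below every one of its 8 neighbors, a reading of the formula that the patent leaves somewhat implicit — is:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def extreme_point_count(detail, win=3):
    """Count pixels that are strict local maxima or minima of their
    win x win neighborhood A_c (excluding the center p_c itself),
    as a proxy for the EP_k measure."""
    d = detail.astype(np.float64)
    footprint = np.ones((win, win), dtype=bool)
    footprint[win // 2, win // 2] = False     # neighbors p_i only, not p_c
    nb_max = maximum_filter(d, footprint=footprint)
    nb_min = minimum_filter(d, footprint=footprint)
    return int(np.sum((d > nb_max) | (d < nb_min)))
```

A constant image has no strict extrema, so its count is zero, matching the intuition that a featureless detail image carries no detailed information.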
Contrast refers to the difference between image pixel gray values: the larger this difference, the more easily pixels are distinguished by the human eye, and the more information the image expresses. Among the multi-level detail images obtained after decomposition, a high-level detail image contains much detailed information and thus necessarily has a larger contrast value, while a low-level detail image contains little and necessarily has a smaller one. The local contrast sum CV_k can therefore gauge how much detailed information a detail image contains, with the following formula:
CV_k = Σ_{x=1}^{w} Σ_{y=1}^{h} Σ_i (p_c − p_i)^2,  p_i ∈ A_c   (6)
where w and h are the width and height of the detail image I_k, p_c is the pixel at coordinate (x, y), A_c is the n × n local region centered on that pixel, and p_i is any pixel in A_c other than p_c, i = 1, 2, ..., n × n − 1.
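A direct (unoptimized) sketch of formula (6), skipping border pixels for simplicity — an assumption the patent does not address:

```python
import numpy as np

def contrast_value(detail, win=3):
    """Formula (6): accumulate, over every interior pixel p_c, the sum of
    squared differences to the other pixels of its win x win region A_c."""
    d = detail.astype(np.float64)
    r = win // 2
    h, w = d.shape
    total = 0.0
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = d[y - r:y + r + 1, x - r:x + r + 1]
            total += np.sum((patch - d[y, x]) ** 2)  # center-vs-center term is 0
    return total
```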
Measuring the i-th level detail images of I and I' with EP_k and CV_k yields EP_i and EP'_i, and CV_i and CV'_i, respectively; substituting these into formula (4) gives the following two weight coefficient schemes for the fusion framework:
λ_{EP,i} = (1/EP_i + 1/EP'_i) / (Σ_{i=1}^{n} 1/EP_i + Σ_{i=1}^{n} 1/EP'_i)   (7)
λ_{CV,i} = (1/CV_i + 1/CV'_i) / (Σ_{i=1}^{n} 1/CV_i + Σ_{i=1}^{n} 1/CV'_i)   (8)
Preferably, after receiving the three BIMF components with the largest weight coefficients, the feature fusion module sums them, completing the normalization into an illumination-invariant face image.
In summary, the system of the present invention decomposes the face image with the FBEMD method to obtain a group of BIMF components arranged from high frequency to low, each a locally narrow-band sub-signal with a single feature; counts the number of extreme points or the contrast of each component to calculate its weight coefficient; and selects the three components with the largest weights to rebuild the face image, forming an illumination-invariant face and completing the normalization.
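The selection-and-fusion step itself reduces to a sort and a sum; a sketch follows, where the component list and measure vector are placeholders for the BIMFs and their information measures:

```python
import numpy as np

def fuse_top_components(bimfs, measures, keep=3):
    """Keep the `keep` components whose information measures (extreme-point
    counts or contrast values) are largest, and sum them to rebuild an
    illumination-invariant log-domain face image."""
    order = np.argsort(measures)[::-1][:keep]  # indices of the largest measures
    return np.sum([bimfs[i] for i in order], axis=0)
```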
Compared with the prior art, the present invention has the following beneficial effects:
(1) the system uses the FBEMD signal decomposition method, eliminating problems such as mode mixing and boundary effects that are widespread in the BEMD method, and making the decomposition more thorough;
(2) the FBEMD signal decomposition method breaks through the great time-consumption drawback of the classic method and shows better performance;
(3) the system uses a selective feature fusion method that, compared with the prior art, is simple, practical, and highly automated;
(4) the invention clearly improves the success rate of face recognition and, combined with an existing face recognition system, can be applied directly in engineering practice.
Brief description of the drawings
Other features, objects, and advantages of the present invention will become more apparent by reading the detailed description of the non-limiting embodiments made with reference to the following drawings:
Fig. 1 is a structural block diagram of the system of the present invention;
Fig. 2 is a structural block diagram of the space conversion module;
Fig. 3 is a structural block diagram of the signal decomposition module;
Fig. 4 is the identification result of the feature identification module;
Fig. 5 is the result of the feature fusion module.
Detailed description of the embodiments
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit it in any form. It should be pointed out that those skilled in the art can make certain variations and improvements without departing from the inventive concept; these all belong to the protection scope of the present invention.
As shown in Fig. 1, following the description in the summary of the invention, the present embodiment provides an illumination normalization processing system for face recognition comprising a space conversion module, a signal decomposition module, a feature identification module, and a feature fusion module, wherein: the space conversion module applies a logarithm operation to the face image, transforming it into log space, and passes the result to the signal decomposition module; the signal decomposition module decomposes the face image by the FBEMD signal decomposition method into a series of bi-dimensional intrinsic mode functions (BIMFs), arranged from high frequency to low, and passes the result to the feature identification module; the feature identification module receives the result of the signal decomposition module, computes the image contrast or the number of image extreme points as the weight of the information contained in each BIMF component, and transfers the three BIMFs with the highest weights to the feature fusion module; the feature fusion module receives the result of the feature identification module, sums the three BIMF components with the highest weights, and completes the illumination normalization.
As shown in Fig. 2, the space conversion module follows the Retinex theoretical model: a log transform turns the complex multiplicative relation between the two variables in a face image (reflectance and illumination) into a simple additive one. An image I(x, y) can be read as the product of a reflectance function R(x, y) and an illuminance function L(x, y); L(x, y) is determined by the light source and R(x, y) by the surface properties of the object, so R(x, y) is insensitive to illumination variation. In general, L(x, y) varies relatively slowly and corresponds to skin, background, and other large-scale features, while R(x, y) varies relatively sharply and corresponds to edges, corners, and other small-scale features. After the log transform, the illumination–reflectance product becomes the sum of a low-frequency and a high-frequency component, convenient for the resolution processing of the signal decomposition module: the BIMF components obtained by the decomposition are themselves additively related, whereas a nested multiplicative relation would make the decomposition far more difficult.
The signal decomposition module decomposes the face image by the FBEMD method into a series of locally narrow-band, single-feature BIMF sub-signals, arranged from high frequency to low. Traditional BEMD must iterate repeatedly inside a loop for every extracted order of BIMF component until an envelope surface satisfying the conditions is found. By contrast, the FBEMD method finds a suitable BIMF component in a single iteration, with no nested loop running tens or even hundreds of times and no complicated interpolation at each iteration. In the present embodiment the face image is decomposed into 10 BIMF components, so only 10 loops are needed to complete the decomposition. As shown in Fig. 3, the flow for extracting a given order of BIMF sub-signal by FBEMD is as follows: let r_{k-1} = h_{k-1}; using a 3 × 3 neighborhood window, mark all extreme points in the signal to obtain a maxima map and a minima map; from the extrema maps, obtain the upper and lower envelope surfaces with the upper and lower envelope filters and average them; record c_k as the k-th order BIMF component and let r_k = r_{k-1} − c_k; the residue r_k is returned to the beginning of the procedure to continue the decomposition.
A comparison was made of the orthogonality factor and elapsed time obtained when BEMD methods of different versions process the same fringe signal: BEMD variants including the radial basis function (RBF) method, the Linderhed method, and the FBEMD of the present invention process the same face image, the decomposition time is recorded, and the orthogonality factor is computed. According to N.E. Huang et al., the smaller the orthogonality factor, the better the BEMD decomposition; in general, an orthogonality factor below 0.1 is acceptable. The results show that the FBEMD method effectively reduces the orthogonality factor, i.e., improves the decomposition quality of BEMD, and its elapsed time is also far less than that of traditional BEMD. The experimental results confirm that the FBEMD method of the present invention not only produces a higher-quality signal decomposition but also effectively solves the major problem plaguing the BEMD method — excessive time consumption.
The feature identification module counts the contrast or the number of extreme points of each BIMF sub-signal, calculates the amount of characteristic information each one contains, expresses each component's contribution to the whole as a weight, and chooses the three BIMFs with the highest weight coefficients as the identification result. The multi-scale detail images obtained by decomposing the face image are considered together: the detail features of each detail image are extracted and merged through the adjustment of weight coefficients. Following the detailed description of this module in the summary of the invention, as shown in Fig. 4, each face image undergoes FBEMD decomposition into ten BIMF components; the three components with the largest weights are BIMF2, BIMF3, and BIMF5.
As shown in Fig. 5, after receiving the three BIMF components with the largest weight coefficients, the feature fusion module sums them, completing the normalization into an illumination-invariant face image. Each face image is under a different illumination condition; after decomposition, the three components with the largest weights are fused to obtain the illumination-invariant face.
When the system of the present embodiment runs, the signal decomposition module decomposes a 100 × 100 fringe signal into 10 BIMF components by the FBEMD method, with the number of iterations per component set to 1 (this can be changed if the result is poor); the average experimental time is 1.6 seconds. The feature identification module counts the information contained in each of the 10 BIMF components (contrast value or number of extreme points), determines the BIMF components with the largest weights — BIMF2, BIMF3, BIMF5 — and performs feature fusion to form the illumination-invariant face image.
This example checks the performance of the present invention with several face recognition algorithms on several internationally known face sample sets.
The first embodiment uses Yale B as the training and test sample set. The Yale B face database contains face images of 10 people, each in 9 different poses, with 64 images under different illuminations per pose. Only the 64 images of the basic pose are used here to evaluate the illumination-elimination algorithm. The database thus provides 640 images, which can be divided by lighting angle into five subsets containing 70, 120, 120, 140, and 190 images, respectively. The 4th subset (140 face images), which has the worst illumination conditions, is selected as the training sample set, and the remaining images serve as the test sample set. The table compares the recognition rate of the proposed method with other classical methods.
Table 1. Recognition success rate (%) of four different recognition algorithms on the Yale B database
The second embodiment uses the CMU PIE face database, which covers 41368 face images of 68 people under various poses, illuminations, and expressions; the present embodiment adopts only the illumination subset as experimental data. Each person in this subset has face images under 21 different illuminations, 1428 images in total. One image per person, 68 in total, is selected as the training sample, and the remaining face images (1360) serve as test samples. Table 2 compares the recognition rate of the proposed method with other classical methods.
Table 2. Recognition success rate (%) of four different recognition algorithms on the CMU PIE database
It is not difficult to see from the tables that the present invention obtains the highest recognition success rate, and that its adaptivity and degree of automation are far higher than those of the other methods. It especially overcomes the great technical obstacles of time consumption and mode mixing in the classic method: when processing the same 100 × 100 face image, traditional BEMD and the FBEMD give similar results, but the processing time of the latter is compressed to 1/20 of the former, a real breakthrough. The invention solves a major drawback in face recognition and has considerable application prospects in engineering practice.
Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above specific implementations; those skilled in the art can make various variations or modifications within the scope of the claims without affecting the substance of the present invention.

Claims (10)

1. An illumination normalization processing system for face recognition, characterized in that the system comprises: a space conversion module, a signal decomposition module, a feature identification module, and a feature fusion module, wherein:
the space conversion module applies a logarithm operation to the face image, transforming it into log space, and passes the result to the signal decomposition module;
the signal decomposition module decomposes the face image by the FBEMD signal decomposition method into a series of bi-dimensional intrinsic mode functions (BIMFs), arranged from high frequency to low, and passes the result to the feature identification module;
the feature identification module receives the result of the signal decomposition module, computes the image contrast or the number of image extreme points as the weight of the information contained in each BIMF component, and transfers the three BIMFs with the highest weights to the feature fusion module;
the feature fusion module receives the result of the feature identification module, sums the three BIMF components with the highest weights, and completes the illumination normalization.
2. The illumination normalization processing system for face recognition according to claim 1, characterized in that the feature identification module judges how much information each BIMF contains by the magnitude of its weight coefficient; the calculation of the weight coefficient depends on the number of extreme points or the contrast contained in each BIMF, and the three BIMFs with the largest weights are finally selected.
3. The illumination normalization processing system for face recognition according to claim 2, characterized in that the feature identification module considers together the multi-scale detail images obtained by decomposing the face image, extracts the detail features of each detail image, and merges these detail features through the adjustment of weight coefficients, proposing the fusion framework:
D = λ_1 d_1 + λ_2 d_2 + ... + λ_n d_n = Σ_{i=1}^{n} λ_i d_i
wherein D is the characteristic distance between two images, d_i is the characteristic distance between their i-th level detail images, and λ_i is a distance weight coefficient reflecting the contribution of the i-th level's characteristic distance to the overall value of D; the characteristic distance between low-level detail images contributes more to the overall characteristic distance and that between high-level detail images less, so λ_1 < λ_2 < ... < λ_n and Σ_{i=1}^{n} λ_i = 1;
Suppose there are two images I and I' whose feature distance is to be computed. The image decomposition operation yields n levels of detail images I_1, I_2, ..., I_n and I'_1, I'_2, ..., I'_n, and d_i is the feature distance between I_i and I'_i. If a measure M of the amount of detail information can be found, the measured detail-information values of the detail images are MI_1, MI_2, ..., MI_n and MI'_1, MI'_2, ..., MI'_n, and the value of \lambda_i is computed by the following formula:
\lambda_i = \frac{\frac{1}{MI_i} + \frac{1}{MI'_i}}{\sum_{i=1}^{n} \frac{1}{MI_i} + \sum_{i=1}^{n} \frac{1}{MI'_i}}
As the formula shows, \lambda_i is computed adaptively as the proportion taken by the reciprocals of the detail-information amounts of the two i-th-level detail images of I and I' among all levels, so it remains effective under all illumination conditions and application scenarios.
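A minimal numeric sketch of the adaptive weight formula above (NumPy is assumed; `mi` and `mi_prime` hold the per-level measures MI_i and MI'_i):

```python
import numpy as np

def fusion_weights(mi, mi_prime):
    # lambda_i is proportional to 1/MI_i + 1/MI'_i, normalized so that
    # the weights over all n levels sum to 1.
    mi = np.asarray(mi, dtype=float)
    mi_prime = np.asarray(mi_prime, dtype=float)
    numer = 1.0 / mi + 1.0 / mi_prime
    denom = np.sum(1.0 / mi) + np.sum(1.0 / mi_prime)
    return numer / denom

def fused_distance(d, lam):
    # Global feature distance D = sum_i lambda_i * d_i.
    return float(np.dot(lam, np.asarray(d, dtype=float)))
```

By construction the weights sum to 1, and a level whose detail images carry little measured information (small MI) receives a larger weight, matching the reciprocal-proportion statement of the claim.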
4. The illumination normalization processing system for face recognition according to claim 3, wherein the information measure M is one of two measures, the extreme-point count and the contrast value, used to quantify the amount of detail-feature information.
5. The illumination normalization processing system for face recognition according to claim 4, wherein the extreme-point count measure uses the total number of extreme points EP_k in a detail image as the measure of how much detail information that image contains, with the following formula:
EP_k = \sum_{x}^{w} \sum_{y}^{h} \left( \left|\{ p_c \mid p_c > p_i \}\right| + \left|\{ p_c \mid p_c < p_i \}\right| \right), \quad p_i \in A_c
where w and h are the width and height of the detail image I_k, p_c is the pixel at coordinate (x, y), A_c is the n \times n local region centered on p_c, and p_i is any pixel in A_c other than p_c, with i = 1, 2, ..., n \times n - 1.
6. The illumination normalization processing system for face recognition according to claim 5, wherein the extreme-point count measure obtains the extreme-point total EP_k; measuring the i-th-level detail images with EP_k yields EP_i and EP'_i respectively, and substituting them into formula (4) gives the following weight-coefficient computation scheme:
\lambda_{EP_i} = \frac{\frac{1}{EP_i} + \frac{1}{EP'_i}}{\sum_{i=1}^{n} \frac{1}{EP_i} + \sum_{i=1}^{n} \frac{1}{EP'_i}}
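A direct (unoptimized) reading of the extreme-point count in claim 5, assuming an n x n neighbourhood with n = 3: a pixel counts as an extreme point when it is strictly greater, or strictly smaller, than every other pixel in its neighbourhood.

```python
import numpy as np

def extreme_point_count(img, n=3):
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    r = n // 2
    count = 0
    # Scan interior pixels so the full n x n region A_c fits in the image.
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1].ravel()
            p_c = img[y, x]
            # The other n*n - 1 pixels p_i of the region A_c.
            others = np.delete(patch, patch.size // 2)
            if np.all(p_c > others) or np.all(p_c < others):
                count += 1
    return count
```

Border handling is an assumption here; the claim does not say how pixels whose neighbourhood falls outside the image are treated.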
7. The illumination normalization processing system for face recognition according to claim 4, wherein the contrast-value measure uses the local-contrast sum CV_k as the measure of how much detail information a detail image contains, with the following formula:
CV_k = \sum_{x=1}^{w} \sum_{y=1}^{h} \sum (p_c - p_i)^2, \quad p_i \in A_c
where w and h are the width and height of the detail image I_k, p_c is the pixel at coordinate (x, y), A_c is the n \times n local region centered on p_c, and p_i is any pixel in A_c other than p_c, with i = 1, 2, ..., n \times n - 1.
8. The illumination normalization processing system for face recognition according to claim 7, wherein the contrast-value measure obtains the local-contrast sum CV_k; measuring the i-th-level detail images with CV_k yields CV_i and CV'_i respectively, and substituting them into formula (4) gives the following weight-coefficient computation scheme:
\lambda_{CV_i} = \frac{\frac{1}{CV_i} + \frac{1}{CV'_i}}{\sum_{i=1}^{n} \frac{1}{CV_i} + \sum_{i=1}^{n} \frac{1}{CV'_i}}
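Likewise, a direct sketch of the local-contrast sum in claim 7, under the same 3 x 3 neighbourhood and interior-pixel assumptions:

```python
import numpy as np

def contrast_value(img, n=3):
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    r = n // 2
    total = 0.0
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1].ravel()
            p_c = img[y, x]
            others = np.delete(patch, patch.size // 2)
            # Sum of squared differences (p_c - p_i)^2 over the region A_c.
            total += float(np.sum((p_c - others) ** 2))
    return total
```

A flat image yields zero contrast, and sharper local variation yields a larger CV_k, so the measure behaves as the claim intends.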
9. The illumination normalization processing system for face recognition according to any one of claims 1-8, wherein the space conversion module applies a logarithmic transform so that the nested relation between the two variables, illuminance and reflectance, is converted from a complex product form into a simple additive form.
10. The illumination normalization processing system for face recognition according to any one of claims 1-8, wherein the signal decomposition module adopts the FBEMD signal decomposition method, replacing the traditional interpolation method with an estimation method to obtain the envelope surfaces; the face image processed by the logarithmic transform is decomposed by FBEMD to obtain a series of BIMF components arranged from high frequency to low frequency.
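The space conversion of claim 9 is just a logarithmic transform. For claim 10, FBEMD itself is involved to implement; the sketch below substitutes a simple Gaussian difference pyramid (a labeled stand-in, not FBEMD) only to show the shape of the output: components ordered from high to low frequency whose sum reconstructs the log image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def log_transform(img, eps=1.0):
    # log(I) = log(L * R) = log L + log R: the nested product of
    # illuminance and reflectance becomes a simple sum.
    return np.log(np.asarray(img, dtype=float) + eps)

def multiscale_decompose(img, sigmas=(1.0, 2.0, 4.0)):
    # Stand-in for FBEMD: successive Gaussian smoothing; each difference
    # is a band of detail, ordered from high to low frequency, followed
    # by the low-frequency residue. The components sum back to the input.
    comps, current = [], np.asarray(img, dtype=float)
    for s in sigmas:
        smooth = gaussian_filter(current, s)
        comps.append(current - smooth)
        current = smooth
    comps.append(current)
    return comps
```

The telescoping construction guarantees exact reconstruction, which is the property the weighting and fusion stages rely on.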
CN201310303786XA 2013-07-18 2013-07-18 Illumination normalization processing system aiming at face recognition Pending CN103400114A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310303786XA CN103400114A (en) 2013-07-18 2013-07-18 Illumination normalization processing system aiming at face recognition

Publications (1)

Publication Number Publication Date
CN103400114A true CN103400114A (en) 2013-11-20

Family

ID=49563732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310303786XA Pending CN103400114A (en) 2013-07-18 2013-07-18 Illumination normalization processing system aiming at face recognition

Country Status (1)

Country Link
CN (1) CN103400114A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050063605A1 (en) * 2003-09-18 2005-03-24 Industrial Technology Research Institute Region based illumination-normalization method and system
CN103017665A (en) * 2012-12-04 2013-04-03 上海交通大学 Fast filter system of digital speckle pattern interferometric fringes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YI ZHOU et al.: "A de-illumination scheme for face recognition based on fast decomposition and detail feature fusion", OPTICS EXPRESS *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870820A (en) * 2014-04-04 2014-06-18 南京工程学院 Illumination normalization method for extreme illumination face recognition
CN105184273A (en) * 2015-09-18 2015-12-23 桂林远望智能通信科技有限公司 ASM-based dynamic image frontal face reconstruction system and method
CN105184273B (en) * 2015-09-18 2018-07-17 桂林远望智能通信科技有限公司 A kind of dynamic image front face reconstructing system and method based on ASM
WO2017114133A1 (en) * 2015-12-31 2017-07-06 深圳光启合众科技有限公司 Image recognition method and device
CN108780508A (en) * 2016-03-11 2018-11-09 高通股份有限公司 System and method for normalized image
CN106157264A (en) * 2016-06-30 2016-11-23 北京大学 Large area image uneven illumination bearing calibration based on empirical mode decomposition
CN106157264B (en) * 2016-06-30 2019-05-07 北京大学 Large area image uneven illumination bearing calibration based on empirical mode decomposition
CN106446870A (en) * 2016-10-21 2017-02-22 湖南文理学院 Human body contour feature extracting method and device
CN106446870B (en) * 2016-10-21 2019-11-26 湖南文理学院 A kind of human body contour outline feature extracting method and device
CN106651789A (en) * 2016-11-21 2017-05-10 北京工业大学 Self-adaptive deblocking effect method orienting compressed face image
CN106651789B (en) * 2016-11-21 2020-01-24 北京工业大学 Self-adaptive deblocking method for compressed face image
CN109886160A (en) * 2019-01-30 2019-06-14 浙江工商大学 It is a kind of it is non-limiting under the conditions of face identification method
CN111931615A (en) * 2020-07-28 2020-11-13 五邑大学 Robot target identification method, system, device and storage medium
CN111931615B (en) * 2020-07-28 2024-01-09 五邑大学 Robot target recognition method, system, device and storage medium

Similar Documents

Publication Publication Date Title
CN103400114A (en) Illumination normalization processing system aiming at face recognition
CN104392463B (en) Image salient region detection method based on joint sparse multi-scale fusion
CN105631436A (en) Face alignment method based on cascade position regression of random forests
Metre et al. An overview of the research on texture based plant leaf classification
CN113989890A (en) Face expression recognition method based on multi-channel fusion and lightweight neural network
Sun et al. Monte Carlo convex hull model for classification of traditional Chinese paintings
CN101887588B (en) Appearance block-based occlusion handling method
CN103440488A (en) Method for identifying pest
CN101968850A (en) Method for extracting face feature by simulating biological vision mechanism
CN105405132A (en) SAR image man-made target detection method based on visual contrast and information entropy
Mo et al. Background noise filtering and distribution dividing for crowd counting
CN104298999A (en) Hyperspectral feature leaning method based on recursion automatic coding
Kosarevych et al. Image segmentation based on the evaluation of the tendency of image elements to form clusters with the help of point field characteristics
Huang et al. Local image region description using orthogonal symmetric local ternary pattern
CN106446804A (en) ELM-based multi-granularity iris recognition method
CN108765439A (en) A kind of sea horizon detection method based on unmanned water surface ship
Gonçalves et al. Texture descriptor combining fractal dimension and artificial crawlers
Ribas et al. Fractal dimension of maximum response filters applied to texture analysis
CN103400370A (en) Adaptive fuzzy C-means image segmentation method based on potential function
CN117237954B (en) Text intelligent scoring method and system based on ordering learning
CN110956116B (en) Face image gender identification model and method based on convolutional neural network
Wang et al. Shape decomposition and classification by searching optimal part pruning sequence
Jeczmionek et al. Input reduction of convolutional neural networks with global sensitivity analysis as a data-centric approach
CN104050486B (en) Polarimetric SAR image classification method based on maps and Wishart distance
Cao Face recognition robot system based on intelligent machine vision image recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20131120