CN102446269A - Face recognition method capable of inhibiting noise and environment impact - Google Patents

Face recognition method capable of inhibiting noise and environment impact

Info

Publication number
CN102446269A
Authority
CN
China
Prior art keywords
face
image
subcharacter
current
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010105078767A
Other languages
Chinese (zh)
Other versions
CN102446269B (en)
Inventor
李威霆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MICRO ONE ELECTRONICS (KUNSHAN) Inc
MSI Computer Shenzhen Co Ltd
Original Assignee
MICRO ONE ELECTRONICS (KUNSHAN) Inc
MSI Computer Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MICRO ONE ELECTRONICS (KUNSHAN) Inc, MSI Computer Shenzhen Co Ltd filed Critical MICRO ONE ELECTRONICS (KUNSHAN) Inc
Priority to CN201010507876.7A priority Critical patent/CN102446269B/en
Publication of CN102446269A publication Critical patent/CN102446269A/en
Application granted Critical
Publication of CN102446269B publication Critical patent/CN102446269B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a face recognition method capable of inhibiting noise and environmental impact. The method is executed by a data processing device to determine whether a current face image matches a reference face image. The method comprises the following steps: first, Gaussian blur noise reduction is applied to the current face image and the reference face image; the current face image and the reference face image are then divided into a plurality of blocks to obtain sub-feature vector groups of the current face image and the reference face image respectively; next, a suitable dynamic threshold is selected according to the change of the environment state, and the difference between the sub-feature vector groups is compared against it, so that whether the current face image matches the reference face image is determined.

Description

Face recognition method capable of suppressing noise and environmental impact
Technical field
The present invention relates to face recognition methods, and particularly to a face recognition method that suppresses noise (blur noise) and environmental impact.
Background art
In the traditional ways of obtaining usage rights, for example through an access control system or when logging in to a computer system, the steps are to enter a user account and the corresponding password.
Entering the account and the corresponding password can be done entirely by hand, or automatically through an identification card, for example a contact identification card or an RFID card. Fully manual entry often suffers from leaked accounts and passwords, or from the user forgetting them, while identification cards can be stolen or illegally copied.
To avoid the foregoing problems, face recognition is now gradually being used to identify a person in order to grant specified permissions.
Face recognition is roughly divided into two stages: a face learning stage and a face recognition stage. In the face learning stage, an image of the user's face is captured and, through specific numerical operations, converted into specific digitized data that expresses its features. In the face recognition stage, an image of the face to be identified is captured and likewise converted into digitized data of the same type that expresses its features. Finally, the two sets of data are compared to determine whether their features are similar, that is, whether the face to be identified matches the user's face.
Core techniques for converting a face image into data and recognizing it include PCA (Principal Components Analysis), three-dimensional face recognition, methods based on the comparison of sub-feature vectors, and so on. Each of these methods has its own advantages and disadvantages; however, they face a common problem: the environment in which the face to be identified is captured often differs greatly from the environment of the face learning stage, or the captured image contains noise, and these environmental effects or noise prevent the face to be identified from passing face recognition. To prevent the user from regularly failing face recognition, the comparison threshold of the face recognition stage has to be lowered; but lowering the comparison threshold makes face recognition too easy to pass and lets strangers through.
Summary of the invention
In view of the above problems, and based on the comparison of sub-feature vectors, the present invention proposes a face recognition method that suppresses noise and environmental impact, in order to reduce the influence of noise or the environment on the reliability of face recognition.
The present invention proposes a face recognition method capable of suppressing noise and environmental impact, executed on a data processing device, for determining whether a current face image matches a reference face image, the method comprising the following steps:
providing a feature vector database that stores a reference sub-feature vector group of the reference face image, a reference environment state vector and a dynamic threshold table;
capturing the current face image;
obtaining a current sub-feature vector group of the current face image;
comparing each sub-feature vector in the current sub-feature vector group with the corresponding sub-feature vector in the reference sub-feature vector group, to find the sub-feature vector difference distance of each block in the current face image;
sorting the sub-feature vector difference distances from small to large;
selecting the sub-feature vector difference distances starting from the smallest values, keeping only a certain number of them, and summing the selected distances into a total gap;
obtaining a current environment state vector of the current face image;
calculating the Euclidean distance between the reference environment state vector and the current environment state vector;
according to the Euclidean distance between the reference environment state vector and the current environment state vector, looking up a corresponding dynamic threshold in the dynamic threshold table, wherein the dynamic threshold table records a plurality of dynamic thresholds and each dynamic threshold is associated with a particular range of Euclidean distances; and
determining whether the total gap exceeds the dynamic threshold, and deciding that the current face image matches the reference face image when the total gap does not exceed the dynamic threshold.
With the foregoing face recognition method that suppresses noise and environmental impact, the present invention can reduce the noise of each face image before obtaining its sub-feature vector group. Then, after the gap between the sub-feature vector groups has been obtained, the present invention further takes the change of the environment state into account and dynamically chooses the threshold against which that gap is compared, so that the dynamic threshold obtained each time reflects the change of the environment state, improving the reliability of face recognition.
Description of drawings
Fig. 1 shows a data processing device that executes the face recognition method for suppressing noise and environmental impact of the present invention.
Fig. 2 is a flowchart (1) of the face recognition method for suppressing noise and environmental impact of the present invention.
Fig. 3 is a schematic diagram of converting the reference face image into the reference sub-feature vector group.
Fig. 4 is a flowchart (2) of the face recognition method for suppressing noise and environmental impact of the present invention.
Fig. 5 is a schematic diagram of obtaining the reference environment state vector.
Fig. 6 is a flowchart (3) of the face recognition method for suppressing noise and environmental impact of the present invention.
Fig. 7 is a schematic diagram of converting the current face image into the current sub-feature vector group.
Fig. 8 is a flowchart (4) of the face recognition method for suppressing noise and environmental impact of the present invention.
Fig. 9 is a schematic diagram of the adaptive comparison.
Fig. 10 is a flowchart (5) of the face recognition method for suppressing noise and environmental impact of the present invention.
Fig. 11 is a flowchart (6) of the face recognition method for suppressing noise and environmental impact of the present invention.
Fig. 12 is a schematic diagram of calculating the Euclidean distance to obtain the dynamic threshold.
Fig. 13 is a flowchart (7) of the face recognition method for suppressing noise and environmental impact of the present invention.
Fig. 14 is a schematic diagram of multi-level sampling.
Fig. 15 is a flowchart (8) of the face recognition method for suppressing noise and environmental impact of the present invention.
Fig. 16 is a schematic diagram of multi-size sampling.
Fig. 17 is a flowchart (9) of the face recognition method for suppressing noise and environmental impact of the present invention.
The reference numeral explanation
20 data processing device
40 feature vector database
30 image capture unit
R reference face image
Rs reduced reference face image
C current face image
Cs reduced current face image
F face image
F1 first-layer image
F2 second-layer image
Embodiment
Referring to Fig. 1 and Fig. 2, the embodiment of the present invention proposes a face recognition method that suppresses noise and environmental impact. The method is executed on a data processing device 20 to determine whether a current face image C matches a reference face image R, to produce a recognition result, and to find the corresponding identity. The recognition result and its corresponding identity can be used in place of the account and password for logging in to the data processing device 20, thereby simplifying the steps for obtaining usage rights to the data processing device 20.
The data processing device 20 (for example a desktop or notebook computer) is installed with a face recognition program that executes the face recognition method for suppressing noise and environmental impact. The data processing device 20 connects to, or has built in, a feature vector database 40, and captures the current face image C or the reference face image R through an image capture unit 30.
The face recognition method of the present invention mainly comprises a face image feature vector processing procedure, which is used not only in the face feature learning stage but also in the face recognition stage.
As shown in Fig. 1, the face recognition method captures the user's current face image C or reference face image R through the image capture unit 30 and transmits it to the data processing device 20. The image capture unit 30 can be a camera, either external to or built into the data processing device 20.
Referring to Fig. 2 and Fig. 3, the steps of the face feature learning stage are described first. The face feature learning stage includes a face image feature vector processing procedure, whose purpose is to provide the feature vector database 40.
Referring to Fig. 2 and Fig. 3, first, as part of the face image feature vector processing procedure, the user aims the image capture unit 30 at the face; the reference face image R is captured through the image capture unit 30 and transmitted to the data processing device 20, as shown in step Step 110.
Referring to Fig. 2 and Fig. 3, the data processing device 20 then applies Gaussian blur noise reduction (Gaussian Blur Noise Reduction) to the reference face image R to reduce the noise in the reference face image R, as shown in step Step 120.
The Gaussian blur noise reduction is used to reduce noise and can be replaced by other noise reduction methods; alternatively, if the place where the reference face image R is captured is well lit and the captured image can be guaranteed to contain little noise, the Gaussian blur noise reduction of step Step 120 may be omitted.
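As an illustration of this noise reduction step, a minimal Python sketch is given below; the choice of OpenCV and the 5 × 5 kernel are assumptions made for illustration only, since the method does not mandate a particular library, kernel size, or even Gaussian blurring specifically.
```python
import cv2  # OpenCV: an assumed library choice, not mandated by the method


def denoise_face(gray_face):
    """Sketch of step Step 120 / Step 220: suppress blur noise with a Gaussian filter.

    gray_face is a 2-D numpy array holding a grayscale face image. The 5x5
    kernel with sigma derived automatically from the kernel size (sigmaX=0)
    is an illustrative setting only.
    """
    return cv2.GaussianBlur(gray_face, (5, 5), 0)
```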
Consult " Fig. 2 " and reach shown in " Fig. 3 ", then, data processing equipment 20 will be divided into N * N block with reference to image of face R, and each block all gives a block identification code (Block_ID), shown in step Step 130.
Consult " Fig. 2 " and reach shown in " Fig. 3 ", data processing equipment 20 is analyzed in each block, the pixel value of each pixel, and to each block carry out local binary conversion treatment (Local Binary Pattern, LBP).Data processing equipment 20 converts each block into M dimension subcharacter vector, shown in step Step 140 according to the variation of pixel value.
Consulting " Fig. 2 " reaches shown in " Fig. 3 "; This can obtain N * N sub-proper vector altogether with reference to image of face R; Therefore to combine these subcharacter vectors be one with reference to the subcharacter vector group, to be stored in characteristic vector data storehouse 40, shown in step Step 150 to data processing equipment 20.
Consult " Fig. 2 " and reach shown in " Fig. 3 ", data processing equipment 20 also should be sent to characteristic vector data storehouse 40 with reference to the subcharacter vector group, shown in step Step 160.
Aforesaid operation is in order to set up with reference to the subcharacter vector group, for the usefulness of follow-up comparison.When setting up with reference to the subcharacter vector group, data processing equipment 20 can accept to discern the input of the data of identity simultaneously, so that correlate with reference to the identification identity generation corresponding with of subcharacter vector group.
Through after the aforesaid step, can get a characteristic vector data storehouse 40, store at least onely in this characteristic vector data storehouse 40 with reference to the subcharacter vector group, this is discerned identity with reference to subcharacter vector group and and is set and interrelates.
Afterwards,, obtain with reference to the environment state vector with reference to image of face R to this, it is following to obtain step:
Consulting " Fig. 4 " reaches shown in " Fig. 5 "; Data processing equipment 20 will be divided into 4 five equilibriums with reference to image of face R earlier; Beginning counterclockwise arrangement by upper left five equilibrium is that first five equilibrium 1, second five equilibrium 2, C grade divide 3, the quartern 4 in regular turn, shown in step Step 170.
Consult " Fig. 4 " and reach shown in " Fig. 5 ", then, data processing equipment 20 calculates respectively that first five equilibrium 1, second five equilibrium 2, C grade divide 3, the average GTG value m1 of the quartern 4, m2, m3, m4, shown in step Step 181.
Consulting " Fig. 4 " reaches shown in " Fig. 5 "; Shown in step Step 182; Data processing equipment 20 is then with following rule: subtract the average GTG value of the right five equilibrium, subtract the average GTG value of below five equilibrium with the average GTG value of top five equilibrium with the average GTG value of left side five equilibrium; Obtain four GTG value difference values (m1-m4), (m2-m3), (m1-m2), (m4-m3) again; Aforesaid rule mainly is in order to calculate the difference of each average GTG value and other average GTG values, therefore to be not limited to aforementioned rule.And the quantity of five equilibrium also is not limited to four (2 * 2), also can be 3 * 3,4 * 4... etc.
Shown in step Step 190; Then; In conjunction with first five equilibrium 1, second five equilibrium 2, C grade divide 3, the average GTG value m1 of the quartern 4, m2, m3, m4; And four GTG value difference values (m1-m4), (m2-m3), (m1-m2), (m4-m3) are one with reference to the environment state vector as the numerical value of each dimension, are stored in the characteristic vector data storehouse 40.
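A minimal Python sketch of steps Step 170 to Step 190 (and of the identical steps Step 410 to Step 440 applied later to the current face image) is given below; the 2 × 2 split follows the example in the text, and the exact counterclockwise numbering of the four parts is an assumed reading of the figure.
```python
import numpy as np


def environment_state_vector(gray_face):
    """Build the 8-dimensional environment state vector from a 2 x 2 split:
    the four average grayscale values m1..m4 plus the four differences
    (m1-m4), (m2-m3), (m1-m2), (m4-m3) described above."""
    h, w = gray_face.shape
    top, bottom = gray_face[:h // 2], gray_face[h // 2:]
    # Assumed counterclockwise numbering from the upper-left part:
    # 1 = upper-left, 2 = lower-left, 3 = lower-right, 4 = upper-right.
    m1 = top[:, :w // 2].mean()
    m2 = bottom[:, :w // 2].mean()
    m3 = bottom[:, w // 2:].mean()
    m4 = top[:, w // 2:].mean()
    return np.array([m1, m2, m3, m4,
                     m1 - m4, m2 - m3, m1 - m2, m4 - m3], dtype=np.float32)
```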
The face recognition stage of the method is described next. In the face recognition stage, the face image feature vector processing procedure is likewise performed first to obtain a current sub-feature vector group, which is then compared, one by one, with the reference sub-feature vector groups in the feature vector database 40.
Referring to Fig. 6 and Fig. 7, the user aims the image capture unit 30 at the face; the current face image C is captured through the image capture unit 30 and transmitted to the data processing device 20, as shown in step Step 210.
Referring to Fig. 6 and Fig. 7, the data processing device 20 then applies Gaussian blur noise reduction to the current face image C to reduce the noise in the current face image C, as shown in step Step 220. As in step Step 120, the Gaussian blur noise reduction is used to reduce noise and can be replaced by other noise reduction methods; alternatively, if the place where the current face image C is captured is well lit and the captured image can be guaranteed to contain little noise, the Gaussian blur noise reduction of step Step 220 may be omitted.
Referring to Fig. 6 and Fig. 7, the data processing device 20 then divides the current face image C into N × N blocks, and each block is given a block identification code (Block_ID), as shown in step Step 230.
Referring to Fig. 6 and Fig. 7, the data processing device 20 analyzes the pixel value of each pixel in each block and performs local binary pattern processing on each block. According to the variation of the pixel values, the data processing device 20 converts each block of the current face image C into an M-dimensional sub-feature vector, as shown in step Step 240.
Referring to Fig. 6 and Fig. 7, the current face image C thus yields N × N sub-feature vectors in total, so the data processing device 20 combines these sub-feature vectors into a current sub-feature vector group, as shown in step Step 250.
Referring to Fig. 8 and Fig. 9, the data processing device 20 compares each sub-feature vector in the current sub-feature vector group of the current face image C with the corresponding sub-feature vector in the reference sub-feature vector group of the reference face image R, and finds the sub-feature vector difference distance of each block in the current face image C, as shown in step Step 300.
Referring to Fig. 10, the details of the adaptive comparison of step Step 300 are as follows.
First, the data processing device 20 loads a reference sub-feature vector group from the feature vector database 40, as shown in step Step 310.
The sub-feature vectors in the reference sub-feature vector group and the current sub-feature vector group that have the same block identification code (Block_ID), and therefore correspond to each other, are compared, and N × N sub-feature vector difference distances are obtained, as shown in step Step 320.
The data processing device 20 sorts these sub-feature vector difference distances from small to large, selects them starting from the smallest values, and keeps only a certain number of them, for example only the first 65%, discarding the remaining larger values, as shown in step Step 330.
As shown in step Step 340, the data processing device 20 sums the selected sub-feature vector difference distances into a total gap.
The reason for discarding the larger sub-feature vector difference distances in step Step 330 is that, when the local binary pattern processing (LBP) of steps Step 140 and Step 240 is executed, the sub-feature vector of every block is in fact affected by noise. When the larger sub-feature vector difference distances are discarded, the discarded blocks are in fact the ones most affected by noise, particularly granular noise in shadows, bright areas and smooth areas of the face, and the probability that important facial features are discarded is low.
Conversely, under normal lighting, although there is little noise, blocks that lack distinctive features sometimes stand out the most, for example a smooth forehead or cheeks showing granular noise under strong light. Such blocks not only lack distinctive features, but the abundant light also distorts their sub-feature vectors and makes them prominent in the combined vector group, so that their corresponding sub-feature vector difference distances become larger. Therefore, under normal lighting, discarding the larger sub-feature vector difference distances does not discard important facial features; on the contrary, it increases the weight of the important facial features.
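Steps Step 310 to Step 340 can be sketched as follows; the per-block Euclidean distance between sub-feature vectors and the dictionary representation of a vector group are assumptions carried over from the earlier sketch, while the 65% retention ratio is the example value given in the text.
```python
import numpy as np


def total_gap(current_group, reference_group, keep_ratio=0.65):
    """Sketch of steps Step 310-340: compare the sub-feature vectors with the
    same Block_ID, sort the per-block difference distances from small to large,
    keep only the first keep_ratio of them, and sum the kept distances."""
    distances = []
    for block_id, cur_vec in current_group.items():
        ref_vec = reference_group[block_id]
        distances.append(float(np.linalg.norm(cur_vec - ref_vec)))
    distances.sort()                                  # small to large
    kept = distances[:int(len(distances) * keep_ratio)]
    return sum(kept)
```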
Next, the data processing device 20 specifies a dynamic threshold and performs a dynamic threshold check, as shown in step Step 400. The dynamic threshold check determines whether the total gap of the sub-feature vector difference distances between the current face image C and the reference face image R in the feature vector database 40 exceeds the dynamic threshold, that is, whether the difference between the current sub-feature vector group and the reference sub-feature vector group exceeds the dynamic threshold.
Referring to Fig. 11 to Fig. 12, the details of step Step 400 are described below.
Referring to Fig. 11 to Fig. 12, the data processing device 20 first divides the current face image C into four equal parts, arranged counterclockwise starting from the upper-left part as a first part 1, a second part 2, a third part 3 and a fourth part 4, as shown in step Step 410.
Referring to Fig. 11 to Fig. 12, the data processing device 20 calculates the average grayscale values m1, m2, m3, m4 of the first part 1, the second part 2, the third part 3 and the fourth part 4 respectively, as shown in step Step 420.
Referring to Fig. 11 to Fig. 12, as shown in step Step 430, the data processing device 20 then calculates the differences (m1-m4), (m2-m3), (m1-m2), (m4-m3) between each average grayscale value and the other average grayscale values.
As shown in step Step 440, the average grayscale values m1, m2, m3, m4 of the first part 1, the second part 2, the third part 3 and the fourth part 4, together with the four grayscale difference values (m1-m4), (m2-m3), (m1-m2), (m4-m3), are combined, as the values of the individual dimensions, into a current environment state vector.
In fact, the operations performed in steps Step 410 to Step 440 are identical to those of steps Step 170 to Step 190; the only difference is whether they are applied to the reference face image R or the current face image C.
As shown in step Step 450, the data processing device 20 calculates the Euclidean distance between the reference environment state vector and the current environment state vector.
As shown in step Step 460, the data processing device 20 loads a dynamic threshold table from the feature vector database 40. The dynamic threshold table records a plurality of dynamic thresholds, and each dynamic threshold is associated with a particular range of Euclidean distances. The dynamic threshold table can be established through tests under different environments, associating each dynamic threshold with a particular range of Euclidean distances one by one.
As shown in step Step 470, the data processing device 20 obtains the corresponding dynamic threshold according to the Euclidean distance between the reference environment state vector and the current environment state vector.
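The threshold selection of steps Step 450 to Step 470, together with the decision of step Step 500 described next, can be sketched as follows; the concrete table entries are made-up placeholder values, since the text only states that the dynamic threshold table is built from tests under different environments.
```python
import numpy as np

# Example dynamic threshold table: (upper bound of Euclidean distance, threshold).
# The numbers are illustrative placeholders, not values taken from the patent.
DYNAMIC_THRESHOLD_TABLE = [
    (10.0, 120.0),            # very similar environment -> strict threshold
    (40.0, 160.0),
    (float("inf"), 200.0),    # very different environment -> lenient threshold
]


def pick_dynamic_threshold(ref_env_vec, cur_env_vec, table=DYNAMIC_THRESHOLD_TABLE):
    """Sketch of steps Step 450-470: map the Euclidean distance between the two
    environment state vectors to the dynamic threshold of the matching range."""
    distance = float(np.linalg.norm(ref_env_vec - cur_env_vec))
    for upper_bound, threshold in table:
        if distance <= upper_bound:
            return threshold
    return table[-1][1]


def faces_match(gap, dynamic_threshold):
    """Sketch of step Step 500: the faces match when the total gap does not
    exceed the dynamic threshold."""
    return gap <= dynamic_threshold
```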
Consult shown in " Fig. 8 ",, differentiate total gap with data processing equipment 20 and whether surpass dynamic threshold like step Step 500.
If current subcharacter vector group with surpass this dynamic threshold with reference to total gap of subcharacter vector group, judge that then current image of face C does not conform to this reference face image R, and belong to a stranger, shown in step Step 510.
If current subcharacter vector group with reference to the gap of subcharacter vector group, surpass this dynamic threshold, then determine current image of face C to be consistent, and belong to the user, shown in step Step 520 with this reference face image R.
Above-mentioned recognition result and with reference to the pairing identification identity of subcharacter vector group, what can be used for replacing this data processing equipment 20 logins account number and password, thus simplification obtains the step of these data processing equipment 20 rights of using.
Dynamic threshold can reflect the environmental impact when obtaining current image of face C and obtaining with reference to image of face R; With this environmental impact adjustment comparison threshold value; Thereby avoid the identification threshold of recognition algorithms too harsh, also still can reduce the probability of stranger through face recognition.
In order to promote the correctness of face recognition, the step of recognition algorithms of the present invention can be carried out following two kinds of corrections.
One of which is carried out sampling at many levels, increase current subcharacter vector group with reference to the vector of the subcharacter in subcharacter vector group number, to increase effective comparison sample number.
Consult " Figure 13 " and reach shown in " Figure 14 ", carry out the step of multi-level sampling, be in order to step of replacing Step 130 to step Step 140 or step Step 230 to step Step 240.Below no longer the district only is referred to as with image of face F at a distance from current image of face C and with reference to image of face R.
Consulting " Figure 13 " reaches shown in " Figure 14 "; Obtain image of face F and reduce noise handle after (to step Step 120 like step Step 110; Or step Step 210 is to step Step220); Data processing equipment 20 is divided into ground floor image F1 and second layer image F2 with image of face F, and wherein ground floor image F1 is original image of face F, and second layer image F2 is the regional area among the original image of face F; Particularly face's central feature is significantly regional, shown in step Step 610.
Consult " Figure 13 " and reach shown in " Figure 14 ", data processing equipment 20 is divided into a plurality of blocks respectively with ground floor image F1 and second layer image F2; For example ground floor image F1 is divided into N * N block, and second layer image F2 is divided into L * L block.Likewise, each all blocks all gives a block identification code (Block_ID), shown in step Step 620.
Consult " Figure 13 " and reach shown in " Figure 14 ", data processing equipment 20 is analyzed in each block, the pixel value of each pixel, and each block carried out local binary conversion treatment.Data processing equipment 20 converts each block into M dimension subcharacter vector, shown in step Step 630 according to the variation of pixel value.
Consult " Figure 13 " and reach shown in " Figure 14 ", this can obtain N * N altogether with reference to image of face R and add L * L sub-proper vector, and this L * L is individual all obviously to be located from face feature, and can increase the weight of face feature.It is one with reference to the subcharacter vector group, to be stored in characteristic vector data storehouse 40, shown in step Step 640 that data processing equipment 20 combines these subcharacter vectors.And this image of face F can obtain the subcharacter vector combination that N * N adds L * L sub-proper vector altogether, with subcharacter vector group as a reference.
Thus, subcharacter vector can be original N * N, increase L * L again, and this L * L located obviously from face feature all, and can increase the weight of face feature.
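A short Python sketch of step Step 610 is given below; the central crop covering the middle half of the image in both dimensions is an assumed definition of the region where the facial features are prominent, and each layer would then be run through the same block division and LBP conversion sketched earlier to obtain the N × N plus L × L sub-feature vectors.
```python
def split_layers(gray_face):
    """Sketch of step Step 610: layer 1 is the original face image, layer 2 a
    local region around the centre of the face (assumed here to be the middle
    half of the image in both dimensions). gray_face is a 2-D numpy array."""
    h, w = gray_face.shape
    layer1 = gray_face
    layer2 = gray_face[h // 4:3 * h // 4, w // 4:3 * w // 4]
    return layer1, layer2
```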
Second, multi-size sampling is performed to revise the difference distance between the current face image C and the reference face image R in the feature vector database 40, so as to reduce the influence of noise.
Referring to Fig. 15 and Fig. 16, the steps of multi-size sampling replace steps Step 130 to Step 140 or steps Step 230 to Step 240.
Referring to Fig. 15 and Fig. 16, after the reference face image R or the current face image C has been obtained (as in steps Step 110 to Step 120, or steps Step 210 to Step 220), the data processing device 20 further changes the resolution of the reference face image R and the current face image C to obtain a reduced reference face image Rs and a reduced current face image Cs respectively, as shown in step Step 710. In the process of generating the reduced face images, part of the noise is eliminated, but the weight of the facial features is also lowered; therefore the original face images are still needed in the subsequent steps.
Referring to Fig. 15, the data processing device 20 then divides the reference face image R, the current face image C, the reduced reference face image Rs and the reduced current face image Cs into a plurality of blocks respectively, and every block is given a block identification code (Block_ID), as shown in step Step 720.
Referring to Fig. 15, the data processing device 20 analyzes the pixel value of each pixel in each block and performs local binary pattern processing on each block. According to the variation of the pixel values, the data processing device 20 converts each block into an M-dimensional sub-feature vector, as shown in step Step 730.
Referring to Fig. 15 and Fig. 16, finally, the data processing device 20 obtains the sub-feature vector groups of the reduced reference face image Rs and the reduced current face image Cs respectively, in order to find the sub-feature vector difference distance of each block in the sub-feature vector group of the reduced current face image Cs, as shown in step Step 740.
Referring to Fig. 16 and Fig. 17, the original step Step 300, in which the current sub-feature vector group of the current face image C is compared adaptively with each reference sub-feature vector group in the feature vector database 40, is then split into two parallel branches, namely steps Step 300' and Step 300''.
Referring to Fig. 16 and Fig. 17, Step 300' still finds the sub-feature vector difference distance of each block in the current face image C; that is, Step 310' to Step 340' are identical to Step 310 to Step 340.
Referring to Fig. 16 and Fig. 17, Step 300'' finds the sub-feature vector difference distance of each block in the sub-feature vector group of the reduced current face image Cs; that is, Step 310'' to Step 340'' are identical to Step 310 to Step 340, except that the objects of comparison are the reduced current face image Cs and the reduced reference face image Rs.
Finally, the sub-feature vector difference distances of the blocks in the sub-feature vector group of the reduced current face image Cs are added to the total gap obtained from the current face image C, and the resulting total gap is used in the comparison with the dynamic threshold in step Step 500.
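The two-branch comparison and the combined total gap can be sketched as follows; the half-resolution scale factor, the use of OpenCV for resizing, and the reuse of the sub_feature_vector_group and total_gap helpers from the earlier sketches are assumptions made for illustration.
```python
import cv2  # assumed library choice, as before


def multi_size_total_gap(cur_face, ref_face, feature_fn, gap_fn, scale=0.5):
    """Sketch of steps Step 710-740 and branches Step 300'/Step 300'': compute
    the total gap at the original size and at a reduced size, then add the two
    gaps before the comparison with the dynamic threshold in step Step 500.

    feature_fn: e.g. the sub_feature_vector_group sketch above.
    gap_fn:     e.g. the total_gap sketch above.
    """
    cur_small = cv2.resize(cur_face, None, fx=scale, fy=scale)
    ref_small = cv2.resize(ref_face, None, fx=scale, fy=scale)
    gap_full = gap_fn(feature_fn(cur_face), feature_fn(ref_face))
    gap_small = gap_fn(feature_fn(cur_small), feature_fn(ref_small))
    return gap_full + gap_small
```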
The face recognition method of the present invention mainly takes the change of the environment state into account and changes the dynamic threshold used in each comparison, so that the dynamic threshold obtained each time reflects the change of the environment state, improving the reliability of face recognition.

Claims (9)

1. A face recognition method capable of suppressing noise and environmental impact, executed on a data processing device, for determining whether a current face image matches a reference face image, the method comprising the following steps:
providing a feature vector database that stores a reference sub-feature vector group of the reference face image, a reference environment state vector and a dynamic threshold table;
capturing the current face image;
obtaining a current sub-feature vector group of the current face image;
comparing each sub-feature vector in the current sub-feature vector group with the corresponding sub-feature vector in the reference sub-feature vector group, to find the sub-feature vector difference distance of each block in the current face image;
sorting the sub-feature vector difference distances from small to large;
selecting the sub-feature vector difference distances starting from the smallest values, keeping only a certain number of them, and summing the selected distances into a total gap;
obtaining a current environment state vector of the current face image;
calculating the Euclidean distance between the reference environment state vector and the current environment state vector;
according to the Euclidean distance between the reference environment state vector and the current environment state vector, looking up a corresponding dynamic threshold in the dynamic threshold table, wherein the dynamic threshold table records a plurality of dynamic thresholds and each dynamic threshold is associated with a particular range of Euclidean distances; and
determining whether the total gap exceeds the dynamic threshold, and deciding that the current face image matches the reference face image when the total gap does not exceed the dynamic threshold.
2. The face recognition method for suppressing noise and environmental impact according to claim 1, wherein the step of obtaining the reference sub-feature vector group or the current sub-feature vector group comprises:
capturing a reference face image or a current face image through an image capture unit;
transmitting the reference face image or the current face image to the data processing device;
performing noise reduction on the reference face image or the current face image with the data processing device;
dividing the reference face image or the current face image into a plurality of blocks with the data processing device; and
performing local binary pattern processing on each block with the data processing device, converting each block into a sub-feature vector, and combining the sub-feature vectors into a sub-feature vector group.
3. The face recognition method for suppressing noise and environmental impact according to claim 1, further comprising, after capturing the reference face image or the current face image, the following steps:
dividing the reference face image or the current face image into a first-layer image and a second-layer image with the data processing device, wherein the first-layer image is the reference face image or the current face image and the second-layer image is a local region of the reference face image or the current face image; and
dividing the first-layer image and the second-layer image into a plurality of blocks respectively.
4. The face recognition method for suppressing noise and environmental impact according to claim 2, wherein the noise reduction step is Gaussian blur noise reduction.
5. The face recognition method for suppressing noise and environmental impact according to claim 2, further comprising, after capturing the reference face image or the current face image:
changing the resolution of the reference face image and the current face image to obtain a reduced reference face image and a reduced current face image respectively;
obtaining the sub-feature vector groups of the reduced reference face image and the reduced current face image respectively, in order to find the sub-feature vector difference distance of each block in the sub-feature vector group of the reduced current face image; and
adding the sub-feature vector difference distances of the blocks in the sub-feature vector group of the reduced current face image to the total gap obtained from the current face image.
6. The face recognition method for suppressing noise and environmental impact according to claim 1, wherein the reference sub-feature vector group is associated with a corresponding identity.
7. The face recognition method for suppressing noise and environmental impact according to claim 1, wherein the step of obtaining the reference environment state vector or the current environment state vector comprises:
dividing the reference face image or the current face image into a plurality of equal parts;
calculating the average grayscale value of each equal part respectively;
calculating the difference between each average grayscale value and the other average grayscale values; and
combining the average grayscale values and the four grayscale difference values, as the values of the individual dimensions, into the environment state vector.
8. The face recognition method for suppressing noise and environmental impact according to claim 1, wherein the step of finding the sub-feature vector difference distances between the current face image and the reference face image comprises:
loading the reference sub-feature vector group from the feature vector database with the data processing device; and
comparing the sub-feature vectors that belong to the same block in the reference sub-feature vector group and the current sub-feature vector group to obtain the sub-feature vector difference distances.
9. The face recognition method for suppressing noise and environmental impact according to claim 1, wherein the selected sub-feature vector difference distances are the first 65% of the sub-feature vector difference distances sorted from small to large.
CN201010507876.7A 2010-10-15 2010-10-15 Face recognition method capable of suppressing noise and environmental impact Active CN102446269B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010507876.7A CN102446269B (en) 2010-10-15 2010-10-15 Face recognition method capable of suppressing noise and environmental impact


Publications (2)

Publication Number Publication Date
CN102446269A true CN102446269A (en) 2012-05-09
CN102446269B CN102446269B (en) 2015-10-14

Family

ID=46008758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010507876.7A Active CN102446269B (en) Face recognition method capable of suppressing noise and environmental impact

Country Status (1)

Country Link
CN (1) CN102446269B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060204058A1 (en) * 2004-12-07 2006-09-14 Kim Do-Hyung User recognition system and method thereof
CN101021899A (en) * 2007-03-16 2007-08-22 南京搜拍信息技术有限公司 Interactive human face identificiating system and method of comprehensive utilizing human face and humanbody auxiliary information
CN101414348A (en) * 2007-10-19 2009-04-22 三星电子株式会社 Method and system for identifying human face in multiple angles
CN101281598A (en) * 2008-05-23 2008-10-08 清华大学 Method for recognizing human face based on amalgamation of multicomponent and multiple characteristics

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701763A (en) * 2015-12-30 2016-06-22 青岛海信移动通信技术股份有限公司 Method and device for adjusting face image
CN105701763B (en) * 2015-12-30 2019-05-21 青岛海信移动通信技术股份有限公司 Method and device for adjusting a face image
CN108241836A (en) * 2016-12-23 2018-07-03 同方威视技术股份有限公司 For the method and device of safety check

Also Published As

Publication number Publication date
CN102446269B (en) 2015-10-14

Similar Documents

Publication Publication Date Title
TWI453680B (en) Face recognition method eliminating affection of blur noise and environmental variations
George et al. Deep pixel-wise binary supervision for face presentation attack detection
CN110188641B (en) Image recognition and neural network model training method, device and system
KR102486699B1 (en) Method and apparatus for recognizing and verifying image, and method and apparatus for learning image recognizing and verifying
JP6089577B2 (en) Image processing apparatus, image processing method, and image processing program
CN107545277B (en) Model training, identity verification method and device, storage medium and computer equipment
TW201627917A (en) Method and device for face in-vivo detection
CN109740589B (en) Asynchronous object ROI detection method and system in video mode
CN105450411A (en) Method, device and system for utilizing card characteristics to perform identity verification
CN104966079A (en) Distinguishing live faces from flat surfaces
CN102902959A (en) Face recognition method and system for storing identification photo based on second-generation identity card
KR101412727B1 (en) Apparatus and methdo for identifying face
US20160189048A1 (en) Data analysis system and method
CN105654505B (en) A kind of collaboration track algorithm and system based on super-pixel
EP2701096A2 (en) Image processing device and image processing method
Selwal et al. Template security analysis of multimodal biometric frameworks based on fingerprint and hand geometry
JP2018026115A (en) Flame detection method, flame detector, and electronic apparatus
Chang et al. Reversible data hiding for color images based on adaptive three-dimensional histogram modification
CN102446269A (en) Face recognition method capable of inhibiting noise and environment impact
US10402554B2 (en) Technologies for depth-based user authentication
CN111291780A (en) Cross-domain network training and image recognition method
CN111222558A (en) Image processing method and storage medium
Wan et al. Face detection method based on skin color and adaboost algorithm
KR102257883B1 (en) Face Recognition Apparatus and Method
KR20110133271A (en) Method of detecting shape of iris

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant