CN102004905A - Human face authentication method and device - Google Patents


Info

Publication number
CN102004905A
Authority
CN
China
Prior art keywords
images
face
sample
weak
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010549760
Other languages
Chinese (zh)
Other versions
CN102004905B (en)
Inventor
邓亚峰 (Deng Yafeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JIANGSU BOYUE INTERNET OF THINGS TECHNOLOGY Co Ltd
Original Assignee
Wuxi Vimicro Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Vimicro Corp filed Critical Wuxi Vimicro Corp
Priority to CN201010549760XA priority Critical patent/CN102004905B/en
Publication of CN102004905A publication Critical patent/CN102004905A/en
Application granted granted Critical
Publication of CN102004905B publication Critical patent/CN102004905B/en
Legal status: Expired - Fee Related

Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a human face authentication method, which comprises the following steps: extracting human face characteristics from a plurality of frame images; judging whether the human face characteristics in the plurality of frame images accord with a preset human face model or not, and when the number of images according with the preset human face model exceeds a first threshold value, considering that user authentication corresponding to the human face characteristics in the plurality of frame images is successful; when the number of images according with the preset human face model exceeds a second threshold value, extracting all human face characteristics in the plurality of frame images as sample characteristics; and performing incremental training on the preset human face model by utilizing the sample characteristics, wherein the second threshold value is not less than the first threshold value. In this method, as the user keeps using the system, sample images that better match the user's current appearance are used to train the human face model, so that the authentication capability of the human face model is enhanced and self-maintenance is realized.

Description

Face authentication method and device
[ technical field ]
The invention relates to the technical field of image processing, in particular to a face authentication method and device capable of incremental learning.
[ background of the invention ]
In the field of image processing and computer vision technology, the face authentication technology is an important and mature technology. The existing face authentication technology usually adopts a training mode-based method to obtain a face model, specifically, a face region is cut from a training image and features are extracted as samples, a classifier capable of distinguishing the samples is obtained by training through a certain training method, and then the classifier is used for face authentication.
However, the face model in this method is obtained by training only on an initial batch of sample images. The authentication success rate of the face model decreases over time with factors such as changes in ambient light, clothing and appearance. For example, a company purchases an attendance machine based on face authentication technology in summer; on the day of purchase, initial training images are collected for the female employee Zhang San, and the attendance machine obtains a face model for Zhang San by using these initial training images. As time goes by, however, by winter Zhang San has changed to a new hairstyle and wears more clothes, and the authentication rate of the attendance machine gradually decreases. At this time, the traditional approach is for the attendance machine to acquire training images for Zhang San again and retrain the model; because this method requires additional manual operation, it increases the maintenance cost of the system and results in a poor user experience.
Therefore, a new technical solution is needed to solve the above-mentioned drawbacks.
[ summary of the invention ]
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the invention of this application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The invention aims to provide a face authentication method, which realizes self-maintenance capability and training capability of a face model by using an incremental learning method.
The invention also aims to provide a face authentication device, which realizes the self-maintenance of the system by using the incremental learning method.
In order to achieve the object of the present invention, according to an aspect of the present invention, there is provided a face authentication method, the method including: extracting human face features from a plurality of frame images; judging whether the face features in the plurality of frames of images accord with a preset face model or not, and when the number of the images which accord with the preset face model exceeds a first threshold value, considering that the user authentication corresponding to the face features in the plurality of frames of images is successful; when the number of the images which accord with the preset human face model exceeds a second threshold value, extracting all human face features in the plurality of frames of images as sample features; and performing incremental training on the preset face model by using the sample characteristics, wherein the second threshold is not less than the first threshold.
Further, the method further comprises: collecting continuous images of the same user; carrying out face detection and tracking on the continuous images; and selecting a plurality of frame images of which the rotation angles of the human faces do not exceed a preset error range from the continuous images of the detected human face area.
Further, the selecting, from the continuous images in which the face regions are detected, a number of frames of images in which the rotation angles of the faces do not exceed a predetermined error range includes:
extracting feature points of eyes and a mouth from a face region of the continuous image;
calculating the face rotation angle theta through the feature points, wherein the calculation formula is as follows:
$$\theta = \arctan\left[\frac{(b-a)\sin\alpha}{(b+a)(1-\cos\alpha)}\right]$$
wherein a is the horizontal distance from the right-eye feature point to the mouth feature point, b is the horizontal distance from the left-eye feature point to the mouth feature point, and α is a fixed value between 20 and 30 degrees;
selecting a plurality of frames of images of which the face rotation angle theta does not exceed a preset error range.
Further, extracting all the face features in the plurality of frame images as sample features comprises: and extracting the face features which accord with the preset face model from the plurality of frames of images as positive samples, and extracting the face features which do not accord with the preset face model from the plurality of frames of images as negative samples.
Further, a weak classifier library is constructed from the positive samples and the negative samples and incremental training is performed as follows:
initializing all selectors, wherein each selector comprises a selected weak classifier and a weak classifier weight; initializing, for every selector, the correctly-classified sample weight sums λ^c_{n,m} and the wrongly-classified sample weight sums λ^w_{n,m} of the corresponding weak classifiers;
for a current sample, let the sample label be l; if l = 1 the sample is a positive sample, and if l = -1 it is a negative sample; the sample weight w is set to 1;
updating the M weak classifiers constructed from the online weak features;
for the N selectors, updating the weak classifier index j and the weak classifier weight α_n of each selector.
Further, the weak features are the first U-dimensional Gabor features extracted from the positive samples and the negative samples, and for the first U-dimensional Gabor features of different scales and different positions, a weak classifier is constructed in a nearest-neighbor manner, the weak classifier being of the form:
$$h_{j}^{\mathrm{weak}}(x) = \mathrm{sign}\bigl(D(f_{j}(x),\, c_{j}^{p}) - D(f_{j}(x),\, c_{j}^{n})\bigr),$$
wherein c_j^p is the j-th feature center of the positive samples, c_j^n is the j-th feature center of the negative samples, and f_j(x) is the current feature.
Further, the step of updating the M weak classifiers constructed from the online weak features is to update the weak-feature means c_j^p and c_j^n online in a Kalman filtering manner.
Further, for the N selectors, updating the weak classifier index j and the weak classifier weight α_n of each selector comprises the following steps:
obtaining the authentication result flags Hyp(m) of the M weak classifiers for the sample, where the flag is 1 if the authentication is correct and 0 otherwise;
setting, for each weak classifier, a usage flag bUsed_m that marks whether the weak classifier has already been selected by some selector: 1 if it has been used, 0 if it is unused;
for all N selectors, the following process updates are performed:
for all M weak classifiers, according to each weak classifier's authentication result on the sample: if Hyp_m is 1, then λ^c_{n,m} is increased by the sample weight w, otherwise λ^w_{n,m} is increased by w; if the current weak classifier is already in use (bUsed_m = 1), it is skipped; the authentication error rate e_{n,m} is calculated for all unused weak classifiers, and the weak classifier with the smallest error rate is selected for the current selector, i.e. j = argmin_m(e_{n,m}), while the weak classifier weight α_n is computed; the weight w of the sample is then updated;
and the T weak classifiers with the worst authentication performance are replaced.
Further, the authentication error rate e_{n,m} satisfies:
$$e_{n,m} = \frac{\lambda^{w}_{n,m}}{\lambda^{w}_{n,m} + \lambda^{c}_{n,m}}$$
the weak classifier weight α_n satisfies:
$$\alpha_{n} = \frac{1}{2}\ln\frac{1 - e_{n}}{e_{n}}$$
and the weight w of the sample satisfies: if Hyp(j) is 1,
$$w \leftarrow w \cdot \frac{1}{2(1 - e_{n})},$$
otherwise,
$$w \leftarrow w \cdot \frac{1}{2\,e_{n}}.$$
according to another aspect of the present invention, there is provided a face authentication system, the system comprising: the characteristic extraction module is used for extracting human face characteristics from a plurality of frame images; the face authentication module is used for judging whether the face features in the plurality of frames of images accord with a preset face model or not, and when the number of the images which accord with the preset face model exceeds a first threshold value, the user authentication corresponding to the face features in the plurality of frames of images is considered to be successful; the characteristic adding module is used for extracting all the human face characteristics in the plurality of frames of images as sample characteristics when the number of the images which accord with the preset human face model exceeds a second threshold value; and the increment learning module is used for carrying out increment training on the preset human face model by utilizing the sample characteristics, wherein the second threshold value is not less than the first threshold value.
Further, the system further comprises: the image acquisition module acquires continuous images of the same user; the face tracking and positioning module is used for carrying out face detection, tracking and positioning on the continuous images; and the image selection module is used for selecting a plurality of frames of images of which the rotation angles of the human faces do not exceed a preset error range from the continuous images of the detected human face area.
Further, the image selection module comprises a feature point extraction unit, a rotation angle calculation unit and an image selection unit, wherein the feature point extraction unit extracts feature points of eyes and mouths from a face region of a continuous image; the rotation angle calculating unit calculates the face rotation angle theta through the feature points, and the calculation formula is as follows:
$$\theta = \arctan\left[\frac{(b-a)\sin\alpha}{(b+a)(1-\cos\alpha)}\right]$$
wherein a is the horizontal distance between the right-eye feature point and the mouth feature point, b is the horizontal distance between the left-eye feature point and the mouth feature point, and α is a value between 20 and 30 degrees; the image selection unit selects a plurality of frame images whose face rotation angle θ does not exceed a predetermined error range.
Compared with the prior art, the invention uses the initial training images to obtain the face model, and then, while the face model is used for user authentication, uses images whose authentication results have higher confidence as samples to incrementally learn and train the face model.
[ description of the drawings ]
The present invention will be more readily understood by reference to the following detailed description when considered in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
FIG. 1 is a flow chart of a method of face authentication in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the imaging of a human face in one embodiment of the invention;
FIG. 3 is a block diagram of a face authentication system according to an embodiment of the present invention; and
fig. 4 is a block diagram of an image selection module according to an embodiment of the present invention.
[ detailed description of the embodiments ]
The detailed description of the invention generally describes procedures, steps, logic blocks, processes, or other symbolic representations that directly or indirectly simulate the operation of the present invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention; however, the invention may be practiced without these specific details. Those skilled in the art can use the descriptions and illustrations herein to effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, the order of blocks in a method, flowchart, or functional block diagram representing one or more embodiments does not inherently indicate any particular order, nor is it intended to be limiting.
The face authentication method and apparatus of the present invention can be implemented as a module, a system or a part of a system by software, hardware or a combination of both. The face authentication method and the face authentication device continuously select samples to continue training the face model in the use process of the user, and achieve good self-maintenance and authentication accuracy.
Referring to fig. 1, a flow chart of a method 100 of face authentication in an embodiment of the invention is shown. The face authentication method 100 includes:
step 101, collecting continuous images of the same user;
in this step, successive images of the same user are typically acquired with a camera, such as with a high-definition camera with a resolution of 1280 × 960 at a rate of 30 frames per second.
step 102, carrying out face detection and tracking on the continuous images;
the face detection and tracking of the continuous images acquired in step 101 are performed by a method described in the present inventor's chinese patent application No. 200510135668.8, namely, a method and a system for real-time detection and continuous tracking of a face in a video sequence.
step 103, extracting human face features from a plurality of frame images;
in this embodiment, the face region may be segmented according to the standard face model and the feature point positions in the current face region, where the feature points refer to two or more of an eye feature point, a mouth feature point, a nose feature point, and a chin feature point in the face region. For the extraction of the feature points, there are many mature techniques in the prior art, for example, the method described in the inventor's chinese patent application No. 200710177541.1, "a method and apparatus for locating feature points of an image" can be used. When the feature points are obtained, the human face region in the image can be obtained by using the 'three-stop five-eye' criterion of human face organ distribution. Further, corresponding Gabor features in the face region may be extracted, and in order to increase the speed of the authentication process, the adaboost algorithm may be adopted to select the Gabor features of different scales and different directions, and then the front M-dimensional Gabor feature most effective for authentication is selected from the Gabor features.
step 104, judging whether the face features in the plurality of frames of images accord with a preset face model or not, and when the number of the images which accord with the preset face model exceeds a first threshold value, considering that the user authentication corresponding to the face features in the plurality of frames of images is successful;
It is judged whether the first M-dimensional Gabor features accord with the trained preset face model. Because a plurality of frame images of the user are acquired, the final output result is produced from several frames, i.e. the per-frame authentication results vote on the final output. Suppose the face features in N frames of face images are authenticated and the output results are O_n, n = 1, 2, 3, ...; when O_n is 1, the frame image accords with the preset face model, and when O_n is 0, it does not. It is then judged whether the number of images according with the model among the N frames reaches the first threshold value; if so, the user is considered to have passed the authentication, and if not, the user is considered not to have passed.
step 105, when the number of the images which accord with the preset human face model exceeds a second threshold value, extracting all human face features in the plurality of frames of images as sample features;
After the user passes the authentication, it is further judged whether the number of images according with the preset face model exceeds the second threshold value. If so, the face features in the N frames of images are taken as sample features, wherein the face features whose output O_n is 1 are taken as positive samples and the face features whose output O_n is 0 are taken as negative samples; if not, the face features in the N frames of images are not taken as sample features. Since it is desirable to use face features whose authentication results have high confidence as sample features, the second threshold value is usually larger than the first threshold value, but may also be equal to it.
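A non-authoritative sketch of the two-threshold logic of steps 104 and 105 follows; the function and variable names are illustrative and not from the patent:

    def authenticate_and_collect(frame_outputs, frame_features,
                                 first_threshold, second_threshold):
        """frame_outputs: per-frame results O_n (1 = accords with the model, 0 = does not).
        frame_features: per-frame face feature vectors.
        Returns (authenticated, positive_samples, negative_samples)."""
        assert second_threshold >= first_threshold, \
            "second threshold must not be less than the first"
        conforming = sum(frame_outputs)
        authenticated = conforming > first_threshold
        positives, negatives = [], []
        # harvest training samples only when confidence is high enough
        if authenticated and conforming > second_threshold:
            for o_n, feat in zip(frame_outputs, frame_features):
                (positives if o_n == 1 else negatives).append(feat)
        return authenticated, positives, negatives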
step 106, performing incremental training on the preset face model by using the sample characteristics.
In this step, an Adaptive Boosting (AdaBoost) algorithm is mainly used for training the classifier, i.e. the face model. For example, the paper "Real-Time Tracking via On-line Boosting" by Grabner Helmut, Grabner Michael and Bischof Horst, Proceedings of the British Machine Vision Conference (BMVC'06), vol. 1, pages 47-56, 2006, proposes an improved method. Specifically, the incremental learning training method using the AdaBoost algorithm provided by the invention is as follows:
firstly, the first U-dimensional Gabor features of the positive samples and the negative samples from step 105 are extracted as weak features, wherein U is less than M, and an online weak classifier library is constructed;
next, training a strong classifier including a plurality of selectors sharing a weak classifier library by using the following method:
(1) initializing all selectors, wherein the selectors comprise selected weak classifiers and weak classifier weights;
(2) initializing, for every selector, the correctly-classified sample weight sums λ^c_{n,m} and the wrongly-classified sample weight sums λ^w_{n,m} of the corresponding weak classifiers;
(3) for a current sample, let the sample label be l; if l = 1 the sample is a positive sample, and if l = -1 it is a negative sample; the sample weight w is set to 1;
(4) updating the M weak classifiers constructed from the online weak features, where the weak classifier updating algorithm is detailed below;
(5) for the N selectors, updating the weak classifier index j and the weak classifier weight α_n of each selector; the specific steps are as follows:
obtaining the authentication result flags Hyp(m) of the M weak classifiers for the sample, where the flag is 1 if the authentication is correct and 0 otherwise;
setting, for each weak classifier, a usage flag bUsed_m that marks whether the weak classifier has already been selected by some selector: 1 if it has been used, 0 if it is unused;
for all N selectors, the following process updates are performed:
for all M weak classifiers, according to each weak classifier's authentication result on the sample: if Hyp_m is 1, then
$$\lambda^{c}_{n,m} \leftarrow \lambda^{c}_{n,m} + w$$
otherwise,
$$\lambda^{w}_{n,m} \leftarrow \lambda^{w}_{n,m} + w;$$
if the current weak classifier is already in use (bUsed_m = 1), it is skipped, and the following is performed for all unused weak classifiers:
calculating the authentication error rate
$$e_{n,m} = \frac{\lambda^{w}_{n,m}}{\lambda^{w}_{n,m} + \lambda^{c}_{n,m}}$$
and selecting the weak classifier with the smallest error rate e_n for the current selector, i.e. taking j = argmin_m(e_{n,m}), while computing the weak classifier weight
$$\alpha_{n} = \frac{1}{2}\ln\frac{1 - e_{n}}{e_{n}};$$
updating the weight of the sample: if Hyp(j) is 1,
$$w \leftarrow w \cdot \frac{1}{2(1 - e_{n})},$$
otherwise,
$$w \leftarrow w \cdot \frac{1}{2\,e_{n}};$$
replacing the T weak classifiers with the worst authentication performance.
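Purely as an illustration of the selector-update loop just described, in the spirit of the cited Grabner et al. online boosting, the following sketch uses a placeholder threshold weak classifier; the variable names, the error-rate clamp and the omission of the replace-worst-T step are choices of this sketch, not text of the patent (the patent's own nearest-neighbor weak classifier is sketched after the next passage):

    import math

    class StumpWeak:
        """Placeholder weak classifier: sign of (feature - threshold)."""
        def __init__(self, threshold=0.0):
            self.threshold = threshold

        def predict(self, f):
            return 1 if f >= self.threshold else -1

    def update_selectors(sample, label, weak, sel_idx, alpha, lam_c, lam_w):
        """One online-boosting pass for N selectors sharing M weak classifiers.
        sample: list of M scalar features (one per weak classifier); label: +1 or -1.
        sel_idx, alpha: per-selector chosen classifier index and weight.
        lam_c, lam_w: per-selector lists of correct / wrong weight sums."""
        w = 1.0                                   # importance weight of the sample
        used = set()
        for n in range(len(sel_idx)):
            for m, wk in enumerate(weak):         # accumulate the lambda sums
                if wk.predict(sample[m]) == label:
                    lam_c[n][m] += w
                else:
                    lam_w[n][m] += w
            # error rates of weak classifiers not yet claimed by another selector
            err = {m: lam_w[n][m] / (lam_w[n][m] + lam_c[n][m])
                   for m in range(len(weak)) if m not in used}
            j = min(err, key=err.get)             # j = argmin_m e_{n,m}
            used.add(j)
            e = min(max(err[j], 1e-4), 1.0 - 1e-4)
            sel_idx[n] = j
            alpha[n] = 0.5 * math.log((1.0 - e) / e)
            # importance-weight update of the current sample
            if weak[j].predict(sample[j]) == label:
                w *= 1.0 / (2.0 * (1.0 - e))
            else:
                w *= 1.0 / (2.0 * e)
        # (replacing the T worst weak classifiers is omitted in this sketch)

A strong decision for one frame would then be the sign of the weighted sum of the selected weak classifiers' outputs, and the per-frame decisions are voted as described in step 104.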
secondly, the weak classifier construction and updating algorithm can be as follows: for the first U-dimensional Gabor features of different scales and different positions, a weak classifier is constructed in a nearest-neighbor manner, the weak classifier being of the form
$$h_{j}^{\mathrm{weak}}(x) = \mathrm{sign}\bigl(D(f_{j}(x),\, c_{j}^{p}) - D(f_{j}(x),\, c_{j}^{n})\bigr),$$
wherein c_j^p is the j-th feature center of the positive samples, c_j^n is the j-th feature center of the negative samples, f_j(x) is the current feature, and D(f_1, f_2) denotes the absolute value of the difference between the features f_1 and f_2. A feasible online weak classifier updating algorithm is to update the weak-feature means c_j^p and c_j^n online in a Kalman filtering manner, thereby realizing online updating of the weak classifiers.
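For illustration, a sketch of a nearest-neighbor weak classifier with an online update of its feature centers is given below; the fixed blending gain stands in for the Kalman filter gain mentioned in the text, and the sign convention (+1 when the feature is closer to the positive center) is a choice of this sketch:

    class NearestNeighborWeak:
        """Weak classifier comparing a scalar feature f_j(x) with the positive
        and negative feature centers; D is the absolute difference."""

        def __init__(self, c_pos=0.0, c_neg=0.0, gain=0.1):
            self.c_pos = c_pos   # feature center of the positive samples
            self.c_neg = c_neg   # feature center of the negative samples
            self.gain = gain     # fixed gain standing in for a Kalman gain

        def update(self, f, label):
            """Pull the matching center towards the newly observed feature value."""
            if label == 1:
                self.c_pos += self.gain * (f - self.c_pos)
            else:
                self.c_neg += self.gain * (f - self.c_neg)

        def predict(self, f):
            """+1 when f is closer to the positive center, -1 otherwise."""
            return 1 if abs(f - self.c_pos) < abs(f - self.c_neg) else -1

Such an object could be plugged into the selector-update sketch above in place of the placeholder weak classifier.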
In addition, in order that the obtained preset face model can be subjected to incremental training, the preset face model itself also needs to be obtained with the above incremental learning training method: the face images collected when the user is initially enrolled are taken as positive samples, the face images of other users and of illegal users are taken as negative samples, and the samples are fed one by one, in alternating positive and negative order, to the incremental learning module to train and obtain the preset face authentication model.
In a preferred embodiment, between step 102 and step 103, a number of frame images whose face rotation angles do not exceed a predetermined error range may also be selected, from the continuous images in which the face region was detected, for feature extraction. That is, not all of the continuous images from step 102 are processed in step 103; only frames whose face-region pose is considered to meet a predetermined condition are selected for the processing in step 103. This is because, during image acquisition, an inexperienced user may consciously or unconsciously rotate the head, so that the faces in the acquired continuous images do not all face the acquisition plane of the camera; the face regions in the continuous images are then not all "ideal" face regions, and in this case only the frames whose face rotation angle does not exceed the predetermined error range are passed on to the processing in step 103. The face rotation angle is calculated as follows:
as shown in fig. 2, assume that the human head is a cylinder and that the left eye, the right eye and the mouth are distributed on the surface of the same cylinder of radius r. According to the "three-stop five-eye" rule of face organ distribution, in which the width of the front of the face is divided into five eye-widths (the distance between the two eyes is one eye-width, and the distance from the vertical line of the outer canthus to the vertical line of the outer ear hole on each side is one eye-width), the radial included angle α between an eye feature point and the mouth feature point can be estimated.
And obtaining r and theta according to the marking information of the left and right eye characteristic points and the mouth characteristic points, wherein the r and theta satisfy the following relation:
$$r\sin(\alpha+\theta) - r\sin\theta = a$$
$$r\sin(\alpha-\theta) + r\sin\theta = b$$
Solving these two equations gives:
$$\theta = \arctan\left[\frac{(b-a)\sin\alpha}{(b+a)(1-\cos\alpha)}\right]$$
$$r = \frac{1}{2}\sqrt{\frac{(a+b)^{2}}{\sin^{2}\alpha} + \frac{(a-b)^{2}}{(1-\cos\alpha)^{2}}}$$
thereby estimating the face rotation angle theta and the head radius r.
If the face rotation angle θ does not fall within the predetermined error range θ_min ≤ θ ≤ θ_max, for example (-60°, 60°), the face rotation in that image is considered too large and the frame is not selected for step 103.
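A small numeric sketch of the angle estimate above, assuming the eye and mouth feature points have already been located; the helper name and the α = 25° default are illustrative, not from the patent:

    import math

    def face_rotation(right_eye, left_eye, mouth, alpha_deg=25.0):
        """Estimate the head rotation angle theta (degrees) and the head
        radius r from the horizontal distances a (right eye to mouth) and
        b (left eye to mouth), using the cylinder model of the text."""
        a = abs(right_eye[0] - mouth[0])
        b = abs(left_eye[0] - mouth[0])
        alpha = math.radians(alpha_deg)
        theta = math.atan2((b - a) * math.sin(alpha),
                           (b + a) * (1.0 - math.cos(alpha)))
        r = 0.5 * math.sqrt((a + b) ** 2 / math.sin(alpha) ** 2
                            + (a - b) ** 2 / (1.0 - math.cos(alpha)) ** 2)
        return math.degrees(theta), r

    # a frame is kept only if theta stays inside the predetermined range,
    # e.g. -60 <= theta <= 60 as in the example above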
In summary, the training and authentication of the face model in the face authentication method of the present invention proceed incrementally, and through continuous training and updating the face model attains better authentication accuracy and self-maintenance capability. Meanwhile, selecting for feature extraction only those frames whose face rotation angle does not exceed the predetermined error range also improves authentication accuracy.
Referring to fig. 3, a block diagram of a face authentication apparatus 300 according to an embodiment of the present invention is shown. The face authentication apparatus 300 includes: an image acquisition module 301, a face tracking and positioning module 302, an image selection module 303, a feature extraction module 304, a face authentication module 305, a feature addition module 306, an incremental learning module 307, and a face model library 308.
The image capturing module 301 may be a camera for capturing continuous images of a user; for example, images of the user are acquired at a rate of 30 frames per second using a high-definition camera with a resolution of 1280 x 960.
The face tracking and positioning module 302 detects a face region from the continuous images, and performs tracking and positioning after the face region is detected, wherein the positioning can be performed through eye feature points and mouth feature points in the face region.
The image selection module 303 calculates a face rotation angle according to the eye feature points and the mouth feature points in the face region, and selects those frames of the continuous images in which the face rotation angle conforms to a predetermined error range. In the embodiment shown in fig. 4, the image selection module 303 comprises a feature point extracting unit 402, a rotation angle calculating unit 404 and an image selecting unit 406, wherein the feature point extracting unit 402 extracts feature points of the eyes and mouth from the face region of a continuous image; the rotation angle calculating unit 404 calculates the face rotation angle θ through the feature points, and the calculation formula is as follows:
$$\theta = \arctan\left[\frac{(b-a)\sin\alpha}{(b+a)(1-\cos\alpha)}\right]$$
wherein a is the horizontal distance between the right-eye feature point and the mouth feature point, b is the horizontal distance between the left-eye feature point and the mouth feature point, and α is a value between 20 and 30 degrees; the image selecting unit 406 selects a number of frame images whose face rotation angle θ does not exceed a predetermined error range.
The feature extraction module 304 extracts facial features from the selected frame images. Specifically, the feature extraction module 304 may select Gabor features of different scales and different directions by using the AdaBoost algorithm, and then select the first M-dimensional Gabor features that are most effective for authentication.
The face authentication module 305 determines whether the facial features in the images conform to a preset facial model by using the facial model library 308, and when the number of the images conforming to the preset facial model exceeds a first threshold value, the user authentication corresponding to the facial features in the images is considered to be successful.
The feature adding module 306 determines whether the number of images conforming to the preset face model exceeds a second threshold, and if so, extracts all face features in the plurality of frames of images as sample features.
The incremental learning module 307 performs incremental training on the preset face model by using the sample features; the specific incremental learning training method may refer to the foregoing method. The second threshold should be no less than the first threshold, so that images with higher authentication confidence are obtained as samples.
The foregoing description has fully disclosed preferred embodiments of the present invention. It should be noted that those skilled in the art can make modifications to the embodiments of the present invention without departing from the scope of the appended claims. Accordingly, the scope of the claims of the present invention should not be limited to the particular embodiments described.

Claims (12)

1. A face authentication method is characterized by comprising the following steps:
extracting human face features from a plurality of frame images;
judging whether the face features in the plurality of frames of images accord with a preset face model or not, and when the number of the images which accord with the preset face model exceeds a first threshold value, considering that the user authentication corresponding to the face features in the plurality of frames of images is successful;
when the number of the images which accord with the preset human face model exceeds a second threshold value, extracting all human face features in the plurality of frames of images as sample features; and
performing incremental training on the preset human face model by using the sample characteristics,
wherein the second threshold is not less than the first threshold.
2. The method of claim 1, further comprising:
collecting continuous images of the same user;
carrying out face detection and tracking on the continuous images; and
selecting a plurality of frame images of which the rotation angles of the human faces do not exceed a preset error range from continuous images of which the human face regions are detected.
3. The method according to claim 2, wherein the selecting a number of frames of images, of which the rotation angles of the human faces do not exceed a predetermined error range, from the continuous images in which the human face regions are detected comprises:
extracting feature points of eyes and a mouth from a face region of the continuous image;
calculating the face rotation angle theta through the feature points, wherein the calculation formula is as follows:
$$\theta = \arctan\left[\frac{(b-a)\sin\alpha}{(b+a)(1-\cos\alpha)}\right]$$
wherein a is the horizontal distance from the right-eye feature point to the mouth feature point, b is the horizontal distance from the left-eye feature point to the mouth feature point, and α is a fixed value between 20 and 30 degrees;
selecting a plurality of frames of images of which the face rotation angle theta does not exceed a preset error range.
4. The method of claim 1, wherein extracting all the facial features in the plurality of frames of images as sample features comprises: and extracting the face features which accord with the preset face model from the plurality of frames of images as positive samples, and extracting the face features which do not accord with the preset face model from the plurality of frames of images as negative samples.
5. The method of claim 4, wherein a weak classifier library is constructed and incrementally trained using the positive and negative samples,
initializing all selectors, wherein the selectors comprise selected weak classifiers and weak classifier weights;
initializing, for every selector, the correctly-classified sample weight sums λ^c_{n,m} and the wrongly-classified sample weight sums λ^w_{n,m} of the corresponding weak classifiers;
for a current sample, let the sample label be l; if l = 1 the sample is a positive sample, and if l = -1 it is a negative sample; setting the sample weight w to 1;
updating the M weak classifiers constructed from the online weak features;
for the N selectors, updating the weak classifier index j and the weak classifier weight α_n of each selector.
6. The method of claim 5, wherein the weak features are the first U-dimensional Gabor features extracted from the positive samples and the negative samples, and for the first U-dimensional Gabor features of different scales and different positions, a weak classifier is constructed in a nearest-neighbor manner, the weak classifier being of the form:
$$h_{j}^{\mathrm{weak}}(x) = \mathrm{sign}\bigl(D(f_{j}(x),\, c_{j}^{p}) - D(f_{j}(x),\, c_{j}^{n})\bigr),$$
wherein c_j^p is the j-th feature center of the positive samples, c_j^n is the j-th feature center of the negative samples, and f_j(x) is the current feature.
7. The method according to claim 5, wherein the step of updating the M weak classifiers constructed from the online weak features is to update the weak-feature means c_j^p and c_j^n online in a Kalman filtering manner.
8. The method of claim 5, wherein, for the N selectors, updating the weak classifier index j and the weak classifier weight α_n of each selector comprises the following steps:
obtaining the authentication result flags Hyp(m) of the M weak classifiers for the sample, where the flag is 1 if the authentication is correct and 0 otherwise;
setting, for each weak classifier, a usage flag bUsed_m that marks whether the weak classifier has already been selected by some selector: 1 if it has been used, 0 if it is unused;
for all N selectors, the following process updates are performed:
for all M weak classifiers, according to each weak classifier's authentication result on the sample: if Hyp_m is 1, then λ^c_{n,m} is increased by the sample weight w, otherwise λ^w_{n,m} is increased by w;
if the current weak classifier is already in use (bUsed_m = 1), it is skipped; the authentication error rate e_{n,m} is calculated for all unused weak classifiers, and the weak classifier with the smallest error rate e_{n,m} is selected for the current selector, i.e. j = argmin_m(e_{n,m}), while the weak classifier weight α_n is computed; the weight w of the sample is then updated;
and replacing the T weak classifiers with the worst authentication performance.
9. The method of claim 8, wherein the authentication error rate e_{n,m} satisfies:
$$e_{n,m} = \frac{\lambda^{w}_{n,m}}{\lambda^{w}_{n,m} + \lambda^{c}_{n,m}}$$
the weak classifier weight α_n satisfies:
$$\alpha_{n} = \frac{1}{2}\ln\frac{1 - e_{n}}{e_{n}}$$
and the weight w of the sample satisfies: if Hyp(j) is 1,
$$w \leftarrow w \cdot \frac{1}{2(1 - e_{n})},$$
otherwise,
$$w \leftarrow w \cdot \frac{1}{2\,e_{n}}.$$
10. a face authentication system, comprising:
the characteristic extraction module is used for extracting human face characteristics from a plurality of frame images;
the face authentication module is used for judging whether the face features in the plurality of frames of images accord with a preset face model or not, and when the number of the images which accord with the preset face model exceeds a first threshold value, the user authentication corresponding to the face features in the plurality of frames of images is considered to be successful;
the characteristic adding module is used for extracting all the human face characteristics in the plurality of frames of images as sample characteristics when the number of the images which accord with the preset human face model exceeds a second threshold value; and
an increment learning module for carrying out increment training on the preset human face model by utilizing the sample characteristics,
wherein the second threshold is not less than the first threshold.
11. The system of claim 10, further comprising:
the image acquisition module acquires continuous images of the same user;
the face tracking and positioning module is used for carrying out face detection, tracking and positioning on the continuous images; and
and the image selection module is used for selecting a plurality of frames of images of which the human face rotation angles do not exceed a preset error range from the continuous images of the detected human face area.
12. The system of claim 11, wherein the image selection module comprises a feature point extraction unit, a rotation angle calculation unit and an image selection unit, wherein
the feature point extraction unit extracts feature points of eyes and a mouth from a face region of a continuous image;
the rotation angle calculating unit calculates the face rotation angle theta through the feature points, and the calculation formula is as follows:
$$\theta = \arctan\left[\frac{(b-a)\sin\alpha}{(b+a)(1-\cos\alpha)}\right]$$
wherein a is the horizontal distance between the right-eye feature point and the mouth feature point, b is the horizontal distance between the left-eye feature point and the mouth feature point, and α is a value between 20 and 30 degrees;
the image selection unit selects a plurality of frame images of which the face rotation angle theta does not exceed a preset error range.
CN201010549760XA 2010-11-18 2010-11-18 Human face authentication method and device Expired - Fee Related CN102004905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010549760XA CN102004905B (en) 2010-11-18 2010-11-18 Human face authentication method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010549760XA CN102004905B (en) 2010-11-18 2010-11-18 Human face authentication method and device

Publications (2)

Publication Number Publication Date
CN102004905A true CN102004905A (en) 2011-04-06
CN102004905B CN102004905B (en) 2012-11-21

Family

ID=43812258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010549760XA Expired - Fee Related CN102004905B (en) 2010-11-18 2010-11-18 Human face authentication method and device

Country Status (1)

Country Link
CN (1) CN102004905B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794264A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and system of real time detecting and continuous tracing human face in video frequency sequence
CN101162501A (en) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Increment training method in human face recognition system
CN101216884A (en) * 2007-12-29 2008-07-09 北京中星微电子有限公司 A method and system for face authentication
CN101499127A (en) * 2008-02-03 2009-08-05 上海银晨智能识别科技有限公司 Method for preventing trouble in human face recognition caused by interference

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shen Fanglin (申芳林) et al., "AdaBoost Algorithm Based on Fixed-Increment Single-Sample Perceptron", Computer Engineering (《计算机工程》), vol. 36, no. 15, 2010-08-31, 5 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679118B (en) * 2012-09-07 2017-06-16 汉王科技股份有限公司 A kind of human face in-vivo detection method and system
WO2014169441A1 (en) * 2013-04-16 2014-10-23 Thomson Licensing Method and system for eye tracking using combination of detection and motion estimation
CN103605969A (en) * 2013-11-28 2014-02-26 Tcl集团股份有限公司 Method and device for face inputting
CN104537389A (en) * 2014-12-29 2015-04-22 生迪光电科技股份有限公司 Human face recognition method and terminal equipment
CN106296784A (en) * 2016-08-05 2017-01-04 深圳羚羊极速科技有限公司 A kind of by face 3D data, carry out the algorithm that face 3D ornament renders
CN108734092A (en) * 2017-04-19 2018-11-02 株式会社日立制作所 Personage's authentication device
CN108734092B (en) * 2017-04-19 2021-09-17 株式会社日立制作所 Person authentication device
CN107943527A (en) * 2017-11-30 2018-04-20 西安科锐盛创新科技有限公司 The method and its system of electronic equipment is automatically closed in sleep
WO2019119449A1 (en) * 2017-12-22 2019-06-27 深圳中兴力维技术有限公司 Human face image feature fusion method and apparatus, device, and storage medium
CN109255307A (en) * 2018-08-21 2019-01-22 深圳市梦网百科信息技术有限公司 A kind of human face analysis method and system based on lip positioning
CN112836660A (en) * 2021-02-08 2021-05-25 上海卓繁信息技术股份有限公司 Face library generation method and device for monitoring field and electronic equipment
CN112836660B (en) * 2021-02-08 2024-05-28 上海卓繁信息技术股份有限公司 Face library generation method and device for monitoring field and electronic equipment

Also Published As

Publication number Publication date
CN102004905B (en) 2012-11-21

Similar Documents

Publication Publication Date Title
CN102004905B (en) Human face authentication method and device
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN108921100B (en) Face recognition method and system based on visible light image and infrared image fusion
CN101710383B (en) Method and device for identity authentication
CN108596041B (en) A kind of human face in-vivo detection method based on video
CN104794465B (en) A kind of biopsy method based on posture information
CN106529414A (en) Method for realizing result authentication through image comparison
CN107358152B (en) Living body identification method and system
CN102375970A (en) Identity authentication method based on face and authentication apparatus thereof
CN109359603A (en) A kind of vehicle driver's method for detecting human face based on concatenated convolutional neural network
CN105426870A (en) Face key point positioning method and device
WO2021139171A1 (en) Facial enhancement based recognition method, apparatus and device, and storage medium
CN103473564B (en) A kind of obverse face detection method based on sensitizing range
CN105138967B (en) Biopsy method and device based on human eye area active state
CN109902603A (en) Driver identity identification authentication method and system based on infrared image
CN109598210A (en) A kind of image processing method and device
CN112001215B (en) Text irrelevant speaker identity recognition method based on three-dimensional lip movement
CN111062292A (en) Fatigue driving detection device and method
CN105022999A (en) Man code company real-time acquisition system
CN113627256B (en) False video inspection method and system based on blink synchronization and binocular movement detection
CN103218615B (en) Face judgment method
CN107330914A (en) Human face part motion detection method and device and living body identification method and system
CN105069745A (en) face-changing system based on common image sensor and enhanced augmented reality technology and method
CN107330370A (en) Forehead wrinkle action detection method and device and living body identification method and system
CN103544478A (en) All-dimensional face detection method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: JIANGSU BOYUE INTERNET OF THINGS TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: WUXI VIMICRO CO., LTD.

Effective date: 20141126

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 214028 WUXI, JIANGSU PROVINCE TO: 226300 NANTONG, JIANGSU PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20141126

Address after: 226300 1 large east science and Technology Park, Nantong hi tech Zone, Nantong, Jiangsu, Tongzhou District

Patentee after: JIANGSU BOYUE INTERNET OF THINGS TECHNOLOGY CO., LTD.

Address before: 214028 Jiangsu New District of Wuxi, Taihu international science and Technology Park Jia Qing 530 building 10 layer

Patentee before: Wuxi Vimicro Co., Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121121

Termination date: 20191118

CF01 Termination of patent right due to non-payment of annual fee