CN109299690B - Method capable of improving video real-time face recognition precision - Google Patents


Info

Publication number
CN109299690B
Authority
CN
China
Prior art keywords
face
image
recognition
face recognition
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811105144.8A
Other languages
Chinese (zh)
Other versions
CN109299690A (en)
Inventor
刘中秋
张伟
陈高曙
梁敏
占海花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Miaxis Biometrics Co Ltd
Original Assignee
Miaxis Biometrics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Miaxis Biometrics Co Ltd filed Critical Miaxis Biometrics Co Ltd
Priority to CN201811105144.8A priority Critical patent/CN109299690B/en
Publication of CN109299690A publication Critical patent/CN109299690A/en
Application granted granted Critical
Publication of CN109299690B publication Critical patent/CN109299690B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for improving the accuracy of real-time face recognition in video. The method first builds a face detection model and a face recognition model. The face detection model then performs face detection on surveillance images acquired in real time; once a face is detected, a corresponding face template is created and the face is tracked over the subsequent k frames. On this basis, a face quality score and an interpupillary distance are computed for the detected face image and the k subsequent tracking images. Finally, the face recognition model is applied to the face images that exceed both the face quality score threshold and the interpupillary distance threshold, yielding the final face recognition result. The method suits video-surveillance recognition scenarios and effectively improves the accuracy of real-time face recognition in video.

Description

Method capable of improving video real-time face recognition precision
[Technical Field]
The invention relates to the technical field of face image processing, and in particular to a method for improving the accuracy of real-time face recognition in video.
[Background of the Invention]
Biometric identification uses physiological or behavioral characteristics inherent to the human body to identify individuals; face recognition is the branch of biometrics that identifies a person from facial feature information. In recent years, with the spread of video surveillance systems, face recognition has been widely favored by the market for being non-compulsory, contactless, concurrent, simple to operate, intuitive in its results, and well concealed. However, when a face is recognized with a conventional single-frame method, a single erroneous frame leads to a misjudged identity and therefore lowers recognition accuracy.
[Summary of the Invention]
To solve these problems, the invention provides a method for improving the accuracy of real-time face recognition in video, addressing the misjudged identities and low recognition accuracy caused by conventional single-frame face recognition.
The invention adopts the following technical scheme to solve the problems:
a method for improving the real-time face recognition accuracy of a video comprises the following steps:
step 1: building a face detection model and a face recognition model;
step 2: acquiring a monitoring image in real time;
and step 3: carrying out face detection on the image obtained in real time in the step (2) through a face detection model, creating a corresponding face template and carrying out face tracking on a subsequent k frame image after the face is detected;
and 4, step 4: performing face quality evaluation on the image of the face detected in the step 3 and the subsequent k frames of face tracking images, and performing weighting calculation according to evaluation indexes and occupied weights to obtain a face quality score;
and 5: calculating the interpupillary distance of the face image in the step 3;
step 6: and on the basis of the step 4 and the step 5, a plurality of face images meeting the requirements are taken for face recognition.
Further, step 1 comprises the following steps:
Step 1.1: constructing a convolutional neural network and training it on labeled face images; after training, the face detection model is obtained.
Step 1.2: constructing a convolutional neural network and training it on classified face images; after training, the face recognition model is obtained.
as a technical scheme, the real-time monitoring image in the step 2 is obtained by acquiring the image shot by the camera through function callback.
As one technical scheme, in step 3 the image in which a face is detected is the face detection frame, and the subsequent k frames are face tracking frames. To keep the recognition process fluent, the value of k is assigned manually according to the time required for face detection and face tracking.
As one technical scheme, the evaluation indexes in step 4 are a frontal-face score, a face illumination score, a face illumination symmetry score, a sharpness score, an eye-opening score, and a mouth-closing score; the weight of each index is assigned manually.
As one technical scheme, in step 5 the interpupillary distance of the captured face image is used to represent the distance between the face and the camera.
Further, step 6 comprises the following steps:
step 6.1: setting a face quality score threshold value to be s1 and a pupil distance threshold value to be s2, collecting n face images with the face quality score higher than s1 and the pupil distance larger than s2 according to the face template created in the step 3, carrying out face recognition on the n face images by adopting a trained face recognition model, comparing the n face images with all people in a face recognition library one by one, and obtaining the maximum similarity rate of the face images in the face comparison library so as to obtain n similarity rate values. The value of n is assigned manually according to the number of selected camera frames.
Step 6.2: setting a similarity threshold as s3, wherein in the n similarity values, the number of the similarity values smaller than s3 is n1, and the number of the similarity values larger than or equal to s3 is n 2; if n1> n2, the recognized face is judged to be a foreign person; if n1 is equal to n2, the person with the highest similarity rate is taken as the final face recognition result; and if n1< n2, the person with the highest recognition ratio is taken as the final face recognition result, and if the recognition ratios are equal, the person with the highest similarity ratio is taken as the final face recognition result.
By acquiring surveillance images in real time and performing face recognition on the detected face detection frame plus the tracking frames that meet the recognition parameter requirements, the method recognizes a face from multiple frames rather than one. This avoids the identity misjudgments of prior-art single-frame recognition and effectively improves the accuracy of real-time face recognition in video.
[Description of the Drawings]
FIG. 1 is a general flow diagram of the present invention.
Fig. 2 is a flowchart of constructing a face model according to embodiment 1 of the present invention.
Fig. 3 is a schematic flow chart of acquiring the face detection frame and face tracking frames in embodiment 1 of the present invention.
[Detailed Description of Embodiments]
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Embodiment 1
As shown in fig. 1, a method for improving the accuracy of real-time face recognition of a video includes the following steps:
(1) Construct a convolutional neural network A and train it on labeled face images, and a convolutional neural network B and train it on classified face images; after training, the face detection model and the face recognition model are obtained respectively, as shown in Fig. 2.
(2) Start the camera and decode the video data it captures, obtained through a function callback, into a sequence of video frame images. Perform face detection frame by frame starting from the 1st frame. If a face is first detected in the i-th frame, define that frame as the face detection frame, create a face template from it, and define the 10 frames immediately following it as face tracking frames, tracked by template matching. Then resume frame-by-frame face detection from frame (i+10+1); when a face is detected again in the j-th frame, repeat the process, continuing until the camera is closed, where i is a natural number and j ≥ (i+11). See Fig. 3.
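The detect-then-track loop of step (2) can be sketched as follows. The `detect_face` and `track_face` callables are stand-ins for the trained detection model and the template matcher, which the patent does not specify at the code level; the return conventions are assumptions for illustration.

```python
K = 10  # tracking frames per detection, as in the embodiment

def process_stream(frames, detect_face, track_face):
    """Run detection frame by frame; on a hit, build a template and track it
    over the next K frames, then resume detection at frame i + K + 1.
    Returns the collected (frame_index, face_image) pairs."""
    frames = list(frames)
    collected = []
    i = 0
    while i < len(frames):
        face = detect_face(frames[i])
        if face is None:
            i += 1                                    # keep scanning
            continue
        collected.append((i, face))                   # face detection frame
        template = face
        for j in range(i + 1, min(i + 1 + K, len(frames))):
            tracked = track_face(frames[j], template)  # face tracking frames
            if tracked is not None:
                collected.append((j, tracked))
        i = i + K + 1                                 # resume detection
    return collected
```

With stub detectors this yields one detection frame plus up to 10 tracking frames per hit, matching the i / (i+10+1) indexing described above.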
(3) Evaluate the quality of the face images captured in the face detection frame and face tracking frames of step (2). The evaluation indexes are a frontal-face score, a face illumination score, a face illumination symmetry score, a sharpness score, an eye-opening score, and a mouth-closing score. Each of the six indexes ranges from 0 to 1, and their weights are set to 30, 10, 20, 20, 10, and 10 in turn, so the total face quality score ranges from 0 to 100. The frontal-face score is frontalsScore: the larger it is, the more frontal the face. The face illumination score is lightScore: the larger it is, the more suitable the illumination. The illumination symmetry score is symScore: the larger it is, the more symmetrically the face is lit. The sharpness score is blurScore: the larger it is, the sharper the face. The eye-opening score is eyeScore: the larger it is, the more likely the eyes are open. The mouth-closing score is mouthScore: the larger it is, the more likely the mouth is closed.
The calculation formula of the face quality score is as follows:
qualityScore=30*frontalsScore+10*lightScore+20*symScore+20*blurScore+10*eyeScore+10*mouthScore
(4) Use the interpupillary distance of the face images captured in the face detection frame and face tracking frames of step (2) to represent the distance between the face and the camera. The interpupillary distance here is the distance in pixels between the two eyes in the face image; the larger its value, the closer the photographed face is to the camera.
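A minimal sketch of that pixel-space measurement, assuming the detector supplies eye-centre landmarks as (x, y) pixel coordinates (the patent does not specify how the eye positions are obtained):

```python
import math

def pupil_distance(left_eye, right_eye):
    """Interpupillary distance in pixels between two eye-centre landmarks;
    larger values indicate a face closer to the camera."""
    dx = left_eye[0] - right_eye[0]
    dy = left_eye[1] - right_eye[1]
    return math.hypot(dx, dy)
```

Under the embodiment's threshold of 35, a face whose eye centres are 36 pixels apart would be accepted for recognition, one at 30 pixels rejected as too far from the camera.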
(5) Set the face quality score threshold to 70 and the interpupillary distance threshold to 35. According to the face template created from the face detection frame in step (2), collect the n face images whose face quality score exceeds 70 and whose interpupillary distance exceeds 35, where 0 ≤ n ≤ 5. If n = 0, no face image meets the recognition requirements and the template undergoes no further recognition. If 1 ≤ n ≤ 5, apply the trained face recognition model to the n face images, comparing each one against every person in the face recognition library and taking the maximum similarity per image, thereby obtaining n similarity values.
(6) Set the similarity threshold to 0.8. If n = 1, a similarity below 0.8 means the recognized face is judged to be an unknown person; otherwise that recognition result is the final face recognition result. If 2 ≤ n ≤ 5, compare the n similarity values from step (5) against 0.8: if more of them fall below 0.8 than reach it, the recognized face is judged to be an unknown person; if the two counts are equal, the person with the highest similarity is the final face recognition result; if more reach 0.8 than fall below it, the person recognized in the most frames is the final face recognition result, with ties broken by the highest similarity.

Claims (9)

1. A method for improving the accuracy of real-time face recognition in video, characterized by comprising the following steps:
Step 1: building a face detection model and a face recognition model;
Step 2: acquiring surveillance images in real time;
Step 3: performing face detection on the images acquired in step 2 using the face detection model; after a face is detected, creating a corresponding face template and tracking the face over the subsequent k frames;
Step 4: performing face quality evaluation on the detected face image from step 3 and the k subsequent tracking images, weighting the evaluation indexes by their assigned weights to obtain a face quality score;
Step 5: computing the interpupillary distance of the face images from step 3;
Step 6: on the basis of steps 4 and 5, selecting the face images that meet the requirements and performing face recognition:
Step 6.1: setting the face quality score threshold to s1 and the interpupillary distance threshold to s2; collecting, according to the face template created in step 3, the n face images whose face quality score exceeds s1 and whose interpupillary distance exceeds s2; applying the trained face recognition model to the n face images, comparing each against every person in the face recognition library and taking the maximum similarity per image, thereby obtaining n similarity values;
Step 6.2: setting the similarity threshold to s3; among the n similarity values, letting n1 be the number below s3 and n2 the number at or above s3; if n1 > n2, judging the recognized face to be an unknown person; if n1 = n2, taking the person with the highest similarity as the final face recognition result; if n1 < n2, taking the person recognized in the most frames as the final face recognition result, with ties broken by the highest similarity.
2. The method according to claim 1, wherein in step 1, the face detection model is obtained by constructing a convolutional neural network a to train the labeled face images, and the face recognition model is obtained by constructing a convolutional neural network B to train the classified face images.
3. The method according to claim 1, wherein the real-time surveillance image in step 2 is obtained from the camera through a function callback.
4. The method according to claim 1, wherein in step 3 the image in which a face is detected is the face detection frame, and the subsequent k frames are face tracking frames.
5. The method according to claim 1, wherein in step 4 the evaluation indexes are a frontal-face score, a face illumination symmetry score, a sharpness score, an eye-opening score, and a mouth-closing score.
6. The method according to claim 1, wherein in step 5, the distance between the face and the camera is represented by the interpupillary distance of the acquired face image.
7. The method as claimed in claim 1 or 4, wherein the k value is manually assigned according to the time required for face detection and face tracking to ensure the smoothness of the recognition process.
8. The method as claimed in claim 5, wherein the weight of each evaluation index is assigned manually.
9. The method of claim 1, wherein the value of n is assigned manually according to the selected number of camera frames, and n = n1 + n2.
CN201811105144.8A 2018-09-21 2018-09-21 Method capable of improving video real-time face recognition precision Active CN109299690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811105144.8A CN109299690B (en) 2018-09-21 2018-09-21 Method capable of improving video real-time face recognition precision


Publications (2)

Publication Number Publication Date
CN109299690A CN109299690A (en) 2019-02-01
CN109299690B true CN109299690B (en) 2020-12-29

Family

ID=65164086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811105144.8A Active CN109299690B (en) 2018-09-21 2018-09-21 Method capable of improving video real-time face recognition precision

Country Status (1)

Country Link
CN (1) CN109299690B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232323A (en) * 2019-05-13 2019-09-13 特斯联(北京)科技有限公司 A kind of parallel method for quickly identifying of plurality of human faces for crowd and its device
CN110674715B (en) * 2019-09-16 2022-02-18 宁波视睿迪光电有限公司 Human eye tracking method and device based on RGB image
CN112381016A (en) * 2020-11-19 2021-02-19 山东海博科技信息系统股份有限公司 Vehicle-mounted face recognition algorithm optimization method and system
CN112926458B (en) * 2021-02-26 2022-11-18 展讯通信(天津)有限公司 Face authentication method, face authentication device, storage medium and computer equipment
CN114882576B (en) * 2022-07-07 2022-09-20 中关村科学城城市大脑股份有限公司 Face recognition method, electronic device, computer-readable medium, and program product

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202197300U (en) * 2010-08-05 2012-04-18 北京海鑫智圣技术有限公司 Mobile face identification system
CN102306290B (en) * 2011-10-14 2013-10-30 刘伟华 Face tracking recognition technique based on video
KR101322168B1 (en) * 2012-01-17 2013-10-28 성균관대학교산학협력단 Apparatus for real-time face recognition
CN104517104B (en) * 2015-01-09 2018-08-10 苏州科达科技股份有限公司 A kind of face identification method and system based under monitoring scene
US9430697B1 (en) * 2015-07-03 2016-08-30 TCL Research America Inc. Method and system for face recognition using deep collaborative representation-based classification
CN105354543A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Video processing method and apparatus
CN105740758A (en) * 2015-12-31 2016-07-06 上海极链网络科技有限公司 Internet video face recognition method based on deep learning
US9971933B1 (en) * 2017-01-09 2018-05-15 Ulsee Inc. Facial image screening method and face recognition system thereof
CN106815575B (en) * 2017-01-22 2019-12-10 上海银晨智能识别科技有限公司 Optimization system and method for face detection result set
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN108229297B (en) * 2017-09-30 2020-06-05 深圳市商汤科技有限公司 Face recognition method and device, electronic equipment and computer storage medium
CN107742107B (en) * 2017-10-20 2019-03-01 北京达佳互联信息技术有限公司 Facial image classification method, device and server
CN107944363B (en) * 2017-11-15 2019-04-26 北京达佳互联信息技术有限公司 Face image processing process, system and server
CN108229330A (en) * 2017-12-07 2018-06-29 深圳市商汤科技有限公司 Face fusion recognition methods and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109299690A (en) 2019-02-01

Similar Documents

Publication Publication Date Title
CN109299690B (en) Method capable of improving video real-time face recognition precision
CN108830252B (en) Convolutional neural network human body action recognition method fusing global space-time characteristics
CN109815826B (en) Method and device for generating face attribute model
KR102174595B1 (en) System and method for identifying faces in unconstrained media
CN105469065B (en) A kind of discrete emotion identification method based on recurrent neural network
CN109543526B (en) True and false facial paralysis recognition system based on depth difference characteristics
Chaudhari et al. Face detection using viola jones algorithm and neural networks
CN106682578B (en) Weak light face recognition method based on blink detection
CN107133612A (en) Based on image procossing and the intelligent ward of speech recognition technology and its operation method
CN109635727A (en) A kind of facial expression recognizing method and device
CN104361316B (en) Dimension emotion recognition method based on multi-scale time sequence modeling
CN110472512B (en) Face state recognition method and device based on deep learning
CN106250825A (en) A kind of at the medical insurance adaptive face identification system of applications fields scape
CN109711309B (en) Method for automatically identifying whether portrait picture is eye-closed
CN110458235B (en) Motion posture similarity comparison method in video
CN112801000B (en) Household old man falling detection method and system based on multi-feature fusion
CN111666845B (en) Small sample deep learning multi-mode sign language recognition method based on key frame sampling
CN111353390A (en) Micro-expression recognition method based on deep learning
CN113869276B (en) Lie recognition method and system based on micro-expression
CN108960216A (en) A kind of detection of dynamic human face and recognition methods
CN113920568A (en) Face and human body posture emotion recognition method based on video image
CN113627256A (en) Method and system for detecting counterfeit video based on blink synchronization and binocular movement detection
CN112364801A (en) Dynamic threshold face recognition method
Diyasa et al. Multi-face Recognition for the Detection of Prisoners in Jail using a Modified Cascade Classifier and CNN
Chang et al. Personalized facial expression recognition in indoor environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant