CN112287770A - Face quality sensing method and system for identity recognition - Google Patents

Face quality sensing method and system for identity recognition

Info

Publication number
CN112287770A
Authority
CN
China
Prior art keywords
face
quality
model
image
normal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011077049.9A
Other languages
Chinese (zh)
Other versions
CN112287770B (en)
Inventor
王中元 (Wang Zhongyuan)
王光成 (Wang Guangcheng)
黄宝金 (Huang Baojin)
韩镇 (Han Zhen)
曾康利 (Zeng Kangli)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202011077049.9A priority Critical patent/CN112287770B/en
Publication of CN112287770A publication Critical patent/CN112287770A/en
Priority to PCT/CN2021/121776 priority patent/WO2022073453A1/en
Application granted granted Critical
Publication of CN112287770B publication Critical patent/CN112287770B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V 10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The invention discloses an identity-recognition-oriented face quality perception method and system. A meta-learning strategy is used to learn, from a natural image quality assessment task, the prior knowledge with which the human visual system evaluates image quality, and the resulting quality prior model is then fine-tuned with a small number of face image samples to quickly obtain a face image recognizability perception model. The method extracts face features with a pre-trained VGGface model and classifies occluded and normal faces with an SVM classifier trained on the extracted features. The invention can automatically perceive face recognizability and distortion type, providing a basis for selecting high-quality faces or choosing a face recognition algorithm matched to the distortion type.

Description

Face quality sensing method and system for identity recognition
Technical Field
The invention belongs to the technical field of identity recognition, relates to a method and a system for automatically perceiving the quality of face images, and particularly relates to an identity-recognition-oriented face quality perception method and system.
Background Art
With the further maturation of face recognition technology and its growing social acceptance, face recognition has been widely applied in many fields such as public security, intelligent control and identity authentication. Under controlled conditions, face recognition algorithms achieve extremely high accuracy; ArcFace, proposed by Deng Jiankang et al. of Imperial College London, reaches 99.83% accuracy on LFW. However, under the uncontrolled or uncooperative conditions of real scenarios, uncontrollable factors such as the illumination, shooting distance and shooting angle of the capture environment, together with active occlusion by the subject (wearing a hat, a mask, sunglasses and the like), lead to collected face images of low recognizability that exhibit blur, low illumination, incomplete faces or improper pose, all of which hinder face recognition and severely restrict the performance of face recognition systems in real environments.
Face image quality assessment is an important branch of image quality assessment, and its research is still at an exploratory stage. Current approaches fall mainly into two categories: (1) predicting face image quality with existing image quality assessment algorithms; (2) starting from the factors that affect face image quality, studying the relationship between face image quality under single or multiple factors and the accuracy of face recognition algorithms, based on the characteristics of face images. The face image quality standards ISO/IEC 19794-5 and ISO/IEC 29794-5 published by the International Organization for Standardization describe standard face images in detail from multiple aspects such as illumination, face pose, image focus and face occlusion, and point out that image defocus, non-frontal pose and asymmetric illumination are the most important causes of face image quality degradation. Most existing face image quality assessment methods predict face quality from a single influencing factor, such as brightness, contrast, sharpness, face angle or degree of occlusion, or combine several different factors. However, these algorithms only evaluate face image quality in isolation and do not take into account the requirements of downstream tasks such as face recognition.
In many practical scenarios only occluded face images can be acquired: during the COVID-19 pandemic, for example, face verification often has to be performed while a mask is worn to prevent infection, and criminals may hide facial information by wearing hats, masks or sunglasses to evade tracking. Detection and recognition of occluded faces has therefore developed rapidly in recent years. Ge Shiming et al. of the Chinese Academy of Sciences constructed the occluded-face dataset MAFA and, building on it, proposed the data-driven occluded-face detection method LLE-CNNs. Ge Shiming et al. also proposed ID-GAN, which recognizes occluded faces based on the idea of image restoration combined with an existing face recognizer. MaskNet can be easily integrated into existing CNN networks, effectively separates useful facial information from occluded regions, and improves the robustness of existing face recognition algorithms.
Current general-purpose face recognition systems simply discard occluded face images as low-quality images. If a face quality evaluation model can automatically perceive the distortion type, it therefore provides support for selecting a matched specialized face recognition algorithm and indirectly helps improve the efficiency of the face recognition system.
Disclosure of Invention
Face image quality is affected by factors such as brightness, sharpness, contrast and occlusion, and existing face recognition systems treat occluded face images as low-quality images and discard them. In many practical scenarios, however, only occluded face images can be acquired. To stay closer to real scenarios, the invention provides a face quality evaluation method and system oriented to face recognition in practical scenes.
The technical solution adopted by the method of the invention is as follows: an identity-recognition-oriented face quality perception method, characterized by comprising the following steps:
step 1: performing face detection on an input image to mark out a face frame;
step 2: learning the prior knowledge with which the human visual system evaluates image quality from a natural image quality assessment task based on an optimization-based meta-learning strategy, and further fine-tuning the quality prior model with face samples to obtain a face recognizability evaluation model;
step 3: establishing an occluded-face classification model based on a data-driven strategy;
extracting face features with a pre-trained VGGface model, and training an SVM classifier on the extracted features to classify normal and occluded faces;
step 4: inputting the selected recognizable normal faces and occluded faces into a normal-face recognizer and an occluded-face recognizer, respectively, for recognition.
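Taken together, the four steps form a routing pipeline. The sketch below is a minimal illustration of that pipeline in Python; every callable and the quality threshold are hypothetical placeholders standing in for the concrete models described later, not part of the patent text.

```python
from typing import Callable, Optional

# Minimal orchestration sketch of the four steps above.
# All callables are hypothetical stand-ins for the concrete models.
def perceive_and_recognize(
    image,
    detect_face: Callable,             # step 1: e.g. a RetinaFace-style detector
    score_recognizability: Callable,   # step 2: face recognizability evaluation model
    classify_occlusion: Callable,      # step 3: VGGface features + SVM classifier
    recognize_normal: Callable,        # step 4a: normal-face recognizer
    recognize_occluded: Callable,      # step 4b: occluded-face recognizer
    quality_threshold: float = 0.5,    # assumed threshold, not specified in the text
) -> Optional[str]:
    face = detect_face(image)
    if face is None:
        return None                          # no face frame detected
    if score_recognizability(face) < quality_threshold:
        return None                          # recognizability too low to recognize
    if classify_occlusion(face) == "normal":
        return recognize_normal(face)
    return recognize_occluded(face)
```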
The technical solution adopted by the system of the invention is as follows: an identity-recognition-oriented face quality perception system, characterized by comprising a first module, a second module, a third module and a fourth module;
the first module is used for performing face detection on an input image to mark out a face frame;
the second module is used for learning the prior knowledge with which the human visual system evaluates image quality from a natural image quality assessment task based on an optimization-based meta-learning strategy, and further fine-tuning the quality prior model with face samples to obtain a face recognizability evaluation model;
the third module is used for establishing an occluded-face classification model based on a data-driven strategy;
extracting face features with a pre-trained VGGface model, and training an SVM classifier on the extracted features to classify normal and occluded faces;
the fourth module is used for inputting the selected recognizable normal faces and occluded faces into a normal-face recognizer and an occluded-face recognizer, respectively, for recognition.
The invention has the following advantages and positive effects:
the invention can automatically sense the face recognizability and the distortion type under the condition of no manual intervention, and provides a basis for selecting a high-quality face or selecting a face recognition algorithm matched with the distortion type. The technology of the invention is combined with a face recognition system for use, and can help the face recognition system select a candidate face with high identification degree or start a special face recognition module, thereby indirectly improving the efficiency of the face recognition system.
Drawings
FIG. 1: a method flowchart of an embodiment of the invention.
Detailed Description
To facilitate understanding and implementation of the present invention by those of ordinary skill in the art, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive of it.
Referring to Fig. 1, the identity-recognition-oriented face quality perception method provided by the invention comprises the following steps:
step 1: carrying out face detection on an input image to mark out a face frame;
the invention adopts the prior high-performance face detection model RetinaFace to detect the face image from the input image.
Step 2: learning the prior knowledge of the human visual system for evaluating the image quality from a natural image evaluation task based on an optimized meta-learning strategy, and further fine-tuning a quality prior model through a small number of face samples to obtain a face identifiability evaluation model;
step 2.1: the human face identifiability evaluation model in the step 2 mainly aims at brightness, contrast and definition distortion which influence the human face identifiability. Since these distortions are also important distortions affecting the quality of the natural image, the present invention adopts an optimized meta-learning strategy to learn the natural image evaluation task to obtain the prior knowledge of the human visual system evaluation image. The optimization-based meta-learning strategy is adopted because the optimization-based meta-learning strategy can be applied to any network structure based on random Gradient Descent (SGD) optimization. The natural image evaluation task data (support set and query set) used in fig. 1 is a common natural image quality evaluation data set TID2013, which is used as a training task set for meta-learning to learn a priori knowledge of the human visual system for evaluating image quality. The network structure adopted by the invention is a common convolution neural networkAnd adding a full connecting layer. Specifically, the output of the convolutional neural network is pooled by using global average pooling to obtain a full-link layer with a first layer dimension of 512, and the output of the deep regression network generated by an additional full-link layer is increased. For the input face image I, the invention inputs the image I into a depth regression network to obtain a predicted quality score
ŷ, which is specifically defined as:
ŷ = f(I; θ)
where θ denotes the initial parameters of the deep regression network. The mean Euclidean distance between the predicted image quality score and the ground-truth value is used as the loss function:
L(θ) = ‖f(I; θ) - y‖₂
where y denotes the ground-truth quality score of image I. To better learn the ability to generalize across different tasks, the deep regression network is optimized with the bi-level stochastic gradient descent commonly used in meta-learning, and its parameters are updated with the Adam optimizer.
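The following PyTorch sketch illustrates the deep regression network (CNN backbone, global average pooling, a 512-dimensional fully connected layer and a regression head) together with a first-order approximation of the bi-level meta-training described above. The backbone, learning rates, epoch count and task loader are assumptions for illustration, not values fixed by the invention.

```python
import copy
import torch
import torch.nn as nn

class QualityRegressor(nn.Module):
    """Deep regression network sketch: CNN backbone + GAP + FC(512) + FC(1)."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone              # any SGD-trainable convolutional network
        self.gap = nn.AdaptiveAvgPool2d(1)    # global average pooling
        self.fc1 = nn.Linear(feat_dim, 512)   # first fully connected layer, dim 512
        self.fc2 = nn.Linear(512, 1)          # additional layer producing the score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.gap(self.backbone(x)).flatten(1)
        return self.fc2(torch.relu(self.fc1(h))).squeeze(1)   # predicted score y_hat

def euclidean_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean Euclidean distance between predicted and ground-truth scores
    (for scalar scores this is the mean absolute error)."""
    return (pred - target).abs().mean()

def meta_train(model: QualityRegressor, tasks, inner_lr: float = 1e-3,
               meta_lr: float = 1e-4, epochs: int = 10) -> QualityRegressor:
    """Bi-level meta-training sketch (first-order approximation).
    `tasks` yields (support_x, support_y, query_x, query_y) tuples drawn from
    natural-image quality assessment tasks such as TID2013."""
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    for _ in range(epochs):
        for sup_x, sup_y, qry_x, qry_y in tasks:
            # inner loop: adapt a copy of the model on the support set with SGD
            fast = copy.deepcopy(model)
            inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
            inner_opt.zero_grad()
            euclidean_loss(fast(sup_x), sup_y).backward()
            inner_opt.step()
            # outer loop: evaluate the adapted copy on the query set and apply
            # its gradients to the meta-parameters (first-order MAML-style update)
            qry_loss = euclidean_loss(fast(qry_x), qry_y)
            grads = torch.autograd.grad(qry_loss, list(fast.parameters()))
            meta_opt.zero_grad()
            for p, g in zip(model.parameters(), grads):
                p.grad = g.detach()
            meta_opt.step()
    return model
```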
Step 2.2: after the quality prior model has been learned, it is fine-tuned with a small number of face image samples (that is, the quality prior model is further trained and optimized on a small set of face samples) to obtain the final face recognizability evaluation model.
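A matching fine-tuning sketch for step 2.2 is given below; it assumes a small loader of face images annotated with recognizability scores and reuses the QualityRegressor from the previous sketch, with the optimizer settings and step count chosen only for illustration.

```python
import torch

def finetune_on_faces(prior_model, face_loader, steps: int = 200):
    """Step 2.2 sketch: adapt the meta-learned quality prior with a small set
    of face images annotated with recognizability scores."""
    opt = torch.optim.Adam(prior_model.parameters(), lr=1e-5)  # small LR for fine-tuning
    data_iter = iter(face_loader)
    for _ in range(steps):
        try:
            faces, scores = next(data_iter)
        except StopIteration:                 # few-shot loader: cycle when exhausted
            data_iter = iter(face_loader)
            faces, scores = next(data_iter)
        opt.zero_grad()
        loss = (prior_model(faces) - scores).abs().mean()   # mean Euclidean distance
        loss.backward()
        opt.step()
    return prior_model
```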
Step 3: establishing an occluded-face classification model based on a data-driven strategy. Face features are extracted with the trained VGGface model, and an SVM classifier is trained on the extracted features to classify normal and occluded faces;
Step 3.1: the normal and occluded face datasets shown in Fig. 1 are the mainstream normal-face recognition dataset CASIA-WebFace and the occluded-face detection dataset MAFA proposed by Ge Shiming et al., respectively; the VGGface model is retrained on them;
Step 3.2: face features are extracted with the retrained VGGface model;
Step 3.3: the face features are classified with an SVM classifier based on an RBF kernel from the LIBSVM package, and the normal or occluded class is output.
Step 4: inputting the selected recognizable normal faces and occluded faces into a normal-face recognizer and an occluded-face recognizer, respectively, for recognition;
specifically, an ArcFace model provided by the institute of Imperial science, DengJiankang and the like is used as a normal face recognizer, and an occlusion face recognition model ID-GAN based on a repair idea provided by the institute of Chinese sciences, Pushiming and the like is used as an occlusion face recognizer.
The invention also provides an identity-recognition-oriented face quality perception system, which comprises a first module, a second module, a third module and a fourth module;
the first module is used for performing face detection on an input image to mark out a face frame;
the second module is used for learning the prior knowledge with which the human visual system evaluates image quality from a natural image quality assessment task based on an optimization-based meta-learning strategy, and further fine-tuning the quality prior model with a small number of face samples to obtain a face recognizability evaluation model;
the third module is used for establishing an occluded-face classification model based on a data-driven strategy;
extracting face features with a pre-trained VGGface model, and training an SVM classifier on the extracted features to classify normal and occluded faces;
the fourth module is used for inputting the selected recognizable normal faces and occluded faces into a normal-face recognizer and an occluded-face recognizer, respectively, for recognition.
The invention comprises two parts: (1) apart from occlusion, the factors that affect face image quality are similar to those affecting natural images; the invention therefore uses a meta-learning strategy to learn the prior knowledge with which the human visual system evaluates image quality from a natural image quality assessment task, and fine-tunes the quality prior model with face images to quickly obtain the face recognizability evaluation model. (2) The invention further provides a data-driven occluded-face classification model to classify the images preprocessed by the first part.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. An identity-recognition-oriented face quality perception method, characterized by comprising the following steps:
step 1: performing face detection on an input image to mark out a face frame;
step 2: learning the prior knowledge with which the human visual system evaluates image quality from a natural image quality assessment task based on an optimization-based meta-learning strategy, and further fine-tuning the quality prior model with face samples to obtain a face recognizability evaluation model;
step 3: establishing an occluded-face classification model based on a data-driven strategy;
extracting face features with a pre-trained VGGface model, and training an SVM classifier on the extracted features to classify normal and occluded faces;
step 4: inputting the selected recognizable normal faces and occluded faces into a normal-face recognizer and an occluded-face recognizer, respectively, for recognition.
2. The identity-recognition-oriented face quality perception method according to claim 1, wherein: in step 1, the high-performance face detection model RetinaFace is adopted to detect the face frame from the input image.
3. The identity-recognition-oriented face quality perception method according to claim 1, wherein: in step 2, the adopted network structure is a convolutional neural network followed by fully connected layers; global average pooling is applied to the output of the convolutional neural network to obtain a first fully connected layer of dimension 512, and an additional fully connected layer produces the output of the deep regression network; for an input face image I, the image I is fed into the deep regression network to obtain a predicted quality score
ŷ:
ŷ = f(I; θ)
wherein θ denotes the initial parameters of the deep regression network;
the mean Euclidean distance between the predicted image quality score and the ground-truth value is used as the loss function, defined as:
L(θ) = ‖f(I; θ) - y‖₂
wherein y denotes the ground-truth quality score of the image I;
optimizing the depth regression network by adopting a double-layer random gradient descent method, and updating parameters of the depth regression network by using an Adam optimizer;
after the quality prior model is learned, fine adjustment is carried out on the quality prior model by using a small number of face image samples to obtain a final face recognizability evaluation model.
4. The identity-recognition-oriented face quality perception method according to claim 1, wherein establishing the occluded-face classification model based on the data-driven strategy in step 3 specifically comprises the following substeps:
step 3.1: retraining a VGGface model on the normal-face recognition dataset CASIA-WebFace and the occluded-face detection dataset MAFA;
step 3.2: extracting face features with the retrained VGGface model;
step 3.3: performing binary classification on the extracted face feature vectors with an SVM classifier, and outputting the normal or occluded class.
5. The identity-recognition-oriented face quality perception method according to any one of claims 1 to 4, wherein: in step 4, the ArcFace and ID-GAN models are adopted as the recognizers for normal faces and occluded faces, respectively.
6. An identity-recognition-oriented face quality perception system, characterized by comprising a first module, a second module, a third module and a fourth module;
the first module is used for performing face detection on an input image to mark out a face frame;
the second module is used for learning the prior knowledge with which the human visual system evaluates image quality from a natural image quality assessment task based on an optimization-based meta-learning strategy, and further fine-tuning the quality prior model with face samples to obtain a face recognizability evaluation model;
the third module is used for establishing an occluded-face classification model based on a data-driven strategy;
extracting face features with a pre-trained VGGface model, and training an SVM classifier on the extracted features to classify normal and occluded faces;
the fourth module is used for inputting the selected recognizable normal faces and occluded faces into a normal-face recognizer and an occluded-face recognizer, respectively, for recognition.
CN202011077049.9A 2020-10-10 2020-10-10 Face quality sensing method and system for identity recognition Active CN112287770B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011077049.9A CN112287770B (en) 2020-10-10 2020-10-10 Face quality sensing method and system for identity recognition
PCT/CN2021/121776 WO2022073453A1 (en) 2020-10-10 2021-09-29 Personal identification-oriented face quality perception method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011077049.9A CN112287770B (en) 2020-10-10 2020-10-10 Face quality sensing method and system for identity recognition

Publications (2)

Publication Number Publication Date
CN112287770A true CN112287770A (en) 2021-01-29
CN112287770B CN112287770B (en) 2022-06-07

Family

ID=74422407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011077049.9A Active CN112287770B (en) 2020-10-10 2020-10-10 Face quality sensing method and system for identity recognition

Country Status (2)

Country Link
CN (1) CN112287770B (en)
WO (1) WO2022073453A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022073453A1 (en) * 2020-10-10 2022-04-14 武汉大学 Personal identification-oriented face quality perception method and system
CN116721304A (en) * 2023-08-10 2023-09-08 武汉大学 Image quality perception method, system and equipment based on distorted image restoration guidance

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008971B (en) * 2019-12-24 2023-06-13 天津工业大学 Aesthetic quality evaluation method of group photo image and real-time shooting guidance system
CN116740452B (en) * 2023-06-19 2023-12-22 北京数美时代科技有限公司 Image classification method, system and storage medium based on image restoration
CN116977220B (en) * 2023-08-07 2024-02-13 中国矿业大学 Blind image motion blur removal algorithm based on image quality heuristic
CN116938611B (en) * 2023-09-19 2023-12-12 苏州宏存芯捷科技有限公司 Information verification method and system based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590452A (en) * 2017-09-04 2018-01-16 武汉神目信息技术有限公司 A kind of personal identification method and device based on gait and face fusion
CN108960087A (en) * 2018-06-20 2018-12-07 中国科学院重庆绿色智能技术研究院 A kind of quality of human face image appraisal procedure and system based on various dimensions evaluation criteria
CN110070010A (en) * 2019-04-10 2019-07-30 武汉大学 A kind of face character correlating method identified again based on pedestrian

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10565433B2 (en) * 2017-03-30 2020-02-18 George Mason University Age invariant face recognition using convolutional neural networks and set distances
CN107679450A (en) * 2017-08-25 2018-02-09 珠海多智科技有限公司 Obstruction conditions servant's face recognition method based on deep learning
CN108052932A (en) * 2018-01-10 2018-05-18 重庆邮电大学 One kind blocks adaptive face identification method
CN110334615A (en) * 2019-06-20 2019-10-15 湖北亮诚光电科技有限公司 A method of there is the recognition of face blocked
CN112287770B (en) * 2020-10-10 2022-06-07 武汉大学 Face quality sensing method and system for identity recognition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590452A (en) * 2017-09-04 2018-01-16 武汉神目信息技术有限公司 A kind of personal identification method and device based on gait and face fusion
CN108960087A (en) * 2018-06-20 2018-12-07 中国科学院重庆绿色智能技术研究院 A kind of quality of human face image appraisal procedure and system based on various dimensions evaluation criteria
CN110070010A (en) * 2019-04-10 2019-07-30 武汉大学 A kind of face character correlating method identified again based on pedestrian

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Faxiu et al., "Quality Assessment of Face Image Brightness and Sharpness Based on CNN" (基于CNN的人脸图像亮度和清晰度质量评价), Computer Engineering and Design (《计算机工程与设计》) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022073453A1 (en) * 2020-10-10 2022-04-14 武汉大学 Personal identification-oriented face quality perception method and system
CN116721304A (en) * 2023-08-10 2023-09-08 武汉大学 Image quality perception method, system and equipment based on distorted image restoration guidance
CN116721304B (en) * 2023-08-10 2023-10-20 武汉大学 Image quality perception method, system and equipment based on distorted image restoration guidance

Also Published As

Publication number Publication date
WO2022073453A1 (en) 2022-04-14
CN112287770B (en) 2022-06-07

Similar Documents

Publication Publication Date Title
CN112287770B (en) Face quality sensing method and system for identity recognition
CN106250877B (en) Near-infrared face identification method and device
Phung et al. A novel skin color model in ycbcr color space and its application to human face detection
US10445574B2 (en) Method and apparatus for iris recognition
KR101254181B1 (en) Face recognition method using data processing technologies based on hybrid approach and radial basis function neural networks
CN103136504B (en) Face identification method and device
CN111368683B (en) Face image feature extraction method and face recognition method based on modular constraint CenterFace
US20160078319A1 (en) Method, Apparatus and Computer Readable Recording Medium for Detecting a Location of a Face Feature Point Using an Adaboost Learning Algorithm
EP3440593B1 (en) Method and apparatus for iris recognition
CN107463917B (en) Improved LTP and two-dimensional bidirectional PCA fusion-based face feature extraction method
CN111178208A (en) Pedestrian detection method, device and medium based on deep learning
KR20130037734A (en) A system for real-time recognizing a face using radial basis function neural network algorithms
CN112784728B (en) Multi-granularity clothes changing pedestrian re-identification method based on clothing desensitization network
CN103902962A (en) Shielding or light source self-adaption human face recognition method and device
CN106650574A (en) Face identification method based on PCANet
CN112990052A (en) Partially-shielded face recognition method and device based on face restoration
CN111967592A (en) Method for generating counterimage machine recognition based on positive and negative disturbance separation
CN110705454A (en) Face recognition method with living body detection function
CN111797696B (en) Face recognition system and method for on-site autonomous learning
Zanlorensi et al. Ocular recognition databases and competitions: A survey
CN109726703B (en) Face image age identification method based on improved ensemble learning strategy
Lee et al. Distinction Between Real Faces and Photos by Analysis of Face Data.
WO2015037973A1 (en) A face identification method
CN113158828A (en) Facial emotion calibration method and system based on deep learning
Li et al. A feature-level solution to off-angle iris recognition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant