WO2021051611A1 - Face visibility-based face recognition method, system, device, and storage medium - Google Patents
- Publication number
- WO2021051611A1 (PCT/CN2019/118428, CN2019118428W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- visibility
- model
- training
- quality evaluation
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Definitions
- This application relates to the field of face recognition technology, and in particular to a face recognition method, system, device and storage medium based on face visibility.
- existing face recognition solutions include a quality evaluation module: when the quality of the photographed photo does not meet the requirements, no recognition is performed, and recognition is performed only when the quality meets the requirements. This amounts to a preliminary filtering before the picture is recognized.
- existing quality evaluation algorithms mostly filter out blur (including motion blur, low resolution, and other kinds of blur) and poor lighting (overly bright or dim), but cannot fully handle occlusion, for example when the user wears sunglasses, a face mask, and so on.
- such partially occluded pictures also greatly reduce recognition accuracy, causing face recognition to fail or perform poorly.
- This application provides a face recognition method, system, electronic device, and storage medium based on face visibility. Its main purpose is to use face visibility to solve the occlusion problem that image quality evaluation cannot address, and to greatly improve the accuracy and speed of face recognition.
- the present application provides a face recognition method based on face visibility, the method includes:
- Face recognition and feature extraction are performed on the face area whose visibility evaluation score meets the preset value range.
- this application provides a face recognition system based on face visibility, including:
- a position determining unit configured to detect the picture to be processed, and obtain the position of the face area in the picture to be processed
- the face straightening unit is configured to determine the key points of the face in the face area based on the key point alignment technology, and perform straightening processing on the face;
- a quality evaluation unit configured to evaluate the quality of the face area after the straightening process, and obtain a quality evaluation score
- the visibility evaluation unit is configured to evaluate the visibility of the face area whose quality evaluation score meets the preset value range, and obtain the corresponding visibility evaluation score;
- the face recognition unit is used to perform face recognition and feature extraction on the face area whose visibility evaluation score meets the preset value range.
- the present application also provides an electronic device, which includes a memory and a processor, the memory storing a face recognition program based on face visibility; when the face recognition program is executed by the processor, the following steps are implemented:
- Face recognition and feature extraction are performed on the face area whose visibility evaluation score meets the preset value range.
- the present application also provides a storage medium that includes a face recognition program based on face visibility; when the face recognition program based on face visibility is executed by a processor, any step of the face recognition method based on face visibility described above is implemented.
- the face recognition method, system, electronic device, and computer-readable storage medium based on face visibility proposed in this application acquire and align the key points of the face in the picture to be processed, then combine quality evaluation and visibility evaluation to gradually filter the face area, and perform face recognition on pictures that meet the filtering conditions. This not only solves the problem of face occlusion in the picture, but also improves the accuracy and speed of face recognition.
- FIG. 1 is a schematic diagram of an application environment of a specific embodiment of face recognition based on face visibility in this application;
- FIG. 2 is a schematic diagram of modules of a specific embodiment of the face recognition program based on face visibility in FIG. 1;
- FIG. 3 is a flowchart of a specific embodiment of a face recognition method based on face visibility according to this application;
- FIG. 4 is a schematic diagram of the multi-task model structure of the auxiliary system of this application.
- This application provides a face recognition method based on face visibility, which is applied to an electronic device 1.
- FIG. 1 is a schematic diagram of the application environment of a specific embodiment of the face recognition method based on face visibility of this application.
- the electronic device 1 may be a terminal device with computing capability, such as a server, a smart phone, a tablet computer, a portable computer, a desktop computer, and the like.
- the electronic device 1 includes a processor 12, a memory 11, a network interface 14 and a communication bus 15.
- the memory 11 includes at least one type of readable storage medium.
- the at least one type of readable storage medium may be a non-volatile storage medium such as flash memory, a hard disk, a multimedia card, card-type memory, and the like.
- the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
- the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1.
- the readable storage medium of the memory 11 is generally used to store the face recognition program 10 based on the visibility of the face installed in the electronic device 1 and the like.
- the memory 11 can also be used to temporarily store data that has been output or will be output.
- the processor 12 may be a central processing unit (CPU), a microprocessor, or another data processing chip, which runs the program code or processes data stored in the memory 11, for example, executes the face recognition program 10 based on face visibility.
- the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the electronic device 1 and other electronic devices.
- the communication bus 15 is used to realize the connection and communication between these components.
- FIG. 1 only shows the electronic device 1 with the components 11-15, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
- the electronic device 1 may also include a user interface, a display, and a touch sensor.
- the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
- the display and the touch sensor are stacked to form a touch display screen, and the device detects touch operations triggered by the user on the touch screen.
- the electronic device 1 may also include a radio frequency (RF) circuit, a sensor, an audio circuit, etc., which will not be repeated here.
- the memory 11, as a computer storage medium, may include an operating system and the face recognition program 10 based on face visibility; when the processor 12 executes the face recognition program 10 based on face visibility stored in the memory 11, the following steps are implemented:
- Face recognition and feature extraction are performed on the face area whose visibility evaluation score meets the preset value range.
- the step of determining the key points of the face in the face area and performing straightening processing on the face includes:
- the step of acquiring annotated image data and training an alignment model based on the annotated image data includes:
- the face image corresponding to the labeled image is used as the network input of the training model of the alignment model, and the pre-labeled key point coordinate positions of the face image are used as the label of the training model for training;
- the step of evaluating the quality of the face area after the straightening and obtaining the quality evaluation score includes:
- the training steps of the quality assessment model include:
- the input of the multi-task neural network is the straightened face area;
- the output of the multi-task neural network is the face feature of the face area and the score value corresponding to the face feature;
- the step of performing visibility evaluation on the face area whose quality evaluation score meets a preset value range, and obtaining the corresponding visibility evaluation score includes:
- the training steps of the visibility evaluation model include:
- the loss function is the number of errors in the visibility judgment of the key points.
- the electronic device 1 proposed in the above embodiment integrates the alignment model, the quality evaluation model, and the visibility evaluation model. This not only solves the reduced recognition accuracy caused by face occlusion, which the quality evaluation model alone cannot address, thereby improving recognition accuracy, but also handles recognition through a multi-task neural network, greatly reducing the inference time of the entire process.
- the face recognition program 10 based on face visibility can also be divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by the processor 12 to implement this application.
- the module referred to in this application refers to a series of computer program instruction segments that can complete specific functions.
- FIG. 2 is a program module diagram of a preferred embodiment of the face recognition program 10 based on face visibility in FIG. 1.
- the face recognition program 10 based on face visibility can be divided into: a position determination unit 11, a face straightening unit 12, a quality evaluation unit 13, a visibility evaluation unit 14, and a face recognition unit 15.
- the functions or operation steps implemented by the modules 11-15 are similar to those described above and will not be detailed here. Illustratively:
- the position determining unit 11 is configured to detect the picture to be processed, and obtain the position of the face area in the picture to be processed.
- the face straightening unit 12 is configured to determine the key points of the face in the face area based on the key point alignment technology, and perform straightening processing on the face.
- the quality evaluation unit 13 is configured to evaluate the quality of the face area after the straightening process, and obtain a quality evaluation score.
- the visibility evaluation unit 14 is configured to evaluate the visibility of the face area whose quality evaluation score meets the preset value range, and obtain the corresponding visibility evaluation score.
- the face recognition unit 15 is configured to perform face recognition and feature extraction on a face region whose visibility evaluation score meets a preset value range.
- this application also provides a face recognition method based on face visibility.
- FIG. 3 is a flowchart of a specific embodiment of the face recognition method based on face visibility of this application.
- the method can be executed by a device, and the device can be implemented by software and/or hardware.
- the face recognition method based on face visibility provided in the present application includes: step S110-step S150.
- S110 Detect the picture to be processed, and obtain the position of the face area in the picture to be processed.
- S120 Based on the key point alignment technology, determine the key points of the face in the face area, and perform a straightening process on the face.
- the face points, that is, the landmarks, are obtained by face alignment.
- face alignment is a step that must be performed in face recognition.
- face detection is first performed on the image to be processed to obtain the position of the face.
- key point alignment technology is then used to obtain the key points of the face (such as eye points, nose points, mouth points, etc.), and a straightening operation is performed on the face.
- the number of key points of the face is set to 68.
- the step of determining the key points of the face in the face area in the picture to be processed, and straightening the face, includes:
- the step of acquiring annotated image data as described above, and training an alignment model based on the annotated image data includes:
- the face image corresponding to the annotation image is used as the network input of the training model of the alignment model, and the pre-annotated key point coordinate positions of the face image are used as the label of the training model for training;
- the annotated image data is a picture set or image set with pre-marked key points.
- from the key points, a rotation angle can be obtained, and the face can be straightened by rotating the picture by this angle.
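The rotation-angle step can be sketched as follows. This is a minimal illustrative sketch, not the patent's exact implementation: it assumes each eye center has already been averaged from its eye landmarks, and computes the angle of the line through the two eyes relative to horizontal.

```python
import math

def eye_rotation_angle(left_eye, right_eye):
    """Angle (in degrees) of the line through the two eye centers.

    Rotating the picture by this angle (with the image library's sign
    convention) makes the eye line horizontal, i.e. straightens the face.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

For a tilted face with eyes at (100, 120) and (160, 150), the angle is about 26.6 degrees; an image library such as OpenCV could then rotate the picture by the negative of that angle about the face center.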
- the network input of the training model of the alignment model is a face image obtained after face detection, and the label of the training model is the coordinate positions of the 68 points on the face.
- the label is a vector of the form x1, y1, x2, y2, ..., x68, y68, 136 values in total.
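As a small illustration of this label format, the 68 (x, y) point pairs can be flattened into the 136-value label vector. The helper below is hypothetical, not part of the patent:

```python
def flatten_landmarks(points):
    """Turn [(x1, y1), ..., (x68, y68)] into the flat label vector
    [x1, y1, x2, y2, ..., x68, y68] (136 values in total)."""
    return [coord for point in points for coord in point]
```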
- S130 Perform quality evaluation on the face area after the straightening process, and obtain a quality evaluation score.
- the step of performing quality evaluation on the face area after the straightening process and obtaining the quality evaluation score includes:
- the training steps of the quality assessment model include:
- the input of the multi-task neural network is the straightened face area;
- the output of the multi-task neural network is the face feature of the face area and the score value corresponding to the face feature;
- the quality evaluation of the face region is carried out through the quality evaluation model.
- the quality evaluation model can be done by a multi-task network based on a simple recognition model.
- one branch of the multi-task network outputs the face feature extracted for recognition; the other branch, after passing through a sigmoid function, outputs a score value between 0 and 1. The score value is multiplied with the extracted face feature to obtain the final face recognition feature; then, after training with the triplet loss function, the final quality evaluation model is obtained.
- Triplet Loss is a loss function in deep learning used for training on samples with small inter-class differences, such as human faces.
- the training data fed to it consists of anchor examples, positive examples, and negative examples.
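A minimal numpy sketch of this design, under the assumptions that the quality branch emits a single logit per face and that the standard squared-distance form of triplet loss is used; the patent does not fix either detail:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quality_weighted_feature(feature, quality_logit):
    """Quality branch: squash the raw quality logit into a 0-1 score,
    then scale the recognition feature by that score."""
    score = sigmoid(quality_logit)
    return score * feature, score

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the anchor toward the positive
    example and push it away from the negative example."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)
```

Training with this loss on the quality-weighted features pressures the sigmoid branch to assign low scores to faces whose features are unreliable, which is what turns it into a quality score.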
- S140 Perform visibility evaluation on the face area whose quality evaluation score meets a preset value range, and obtain a corresponding visibility evaluation score.
- the step of performing visibility evaluation on the face area whose quality evaluation score meets the preset value range, and obtaining the corresponding visibility evaluation score includes:
- the training steps of the visibility evaluation model include:
- the loss function is the number of errors in the visibility judgment of the key points.
- S150 Perform face recognition and feature extraction on a face region whose visibility evaluation score meets a preset value range.
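The S110-S150 flow amounts to a gating pipeline: each stage either passes the face region on or rejects it. The sketch below is illustrative; all model callables and both 0.5 thresholds are hypothetical stand-ins (the patent only names a 0.5 quality threshold).

```python
def recognize(picture, detect, align, quality_score, visibility_score,
              extract_features, q_thresh=0.5, v_thresh=0.5):
    """S110-S150: detect, straighten, then gate on quality and visibility."""
    face = detect(picture)                 # S110: locate the face region
    if face is None:
        return None
    face = align(face)                     # S120: key points + straightening
    if quality_score(face) < q_thresh:     # S130: quality gate
        return None
    if visibility_score(face) < v_thresh:  # S140: visibility (occlusion) gate
        return None
    return extract_features(face)          # S150: recognition features
```

The point of the ordering is that the cheap gates run first, so occluded or low-quality pictures never reach the comparatively expensive recognition step.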
- this application designs a face recognition assistance system that integrates alignment, quality evaluation, and visibility evaluation into a multi-task model.
- the multi-task model of the auxiliary system includes a weight-sharing bottom layer, and multiple branches connected to the bottom layer and independent of each other, as shown in FIG. 4.
- the multi-task model branch of the auxiliary system further includes: an alignment model branch for obtaining key points of a face, a quality evaluation model branch, and a visibility evaluation model branch for judging occlusion.
- an alignment model branch for obtaining key points of a face
- a quality evaluation model branch for evaluating image quality, and
- a visibility evaluation model branch for judging occlusion.
- the alignment model is trained first, that is, the shared weights and the landmarks branch. During this stage, the parameters of the IQA (image quality assessment) branch and the Visibility branch are fixed, and only the loss of the landmarks branch is back-propagated.
- next, the parameters of the shared bottom layer and of the landmarks branch are fixed, and the IQA and Visibility branches are trained respectively.
- finally, the weights of all modules are unfrozen, and pictures carrying all three kinds of labels are used for a short fine-tuning. At this point, the multi-task model training is complete.
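The three-stage schedule above amounts to toggling which parameter groups receive gradient updates at each stage. A toy sketch follows; the model dictionary and flag mechanism are illustrative stand-ins, not a real framework API (in PyTorch, for instance, the same effect comes from setting `requires_grad` per parameter group):

```python
def set_trainable(model, branch_flags):
    """Mark which branches receive gradient updates; all others frozen."""
    for name, module in model.items():
        module["trainable"] = branch_flags.get(name, False)

model = {"shared": {}, "landmarks": {}, "iqa": {}, "visibility": {}}

# Stage 1: only the shared backbone and landmarks branch learn.
set_trainable(model, {"shared": True, "landmarks": True})

# Stage 2: freeze those, train the IQA and Visibility heads.
set_trainable(model, {"iqa": True, "visibility": True})

# Stage 3: unfreeze everything for a short fine-tune on triple-labelled data.
set_trainable(model, {name: True for name in model})
```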
- for the landmarks branch, the training input is the picture together with its labeled 68-point coordinates;
- the output is the predicted coordinates of the 68 points;
- the loss function is a normalized L2 loss (that is, a normalized Euclidean distance).
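One plausible reading of this normalized L2 loss is sketched below, assuming the normalizer is some reference length such as the inter-ocular distance; the patent does not name the normalizing quantity:

```python
import numpy as np

def normalized_l2_loss(pred, target, norm):
    """Sum of per-point Euclidean distances between predicted and
    labelled landmark positions, divided by a normalizing length
    (e.g. the inter-ocular distance; an assumption here)."""
    dists = np.linalg.norm(pred - target, axis=1)  # one distance per point
    return dists.sum() / norm
```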
- for the visibility branch, the training labels are the visibility of the 68 points (1 for visible, 0 for invisible), and the final output is a 2*68 vector (the 2 entries per point are the probabilities of visible and invisible, which sum to 1; the class whose probability exceeds 0.5 is taken as the final result). The loss function is the number of visibility judgment errors over the 68 points, multiplied by a set coefficient to prevent overfitting.
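A sketch of the visibility head's decision rule and error-count loss under the stated 2*68 output format; the coefficient value here is an assumption:

```python
import numpy as np

def visibility_from_probs(probs):
    """probs has shape (2, 68): row 0 is P(visible), row 1 is P(invisible),
    and the two rows sum to 1 per point. A point is called visible when
    P(visible) exceeds 0.5."""
    return (probs[0] > 0.5).astype(int)

def error_count_loss(probs, labels, coeff=0.01):
    """Number of wrongly judged points, scaled by a set coefficient
    (the patent's overfitting-prevention factor; 0.01 is assumed)."""
    wrong = int(np.sum(visibility_from_probs(probs) != labels))
    return coeff * wrong
```

Note that a raw error count is not differentiable, so a practical training setup would likely back-propagate a surrogate such as per-point cross-entropy and use a count like this for evaluation; the patent does not spell out this detail.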
- if the quality evaluation score is judged to be less than 0.5, the image quality does not meet the recognition requirements, and subsequent operations are not performed.
- the three modules of the face point (landmarks) model, the visibility model, and the quality evaluation model are integrated together. This not only solves the occlusion problem that the quality evaluation module alone cannot handle, greatly improving recognition accuracy, but also handles the multiple modules through multi-tasking, greatly reducing the inference time of the entire process.
- the above-mentioned multi-task model can be used for face recognition as well as for other face attribute tasks.
- for example, in eyelid classification, when the points on the eyelids are judged to be blocked by a glasses frame, eyelid classification can be skipped;
- similarly, in beard classification, if the beard is hidden by objects such as a hand or a microphone, beard classification can also be skipped. This significantly improves the inference speed of face attribute tasks and avoids the loss of accuracy caused by recognition errors under occlusion.
- this application also provides a face recognition system based on face visibility. Its logical structure is similar to the module composition of the face recognition program 10 in the aforementioned electronic device (shown in FIG. 2), and the functions or operation steps implemented by the position determining unit 11, the face straightening unit 12, the quality evaluation unit 13, the visibility evaluation unit 14, and the face recognition unit 15 are similar to those of the corresponding units of the face recognition system of this embodiment. For example:
- the position determining unit is used to detect the picture to be processed and obtain the position of the face area in the picture to be processed;
- the face straightening unit is used to determine the key points of the face in the face area obtained by the position determining unit based on the key point alignment technology, and perform the straightening process on the face;
- the quality evaluation unit is used to evaluate the quality of the face region straightened by the face straightening unit, and obtain a quality evaluation score;
- the visibility evaluation unit is used to evaluate the visibility of the face area whose quality evaluation score meets the preset value range, and obtain the corresponding visibility evaluation score;
- the face recognition unit is used to perform face recognition and feature extraction on the face area whose visibility evaluation score meets the preset value range.
- the face straightening unit may include an alignment model training unit, a coordinate information determining unit, and a straightening unit (not shown in the figure).
- the alignment model training unit is used to obtain annotated image data and train an alignment model based on the annotated image data
- the coordinate information determining unit is used to input the image to be processed into the alignment model, and output the key point coordinate information corresponding to the image to be processed.
- the annotated image data is a picture set or image set with pre-marked key points.
- the coordinates of the key points can be obtained by the regression method.
- the number of face key points in the face area is set to 68.
- the network input of the training model of the alignment model is an image of a face obtained after face detection, and the label of the training model is the coordinate position of 68 points on the face.
- the alignment model unit further includes an annotation image data acquisition unit, a training unit, a normalization unit, and an iteration unit (not shown in the figure).
- the annotated image data acquiring unit is used to acquire an image set with pre-annotated key points as annotated image data;
- the training unit is used to use the face image corresponding to the annotated image as the network input of the training model of the alignment model,
- the pre-labeled key point coordinate positions of the face image are used as the label of the training model for training;
- the normalization unit is used to obtain the sum of the Euclidean distances between the output of the training model and the label, and perform normalization processing on it to obtain the loss function; the iteration unit is used to iterate the parameters based on the loss function until a trained alignment model is obtained.
- the quality evaluation unit further includes a first evaluation model training unit and a first evaluation unit (not shown in the figure).
- the first evaluation model training unit is used to train the quality evaluation model; the first evaluation unit is used to evaluate the quality of the straightened face region based on the quality evaluation model trained by the first evaluation model training unit, and obtain the quality evaluation score.
- the first evaluation model training unit further includes a first network training unit, a face recognition feature determination unit, and an evaluation model acquisition unit (not shown in the figure).
- the first network training unit is used to train a multi-task neural network;
- the input of the multi-task neural network is the straightened face area;
- the output of the multi-task neural network is the face feature of the face area and the score value corresponding to the face feature;
- the face recognition feature determination unit is used to multiply the face feature and the corresponding score value to obtain the final face recognition feature
- the evaluation model acquisition unit is used to perform network training based on the final face recognition feature and the loss function, to obtain the quality evaluation model.
- the quality evaluation model can be implemented with a multi-task network based on a simple recognition model;
- one branch outputs the face feature extracted for recognition; the other branch, after passing through a sigmoid function, outputs a score value between 0 and 1. The score value is multiplied with the extracted face feature to obtain the final face recognition feature; then, after training with the triplet loss function, the final quality evaluation model is obtained.
- a picture is input to the model to obtain a quality evaluation score; the closer the score is to 1, the higher the quality of the face region is considered to be.
- the visibility evaluation unit further includes a second evaluation model training unit and a second evaluation unit (not shown in the figure).
- the second evaluation model training unit is used to train the visibility evaluation model; the second evaluation unit is used to evaluate, based on the visibility evaluation model trained by the second evaluation model training unit, the visibility of the face area whose quality evaluation score meets the requirements.
- the above-mentioned second evaluation model training unit further includes a second network training unit and a second evaluation model acquisition unit (not shown in the figure).
- the second network training unit is used to input, based on a multi-task neural network, the visibility of the face key points in the straightened face area, and output the visibility probabilities of the face key points in the face area;
- the second evaluation model acquisition unit is used to perform network training based on the visibility probability and loss function of key points of the face to obtain the visibility evaluation model, where the loss function is the number of errors in the visibility judgment of the key points.
- an embodiment of the present application also provides a storage medium that includes a face recognition program based on face visibility; when the face recognition program based on face visibility is executed by a processor, the steps of the method described above are implemented.
- the specific implementation of the computer-readable storage medium of the present application is substantially the same as the specific implementation of the above-mentioned face recognition method, system, and electronic device based on face visibility, and will not be repeated here.
Abstract
The present application relates to the technical field of face recognition, and discloses a face visibility-based face recognition method, applicable to an electronic device. The method comprises: performing detection on an image to be processed, and acquiring a position of a face region in the image; determining, on the basis of a key point alignment technique, key points of a face in the face region, and performing orientation correction processing on the face; performing quality assessment on the face region that has undergone the orientation correction processing, and acquiring a quality assessment score; performing visibility assessment on a face region having a quality assessment score falling within a pre-determined value range, and acquiring a corresponding visibility assessment score; and performing face recognition and feature extraction on a face region having a visibility assessment score falling within a pre-determined value range. The present application solves, by means of face visibility, problems caused by occlusion during image quality assessment, and improves the accuracy and efficiency of face recognition.
Description
This application claims priority to the Chinese patent application with application number 201910885914.3, filed on September 19, 2019, and entitled "Face Recognition Method, Device and Storage Medium Based on Face Visibility".
This application relates to the field of face recognition technology, and in particular to a face recognition method, system, device, and storage medium based on face visibility.
Face recognition technology has long been validated in academia and has also seen small-scale application in industry. However, as the database scale grows, for example from thousands of people in a building, to tens of thousands in a residential community, to tens of millions in a city, recognition becomes increasingly difficult. In this process, to preserve the user experience, most recognition is performed without user cooperation; for example, the user is not required to hold still at a specific angle or under specific lighting. At the same time, this greatly increases the difficulty of face recognition.
To overcome the above problems, existing face recognition solutions include a quality evaluation module: when the quality of the captured photo does not meet the requirements, no recognition is performed, and recognition is performed only when the quality meets the requirements. This amounts to a preliminary filtering before the picture is recognized. However, the applicant realized that existing quality evaluation algorithms mostly filter out blur (including motion blur, low resolution, and other kinds of blur) and poor lighting (overly bright or dim), and cannot fully handle occlusion (for example, when the user wears sunglasses, a face mask, and so on). Such partially occluded pictures also greatly reduce recognition accuracy, causing face recognition to fail or perform poorly.
For this reason, a technology that can analyze face occlusion is urgently needed to improve the accuracy of face recognition.
Summary of the Invention
The present application provides a face recognition method, system, electronic device, and storage medium based on face visibility. Its main purpose is to use face visibility to solve the occlusion problem that image quality evaluation cannot handle, while also greatly improving both the accuracy and the speed of face recognition.
To achieve the above objective, the present application provides a face recognition method based on face visibility, the method comprising:
detecting a picture to be processed, and obtaining the position of the face region in the picture to be processed;
determining, based on a key point alignment technique, the key points of the face in the face region, and straightening the face;
performing quality evaluation on the straightened face region, and obtaining a quality evaluation score;
performing visibility evaluation on the face region whose quality evaluation score falls within a preset value range, and obtaining a corresponding visibility evaluation score;
performing face recognition and feature extraction on the face region whose visibility evaluation score falls within the preset value range.
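The five steps above can be sketched as a staged filtering pipeline. The following is a minimal sketch, in which `detect_face`, `align_face`, `quality_score`, `visibility_score`, `extract_features`, and the threshold values are hypothetical placeholders standing in for the models and preset ranges the application describes, not a real API:

```python
def recognize(image, detect_face, align_face, quality_score,
              visibility_score, extract_features,
              quality_threshold=0.5, visibility_threshold=0.8):
    """Run recognition only on faces that pass both the quality gate and
    the visibility gate; return None as soon as any gate fails."""
    face_region = detect_face(image)               # step 1: locate the face region
    if face_region is None:
        return None
    aligned = align_face(face_region)              # step 2: key points + straightening
    if quality_score(aligned) < quality_threshold:         # step 3: quality gate
        return None
    if visibility_score(aligned) < visibility_threshold:   # step 4: visibility gate
        return None
    return extract_features(aligned)               # step 5: recognition features
```

Because each gate returns early, the more expensive recognition step runs only on pictures that have already passed both filters.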
Correspondingly, the present application provides a face recognition system based on face visibility, comprising:
a position determining unit, configured to detect a picture to be processed and obtain the position of the face region in the picture to be processed;
a face straightening unit, configured to determine, based on a key point alignment technique, the key points of the face in the face region, and straighten the face;
a quality evaluation unit, configured to perform quality evaluation on the straightened face region and obtain a quality evaluation score;
a visibility evaluation unit, configured to perform visibility evaluation on the face region whose quality evaluation score falls within a preset value range, and obtain a corresponding visibility evaluation score;
a face recognition unit, configured to perform face recognition and feature extraction on the face region whose visibility evaluation score falls within the preset value range.
In addition, to achieve the above objective, the present application further provides an electronic device, comprising a memory and a processor, wherein the memory stores a face recognition program based on face visibility, and the program, when executed by the processor, implements the following steps:
detecting a picture to be processed, and obtaining the position of the face in the picture to be processed;
determining, based on a key point alignment technique, the key points of the face in the picture to be processed, and straightening the face;
performing quality evaluation on the straightened face region, and obtaining a quality evaluation score;
performing visibility evaluation on the face region whose quality evaluation score falls within a preset value range, and obtaining a corresponding visibility evaluation score;
performing face recognition and feature extraction on the face region whose visibility evaluation score falls within the preset value range.
In addition, to achieve the above objective, the present application further provides a storage medium, the storage medium including a face recognition program based on face visibility which, when executed by a processor, implements any of the steps of the face recognition method based on face visibility described above.
The face recognition method, system, electronic device, and computer-readable storage medium based on face visibility proposed in the present application obtain and straighten the key points of the face detected in the picture to be processed, then progressively filter the face region by combining quality evaluation and visibility evaluation, and perform the face recognition operation only on pictures that meet the filtering conditions. This not only solves the problem of face occlusion in pictures, but also improves the accuracy and speed of face recognition.
FIG. 1 is a schematic diagram of the application environment of a specific embodiment of face recognition based on face visibility in the present application;
FIG. 2 is a schematic module diagram of a specific embodiment of the face recognition program based on face visibility in FIG. 1;
FIG. 3 is a flowchart of a specific embodiment of the face recognition method based on face visibility in the present application;
FIG. 4 is a schematic diagram of the multi-task model structure of the auxiliary system of the present application.
The realization of the objectives, functional features, and advantages of the present application will be further described with reference to the embodiments and the accompanying drawings.
It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
Embodiment 1
The present application provides a face recognition method based on face visibility, applied to an electronic device 1. FIG. 1 is a schematic diagram of the application environment of a specific embodiment of the face recognition method based on face visibility of the present application.
In this embodiment, the electronic device 1 may be a terminal device with computing capability, such as a server, a smartphone, a tablet computer, a portable computer, or a desktop computer.
The electronic device 1 includes a processor 12, a memory 11, a network interface 14, and a communication bus 15.
The memory 11 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, for example a hard disk of the electronic device 1. In other embodiments, the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the face-visibility-based face recognition program 10 installed on the electronic device 1, and the like. The memory 11 may also be used to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), a microprocessor, or another data processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the face-visibility-based face recognition program 10.
The network interface 14 may optionally include a standard wired interface or a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the electronic device 1 and other electronic devices.
The communication bus 15 is used to realize connection and communication between these components.
FIG. 1 only shows the electronic device 1 with components 11-15, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further include a user interface, a display, and a touch sensor.
In addition, the area of the display of the electronic device 1 may be the same as or different from that of the touch sensor. Optionally, the display and the touch sensor are stacked to form a touch display screen, on which the device detects touch operations triggered by the user.
Optionally, the electronic device 1 may further include a radio frequency (RF) circuit, sensors, an audio circuit, and the like, which are not described in detail here.
In the device embodiment shown in FIG. 1, the memory 11, as a computer storage medium, may include an operating system and the face-visibility-based face recognition program 10; when the processor 12 executes the face-visibility-based face recognition program 10 stored in the memory 11, the following steps are implemented:
detecting a picture to be processed, and obtaining the position of the face in the picture to be processed;
determining, based on a key point alignment technique, the key points of the face region in the picture to be processed, and straightening the face;
performing quality evaluation on the straightened face region, and obtaining a quality evaluation score;
performing visibility evaluation on the face region whose quality evaluation score falls within a preset value range, and obtaining a corresponding visibility evaluation score;
performing face recognition and feature extraction on the face region whose visibility evaluation score falls within the preset value range.
Preferably, the step of determining the key points of the face in the face region and straightening the face includes:
acquiring annotated image data, and training an alignment model based on the annotated image data;
inputting the picture to be processed into the alignment model, and outputting the key point coordinate information of the face corresponding to the picture to be processed;
based on the key point coordinate information, obtaining the straightening angle of the picture to be processed, rotating the picture by that angle, and obtaining the straightened face region.
Preferably, the step of acquiring annotated image data and training the alignment model based on the annotated image data includes:
obtaining an image set with pre-annotated key points as the annotated image data;
using the face image corresponding to each annotated image as the network input of the alignment model's training model, and using the pre-annotated key point coordinates of the face image as the training labels;
computing the sum of Euclidean distances between the output of the training model and the labels, and normalizing it to obtain the loss function;
iterating the parameters based on the loss function until a trained alignment model is obtained.
Preferably, the step of performing quality evaluation on the straightened face region and obtaining a quality evaluation score includes:
training a quality evaluation model;
performing quality evaluation on the straightened face region based on the quality evaluation model, and obtaining a quality evaluation score;
wherein the training of the quality evaluation model includes:
training a multi-task neural network whose input is the straightened face region, and whose outputs are the face features of the face region and a score value corresponding to those features;
multiplying the face features by the corresponding score value to obtain the final face recognition features;
performing network training based on the final face recognition features and the loss function to obtain the quality evaluation model.
Preferably, the step of performing visibility evaluation on the face region whose quality evaluation score falls within the preset value range, and obtaining the corresponding visibility evaluation score, includes:
training a visibility evaluation model;
performing visibility evaluation, based on the visibility evaluation model, on the face region that passes the quality evaluation;
wherein the training of the visibility evaluation model includes:
based on the multi-task neural network, taking as input the visibility of the face key points of the straightened face region, and outputting the visibility probability of the face key points of the face region;
performing network training based on the visibility probabilities of the face key points and a loss function to obtain the visibility evaluation model, the loss function being the number of key points whose visibility is judged incorrectly.
The electronic device 1 proposed in the above embodiment fuses the alignment model, the quality evaluation model, and the visibility evaluation model together. This not only addresses the loss of recognition accuracy caused by face occlusion, which the quality evaluation model alone cannot resolve, and improves recognition accuracy, but also solves the recognition problem with a multi-task neural network, greatly reducing the inference time of the whole pipeline.
In other embodiments, the face-visibility-based face recognition program 10 may also be divided into one or more modules, which are stored in the memory 11 and executed by the processor 12 to complete the present application. A module in the present application refers to a series of computer program instruction segments capable of completing a specific function. FIG. 2 is a program module diagram of a preferred embodiment of the face-visibility-based face recognition program 10 in FIG. 1. The face-visibility-based face recognition program 10 may be divided into: a position determining unit 11, a face straightening unit 12, a quality evaluation unit 13, a visibility evaluation unit 14, and a face recognition unit 15. The functions and operation steps implemented by modules 11-15 are similar to those described above and are not detailed again here; by way of example:
the position determining unit 11 is configured to detect the picture to be processed and obtain the position of the face region in the picture to be processed;
the face straightening unit 12 is configured to determine, based on a key point alignment technique, the key points of the face in the face region, and straighten the face;
the quality evaluation unit 13 is configured to perform quality evaluation on the straightened face region and obtain a quality evaluation score;
the visibility evaluation unit 14 is configured to perform visibility evaluation on the face region whose quality evaluation score falls within a preset value range, and obtain a corresponding visibility evaluation score;
the face recognition unit 15 is configured to perform face recognition and feature extraction on the face region whose visibility evaluation score falls within the preset value range.
Embodiment 2
In addition, the present application further provides a face recognition method based on face visibility. FIG. 3 is a flowchart of a specific embodiment of the face recognition method based on face visibility of the present application. The method may be executed by a device, which may be implemented in software and/or hardware.
In this embodiment, the face recognition method based on face visibility provided by the present application includes steps S110 to S150.
S110: Detect the picture to be processed, and obtain the position of the face region in the picture to be processed.
S120: Based on a key point alignment technique, determine the key points of the face in the face region, and straighten the face.
Face points, that is, the landmarks obtained by face alignment, are a mandatory step in face recognition. First, face detection is performed on the picture to be processed to obtain the position of the face. Then a key point alignment technique is used to obtain the key points of the face (for example the eye points, nose points, and mouth points), and the face is straightened. In a preferred embodiment of the present application, the number of face key points is set to 68.
Further, the step of determining the key points of the face in the face region in the picture to be processed and straightening the face includes:
1. acquiring annotated image data, and training an alignment model based on the annotated image data;
2. inputting the picture to be processed into the alignment model, and outputting the key point coordinate information of the corresponding face;
3. based on the key point coordinate information, obtaining the straightening angle of the picture to be processed, rotating the picture, and obtaining the straightened face region.
In addition, the step of acquiring the annotated image data and training the alignment model based on it includes:
1. obtaining an image set with pre-annotated key points as the annotated image data;
2. using the face image corresponding to each annotated image as the network input of the alignment model's training model, and using the pre-annotated key point coordinates of the face image as the training labels;
3. computing the sum of Euclidean distances between the output of the training model and the labels, and normalizing it to obtain the loss function;
4. iterating the parameters based on the loss function until the trained alignment model is obtained.
The annotated image data is a set of pictures or images with pre-annotated key points. The coordinates of the key points can be obtained by regression, directly regressing the x and y coordinates of the 68 key points in the picture; that is, the output is a one-dimensional vector of 68*2=136 values. Once the x and y coordinates of the left-eye point and the right-eye point of the face have been determined, a rotation angle can be computed, and rotating the picture by this angle straightens the face.
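The rotation angle derived from the two eye points can be sketched as follows. This is an illustrative helper, not part of the patented method itself, and it assumes image coordinates with x growing rightward and y growing downward:

```python
import math

def roll_angle_degrees(left_eye, right_eye):
    """In-plane rotation angle (degrees) of the eye line relative to
    horizontal; rotating the picture by the negative of this angle
    levels the eyes and straightens the face."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

For example, a face whose right eye sits lower than its left eye by the same distance that separates the eyes horizontally yields an angle of 45 degrees, so the picture would be rotated by -45 degrees to straighten it.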
Training the alignment model requires a large amount of face data annotated with key points. The network input of the alignment model's training model is a face image obtained after face detection, and the training labels are the coordinates of the 68 points on the face. It should be noted that the order of these 68 points matters: the key point coordinates of the face image must be annotated in a preset order when used as training labels. For example, points 1 to 5 lie on the left eyebrow and points 6 to 10 on the right eyebrow, followed in turn by the nose, mouth, chin, and other key facial regions. The label input is thus x1, y1, x2, y2, ..., x68, y68, 136 values in total. During network training, the sum of the Euclidean distances between the 68 network output points and the 68 label points is computed and normalized to obtain the loss function. After multiple parameter iterations, the final alignment model (key point alignment model) is obtained.
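The normalized Euclidean-distance loss described above can be sketched as follows. The choice of normalizer is an assumption here (a common choice in landmark regression is the inter-ocular distance; the application only states that the distance sum is normalized):

```python
def normalized_landmark_loss(pred, target, norm):
    """Sum of per-point Euclidean distances between the predicted and the
    labeled key points, averaged over the points and divided by a
    normalizing length. pred and target are flat
    [x1, y1, ..., x68, y68] vectors of 136 values."""
    assert len(pred) == len(target) and len(pred) % 2 == 0
    n_points = len(pred) // 2
    total = 0.0
    for i in range(n_points):
        dx = pred[2 * i] - target[2 * i]
        dy = pred[2 * i + 1] - target[2 * i + 1]
        total += (dx * dx + dy * dy) ** 0.5
    return total / (n_points * norm)
```

The loss is zero exactly when every predicted point coincides with its label, and dividing by the normalizing length makes the loss comparable across faces of different sizes.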
S130: Perform quality evaluation on the straightened face region, and obtain a quality evaluation score.
The step of performing quality evaluation on the straightened face region and obtaining a quality evaluation score includes:
training a quality evaluation model;
performing quality evaluation on the straightened face region based on the quality evaluation model, and obtaining a quality evaluation score;
wherein the training of the quality evaluation model includes:
1. training a multi-task neural network whose input is the straightened face region, and whose outputs are the face features of the face region and a score value corresponding to those features;
2. multiplying the face features by the corresponding score value to obtain the final face recognition features;
3. performing network training based on the final face recognition features and the loss function to obtain the quality evaluation model.
Further, quality evaluation of the face region is performed by a quality evaluation model. This model can be built as a multi-task network on top of a simple recognition model: one branch of the multi-task network outputs the face features extracted by face recognition, and the other branch passes through a sigmoid function to produce a score value between 0 and 1. The score value is multiplied by the previously extracted face features to obtain the final face recognition features; training with a triplet loss function then yields the final quality evaluation model. At inference time, inputting a picture produces a quality evaluation score; the closer the score is to 1, the higher the quality of the face region is considered to be.
Triplet loss is a loss function in deep learning used for training on samples with small inter-class differences, such as faces. The fed data consist of an anchor example, a positive example, and a negative example; training optimizes the anchor-negative distance minus the anchor-positive distance, the larger this difference the better.
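The triplet loss just described can be sketched on plain feature vectors as follows. This is the generic formulation; the margin value is an assumption, and the application additionally scales this loss by the quality score produced by the other branch:

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss over three feature vectors: zero once the
    anchor-negative distance exceeds the anchor-positive distance by at
    least the margin, so training pulls anchor and positive together
    and pushes anchor and negative apart."""
    def dist(a, b):
        # Euclidean distance between two equal-length feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```

When the negative is already far from the anchor the loss vanishes, so such triplets contribute no gradient; hard triplets, where the impostor is closer than the genuine match, dominate training.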
S140: Perform visibility evaluation on the face region whose quality evaluation score falls within the preset value range, and obtain the corresponding visibility evaluation score.
The step of performing visibility evaluation on the face region whose quality evaluation score falls within the preset value range, and obtaining the corresponding visibility evaluation score, includes:
training a visibility evaluation model;
performing visibility evaluation, based on the visibility evaluation model, on the face region that passes the quality evaluation;
wherein the training of the visibility evaluation model includes:
based on the multi-task neural network, taking as input the visibility of the face key points of the straightened face region, and outputting the visibility probability of the face key points of the face region;
performing network training based on the visibility probabilities of the face key points and a loss function to obtain the visibility evaluation model, the loss function being the number of key points whose visibility is judged incorrectly.
S150: Perform face recognition and feature extraction on the face region whose visibility evaluation score falls within the preset value range.
It should be noted that the alignment model, the quality evaluation model, and the visibility evaluation model are all implemented based on facial features. In particular, the face alignment coordinates and the visibility are simply different attributes of the face, and the quality of the face points is itself part of the quality evaluation. Therefore, the present application designs a face recognition auxiliary system that integrates alignment, quality evaluation, and visibility evaluation into a single multi-task model.
Specifically, the multi-task model of the auxiliary system includes a weight-sharing bottom layer and multiple mutually independent branches connected to it, as shown in FIG. 4.
The branches of the auxiliary system's multi-task model include: an alignment model branch that produces the face key points, a quality evaluation model branch, and a visibility evaluation model branch that judges occlusion. During training of the multi-task model, the three branches are first trained separately, and the model is finally fine-tuned on pictures that carry all three labels (key points, quality evaluation score, and visibility evaluation score).
Specifically, the alignment model, that is, the shared weights module and the landmarks module, is trained first. During this stage, the parameters of the IQA (image quality assessment) branch and the Visibility branch are frozen, and only the loss of the landmarks branch is back-propagated. After the alignment branch has been trained, the parameters of the weight-sharing bottom layer (shared weights) and of the landmarks branch are fixed. Then the IQA and Visibility modules are each unfrozen and trained in turn. Finally, the weights of all modules are unfrozen, and the model is briefly fine-tuned on pictures that carry all three labels. At this point, training of the multi-task model is complete.
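The staged freeze/unfreeze schedule above can be illustrated schematically as follows. This uses plain sets of branch names instead of a real deep-learning framework; the names are labels for the modules in FIG. 4, not real identifiers, and splitting stage 2 into two sub-stages is one plausible reading of training the two heads "in turn":

```python
def training_stages():
    """Yield, for each stage, the set of branches whose parameters are
    trainable; every branch not in the set is frozen for that stage."""
    yield {"shared_weights", "landmarks"}   # stage 1: alignment only
    yield {"iqa"}                           # stage 2a: IQA head, trunk frozen
    yield {"visibility"}                    # stage 2b: visibility head, trunk frozen
    yield {"shared_weights", "landmarks",
           "iqa", "visibility"}             # stage 3: joint fine-tuning, all open
```

In a real framework the same schedule would be expressed by toggling the trainable flag (e.g. `requires_grad`) on each branch's parameters before each stage.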
The training process of the multi-task model is described in detail below.
When training the landmarks branch, the procedure matches the stand-alone key point alignment network: the input of the training model is the picture together with the coordinates of its 68 points, the output is the coordinates of the 68 points, and the loss function is the normalized L2 loss (that is, the normalized Euclidean distance).
When training the visibility branch, the input of the training model is the visibility of the 68 points (1 for visible, 0 for invisible), and the final output is a 2*68 vector (the two entries per point are the probabilities of being visible and invisible, which sum to 1; the class whose probability exceeds 0.5 is taken as the final output). The loss function is the number of points whose visibility is judged incorrectly, multiplied by a set coefficient to prevent overfitting.
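The decoding rule and error-count loss for the visibility branch can be sketched as follows. The coefficient value is an assumption (the application only says a set coefficient is applied):

```python
def decode_visibility(pairs):
    """Turn (p_visible, p_invisible) probability pairs, which sum to 1,
    into 0/1 flags by taking the class whose probability exceeds 0.5."""
    return [1 if p_visible > 0.5 else 0 for p_visible, _ in pairs]

def visibility_loss(pairs, labels, coefficient=0.1):
    """Number of points whose visibility is judged incorrectly, scaled
    by a set coefficient to temper the loss (hypothetical value)."""
    predicted = decode_visibility(pairs)
    errors = sum(1 for p, t in zip(predicted, labels) if p != t)
    return coefficient * errors
```

For a full face the two lists would have 68 entries, one per key point.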
When training the IQA branch, the input is a triplet of images (anchor, positive, negative), where the anchor and the positive depict the same person and the negative depicts a different person. The final loss is a triplet loss weighted by a coefficient (the IQA score): it pulls the anchor and positive features closer together while pushing the anchor and negative features as far apart as possible.
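A sketch of the score-weighted triplet loss described above, on plain coordinate lists in place of embedding vectors. The margin value is an illustrative choice; the patent does not specify one.

```python
import math

def weighted_triplet_loss(anchor, positive, negative, iqa_score, margin=0.2):
    """Triplet loss scaled by the IQA score: penalize triplets where the
    anchor-positive distance is not smaller than the anchor-negative
    distance by at least the margin."""
    d_ap = math.dist(anchor, positive)   # same-person pair: should be small
    d_an = math.dist(anchor, negative)   # different-person pair: should be large
    return iqa_score * max(0.0, d_ap - d_an + margin)
```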
Finally, when all modules are trained together, the input consists of three images, the landmark coordinates of each of the three images, and the landmark visibilities. The final loss is the algebraic sum of the individual losses of the three modules.
Specifically, after the multi-task model has produced its outputs, a quality evaluation result is obtained from the IQA branch. If the quality evaluation score is below 0.5, the image quality is judged insufficient for recognition and no further processing takes place. If the score is above 0.5, the visibility branch is consulted next: if, for example, more than 20% of the facial keypoints are judged invisible (68 × 20% ≈ 14 points), the face is considered heavily occluded and face recognition is not performed; otherwise, face recognition proceeds.
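The inference-time gating in the paragraph above can be sketched as a small predicate. The thresholds are taken from the text (score 0.5, 20% of 68 points); everything else is illustrative.

```python
def should_recognize(iqa_score, invisible_points, total_points=68,
                     score_threshold=0.5, occlusion_ratio=0.2):
    """Return True only when the face passes both gates: the quality score
    is at least 0.5, and at most 20% of the keypoints are invisible."""
    if iqa_score < score_threshold:
        return False                         # image quality too low
    limit = total_points * occlusion_ratio   # 13.6 for 68 points at 20%
    return invisible_points <= limit         # heavily occluded faces skipped
```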
By fusing the landmark model, the visibility model, and the quality-evaluation model into a single network, the above face recognition method based on keypoint visibility not only handles the occlusion cases that a quality-evaluation module alone cannot, thereby greatly improving recognition accuracy, but also, by solving the multiple modules in a multi-task fashion, greatly reduces the inference time of the whole pipeline.
In addition to face recognition, the multi-task model can be used for judging other face attributes. For example, during eyelid classification, if the eyelid keypoints are judged to be occluded by a spectacle frame, eyelid classification can be skipped; during beard classification, if the beard is occluded by a hand, a microphone, or another object, beard classification can likewise be skipped. This significantly speeds up attribute inference while avoiding the accuracy loss caused by recognition errors under occlusion.
Embodiment 3
Corresponding to the face recognition method and electronic device based on face visibility described above, the present application further provides a face recognition system based on face visibility. Its logical structure is similar to the module composition of the face recognition program 10 (shown in Fig. 2) in the aforementioned electronic device: the functions and operation steps implemented by the position determining unit 11, face straightening unit 12, quality evaluation unit 13, visibility evaluation unit 14, and face recognition unit 15 are similar to the logical composition of the face recognition system of this embodiment. For example:
a position determining unit, configured to detect a picture to be processed and obtain the position of the face region in the picture;
a face straightening unit, configured to determine, based on keypoint alignment, the keypoints of the face in the face region obtained by the position determining unit, and to straighten the face;
a quality evaluation unit, configured to evaluate the quality of the face region straightened by the face straightening unit and obtain a quality evaluation score;
a visibility evaluation unit, configured to evaluate the visibility of a face region whose quality evaluation score falls within a preset range and obtain the corresponding visibility evaluation score;
a face recognition unit, configured to perform face recognition and feature extraction on a face region whose visibility evaluation score falls within a preset range.
Correspondingly, in the face recognition system of this embodiment, the face straightening unit may include an alignment-model training unit, a coordinate-information determining unit, and a straightening unit (not shown). The alignment-model training unit obtains annotated image data and trains the alignment model on it; the coordinate-information determining unit feeds the picture to be processed into the alignment model and outputs the keypoint coordinates of the corresponding face; and the rotation-straightening unit derives the straightening angle of the picture from the keypoint coordinates, rotates the picture by that angle, and obtains the straightened face region.
The annotated image data is a set of pictures or images with pre-annotated keypoints. The keypoint coordinates can be obtained by regression: the x and y coordinates of the 68 keypoints in the image are regressed directly, so the output is a one-dimensional vector of 68 × 2 = 136 values. Once the x, y coordinates of the left-eye and right-eye points of the face are determined, a rotation angle can be computed, and rotating the picture by that angle straightens the face.
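The eye-based rotation angle described above can be sketched as follows: the in-plane (roll) angle of the line joining the two eye keypoints, in the usual image convention where y increases downward. Rotating the image by the negative of this angle levels the eyes.

```python
import math

def roll_angle_degrees(left_eye, right_eye):
    """Straightening angle from the left-eye and right-eye keypoints:
    the angle of the inter-eye line relative to the horizontal axis."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```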
Obtaining the alignment model requires a large amount of face data annotated with keypoints; in this embodiment the number of face keypoints in the face region is set to 68. The network input of the alignment model's training stage is a single face image obtained by face detection, and the training label is the coordinates of the 68 points on the face.
In another preferred implementation of this embodiment, the alignment-model unit further includes an annotated-image-data acquisition unit, a training unit, a normalization unit, and an iteration unit (not shown).
The annotated-image-data acquisition unit obtains an image set with pre-annotated keypoints as the annotated image data. The training unit trains with the face image corresponding to each annotated image as the network input of the alignment model's training stage and with the pre-annotated keypoint coordinates of the face image as the training label. The normalization unit computes the sum of Euclidean distances between the training model's output and the label and normalizes it to obtain the loss function. The iteration unit iterates the parameters based on the loss function until a trained alignment model is obtained.
In another preferred implementation of this embodiment, the quality evaluation unit further includes a first evaluation-model training unit and a first evaluation unit (not shown).
The first evaluation-model training unit trains the quality evaluation model; the first evaluation unit evaluates, based on that model, the quality of the straightened face region and obtains the quality evaluation score.
In yet another preferred implementation of this embodiment, the first evaluation-model training unit further includes a first network training unit, a face-recognition-feature determining unit, and an evaluation-model acquisition unit (not shown). The network training unit trains a multi-task neural network whose input is the straightened face region and whose output is the face features of that region together with a score value corresponding to those features. The face-recognition-feature determining unit multiplies the face features by the corresponding score value to obtain the final face recognition features. The evaluation-model acquisition unit performs network training based on the final face recognition features and the loss function to obtain the quality evaluation model.
The quality evaluation unit can be implemented as a multi-task network built on a simple recognition model. One branch outputs the face features extracted by face recognition; the other branch passes through a sigmoid function to produce a score between 0 and 1. The score is multiplied by the previously extracted face features to form the final face recognition features, which are then trained with a triplet loss to obtain the final quality evaluation model. At inference time, feeding in a picture yields a quality evaluation score; the closer the score is to 1, the higher the quality of the face region is considered to be.
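The feature-weighting step just described can be sketched as follows, with a plain list standing in for the recognition branch's feature vector. The function and variable names are illustrative, not taken from the patent's implementation.

```python
import math

def sigmoid(x):
    """Squash a raw score logit into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-x))

def quality_weighted_features(features, score_logit):
    """Combine the two branches: multiply the recognition features
    element-wise by the sigmoid-squashed quality score, yielding the final
    recognition features on which the triplet loss is trained."""
    score = sigmoid(score_logit)
    return score, [f * score for f in features]
```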
In another preferred implementation of this embodiment, the visibility evaluation unit further includes a second evaluation-model training unit and a second evaluation unit (not shown).
The second evaluation-model training unit trains the visibility evaluation model; the second evaluation unit evaluates, based on that model, the visibility of face regions that have passed the quality evaluation.
Further, the second evaluation-model training unit includes a second network training unit and a second evaluation-model acquisition unit (not shown). The second network training unit, based on the multi-task neural network, takes the visibility of the face keypoints in the straightened face region as input and outputs the visibility probability of those keypoints.
The second evaluation-model acquisition unit performs network training based on the keypoint visibility probabilities and a loss function, where the loss function is the number of keypoints whose visibility is judged incorrectly, to obtain the visibility evaluation model.
It should be understood that the implementations above are not all the implementations of Embodiment 3. The specific implementations of Embodiment 3 are substantially the same as those of the face recognition method and electronic device based on face visibility described above, and are not repeated here.
Embodiment 4
In addition, an embodiment of the present application further provides a storage medium that contains a face recognition program based on face visibility. When executed by a processor, the program implements the steps of the face recognition method based on face visibility and the operations of the face recognition system based on face visibility described above.
The specific implementation of the computer-readable storage medium of the present application is substantially the same as that of the face recognition method, system, and electronic device based on face visibility described above, and is not repeated here.
It should be noted that, as used herein, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, device, article, or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, device, article, or method. Absent further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, device, article, or method that includes that element.
The serial numbers of the embodiments above are for description only and do not indicate the relative merits of the embodiments. From the description of the implementations above, those skilled in the art will clearly understand that the methods of the embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferable. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes over the prior art, can be embodied in the form of a software product stored on a storage medium as described above (such as a ROM/RAM, magnetic disk, or optical disc), including several instructions that cause a terminal device (which may be a mobile phone, computer, server, network device, etc.) to execute the methods described in the embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit its patent scope. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.
Claims (20)
- A face recognition method based on face visibility, applied to an electronic device, characterized in that the method comprises: detecting a picture to be processed and obtaining the position of a face region in the picture; determining, based on keypoint alignment, the keypoints of the face in the face region, and straightening the face; evaluating the quality of the straightened face region and obtaining a quality evaluation score; evaluating the visibility of a face region whose quality evaluation score falls within a preset range and obtaining a corresponding visibility evaluation score; and performing face recognition and feature extraction on a face region whose visibility evaluation score falls within a preset range.
- The face recognition method based on face visibility according to claim 1, wherein the step of determining the keypoints of the face in the face region and straightening the face comprises: obtaining annotated image data and training an alignment model on the annotated image data; inputting the picture to be processed into the alignment model and outputting the keypoint coordinates of the corresponding face; and obtaining, based on the keypoint coordinates, the straightening angle of the picture to be processed, rotating the picture by that angle, and obtaining the straightened face region.
- The face recognition method based on face visibility according to claim 2, wherein the step of obtaining annotated image data and training an alignment model on the annotated image data comprises: obtaining an image set with pre-annotated keypoints as the annotated image data; training with the face image corresponding to each annotated image as the network input of the alignment model's training stage and with the pre-annotated keypoint coordinates of the face image as the training label; obtaining the sum of Euclidean distances between the output of the training model and the label and normalizing it to obtain a loss function; and iterating the parameters based on the loss function until a trained alignment model is obtained.
- The face recognition method based on face visibility according to claim 3, wherein the annotated image data is a picture set or image set with pre-annotated keypoints, and the keypoint coordinates are obtained by regression, directly regressing the x and y coordinates of the keypoints in the picture or image.
- The face recognition method based on face visibility according to claim 4, wherein the number of face keypoints in the face region is 68.
- The face recognition method based on face visibility according to claim 3, wherein the keypoint coordinates of the face image are annotated in a preset annotation order as the label of the training model.
- The face recognition method based on face visibility according to claim 1, wherein the step of evaluating the quality of the straightened face region and obtaining a quality evaluation score comprises: training a quality evaluation model; and evaluating the quality of the straightened face region based on the quality evaluation model and obtaining the quality evaluation score.
- The face recognition method based on face visibility according to claim 7, wherein the step of training the quality evaluation model comprises: training a multi-task neural network whose input is the straightened face region and whose output is the face features of the face region together with a score value corresponding to the face features; multiplying the face features by the corresponding score value to obtain the final face recognition features; and performing network training based on the final face recognition features and the loss function to obtain the quality evaluation model.
- The face recognition method based on face visibility according to claim 8, wherein the step of evaluating the quality of the straightened face region based on the quality evaluation model and obtaining the quality evaluation score comprises: using a multi-task network built on a simple recognition model as the quality evaluation model, wherein one branch of the multi-task network outputs the face features extracted after face recognition and the other branch passes through a sigmoid function to obtain a score between 0 and 1; multiplying the score by the extracted face features to obtain the final face recognition features; and training the final face recognition features with a triplet loss to obtain the final quality evaluation model.
- The face recognition method based on face visibility according to claim 8, wherein the step of evaluating the visibility of a face region whose quality evaluation score falls within the preset range and obtaining the corresponding visibility evaluation score comprises: training a visibility evaluation model; and evaluating, based on the visibility evaluation model, the visibility of face regions that satisfy the quality evaluation.
- The face recognition method based on face visibility according to claim 10, wherein the step of training the visibility evaluation model comprises: based on the multi-task neural network, taking the visibility of the face keypoints in the straightened face region as input and outputting the visibility probability of the face keypoints of the face region; and performing network training based on the keypoint visibility probabilities and a loss function, where the loss function is the number of keypoints whose visibility is judged incorrectly, to obtain the visibility evaluation model.
- A face recognition system based on face visibility, characterized by comprising: a position determining unit, configured to detect a picture to be processed and obtain the position of the face region in the picture; a face straightening unit, configured to determine, based on keypoint alignment, the keypoints of the face in the face region and straighten the face; a quality evaluation unit, configured to evaluate the quality of the straightened face region and obtain a quality evaluation score; a visibility evaluation unit, configured to evaluate the visibility of a face region whose quality evaluation score falls within a preset range and obtain the corresponding visibility evaluation score; and a face recognition unit, configured to perform face recognition and feature extraction on a face region whose visibility evaluation score falls within a preset range.
- The face recognition system based on face visibility according to claim 12, wherein the face straightening unit comprises: an alignment-model training unit, configured to obtain annotated image data and train an alignment model on it; a coordinate-information determining unit, configured to input the picture to be processed into the alignment model and output the keypoint coordinates of the corresponding face; and a rotation-straightening unit, configured to obtain, based on the keypoint coordinates, the straightening angle of the picture to be processed, rotate the picture by that angle, and obtain the straightened face region.
- The face recognition system based on face visibility according to claim 13, wherein the alignment-model unit comprises: an annotated-image-data acquisition unit, configured to obtain an image set with pre-annotated keypoints as the annotated image data; a training unit, configured to train with the face image corresponding to each annotated image as the network input of the alignment model's training stage and with the pre-annotated keypoint coordinates of the face image as the training label; a normalization unit, configured to obtain the sum of Euclidean distances between the output of the training model and the label and normalize it to obtain a loss function; and an iteration unit, configured to iterate the parameters based on the loss function until a trained alignment model is obtained.
- The face recognition system based on face visibility according to claim 12, wherein the quality evaluation unit comprises: a first evaluation-model training unit, configured to train the quality evaluation model; and an evaluation-score acquisition unit, configured to evaluate the quality of the straightened face region based on the quality evaluation model and obtain the quality evaluation score.
- An electronic device, characterized by comprising a memory and a processor, the memory containing a face recognition program based on face visibility that, when executed by the processor, implements the following steps: detecting a picture to be processed and obtaining the position of the face region in the picture; determining, based on keypoint alignment, the keypoints of the face in the face region, and straightening the face; evaluating the quality of the straightened face region and obtaining a quality evaluation score; evaluating the visibility of a face region whose quality evaluation score falls within a preset range and obtaining the corresponding visibility evaluation score; and performing face recognition and feature extraction on a face region whose visibility evaluation score falls within a preset range.
- The electronic device according to claim 16, wherein the step of determining the keypoints of the face in the face region and straightening the face comprises: obtaining annotated image data and training an alignment model on the annotated image data; inputting the picture to be processed into the alignment model and outputting the keypoint coordinates of the corresponding face; and obtaining, based on the keypoint coordinates, the straightening angle of the picture to be processed, rotating the picture by that angle, and obtaining the straightened face region.
- The electronic device according to claim 16, wherein the step of obtaining annotated image data and training an alignment model on the annotated image data comprises: obtaining an image set with pre-annotated keypoints as the annotated image data; training with the face image corresponding to each annotated image as the network input of the alignment model's training stage and with the pre-annotated keypoint coordinates of the face image as the training label; obtaining the sum of Euclidean distances between the output of the training model and the label and normalizing it to obtain a loss function; and iterating the parameters based on the loss function until a trained alignment model is obtained.
- The electronic device according to claim 16, wherein the step of evaluating the quality of the straightened face region and obtaining a quality evaluation score comprises: training a quality evaluation model; and evaluating the quality of the straightened face region based on the quality evaluation model and obtaining the quality evaluation score; and wherein the step of training the quality evaluation model comprises: training a multi-task neural network whose input is the straightened face region and whose output is the face features of the face region together with a score value corresponding to the face features; multiplying the face features by the corresponding score value to obtain the final face recognition features; and performing network training based on the final face recognition features and the loss function to obtain the quality evaluation model.
- A computer-readable storage medium, wherein the computer-readable storage medium includes a face recognition program based on face visibility, and when the face recognition program based on face visibility is executed by a processor, the steps of the face visibility-based face recognition method according to any one of claims 1 to 11 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910885914.3 | 2019-09-19 | ||
CN201910885914.3A CN110751043B (en) | 2019-09-19 | 2019-09-19 | Face recognition method and device based on face visibility and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021051611A1 true WO2021051611A1 (en) | 2021-03-25 |
Family
ID=69276755
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/118428 WO2021051611A1 (en) | 2019-09-19 | 2019-11-14 | Face visibility-based face recognition method, system, device, and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110751043B (en) |
WO (1) | WO2021051611A1 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111382693A (en) * | 2020-03-05 | 2020-07-07 | 北京迈格威科技有限公司 | Image quality determination method and device, electronic equipment and computer readable medium |
CN112785683B (en) * | 2020-05-07 | 2024-03-19 | 武汉金山办公软件有限公司 | Face image adjusting method and device |
CN111598000A (en) * | 2020-05-18 | 2020-08-28 | 中移(杭州)信息技术有限公司 | Face recognition method, device, server and readable storage medium based on multiple tasks |
CN111814840A (en) * | 2020-06-17 | 2020-10-23 | 恒睿(重庆)人工智能技术研究院有限公司 | Method, system, equipment and medium for evaluating quality of face image |
CN111738213B (en) * | 2020-07-20 | 2021-02-09 | 平安国际智慧城市科技股份有限公司 | Person attribute identification method and device, computer equipment and storage medium |
CN112001280B (en) * | 2020-08-13 | 2024-07-09 | 浩鲸云计算科技股份有限公司 | Real-time and online optimized face recognition system and method |
CN112287781A (en) * | 2020-10-19 | 2021-01-29 | 苏州纳智天地智能科技有限公司 | Human face photo quality evaluation method |
CN113792682B (en) * | 2021-09-17 | 2024-05-10 | 平安科技(深圳)有限公司 | Face quality assessment method, device, equipment and medium based on face image |
CN114677743A (en) * | 2022-04-08 | 2022-06-28 | 湖南四方天箭信息科技有限公司 | Face rectification method and device, computer equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104036276A (en) * | 2014-05-29 | 2014-09-10 | 无锡天脉聚源传媒科技有限公司 | Face recognition method and device |
CN106980844A (en) * | 2017-04-06 | 2017-07-25 | 武汉神目信息技术有限公司 | A kind of character relation digging system and method based on face identification system |
CN107679515A (en) * | 2017-10-24 | 2018-02-09 | 西安交通大学 | A kind of three-dimensional face identification method based on curved surface mediation shape image depth representing |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7327891B2 (en) * | 2001-07-17 | 2008-02-05 | Yesvideo, Inc. | Automatic selection of a visual image or images from a collection of visual images, based on an evaluation of the quality of the visual images |
CN108269250A (en) * | 2017-12-27 | 2018-07-10 | 武汉烽火众智数字技术有限责任公司 | Method and apparatus based on convolutional neural networks assessment quality of human face image |
CN109117797A (en) * | 2018-08-17 | 2019-01-01 | 浙江捷尚视觉科技股份有限公司 | A kind of face snapshot recognition method based on face quality evaluation |
CN109614910B (en) * | 2018-12-04 | 2020-11-20 | 青岛小鸟看看科技有限公司 | Face recognition method and device |
CN110046652A (en) * | 2019-03-18 | 2019-07-23 | 深圳神目信息技术有限公司 | Face method for evaluating quality, device, terminal and readable medium |
- 2019
- 2019-09-19 CN CN201910885914.3A patent/CN110751043B/en active Active
- 2019-11-14 WO PCT/CN2019/118428 patent/WO2021051611A1/en active Application Filing
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113255539A (en) * | 2021-06-01 | 2021-08-13 | 平安科技(深圳)有限公司 | Multi-task fusion face positioning method, device, equipment and storage medium |
CN113255539B (en) * | 2021-06-01 | 2024-05-10 | 平安科技(深圳)有限公司 | Multi-task fusion face positioning method, device, equipment and storage medium |
CN113657195A (en) * | 2021-07-27 | 2021-11-16 | 浙江大华技术股份有限公司 | Face image recognition method, face image recognition equipment, electronic device and storage medium |
CN113591763A (en) * | 2021-08-09 | 2021-11-02 | 平安科技(深圳)有限公司 | Method and device for classifying and identifying face shape, storage medium and computer equipment |
CN113591763B (en) * | 2021-08-09 | 2024-05-28 | 平安科技(深圳)有限公司 | Classification recognition method and device for face shapes, storage medium and computer equipment |
CN113792704A (en) * | 2021-09-29 | 2021-12-14 | 山东新一代信息产业技术研究院有限公司 | Cloud deployment method and device of face recognition model |
CN113792704B (en) * | 2021-09-29 | 2024-02-02 | 山东新一代信息产业技术研究院有限公司 | Cloud deployment method and device of face recognition model |
CN114140647A (en) * | 2021-11-26 | 2022-03-04 | 蜂巢能源科技有限公司 | Fuzzy image recognition algorithm for pole pieces of battery cell pole group |
CN114821737A (en) * | 2022-05-13 | 2022-07-29 | 浙江工商大学 | Moving end real-time wig try-on method based on three-dimensional face alignment |
CN114821737B (en) * | 2022-05-13 | 2024-06-04 | 浙江工商大学 | Mobile-end real-time wig try-on method based on three-dimensional face alignment |
Also Published As
Publication number | Publication date |
---|---|
CN110751043B (en) | 2023-08-22 |
CN110751043A (en) | 2020-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021051611A1 (en) | Face visibility-based face recognition method, system, device, and storage medium | |
WO2022134337A1 (en) | Face occlusion detection method and system, device, and storage medium | |
US11151363B2 (en) | Expression recognition method, apparatus, electronic device, and storage medium | |
EP3916627A1 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
US20200143146A1 (en) | Target object recognition method and apparatus, storage medium, and electronic device | |
US10713532B2 (en) | Image recognition method and apparatus | |
WO2019232866A1 (en) | Human eye model training method, human eye recognition method, apparatus, device and medium | |
WO2020000908A1 (en) | Method and device for face liveness detection | |
WO2019232862A1 (en) | Mouth model training method and apparatus, mouth recognition method and apparatus, device, and medium | |
CN110197146B (en) | Face image analysis method based on deep learning, electronic device and storage medium | |
WO2018028546A1 (en) | Key point positioning method, terminal, and computer storage medium | |
CN113343826B (en) | Training method of human face living body detection model, human face living body detection method and human face living body detection device | |
WO2021174819A1 (en) | Face occlusion detection method and system | |
WO2020248848A1 (en) | Intelligent abnormal cell determination method and device, and computer readable storage medium | |
JP4414401B2 (en) | Facial feature point detection method, apparatus, and program | |
WO2019061658A1 (en) | Method and device for positioning eyeglass, and storage medium | |
CN103902958A (en) | Method for face recognition | |
WO2019033715A1 (en) | Human-face image data acquisition method, apparatus, terminal device, and storage medium | |
WO2021051547A1 (en) | Violent behavior detection method and system | |
WO2021139167A1 (en) | Method and apparatus for facial recognition, electronic device, and computer readable storage medium | |
CN110569756A (en) | face recognition model construction method, recognition method, device and storage medium | |
JP2023502202A (en) | Databases, data structures, and data processing systems for the detection of counterfeit physical documents | |
CN111222433B (en) | Automatic face auditing method, system, equipment and readable storage medium | |
CN107844742B (en) | Facial image glasses minimizing technology, device and storage medium | |
WO2019205633A1 (en) | Eye state detection method and detection apparatus, electronic device, and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19945521 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19945521 Country of ref document: EP Kind code of ref document: A1 |