CN112001932A - Face recognition method and device, computer equipment and storage medium

Info

Publication number
CN112001932A
Authority
CN
China
Prior art keywords
face, pose, image, posture, recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010902551.2A
Other languages
Chinese (zh)
Other versions
CN112001932B (en)
Inventor
田植良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010902551.2A
Publication of CN112001932A
Application granted
Publication of CN112001932B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/20 Analysis of motion
    • G06T7/223 Analysis of motion using block-matching
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation


Abstract

The application relates to artificial intelligence image recognition, and provides a face recognition method, a face recognition device, computer equipment and a storage medium. The method comprises the following steps: acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree; carrying out image segmentation on the face image to be recognized to obtain a face region; identifying a face pose corresponding to the face area to obtain face pose information; determining a target face posture sub-interval matched with the face posture information, and acquiring a corresponding face posture matching degree according to the target face posture sub-interval; and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face posture matching degree. The method can improve the accuracy of face recognition.

Description

Face recognition method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a face recognition method, an apparatus, a computer device, and a storage medium.
Background
With the development of artificial intelligence technology, face recognition technology has emerged. Face recognition is a biometric technology for identity recognition based on the facial feature information of people. It covers a series of related technologies, also commonly called portrait recognition or facial recognition, in which a camera or video camera is used to collect images or video streams containing faces, the faces in the images are automatically detected and tracked, and face recognition is then performed on the detected faces. At present, a face image recognition result is obtained by extracting features from the face image and comparing the extracted features with existing face features. However, this method of identifying a face by extracting features from the face image and comparing them with existing face features suffers from low face recognition accuracy.
Disclosure of Invention
In view of the above, it is necessary to provide a face recognition method, an apparatus, a computer device and a storage medium capable of improving face recognition accuracy.
A method of face recognition, the method comprising:
acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree;
carrying out image segmentation on the face image to be recognized to obtain a face region;
identifying a face pose corresponding to the face area to obtain face pose information;
determining a target face posture sub-interval matched with the face posture information, and acquiring corresponding face posture matching degree according to the target face posture sub-interval;
and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face posture matching degree.
In one embodiment, obtaining the corresponding face identity matching degree according to the face image to be recognized includes:
inputting a face image to be recognized into a face recognition model for face recognition to obtain a face identity matching degree; the face recognition model is obtained by taking a training face image as input, taking face identity labels corresponding to the training face image as labels and training by using a convolutional neural network.
In one embodiment, determining a face recognition result corresponding to a face image to be recognized based on the face identity matching degree and the face pose matching degree includes:
acquiring a face identity weight corresponding to the face identity matching degree and a face posture weight corresponding to the face posture matching degree;
carrying out weighted calculation according to the face identity weight and the face identity matching degree to obtain the face identity weighted matching degree;
carrying out weighted calculation according to the face posture weight and the face posture matching degree to obtain the face posture weighted matching degree;
and obtaining a target face matching degree according to the face identity weighted matching degree and the face posture weighted matching degree, and, when the target face matching degree exceeds a preset threshold value, obtaining a face recognition result indicating that the face image to be recognized passes face recognition.
An apparatus for face recognition, the apparatus comprising:
the identity matching module is used for acquiring a face image to be recognized and obtaining the corresponding face identity matching degree according to the face image to be recognized;
the image segmentation module is used for carrying out image segmentation on the face image to be recognized to obtain a face region;
the gesture recognition module is used for recognizing the face gesture corresponding to the face area to obtain face gesture information;
the gesture matching module is used for determining a target human face gesture sub-interval matched with the human face gesture information and acquiring corresponding human face gesture matching degree according to the target human face gesture sub-interval;
and the result determining module is used for determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face posture matching degree.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree;
carrying out image segmentation on the face image to be recognized to obtain a face region;
identifying a face pose corresponding to the face area to obtain face pose information;
determining a target face posture sub-interval matched with the face posture information, and acquiring corresponding face posture matching degree according to the target face posture sub-interval;
and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face posture matching degree.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree;
carrying out image segmentation on the face image to be recognized to obtain a face region;
identifying a face pose corresponding to the face area to obtain face pose information;
determining a target face posture sub-interval matched with the face posture information, and acquiring corresponding face posture matching degree according to the target face posture sub-interval;
and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face posture matching degree.
According to the face recognition method, the face recognition device, the computer equipment and the storage medium, the face identity matching degree is obtained by recognizing the face image to be recognized; the face posture information of the face image to be recognized is then recognized, and the face posture matching degree is obtained according to the face posture information; the face recognition result corresponding to the face image to be recognized is then determined according to the face identity matching degree and the face posture matching degree. That is, the face recognition result is determined by the combined action of the face identity matching degree and the face posture matching degree, so that different kinds of information contribute to the determination and the accuracy of the obtained face recognition result is improved.
Drawings
FIG. 1 is a schematic flow chart of a face recognition method in one embodiment;
FIG. 2 is a flow diagram illustrating the determination of a face region in one embodiment;
FIG. 3 is a diagram illustrating an exemplary image segmentation model;
FIG. 4 is a diagram illustrating the results of image segmentation in one embodiment;
FIG. 5 is a schematic flow chart of training an image segmentation model in one embodiment;
FIG. 6 is a schematic flow chart of face region pose recognition in one embodiment;
FIG. 7 is a block diagram of a face pose recognition model in an exemplary embodiment;
FIG. 8 is a diagram illustrating the results of face pose recognition in an exemplary embodiment;
FIG. 9 is a schematic flow chart of training a face pose recognition model according to an embodiment;
FIG. 10 is a flowchart illustrating the process of determining the degree of matching of the face pose in one embodiment;
FIG. 11 is a schematic flow chart of a profile obtained in one embodiment;
FIG. 12 is a flow diagram illustrating the creation of a profile in one embodiment;
FIG. 13 is a schematic illustration of a profile in one embodiment;
FIG. 14 is a flowchart illustrating obtaining face pose matching degrees in one embodiment;
FIG. 15 is a schematic diagram illustrating a process for obtaining a target face pose subinterval in one embodiment;
FIG. 16 is a flow diagram illustrating obtaining a face recognition result in one embodiment;
FIG. 17 is a flowchart illustrating a face recognition method in accordance with an exemplary embodiment;
FIG. 18 is a diagram illustrating a scenario of face recognition unlocking in an exemplary embodiment;
FIG. 19 is a schematic diagram of an interface for successful face recognition unlocking in the embodiment of FIG. 18;
FIG. 20 is a diagram illustrating an exemplary implementation of face recognition unlocking in another embodiment;
fig. 21 is a schematic view of an application scenario of the face recognition method in an embodiment;
FIG. 22 is a schematic view of a scene of face recognition in the embodiment of FIG. 21;
FIG. 23 is a block diagram showing the construction of a face recognition apparatus according to an embodiment;
FIG. 24 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Computer Vision (CV) is a science that studies how to make machines "see". More specifically, it uses cameras and computers, instead of human eyes, to identify, track and measure targets, and further processes the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how computers simulate or realize human learning behaviors to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
The scheme provided by the embodiment of the application relates to technologies such as artificial intelligence image recognition and deep learning, and is specifically explained by the following embodiments:
In an embodiment, as shown in fig. 1, a face recognition method is provided. This embodiment is illustrated by applying the method to a terminal; it is to be understood that the method may also be applied to a server, or to a system including the terminal and the server and implemented through interaction between the terminal and the server. In this embodiment, the method includes the following steps:
and 102, acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree.
The face image refers to an image including a face part, and the face image to be recognized refers to a face image on which recognition is to be performed. The face identity is preset identity information corresponding to a face, and the identity information may include a face image, a face identifier, a face age, a face gender, and the like. The face identifier is used for uniquely identifying the corresponding face, and can be a name, an identity card number, a passport number and the like. The face identity matching degree refers to the degree of matching between the face image to be recognized and the face identity, which is obtained through face recognition. Face recognition refers to a biometric technology for identifying an identity based on the facial feature information of a person; it covers a series of related technologies, also commonly called portrait recognition or facial recognition, in which a camera or video camera is used to collect images or video streams containing faces, the faces in the images are automatically detected and tracked, and face recognition is then performed on the detected faces.
Specifically, the terminal may acquire the face image to be recognized through a camera device, where the camera device is a device for acquiring an image and may be a camera, a video camera, or the like. The terminal may also acquire a face image to be recognized stored in a memory. The terminal may also obtain the image to be recognized through an instant messaging application, where an instant messaging application refers to software for online chat and communication based on instant messaging technology, such as the QQ application, the WeChat application, the DingTalk application, the ICQ application, the MSN Messenger application and the like. The terminal may also acquire the face image to be recognized from the internet, for example by acquiring a video image from a video website and extracting the face image from the video image, or by directly downloading a face image from the internet, and the like. When the terminal acquires the face image to be recognized, face recognition is performed on the face image to be recognized to obtain the corresponding face identity matching degree. Various face recognition algorithms may be used: after the face recognition algorithm detects the face and locates the key feature points of the face, the main face area can be cropped out and, after preprocessing, fed into the recognition algorithm at the back end. The recognition algorithm extracts the face features and compares them with the face images of known face identities stored in the database to complete the final classification. Face recognition algorithms include, but are not limited to, recognition algorithms based on feature points of the face, recognition algorithms based on the entire face image, template-based recognition algorithms, algorithms that use neural networks for recognition, and algorithms that use support vector machines for recognition. For example, a neural network model based on a CNN (Convolutional Neural Network) may be used to recognize the face image to be recognized, so as to obtain the corresponding face identity matching degree. The terminal includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, camera devices, and other devices capable of performing face recognition.
Step 104, carrying out image segmentation on the face image to be recognized to obtain a face region.
The image segmentation refers to the area division of the face image to be recognized by using an image segmentation algorithm. The image segmentation algorithm includes, but is not limited to, a threshold-based segmentation method, a region-based segmentation method, an edge-based segmentation method, a particular theory-based segmentation method, a depth information-based segmentation method, a priori information-based segmentation method, a neural network-based segmentation method, and the like. The face region refers to a face part region in a face image to be recognized.
Specifically, the terminal uses an image segmentation algorithm to perform image segmentation on the face image to be recognized, and cuts the face image according to the segmented image to obtain a face region. For example, the terminal may use a convolutional neural network model to perform image segmentation on a face image to be recognized, so as to obtain a face region.
Step 106, identifying the face pose corresponding to the face area to obtain face pose information.
The face pose refers to the positional relationship between the face in the face image to be recognized and the camera device. The face pose includes, but is not limited to, an angle pose and a distance pose; the angle pose refers to the angular position between the face in the face image to be recognized and the camera device, and the distance pose refers to the distance between the face in the face image to be recognized and the camera device. The face pose information refers to specific information of the face pose, including but not limited to distance pose information and angle pose information. The distance pose information refers to the distance position information between the face in the face image to be recognized and the camera device, and the angle pose information refers to the angular position information between the face in the face image to be recognized and the camera device.
Specifically, the terminal uses a pre-trained face gesture recognition model to recognize the face gesture corresponding to the face area to obtain face gesture information, wherein the face gesture recognition model is obtained by training through a deep neural network algorithm according to a training face area image and a corresponding face gesture label. The face gesture recognition model can be trained in the server in advance, and then the face gesture recognition model is deployed to the terminal for use. The face gesture recognition model can also be directly trained in the terminal and deployed for use.
Step 108, determining a target face pose sub-interval matched with the face pose information, and acquiring the corresponding face pose matching degree according to the target face pose sub-interval.
A face pose sub-interval is one of the intervals obtained by dividing the overall interval established from the face pose information corresponding to historical users, that is, an interval is established according to the positions between the faces and the camera device in all the face images corresponding to the historical users. For example, when the face pose is a distance pose, a distance pose interval is obtained according to the maximum and minimum distances between the face and the camera in the face images; when the face pose is an angle pose, an angle pose interval is obtained according to the maximum and minimum angles between the face and the camera in the face images. The established pose interval is then divided to obtain pose sub-intervals, namely the face pose sub-intervals. For example, when the pose interval is a distance pose interval, it is divided into distance pose sub-intervals; when the pose interval is an angle pose interval, it is divided into angle pose sub-intervals. The target face pose sub-interval refers to the face pose sub-interval in which the face pose information corresponding to the face image to be recognized falls. The face pose matching degree is the degree of matching with the face identity obtained according to the target face pose sub-interval, and a correspondence between each face pose sub-interval and a face pose matching degree is established in advance according to the face identity corresponding to the historical users.
Specifically, the terminal matches the face pose information against the corresponding face pose sub-intervals to obtain the target face pose sub-interval. For example, when the face pose information is distance pose information, it is matched against the distance pose sub-intervals, and when the face pose information is angle pose information, it is matched against the angle pose sub-intervals. The terminal then acquires the face pose matching degree corresponding to the target face pose sub-interval according to the stored correspondence between face pose sub-intervals and face pose matching degrees, that is, the target face pose sub-interval is compared with each stored face pose sub-interval, and the face pose matching degree corresponding to the matching sub-interval is obtained.
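For illustration, the sub-interval lookup can be sketched in Python as follows; the interval boundaries and matching degrees are made-up example values rather than data from the embodiment:

DISTANCE_SUBINTERVALS = [
    ((20.0, 30.0), 0.9),   # (lower bound, upper bound) in centimeters -> face pose matching degree
    ((30.0, 40.0), 0.7),
    ((40.0, 60.0), 0.4),
]

def pose_matching_degree(distance_cm: float) -> float:
    """Return the matching degree of the sub-interval containing distance_cm,
    or 0.0 when the pose falls outside every stored sub-interval."""
    for (low, high), degree in DISTANCE_SUBINTERVALS:
        if low <= distance_cm < high:
            return degree
    return 0.0

print(pose_matching_degree(32.0))  # a face captured at 32 cm falls into the 30-40 cm sub-interval -> 0.7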
And step 110, determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face posture matching degree.
The face recognition result is the result of whether the face in the face image to be recognized is consistent with the face identity. And when the face in the face image to be recognized is consistent with the face identity, obtaining a face recognition result as the face recognition is passed. And when the face in the face image to be recognized is inconsistent with the face identity, obtaining a face recognition result which is that the face recognition fails.
Specifically, when the terminal determines that the face identity matching degree exceeds a preset face identity matching degree threshold and the face pose matching degree exceeds a preset face pose matching degree threshold, this indicates that the face in the face image to be recognized is consistent with the face identity, and the obtained face recognition result is that face recognition is passed. When the terminal determines that the face identity matching degree does not exceed the preset face identity matching degree threshold or the face pose matching degree does not exceed the preset face pose matching degree threshold, this indicates that the face in the face image to be recognized is inconsistent with the face identity, and the obtained face recognition result is that face recognition fails. The preset face identity matching degree threshold and the preset face pose matching degree threshold are thresholds set in advance and are used for judging whether the face in the face image to be recognized is consistent with the face identity.
In one embodiment, weighted summation calculation may be performed according to the face identity matching degree and the face pose matching degree to obtain a weighted summation calculation result, and when the weighted summation calculation result exceeds a preset threshold value, it indicates that the face in the face image to be recognized is consistent with the face identity, and the obtained face recognition result passes face recognition. And when the weighted sum calculation result does not exceed a preset threshold value, indicating that the identity of the face in the face image to be recognized is not consistent with the identity of the face, and obtaining a face recognition result which is that the face recognition fails.
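A minimal Python sketch of this weighted combination is given below; the weights and the threshold are illustrative assumptions and are not values fixed by the embodiment:

def face_recognition_result(identity_score: float, pose_score: float,
                            identity_weight: float = 0.7, pose_weight: float = 0.3,
                            threshold: float = 0.8) -> str:
    """Weighted sum of the two matching degrees compared against a preset threshold."""
    target_score = identity_weight * identity_score + pose_weight * pose_score
    return "recognition passed" if target_score > threshold else "recognition failed"

# Example: a high identity match combined with an unusual pose still fails the check.
print(face_recognition_result(identity_score=0.95, pose_score=0.2))  # -> recognition failed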
In one embodiment, the terminal may acquire a facial image to be recognized, and the terminal sends the facial image to be recognized to a server, where the server may be a server that provides a facial recognition service corresponding to the terminal, such as a cloud server or the like. The server receives the face image to be recognized, corresponding face identity matching degree is obtained according to the recognition of the face image to be recognized, image segmentation is carried out on the face image to be recognized, a face area is obtained, face gestures corresponding to the face area are recognized, and face gesture information is obtained. Determining a target face posture sub-interval matched with the face posture information, acquiring corresponding face posture matching degree according to the target face posture sub-interval, and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face posture matching degree. The server returns the face recognition result corresponding to the face image to be recognized to the terminal, the terminal displays the face recognition result corresponding to the face image to be recognized, and the face recognition result corresponding to the face image to be recognized is obtained through the server, so that the efficiency can be improved.
According to the face recognition method, the face identity matching degree is obtained by recognizing the face image to be recognized; the face pose information of the face image to be recognized is then recognized, and the face pose matching degree is obtained according to the face pose information; the face recognition result corresponding to the face image to be recognized is then determined according to the face identity matching degree and the face pose matching degree. That is, the face recognition result is determined by the combined action of the face identity matching degree and the face pose matching degree, so that different kinds of information contribute to the determination and the accuracy of the obtained face recognition result is improved.
In one embodiment, after step 110 of determining the face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree, the method further includes the following step:
and when the face recognition result is that the face recognition is passed, executing corresponding target operation according to the face recognition passing result.
The target operation refers to an operation executed by the terminal in response to the face recognition result. For example, the target operation may be an unlocking operation, a payment operation, a login operation, a shutdown operation, an early warning operation, a terminal switching-off operation, a recording operation, and the like.
Specifically, when the terminal detects that the face recognition result is that the face recognition passes, corresponding unlocking operation is executed according to the face recognition passing result, wherein when the terminal is a smart phone, the unlocking operation refers to switching a terminal screen from a locked state to an unlocked state, the locked state refers to a state in which the smart phone cannot be normally used, and the unlocked state refers to a state in which the smart phone can be normally used. In an embodiment, the unlocking operation may also be unlocking the intelligent electronic access control, for example, when the human face recognition result is that the human face recognition passes, an unlocking instruction is sent to the intelligent electronic access control according to the face recognition passing result, and the intelligent electronic access control receives the unlocking instruction to execute the unlocking operation to open the access control. In one embodiment, the payment operation refers to an operation of performing face payment. Namely, when the face recognition result is that the face recognition is passed, the electronic payment operation is executed to carry out the electronic payment. In one embodiment, when the face recognition result is that the face recognition is passed, the login operation is executed to enter the corresponding website and APP application which need to be logged in. The login operation is an operation for logging in an application. In one embodiment, when the face recognition result is that the face recognition is passed, the terminal executes a power-off operation to perform power-off. In one embodiment, when the face recognition result is that the face recognition passes, the terminal executes an early warning operation to perform face identity early warning prompt, wherein the early warning operation refers to an early warning operation performed by face recognition monitoring equipment. In one embodiment, when the face recognition result is that the face recognition is passed, the terminal performs a recording operation to record the current time point and the corresponding face identity.
In one embodiment, when the face recognition result is that the face recognition fails, the face recognition result corresponding to the face image to be recognized is recorded as the face recognition fails.
In the embodiment, when the face recognition result is that the face recognition is passed, the corresponding target operation is executed according to the face recognition passing result, and the obtained face recognition result is more accurate, so that the corresponding target operation can be more accurately obtained for execution, and subsequent use is facilitated.
In one embodiment, the step 102 of obtaining the corresponding face identity matching degree according to the face image to be recognized includes:
inputting a face image to be recognized into a face recognition model for face recognition to obtain a face identity matching degree; the face recognition model is obtained by taking a training face image as input, taking a face identity label corresponding to the training face image as a label and training by using a convolutional neural network.
The training face image refers to a face image used for training the face recognition model. The training face images may come from the various face image datasets available on the internet for face recognition model training, such as the PubFig (Public Figures Face Database) dataset, the CelebA (CelebFaces Attributes) dataset, the ColorFERET dataset, and the FDDB (Face Detection Data Set and Benchmark) dataset. The training face image may also be a face image directly acquired from a server database, or a face image captured by a camera device. The face identity label is used for uniquely identifying the face identity of the training face image. The convolutional neural network refers to a feedforward neural network that contains convolutional calculation and has a deep structure, and includes an input layer, convolutional layers, pooling layers, a fully connected layer and an output layer. Activation functions of convolutional neural networks include, but are not limited to, the linear rectification function (ReLU), Leaky ReLU (LReLU), Parametric ReLU (PReLU), Randomized ReLU (RReLU), Exponential Linear Units (ELU), the Sigmoid function, and the hyperbolic tangent function. The convolutional neural network may use a loss function such as MSE (mean square error) loss, SVM (support vector machine) hinge loss, cross entropy loss, and the like.
Specifically, a face recognition model is deployed in the terminal. The face recognition model may be obtained by training in advance in the server, with training face images used as the input of a convolutional neural network and the face identity labels corresponding to the training face images used as the labels of the convolutional neural network. When the terminal acquires the face image to be recognized, the face image to be recognized is input into the face recognition model for face recognition to obtain the face identity matching degree, where the face image to be recognized may be normalized first and the normalized face image is then input into the face recognition model for recognition.
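As a hedged illustration of this inference step, the following Python (PyTorch) sketch normalizes a face image and feeds it to a small CNN to obtain per-identity matching degrees; the network layout, input size and normalization are assumptions made for the example and are not prescribed by the embodiment:

import torch
import torch.nn as nn

class FaceRecognitionCNN(nn.Module):
    def __init__(self, num_identities: int = 1000):
        super().__init__()
        self.features = nn.Sequential(               # convolution and pooling layers
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 28 * 28, num_identities)   # fully connected output layer

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FaceRecognitionCNN()
image = torch.rand(1, 3, 112, 112)                  # face image to be recognized (illustrative size)
normalized = (image - 0.5) / 0.5                    # simple normalization step
logits = model(normalized)
matching_degrees = torch.softmax(logits, dim=1)     # matching degree for each known face identity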
In the embodiment, the face recognition is performed through the deployed face recognition model, so that the efficiency and the accuracy of the face recognition are improved.
In an embodiment, as shown in fig. 2, step 104, performing image segmentation on a face image to be recognized to obtain a face region, includes:
step 202, inputting the face image to be recognized into a segmentation feature extraction network of the image segmentation model to obtain image segmentation features.
The image segmentation model is used for segmenting a face image to obtain a face region, and is obtained by training a training face image with face boundary labels by using a deep neural network model, wherein a cross entropy loss function is used as a loss function, and a ReLU function is used as an activation function. The segmentation feature extraction network is a network for extracting image features of an image to be recognized. The image segmentation features refer to feature images obtained by performing convolution on the face images to be recognized through a segmentation feature extraction network.
Specifically, the terminal inputs the face image to be recognized into a segmentation feature extraction network of the image segmentation model, and the segmentation feature extraction network performs convolution operation on the face image to be recognized to obtain image segmentation features.
Step 204, inputting the image segmentation features into an image classification network of the image segmentation model to obtain face pixels and non-face pixels, and determining a face region according to the face pixels and the non-face pixels.
The image classification network is used for classifying the pixels of the image, and may be a classification network obtained by using an SVM algorithm. The face pixel points refer to pixel points belonging to the face region in the face image to be recognized, and the non-face pixel points refer to pixel points that do not belong to the face region in the face image to be recognized.
Specifically, the server inputs the image segmentation features into an image classification network of the image segmentation model for classification to obtain each face pixel point and each non-face pixel point, and a region formed by each face pixel point is determined as a face region.
In a specific embodiment, fig. 3 shows a schematic structural diagram of the image segmentation model. Specifically, the terminal inputs the face image to be recognized into a CNN convolutional neural network for feature extraction, where the CNN convolutional layers convolve the input face image to be recognized, pooling is then performed through a pooling layer, feature vectors are finally output through a fully connected layer, and the feature vectors are classified through an SVM to obtain a classification result, that is, the face region is determined. For example, fig. 4 is a schematic view of a face region obtained by segmenting the face image to be recognized with the image segmentation model.
In this embodiment, the segmentation feature extraction network of the image segmentation model is used to obtain the image segmentation features, the image classification network is used to obtain the face pixel points, and the face region is determined according to the face pixel points, so that the obtained face region is more accurate.
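For illustration only, the per-pixel face/non-face classification can be sketched as follows; for simplicity the sketch replaces the SVM classifier named above with a small convolutional classification head, and all layer sizes are assumptions:

import torch
import torch.nn as nn

class FaceSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.feature_net = nn.Sequential(            # segmentation feature extraction network
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.classify = nn.Conv2d(16, 2, 1)           # labels each pixel as face or non-face

    def forward(self, x):
        return self.classify(self.feature_net(x))

segmenter = FaceSegmenter()
image = torch.rand(1, 3, 112, 112)                   # face image to be recognized
logits = segmenter(image)                            # shape (1, 2, H, W)
face_mask = logits.argmax(dim=1).bool()              # True where a pixel is classified as a face pixel
face_region = image * face_mask.unsqueeze(1)         # keep only the face pixel points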
In one embodiment, as shown in FIG. 5, the training of the image segmentation model includes the steps of:
step 502, obtaining a training face image with face boundary labeling.
The face boundary label is used for identifying a face part in a training face image.
Specifically, the terminal may acquire the training face images with face boundary labels from a third-party service that provides training face images, or the terminal may acquire training face images and perform face boundary labeling on them to obtain the training face images with face boundary labels. The terminal may also acquire stored training face images with face boundary labels from a server database.
Step 504, inputting the training face image into an initial segmentation feature extraction network of the initial image segmentation model to obtain initial image segmentation features.
The initial image segmentation model refers to an image segmentation model with initialized model parameters. The initial segmentation feature extraction network refers to a segmentation feature extraction network with initialized network parameters. The initial image segmentation feature refers to an image segmentation feature calculated by using an initialized network parameter.
Specifically, the server establishes an initial image segmentation model, and inputs a training face image into an initial segmentation feature extraction network of the initial image segmentation model to perform feature extraction, so as to obtain initial image segmentation features.
Step 506, inputting the initial image segmentation features into an initial image classification network of the initial image segmentation model to obtain initial face pixel points and initial non-face pixel points, and determining an initial face region image according to the initial face pixel points and the initial non-face pixel points.
The initial image classification network refers to an image classification network with initialized network parameters. The initial face pixel points refer to face pixel points obtained by classification and identification through an initial image classification network. The initial non-face pixel points refer to non-face pixel points obtained by classification and identification through an initial image classification network. The initial face region image is a face region image obtained by classifying and identifying through an initial image segmentation model.
Specifically, the server inputs the initial image segmentation features output by the initial segmentation feature extraction network into an initial image classification network of an initial image segmentation model for classification to obtain each initial face pixel point and each initial non-face pixel point, and an initial face region image is divided from the face image to be recognized according to each initial face pixel point and each initial non-face pixel point.
Step 508, calculating region error information between the initial face region and the face boundary label, and obtaining the image segmentation model when the region error information obtained by training meets a preset training completion condition.
The region error information is used for representing the error between the initial face region and the face boundary label. The preset training completion condition refers to a preset training completion condition of the image segmentation model, and may be that the region error information is smaller than a preset threshold value, or that the training iteration number reaches the maximum iteration.
Specifically, the terminal calculates the region error information between the initial face region and the face boundary label by using a loss function and judges whether the region error information meets the preset training completion condition. When the region error information does not meet the preset training completion condition, the terminal updates the model parameters of the initial image segmentation model by using the region error information, that is, back-propagation calculation is performed on the initial image segmentation model by using a back-propagation algorithm to obtain an updated image segmentation model, and iterative training is performed again with the updated image segmentation model, that is, the process returns to step 502 and continues until the region error information obtained by training meets the preset training completion condition. The image segmentation model obtained from the last update is taken as the trained image segmentation model, and the terminal deploys and uses the trained image segmentation model.
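A hedged Python (PyTorch) sketch of this training loop is shown below; the optimizer, the error threshold and the maximum number of iterations are assumptions for illustration, while the cross entropy loss follows the loss function named for the image segmentation model:

import torch
import torch.nn as nn

def train_segmenter(model, train_loader, max_iters=10000, error_threshold=0.05):
    criterion = nn.CrossEntropyLoss()                 # region error between prediction and boundary label
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    iteration = 0
    for images, boundary_labels in train_loader:      # boundary_labels: per-pixel face/non-face classes
        logits = model(images)
        loss = criterion(logits, boundary_labels)      # region error information
        optimizer.zero_grad()
        loss.backward()                                # back-propagation to update model parameters
        optimizer.step()
        iteration += 1
        if loss.item() < error_threshold or iteration >= max_iters:
            break                                      # preset training completion condition
    return model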
In one embodiment, a terminal can obtain a training face image with a face boundary label, the training face image with the face boundary label is sent to a server, the server receives the training face image with the face boundary label, the training face image is input into an initial segmentation feature extraction network of an initial image segmentation model to obtain initial image segmentation features, the initial image segmentation features are input into an initial image classification network of the initial image segmentation model to obtain initial face pixel points and initial non-face pixel points, and an initial face region image is determined according to the initial face pixel points and the initial non-face pixel points. Calculating the region error information of the initial face region and the face boundary label, obtaining an image segmentation model until the region error information obtained by training meets the preset training completion condition, deploying the image segmentation model into a terminal for use, and training the image segmentation model in a server, so that the training efficiency can be improved. In one embodiment, the server may invoke the image segmentation model deployed in the server for use by calling an interface.
In the embodiment, the training face image with the face boundary label is used for training to obtain the image segmentation model, and then the image segmentation model is deployed and used, so that the efficiency of obtaining the face recognition result is improved.
In one embodiment, step 106, recognizing a face pose corresponding to the face region image to obtain face pose information, includes:
the face region image is input into a face gesture recognition model for recognition to obtain face gesture information, and the face gesture recognition model is obtained by training through a multi-task regression model according to a training face region image and a corresponding face gesture label.
The training face region image refers to a face region image of a training face gesture recognition model. The face region image may be an image cropped from a training face image. The face pose marking is used for identifying face pose information corresponding to the face region image. The multi-task regression model is a model obtained by multi-task learning through a deep neural network. Multitask learning is a machine learning method that learns by putting multiple related tasks together based on a shared representation.
Specifically, a face pose recognition model is deployed in the terminal, and the face pose recognition model may be obtained by the terminal training using a multitask regression model according to a training face region image and a corresponding face pose label in advance, or may be obtained by the server training using the multitask regression model according to the training face region image and the corresponding face pose label, where a loss function used in the training is a cross entropy loss function, and an RELU function is used as an activation function. And then the server deploys the face gesture recognition model into the terminal. The terminal can also call the face gesture recognition model deployed in the server through a calling interface. And the terminal inputs the face region image into a face gesture recognition model for recognition to obtain face gesture information.
In the embodiment, the face pose is identified through the face pose identification model to obtain the face pose information, so that the efficiency of obtaining the face pose information is improved.
In one embodiment, the face pose information includes distance pose information and angle pose information, and the face pose recognition model includes a pose feature extraction network, a distance pose recognition network, and an angle pose recognition network.
As shown in fig. 6, inputting the face region image into the face pose recognition model for recognition, so as to obtain face pose information, including:
step 602, inputting the face region image into a pose feature extraction network for feature extraction, so as to obtain a face pose feature.
The distance pose information refers to the distance position information between the face in the face image to be recognized and the camera device, and the angle pose information refers to the angular position information between the face in the face image to be recognized and the camera device. The pose feature extraction network is a convolutional neural network for extracting image pose features from the face region image. The distance pose recognition network is a fully connected neural network for recognizing distance pose information, and the angle pose recognition network is a fully connected neural network for recognizing angle pose information.
Specifically, the terminal inputs the face region image into the pose feature extraction network for feature extraction, and the face pose feature is obtained.
Step 604, inputting the face pose characteristics into a distance pose recognition network for recognition to obtain distance pose information, and simultaneously inputting the face pose characteristics into an angle pose recognition network for recognition to obtain angle pose information.
Specifically, the terminal inputs the face posture features extracted through the posture feature extraction network into the distance posture recognition network for recognition to obtain distance posture information, and simultaneously inputs the face posture features into the angle posture recognition network for recognition to obtain angle posture information.
In a specific embodiment, fig. 7 shows a schematic structural diagram of the face pose recognition model. Specifically, the terminal inputs the face region image into a CNN network, and the face pose features are obtained through calculation by the convolutional layers, the pooling layer and the fully connected layer. The face pose features are then respectively input into a fully-connected distance pose recognition network and a fully-connected angle pose recognition network for recognition, that is, output results are obtained through a fully connected layer, a Dropout layer (for preventing model overfitting) and a ReLU layer (a nonlinear activation function layer), and the output results are regressed through a multilayer perceptron to obtain the corresponding outputs, namely the distance pose information and the angle pose information. For example, fig. 8 is a schematic diagram of obtained face pose information, where, after face pose recognition is performed on the face region image, the distance pose information is 30 centimeters and the angle pose information is 30 degrees.
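The following Python (PyTorch) sketch illustrates such a shared feature extractor with two fully connected regression heads; the layer sizes, dropout rate and input resolution are illustrative assumptions only:

import torch
import torch.nn as nn

class FacePoseModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.feature_net = nn.Sequential(                 # pose feature extraction network
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 28 * 28, 128), nn.ReLU(),
        )
        def head():                                        # fully connected layer + Dropout + ReLU + regression output
            return nn.Sequential(nn.Linear(128, 64), nn.Dropout(0.5), nn.ReLU(), nn.Linear(64, 1))
        self.distance_head = head()                        # distance pose recognition network
        self.angle_head = head()                           # angle pose recognition network

    def forward(self, face_region):
        features = self.feature_net(face_region)           # shared face pose features
        return self.distance_head(features), self.angle_head(features)

model = FacePoseModel()
distance_cm, angle_deg = model(torch.rand(1, 3, 112, 112)) # e.g. about 30 cm and 30 degrees after training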
In one embodiment, the face pose information includes, but is not limited to, distance pose information, angular pose information, and three-dimensional coordinate pose information, which refers to angular information of the face orientation of the face in three-dimensional space, including pitch angle coordinate information, yaw angle coordinate information, and roll angle coordinate information. The face gesture recognition model comprises a gesture feature extraction network, a distance gesture recognition network, an angle gesture recognition network and a three-dimensional coordinate gesture recognition network, the terminal inputs the face gesture features into the distance gesture recognition network for recognition to obtain distance gesture information, simultaneously inputs the face gesture features into the angle gesture recognition network for recognition to obtain angle gesture information, and simultaneously inputs the face gesture features into the three-dimensional coordinate gesture recognition network for recognition to obtain three-dimensional coordinate gesture information. The three-dimensional coordinate posture recognition network is used for recognizing three-dimensional coordinate posture information in the face area image.
In the above embodiment, feature extraction is performed in the pose feature extraction network to obtain face pose features, and then recognition is performed through the distance pose recognition network and the angle pose recognition network to obtain distance pose information and angle pose information, so that accuracy and efficiency of the obtained distance pose information and angle pose information are improved.
In one embodiment, as shown in FIG. 9, the training of the face pose recognition model comprises the steps of:
step 902, training data is obtained, wherein the training data comprises face region images and corresponding face pose labels.
Specifically, the terminal may directly obtain the training data from the server. The terminal may also capture face images to obtain the face region images and record the face pose information at the time of capture as the face pose labels. The terminal may also acquire the training data through the internet from a service party that provides the training data.
Step 904, inputting the face region image into an initial pose feature extraction network of the initial face pose recognition model for feature extraction to obtain initial face pose features.
The initial face gesture recognition model refers to a face gesture recognition model with initialized model parameters. The initial attitude feature extraction network refers to an attitude feature extraction network with initialized network parameters. The initial human face posture feature is obtained by initial network parameter operation in an initial posture feature extraction network.
Specifically, the terminal establishes an initial face pose recognition model, and then the face region image is input into an initial pose feature extraction network of the initial face pose recognition model for feature extraction, so that initial face pose features are obtained.
Step 906, inputting the initial face pose characteristics into an initial distance pose recognition network of an initial face pose recognition model for recognition to obtain initial distance pose information, inputting the face pose characteristics into an initial angle pose recognition network of the initial face pose recognition model for recognition to obtain initial angle pose information, and obtaining the initial face pose information according to the initial distance pose information and the initial angle pose information.
The initial distance pose recognition network refers to a distance pose recognition network with initialized network parameters, and the initial distance pose information refers to distance pose information calculated with the initialized network parameters in the initial distance pose recognition network. The initial angle pose recognition network refers to an angle pose recognition network with initialized network parameters, and the initial angle pose information refers to angle pose information calculated with the initialized network parameters in the initial angle pose recognition network.
Specifically, the server performs multi-task learning by using the initial face pose characteristics, namely, the initial distance pose information and the initial angle pose information are obtained by the identification of an initial distance pose identification network and an initial angle pose identification network. And then taking the initial distance pose information and the initial angle pose information as initial face pose information.
Step 908, calculating pose error information between the initial face pose information and the face pose label, and obtaining the face pose recognition model when the pose error information obtained by training meets a preset pose error condition.
The pose error information refers to the error between the initial face pose information and the face pose label. The preset pose error condition means that the pose error information is smaller than a preset pose error threshold or that training has reached the maximum number of iterations.
Specifically, the terminal calculates the pose error information between the initial face pose information and the face pose label by using a preset loss function and judges whether the pose error information meets the preset pose error condition. When it does not, the terminal uses the pose error information to update the model parameters of the initial face pose recognition model through a back propagation algorithm to obtain an updated face pose recognition model, and continues the iteration with the updated model, that is, returns to step 904. When the pose error information obtained by training meets the preset pose error condition, the most recently updated model is taken as the face pose recognition model.
In this embodiment, the face pose recognition model is trained in advance with the training data and then deployed, so that it can be used directly at recognition time, which improves efficiency.
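For illustration only, a minimal training-loop sketch for steps 902 to 908 follows, assuming the FacePoseRecognitionModel class sketched earlier and a dataloader that yields (face region image, distance label, angle label) triples. The mean-squared-error loss, the Adam optimizer, and the stopping threshold are assumptions; the application itself only requires a preset loss function and a preset pose error condition.

```python
# Hypothetical multi-task training loop for the face pose recognition model.
import torch
import torch.nn as nn

def train_face_pose_model(model, dataloader, max_epochs=50, error_threshold=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.MSELoss()  # pose error between prediction and pose label
    for epoch in range(max_epochs):
        epoch_error = 0.0
        for image, distance_label, angle_label in dataloader:
            distance_pred, angle_pred = model(image)
            # Multi-task pose error: distance error plus angle error.
            loss = criterion(distance_pred.squeeze(1), distance_label) + \
                   criterion(angle_pred.squeeze(1), angle_label)
            optimizer.zero_grad()
            loss.backward()   # back propagation updates the model parameters
            optimizer.step()
            epoch_error += loss.item()
        epoch_error /= max(len(dataloader), 1)
        # Preset pose error condition: stop once the error is small enough
        # (the maximum-iteration condition is handled by max_epochs).
        if epoch_error < error_threshold:
            break
    return model
```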
In an embodiment, as shown in fig. 10, before step 102, that is, before obtaining a face image to be recognized and obtaining a corresponding face identity matching degree according to the face image to be recognized, the method further includes:
step 1002, obtaining each historical face pose information corresponding to the user identifier when executing the target operation.
The user identifier is used for uniquely identifying the terminal corresponding to the user. The historical face pose information refers to the face pose information collected by the user terminal corresponding to the user identifier when the target operation was performed historically.
Specifically, the terminal may acquire each piece of historical face pose information corresponding to the user identifier when performing the target operation from the server database, or acquire it from its own memory; that is, each time the terminal performs the target operation, the face pose information is collected and stored. The terminal may also obtain the historical face pose information through the Internet from a service party that provides it. Each execution of the target operation under the user identifier corresponds to one piece of historical face pose information.
Step 1004, determining a total face pose interval corresponding to the user identifier according to the historical face pose information.
The total interval of the face pose is used for representing the interval range of the historical face pose information.
Specifically, when the historical face pose information is historical distance pose information, the terminal compares the historical distance pose information to determine the maximum and minimum historical distance pose information, and obtains the total face distance interval from the maximum and minimum values. When the historical face pose information is historical angle pose information, the terminal compares the historical angle pose information to determine the maximum and minimum historical angle pose information, and obtains the total face angle interval from the maximum and minimum values.
In one embodiment, when the historical face pose information is three-dimensional coordinate pose information, comparing the three-dimensional coordinate pose information, and determining the maximum value and the minimum value of each dimensional coordinate in the three-dimensional coordinate pose information to obtain a total face three-dimensional coordinate interval.
Step 1006, dividing the total face pose interval to obtain each face pose sub-interval.
The human face posture subinterval is an interval range obtained by dividing the human face posture total interval, and the interval range is smaller than the interval range of the human face posture total interval.
Specifically, the terminal divides the total face pose interval according to a preset pose division condition to obtain each face pose sub-interval. Here, the preset pose division condition refers to a pre-configured rule for dividing the interval, for example a preset division size.
Step 1008, determining the face pose matching degree corresponding to each face pose sub-interval according to each piece of historical face pose information, and storing each face pose sub-interval in association with the corresponding face pose matching degree.
Specifically, each historical face pose information is matched with each face pose subinterval, a face pose subinterval corresponding to each historical face pose information is determined, a face pose matching degree corresponding to each face pose subinterval is determined according to the total number of each historical face pose information and the number of the historical face pose information in the face pose subinterval, and then each face pose subinterval and the corresponding face pose matching degree are stored in an associated mode.
In the embodiment, the face pose total interval corresponding to the user identifier is determined according to the historical face pose information, the face pose total interval is divided to obtain the face pose sub-intervals, and the face pose matching degree corresponding to the face pose sub-intervals is determined according to the historical face pose information, so that the obtained face pose matching degree is more accurate. And then, each face posture subinterval is associated and stored with the corresponding face posture matching degree, so that the subsequent use is facilitated.
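For illustration only, a plain-Python sketch of deriving the total face pose interval from a user's historical face pose information is given below. The record format, a list of (distance, angle) pairs, and the function name are assumptions made for the sketch.

```python
# Hypothetical helper: derive the total face distance interval and the total
# face angle interval from the historical face pose information of one user.
def total_pose_interval(history):
    """history: non-empty list of (distance_cm, angle_deg) pairs recorded at past target operations."""
    distances = [d for d, _ in history]
    angles = [a for _, a in history]
    # The total intervals are simply the min/max ranges of the recorded values.
    return (min(distances), max(distances)), (min(angles), max(angles))
```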
In one embodiment, the face pose total interval comprises a face distance total interval and a face angle total interval.
As shown in fig. 11, step 1006, dividing the total face pose interval to obtain each face pose sub-interval, includes:
step 1102, dividing the total interval of the face distance to obtain sub-intervals of each face distance.
The face distance subinterval is an interval obtained by dividing the total face distance interval.
Specifically, when the total face distance interval is divided, the division may be performed according to a preset distance division size. For example, with a division size of 4 centimeters, a total face distance interval from 10 centimeters to 50 centimeters is divided into 10 sub-intervals.
And 1104, dividing the total interval of the face angles to obtain sub-intervals of all the face angles.
The face angle subinterval is an interval obtained by dividing the total face angle interval.
Specifically, when dividing the total face angle interval, the interval division may be performed according to a preset angle division size. For example, the interval division is performed according to the size of 3 degrees, and the total interval of the face angles from 30 degrees to 60 degrees can be divided into 10 intervals.
In an embodiment, when the terminal divides the total face three-dimensional coordinate interval, the division may be performed according to a preset coordinate step, for example a step of 4 coordinate units.
And step 1106, combining each face distance sub-interval and each face angle sub-interval to obtain a distribution map of the face posture sub-intervals.
The distribution diagram is used for representing the distribution state of the human face posture subintervals.
Specifically, the terminal may combine each face distance subinterval with each face angle subinterval, where the combining refers to combining each face distance subinterval and each face angle subinterval into a planar region, so as to obtain a distribution map of the face posture subinterval.
In the above embodiment, the face angle total interval and the face distance total interval are divided to obtain each face distance subinterval and each face angle subinterval, and then each face distance subinterval and each face angle subinterval are combined to obtain a distribution map of the face posture subinterval, so that the obtained face posture subinterval is more accurate.
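For illustration only, the interval division described above can be sketched in Python as follows; the function name is an assumption, and the 4-centimetre and 3-degree steps are simply the examples used earlier.

```python
# Hypothetical helper: split a total interval into equal sub-intervals by a
# preset division size.
def divide_interval(lower, upper, step):
    """Return a list of (start, end) sub-intervals covering [lower, upper]."""
    subintervals = []
    start = lower
    while start < upper:
        end = min(start + step, upper)
        subintervals.append((start, end))
        start = end
    return subintervals

# e.g. divide_interval(10, 50, 4) yields ten 4-centimetre face distance
# sub-intervals, and divide_interval(30, 60, 3) yields ten 3-degree face
# angle sub-intervals.
```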
In one embodiment, as shown in fig. 12, step 1106 combines each face distance subinterval and each face angle subinterval to obtain a distribution map of the face pose subinterval, which includes:
and step 1202, establishing a plane area according to the total face distance interval and the total face angle interval.
The plane area refers to a plane formed by a face distance total interval and a face angle total interval.
Specifically, the terminal may use the total face distance interval as a plane abscissa range, and the total face angle interval as a plane ordinate range, to establish a plane area. The terminal can also establish a plane area by taking the total human face angle interval as a plane horizontal coordinate range and the total human face distance interval as a plane vertical coordinate range.
And 1204, performing area division in the plane area according to each face distance sub-interval and each face angle sub-interval to obtain a plane sub-area corresponding to each face posture sub-interval.
And step 1206, forming a distribution graph of the face posture subintervals according to the plane subregions.
The plane sub-region refers to the region corresponding to a face pose sub-interval and is a part of the plane area.
Specifically, the terminal may divide the plane abscissa range in the plane area according to the range of each face distance sub-interval, and divide the plane ordinate range in the plane area according to the range of each face angle sub-interval, to obtain the plane sub-area corresponding to each face posture sub-interval. The terminal can also divide the plane horizontal coordinate range in the plane area according to the range of each face angle sub-interval, and divide the plane vertical coordinate range in the plane area according to the range of each face distance sub-interval to obtain the plane sub-area corresponding to each face posture sub-interval. And finally, the terminal forms a distribution map of the face posture subinterval according to each plane subregion.
In a specific embodiment, fig. 13 is a schematic diagram of the established distribution map of the face pose sub-intervals. A plane area is established with a total face distance interval of 10 cm to 60 cm as the plane ordinate and a total face angle interval of 30 degrees to 60 degrees as the plane abscissa. The total face distance interval is divided in steps of 10 cm and the total face angle interval in steps of 6 degrees, which yields the plane sub-region corresponding to each face pose sub-interval, that is, each square in the figure, and thereby the distribution map of the face pose sub-intervals. When the face pose sub-interval corresponding to given face pose information is determined, the matching intervals are first found from the face distance information and the face angle information, the corresponding grid in the distribution map is then located, and the corresponding face pose sub-interval is obtained.
In the embodiment, the distribution diagram of the face posture subintervals is obtained by establishing the plane area and then carrying out area division on the plane area, so that the accuracy of obtaining the distribution diagram is improved.
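For illustration only, building the distribution map from the two sets of sub-intervals can be sketched as below, reusing the divide_interval helper from the previous sketch; the function name and the list-of-cells representation are assumptions.

```python
# Hypothetical helper: build the distribution map as the set of plane
# sub-regions spanned by the total face distance interval (one axis) and the
# total face angle interval (the other axis).
def build_distribution_map(distance_interval, angle_interval,
                           distance_step, angle_step):
    distance_bins = divide_interval(*distance_interval, distance_step)
    angle_bins = divide_interval(*angle_interval, angle_step)
    # Each plane sub-region pairs one face distance sub-interval with one face
    # angle sub-interval (one "square" in fig. 13).
    return [(d_bin, a_bin) for d_bin in distance_bins for a_bin in angle_bins]

# e.g. build_distribution_map((10, 60), (30, 60), 10, 6) gives the 5 x 5 grid
# of fig. 13.
```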
In one embodiment, as shown in fig. 14, the step 1008 of determining the face pose matching degree corresponding to each face pose sub-interval according to each historical face pose information includes:
and 1402, counting the total number of the historical face pose information.
Step 1406, determining the face pose sub-interval matched with each historical face pose information, and counting the number of the historical face pose information in each face pose sub-interval.
Specifically, the terminal counts the total number of each historical face pose information corresponding to the user identifier, and then determines a face pose sub-interval corresponding to each historical face pose information, namely, each historical face pose information is divided into corresponding face pose sub-intervals. And counting the number of historical face pose information falling into each face pose subinterval.
Step 1408, calculating the face pose matching degree corresponding to each face pose subinterval according to the number and the total number of the historical face pose information in each face pose subinterval.
Specifically, the terminal calculates the ratio of the number of the historical face pose information in each face pose sub-interval to the total number, and the ratio is used as the face pose matching degree corresponding to each face pose sub-interval. The terminal may also perform the calculation using equation (1) shown below.
Equation (1): Face pose matching degree = number of historical face pose information falling in the face pose sub-interval / total number of historical face pose information
In one particular embodiment, as shown in FIG. 13, 40 pieces of historical face pose information fall into the grid of 30 to 40 centimeters and 36 to 42 degrees, and the total number of pieces of historical face pose information is 100. The face pose matching degree of the grid corresponding to 30 to 40 centimeters and 36 to 42 degrees is therefore 40/100 = 0.4. The face pose matching degree of every grid in the distribution map is calculated in this way, giving the face pose matching degree corresponding to each face pose sub-interval.
In the embodiment, the total number of the historical face pose information and the number of the historical face pose information in each face pose sub-interval are counted, so that the face pose matching degree corresponding to each face pose sub-interval is calculated, and the efficiency of obtaining the face pose matching degree is improved.
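For illustration only, the counting described above can be sketched as follows, following equation (1); the function name and the dictionary representation of the result are assumptions.

```python
# Hypothetical helper: compute the face pose matching degree of every face pose
# sub-interval from the historical face pose information.
def pose_matching_degrees(history, distribution_map):
    total = len(history)
    degrees = {}
    for (d_lo, d_hi), (a_lo, a_hi) in distribution_map:
        # Count the historical records falling in this plane sub-region
        # (half-open comparisons assign boundary values to one cell only).
        count = sum(1 for distance, angle in history
                    if d_lo <= distance < d_hi and a_lo <= angle < a_hi)
        degrees[((d_lo, d_hi), (a_lo, a_hi))] = count / total if total else 0.0
    return degrees
```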
In one embodiment, the face pose information includes a face distance pose parameter and a face angle pose parameter. As shown in fig. 15, step 108, determining a target face pose subinterval matched with the face pose information includes:
Step 1502, obtaining the established distribution map.
Specifically, the terminal may acquire the established distribution map of the face pose sub-intervals from the server database, or directly acquire the stored distribution map from its memory.
And 1504, determining a target plane sub-region from the distribution diagram according to the face distance posture parameter and the face angle posture parameter in the face posture information.
The face distance pose parameter refers to the distance between the face and the camera device. The face angle pose parameter refers to the angle between the face and the camera device. The target plane sub-region is the plane sub-region corresponding to the face pose information of the face image to be recognized.
Specifically, the face distance sub-interval and the face angle sub-interval in which the face distance pose parameter and the face angle pose parameter fall are determined, and the target plane sub-region is then determined from the distribution map according to that face distance sub-interval and face angle sub-interval. For example, if the face distance pose parameter in the face pose information is 18 centimeters and the face angle pose parameter is 45 degrees, the face distance sub-interval from 10 centimeters to 20 centimeters and the face angle sub-interval from 42 degrees to 48 degrees are determined, and the target plane sub-region is then determined from the distribution map.
And step 1506, acquiring a face pose sub-interval corresponding to the target plane sub-area as a target face pose sub-interval.
Specifically, the terminal acquires a face pose sub-interval corresponding to a target plane sub-area as a target face pose sub-interval.
In the above embodiment, the plane sub-region corresponding to the face pose information is determined according to the established distribution map, and then the face pose sub-region corresponding to the target plane sub-region is acquired as the target face pose sub-region, so that the efficiency of acquiring the target face pose sub-region is improved.
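For illustration only, locating the target face pose sub-interval can be sketched as a simple lookup over the plane sub-regions built earlier; the function name is an assumption.

```python
# Hypothetical helper: find the plane sub-region whose distance and angle
# ranges contain the face distance pose parameter and face angle pose parameter.
def find_target_subinterval(distance_param, angle_param, distribution_map):
    for (d_lo, d_hi), (a_lo, a_hi) in distribution_map:
        if d_lo <= distance_param < d_hi and a_lo <= angle_param < a_hi:
            return (d_lo, d_hi), (a_lo, a_hi)
    return None  # the pose falls outside every historical sub-interval

# e.g. with the grid of fig. 13, a distance of 18 cm and an angle of 45 degrees
# map to the cell ((10, 20), (42, 48)).
```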
In one embodiment, as shown in fig. 16, step 110, determining a face recognition result corresponding to a face image to be recognized based on the face identity matching degree and the face pose matching degree, includes:
step 1602, a face identity weight corresponding to the face identity matching degree and a face pose weight corresponding to the face pose matching degree are obtained.
The face identity weight is a preset weight corresponding to the face identity matching degree, and the face posture weight is a preset weight corresponding to the face posture matching degree.
Specifically, the terminal acquires a face identity weight corresponding to the face identity matching degree and a face pose weight corresponding to the face pose matching degree from the memory.
And 1604, performing weighted calculation according to the face identity weight and the face identity matching degree to obtain the face identity weighted matching degree.
And 1606, performing weighting calculation according to the face pose weight and the face pose matching degree to obtain a face pose weighting matching degree.
Step 1608, obtaining a target face matching degree according to the face identity weighted matching degree and the face pose weighted matching degree, and obtaining, when the target face matching degree exceeds a preset threshold value, a face recognition result corresponding to the face image to be recognized indicating that the face recognition is passed.
The face identity weighted matching degree is the matching degree obtained after the face identity matching degree is weighted. The face pose weighting matching degree is the matching degree obtained after weighting the face pose matching degree. The target face matching degree is used for representing the matching degree of the face image to be recognized and the face identity. The preset threshold is a preset target face matching degree threshold.
Specifically, the terminal may perform weighted calculation according to the face identity weight and the face identity matching degree to obtain the face identity weighted matching degree, and perform weighted calculation according to the face pose weight and the face pose matching degree to obtain the face pose weighted matching degree. The target face matching degree is then obtained, for example by averaging the two weighted matching degrees, and compared with the preset threshold. When the target face matching degree exceeds the preset threshold, the face recognition result corresponding to the face image to be recognized is that the face recognition is passed; when it does not exceed the preset threshold, the face recognition result is that the face recognition is not passed. In one embodiment, the target face matching degree may also be calculated directly by using formula (2) below.
Formula (2): Target face matching degree = face pose matching degree × w1 + face identity matching degree × w2
Wherein w1 is the face pose weight, and w2 is the face identity weight.
In the embodiment, the face posture matching degree and the face identity matching degree are respectively weighted to obtain the final target face matching degree, and then the face recognition result is obtained according to the target face matching degree, so that the obtained face recognition result is more accurate.
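For illustration only, the final weighted decision of steps 1602 to 1608 can be sketched as follows. The equal weights of 0.5 and the threshold of 0.9 are assumptions chosen to match the averaging and the threshold used in the examples below; the application only requires preset weights and a preset threshold.

```python
# Hypothetical decision step: combine the two matching degrees as in formula (2)
# and compare against the preset threshold.
def face_recognition_result(identity_degree, pose_degree,
                            identity_weight=0.5, pose_weight=0.5,
                            threshold=0.9):
    target_degree = pose_weight * pose_degree + identity_weight * identity_degree
    return "recognition passed" if target_degree > threshold else "recognition failed"
```

With these assumed weights, face_recognition_result(0.95, 0.95) exceeds the 0.9 threshold and passes, while face_recognition_result(0.95, 0.6) gives 0.775 and fails.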
In a specific embodiment, as shown in fig. 17, the face recognition method specifically includes the following steps:
1702, obtaining historical face pose information corresponding to the user identifier when executing the target operation, and determining a face pose total interval corresponding to the user identifier according to the historical face pose information, where the face pose total interval includes a face distance total interval and a face angle total interval.
1704, dividing the face distance total interval and the face angle total interval to obtain each face distance sub-interval and each face angle sub-interval, establishing a plane area according to the face distance total interval and the face angle total interval, performing area division in the plane area according to each face distance sub-interval and each face angle sub-interval to obtain plane sub-areas corresponding to each face posture sub-interval, and forming a distribution diagram of the face posture sub-intervals according to each plane sub-area.
And 1706, counting the total number of the historical face pose information, determining a face pose sub-interval matched with the historical face pose information, counting the number of the historical face pose information in each face pose sub-interval, calculating the face pose matching degree corresponding to each face pose sub-interval according to the number and the total number of the historical face pose information in each face pose sub-interval, and storing the face pose sub-interval and the corresponding face pose matching degree in a related manner.
1708, obtaining a face image to be recognized, and inputting the face image to be recognized into a face recognition model for face recognition to obtain a face identity matching degree.
1710, inputting a face image to be recognized into a segmentation feature extraction network of an image segmentation model to obtain image segmentation features, inputting the image segmentation features into an image classification network of the image segmentation model to obtain face pixel points and non-face pixel points, and determining a face region according to the face pixel points and the non-face pixel points.
1712, inputting the face region image into the pose feature extraction network in the face pose recognition model for feature extraction to obtain face pose features, inputting the face pose features into the distance pose recognition network in the face pose recognition model for recognition to obtain distance pose information, and simultaneously inputting the face pose features into the angle pose recognition network in the face pose recognition model for recognition to obtain angle pose information.
1714, obtaining the established distribution diagram, determining a target plane sub-region from the distribution diagram according to the face distance attitude parameter and the face angle attitude parameter in the face attitude information, and obtaining a face attitude sub-region corresponding to the target plane sub-region as a target face attitude sub-region. And acquiring the face pose matching degree corresponding to the target face pose sub-interval according to the association relation between each face pose sub-interval and the corresponding face pose matching degree.
1716, obtaining a face identity weight corresponding to the face identity matching degree and a face posture weight corresponding to the face posture matching degree, performing weighting calculation according to the face identity weight and the face identity matching degree to obtain a face identity weighting matching degree, performing weighting calculation according to the face posture weight and the face posture matching degree to obtain a face posture weighting matching degree, and obtaining a target face matching degree according to the face identity weighting matching degree and the face posture weighting matching degree.
1718, when the matching degree of the target face exceeds a preset threshold, obtaining a face recognition result corresponding to the face image to be recognized as a face passing result, and executing corresponding target operation according to the face recognition passing result.
The application also provides an application scene, and the application scene applies the face recognition method. Specifically, the application of the face recognition method in the application scene is as follows:
The face recognition method is applied to an unlocking scene. When face recognition unlocking is needed, a face image to be recognized is collected through the front-facing camera of a smartphone; fig. 18 is a schematic diagram of a user unlocking the smartphone. A corresponding face identity matching degree is obtained according to the face image to be recognized, image segmentation is performed on the face image to be recognized to obtain a face region, and the face pose corresponding to the face region is recognized to obtain face pose information, where the angle is 30 degrees and the distance is 15 centimeters. The target face pose sub-interval matched with the face pose information is determined, and the corresponding face pose matching degree, 0.95, is obtained according to the target face pose sub-interval. Based on the face identity matching degree and the face pose matching degree, the face recognition result corresponding to the face image to be recognized is determined: the target face matching degree is 0.95, which exceeds the preset threshold of 0.9, indicating that the face recognition is passed, that is, the owner is unlocking the smartphone, and the smartphone executes the unlocking operation. Fig. 19 is an interface schematic diagram of successful unlocking of the smartphone through face recognition, where 1902 in the interface is the front-facing camera of the smartphone and 1904 is the indication of successful unlocking. In another specific embodiment, as shown in fig. 20, a schematic diagram of a user unlocking the smartphone, the face identity matching degree obtained by recognizing the face image to be recognized is 0.95, and the face pose information obtained has an angle of 150 degrees and a distance of 8 centimeters, giving a face pose matching degree of 0.6. The target face matching degree obtained is (0.95 + 0.6)/2 = 0.775, which does not exceed the preset threshold of 0.9, so the face recognition result is that the face recognition is not passed, and the smartphone prompts that the unlocking is not by the owner and the face pose is abnormal.
The application also additionally provides an application scene, and the application scene applies the face recognition method. Specifically, the application of the face recognition method in the application scene is as follows:
The face recognition method is applied to an access control scene. As shown in fig. 21, an application scene schematic diagram of the face recognition method, an access control monitoring device collects a face image to be recognized; fig. 22 shows a scene diagram of the access control monitoring device collecting the face image to be recognized. The face image to be recognized is sent to the server, which obtains the corresponding face identity matching degree (0.85) according to the face image to be recognized, performs image segmentation on the face image to be recognized to obtain the face region, and recognizes the face pose corresponding to the face region to obtain face pose information, where the angle is 110 degrees and the distance is 50 centimeters. The target face pose sub-interval matched with the face pose information is determined, and the corresponding face pose matching degree of 0.97 is obtained according to the target face pose sub-interval. The face recognition result corresponding to the face image to be recognized is then determined based on the face identity matching degree and the face pose matching degree: the target face matching degree is (0.85 + 0.97)/2 = 0.91, which exceeds the preset threshold of 0.90, so the face recognition result is that the face recognition is passed. The server then sends a door opening instruction to the access control device, and the access control device receives the instruction and executes the door opening operation.
It should be understood that although the various steps in the flowcharts of fig. 1, 2, 5, 6, 9-12 and 14-17 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1, 2, 5, 6, 9-12 and 14-17 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and the order of execution of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 23, a face recognition apparatus 2300 is provided, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, and specifically includes: an identity matching module 2302, an image segmentation module 2304, a pose recognition module 2306, a pose matching module 2308, and a result determination module 2310, wherein:
the identity matching module 2302 is used for acquiring a face image to be recognized and obtaining a corresponding face identity matching degree according to the face image to be recognized;
the image segmentation module 2304 is configured to perform image segmentation on a face image to be recognized to obtain a face region;
a pose recognition module 2306, configured to recognize a face pose corresponding to the face region, to obtain face pose information;
the pose matching module 2308 is used for determining a target face pose sub-interval matched with the face pose information and acquiring the corresponding face pose matching degree according to the target face pose sub-interval;
the result determining module 2310 is configured to determine a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree.
In one embodiment, the face recognition apparatus 2300 further comprises:
and the operation execution module is used for executing corresponding target operation according to the face recognition passing result when the face recognition result is that the face recognition passes.
In one embodiment, the identity matching module 2302 is further configured to input a face image to be recognized into a face recognition model for face recognition, so as to obtain a face identity matching degree; the face recognition model is obtained by taking a training face image as input, taking a face identity label corresponding to the training face image as a label and training by using a convolutional neural network.
In one embodiment, the image segmentation module 2304 includes:
the segmentation feature obtaining module is used for inputting the face image to be recognized into a segmentation feature extraction network of the image segmentation model to obtain image segmentation features;
and the region determining module is used for inputting the image segmentation characteristics into an image classification network of the image segmentation model to obtain face pixel points and non-face pixel points, and determining a face region according to the face pixel points and the non-face pixel points.
In one embodiment, the face recognition apparatus 2300 further comprises:
the image acquisition module is used for acquiring a training face image with face boundary labels;
the training module is used for inputting the training face image into an initial segmentation feature extraction network of the initial image segmentation model to obtain initial image segmentation features; inputting the initial image segmentation characteristics into an initial image classification network of an initial image segmentation model to obtain initial face pixel points and initial non-face pixel points, and determining an initial face region image according to the initial face pixel points and the initial non-face pixel points;
and the segmentation model obtaining module is used for calculating the region error information of the initial face region and the face boundary label until the region error information obtained by training meets the preset training completion condition, and obtaining the image segmentation model.
In one embodiment, gesture recognition module 2306 includes:
and the model identification unit is used for inputting the face region image into the face gesture identification model for identification to obtain face gesture information, and the face gesture identification model is obtained by training through a multi-task regression model according to the training face region image and the corresponding face gesture label.
In one embodiment, the face pose information comprises distance pose information and angle pose information, and the face pose recognition model comprises a pose feature extraction network, a distance pose recognition network and an angle pose recognition network;
the model identification unit is also used for inputting the face region image into the attitude feature extraction network for feature extraction to obtain face attitude features; and simultaneously, the human face posture features are input into the angle posture recognition network for recognition to obtain the angle posture information.
In one embodiment, the face recognition apparatus 2300 further comprises:
the data acquisition module is used for acquiring training data, and the training data comprises a face region image and a corresponding face posture label;
the human face posture training module is used for inputting the human face region image into an initial posture feature extraction network of an initial human face posture recognition model for feature extraction to obtain initial human face posture features; inputting the initial face posture characteristics into an initial distance posture recognition network of an initial face posture recognition model for recognition to obtain initial distance posture information, inputting the face posture characteristics into an initial angle posture recognition network of the initial face posture recognition model for recognition to obtain initial angle posture information, and obtaining the initial face posture information according to the initial distance posture information and the initial angle posture information;
and the recognition module obtaining module is used for calculating the initial face posture information and the posture error information labeled by the face posture, and obtaining a face posture recognition model until the posture error information obtained by training meets the preset posture error condition.
In one embodiment, the face recognition apparatus 2300 further comprises:
the information acquisition module is used for acquiring the corresponding historical human face posture information when the user identifier executes the target operation;
the interval determining module is used for determining a total interval of the face postures corresponding to the user identification according to the historical face posture information;
the interval division module is used for dividing the total interval of the human face postures to obtain sub-intervals of the human face postures;
and the matching degree determining module is used for determining the face posture matching degree corresponding to each face posture sub-interval according to each historical face posture information, and storing each face posture sub-interval and the corresponding face posture matching degree in a correlation mode.
In one embodiment, the face pose total interval comprises a face distance total interval and a face angle total interval; an interval division module comprising:
the dividing unit is used for dividing the total interval of the face distance to obtain sub-intervals of the face distance; dividing the total interval of the face angles to obtain sub-intervals of all the face angles;
and the distribution diagram obtaining unit is used for combining each face distance sub-interval and each face angle sub-interval to obtain a distribution diagram of the face posture sub-interval.
In one embodiment, the histogram obtaining unit is further configured to establish a plane area according to the total face distance interval and the total face angle interval; performing area division in the plane area according to each face distance sub-interval and each face angle sub-interval to obtain a plane sub-area corresponding to each face posture sub-interval; and forming a distribution map of the face posture subintervals according to the plane subregions.
In one embodiment, the matching degree determination module is further configured to count the total number of the historical face pose information; determining a face pose sub-interval matched with each historical face pose information, and counting the number of the historical face pose information in each face pose sub-interval; and calculating to obtain the face pose matching degree corresponding to each face pose subinterval according to the quantity and the total quantity of the historical face pose information in each face pose subinterval.
In one embodiment, the face pose information includes a face distance pose parameter and a face angle pose parameter; the pose matching module 2308 is further configured to obtain an established distribution map; determining a target plane sub-region from the distribution diagram according to the face distance attitude parameter and the face angle attitude parameter in the face attitude information; and acquiring a face posture sub-interval corresponding to the target plane sub-area as a target face posture sub-interval.
In one embodiment, the result determining module 2310 is further configured to obtain a face identity weight corresponding to the face identity matching degree and a face pose weight corresponding to the face pose matching degree; carrying out weighted calculation according to the face identity weight and the face identity matching degree to obtain the face identity weighted matching degree; carrying out weighted calculation according to the face posture weight and the face posture matching degree to obtain the face posture weighted matching degree; and obtaining a target face matching degree according to the face identity weighted matching degree and the face posture weighted matching degree, and obtaining a face recognition result corresponding to the face image to be recognized as the face passes through recognition when the target face matching degree exceeds a preset threshold value.
For the specific limitations of the face recognition device, reference may be made to the above limitations of the face recognition method, which is not described herein again. All or part of the modules in the face recognition device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 24. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a face recognition method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 24 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (15)

1. A face recognition method, comprising:
acquiring a face image to be recognized, and recognizing the face image to be recognized to obtain a corresponding face identity matching degree;
carrying out image segmentation on the face image to be recognized to obtain a face region;
identifying a face gesture corresponding to the face area to obtain face gesture information;
determining a target face posture sub-interval matched with the face posture information, and acquiring corresponding face posture matching degree according to the target face posture sub-interval;
and determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face posture matching degree.
2. The method according to claim 1, after determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face pose matching degree, further comprising:
and when the face recognition result is that the face recognition is passed, executing corresponding target operation according to the face recognition passing result.
3. The method according to claim 1, wherein the image segmentation of the face image to be recognized to obtain a face region comprises:
inputting the face image to be recognized into a segmentation feature extraction network of an image segmentation model to obtain image segmentation features;
inputting the image segmentation characteristics into an image classification network of the image segmentation model to obtain face pixels and non-face pixels, and determining the face region according to the face pixels and the non-face pixels.
4. The method of claim 3, wherein the training of the image segmentation model comprises the steps of:
acquiring a training face image with face boundary labels;
inputting the training face image into an initial segmentation feature extraction network of an initial image segmentation model to obtain initial image segmentation features;
inputting the initial image segmentation features into an initial image classification network of the initial image segmentation model to obtain initial face pixel points and initial non-face pixel points, and determining an initial face region image according to the initial face pixel points and the initial non-face pixel points;
and calculating the region error information of the initial face region and the face boundary label until the region error information obtained by training meets the preset training completion condition, and obtaining the image segmentation model.
5. The method according to claim 1, wherein the recognizing the face pose corresponding to the face region image to obtain face pose information comprises:
and inputting the face region image into a face gesture recognition model for recognition to obtain the face gesture information, wherein the face gesture recognition model is obtained by training through a multi-task regression model according to a training face region image and a corresponding face gesture label.
6. The method of claim 5, wherein the face pose information comprises distance pose information and angular pose information, and the face pose recognition model comprises a pose feature extraction network, a distance pose recognition network, and an angular pose recognition network;
the step of inputting the face region image into a face gesture recognition model for recognition to obtain the face gesture information includes:
inputting the face region image into the attitude feature extraction network for feature extraction to obtain face attitude features;
inputting the human face posture characteristics into the distance posture recognition network for recognition to obtain distance posture information, and simultaneously inputting the human face posture characteristics into the angle posture recognition network for recognition to obtain angle posture information.
7. The method of claim 5, wherein the training of the face pose recognition model comprises the steps of:
acquiring training data, wherein the training data comprises a face region image and a corresponding face posture mark;
inputting the face region image into an initial pose feature extraction network of an initial face pose recognition model for feature extraction to obtain initial face pose features;
inputting the initial face pose characteristics into an initial distance pose recognition network of the initial face pose recognition model for recognition to obtain initial distance pose information, inputting the face pose characteristics into an initial angle pose recognition network of the initial face pose recognition model for recognition to obtain initial angle pose information, and obtaining initial face pose information according to the initial distance pose information and the initial angle pose information;
and calculating the initial face posture information and the posture error information labeled by the face posture until the posture error information obtained by training meets a preset posture error condition, and obtaining the face posture recognition model.
8. The method according to claim 1, before the obtaining of the face image to be recognized and the obtaining of the corresponding face identity matching degree according to the face image to be recognized, further comprising:
acquiring corresponding historical human face posture information when a user identifier executes target operation;
determining a total face posture interval corresponding to the user identification according to the historical face posture information;
dividing the total interval of the human face postures to obtain sub-intervals of the human face postures;
and determining the face pose matching degree corresponding to each face pose sub-interval according to the historical face pose information, and storing the face pose sub-intervals and the corresponding face pose matching degrees in a related mode.
9. The method of claim 8, wherein the total interval of human face poses comprises a total interval of human face distances and a total interval of human face angles;
dividing the total interval of the human face postures to obtain sub-intervals of the human face postures, and the method comprises the following steps:
dividing the total interval of the face distances to obtain sub-intervals of the face distances;
dividing the total interval of the face angles to obtain sub-intervals of all the face angles;
and combining the human face distance subintervals and the human face angle subintervals to obtain a distribution map of the human face posture subintervals.
10. The method of claim 9, wherein said combining said each face distance subinterval and said each face angle subinterval to obtain a profile of said face pose subinterval comprises:
establishing a plane area according to the total face distance interval and the total face angle interval;
performing area division in the plane area according to the face distance sub-intervals and the face angle sub-intervals to obtain plane sub-areas corresponding to the face posture sub-intervals;
and forming a distribution map of the human face posture subinterval according to each plane subregion.
11. The method of claim 8, wherein determining the face pose matching degree corresponding to each face pose sub-interval according to each historical face pose information comprises:
counting the total number of the historical human face posture information;
determining a face pose sub-interval matched with each historical face pose information, and counting the number of the historical face pose information in each face pose sub-interval;
and calculating the face pose matching degree corresponding to each face pose sub-interval according to the number of the historical face pose information in each face pose sub-interval and the total number.
12. The method of claim 1, wherein the face pose information comprises a face distance pose parameter and a face angle pose parameter;
the determining of the target face pose subinterval matched with the face pose information comprises:
acquiring an established distribution map;
determining a target plane sub-region from the distribution diagram according to the face distance attitude parameter and the face angle attitude parameter in the face attitude information;
and acquiring a face posture sub-interval corresponding to the target plane sub-area as the target face posture sub-interval.
13. An apparatus for face recognition, the apparatus comprising:
the identity matching module is used for acquiring a face image to be recognized and obtaining the corresponding face identity matching degree according to the face image to be recognized;
the image segmentation module is used for carrying out image segmentation on the face image to be recognized to obtain a face region;
the gesture recognition module is used for recognizing the face gesture corresponding to the face area to obtain face gesture information;
the gesture matching module is used for determining a target human face gesture subinterval matched with the human face gesture information and acquiring corresponding human face gesture matching degree according to the target human face gesture subinterval;
and the result determining module is used for determining a face recognition result corresponding to the face image to be recognized based on the face identity matching degree and the face posture matching degree.
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 12.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
CN202010902551.2A 2020-09-01 2020-09-01 Face recognition method, device, computer equipment and storage medium Active CN112001932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010902551.2A CN112001932B (en) 2020-09-01 2020-09-01 Face recognition method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010902551.2A CN112001932B (en) 2020-09-01 2020-09-01 Face recognition method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112001932A true CN112001932A (en) 2020-11-27
CN112001932B CN112001932B (en) 2023-10-31

Family

ID=73465531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010902551.2A Active CN112001932B (en) 2020-09-01 2020-09-01 Face recognition method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112001932B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050147292A1 (en) * 2000-03-27 2005-07-07 Microsoft Corporation Pose-invariant face recognition system and process
US20110052081A1 (en) * 2009-08-31 2011-03-03 Sony Corporation Apparatus, method, and program for processing image
WO2011065952A1 (en) * 2009-11-30 2011-06-03 Hewlett-Packard Development Company, L.P. Face recognition apparatus and methods
CN102087702A (en) * 2009-12-04 2011-06-08 索尼公司 Image processing device, image processing method and program
US20140147023A1 (en) * 2011-09-27 2014-05-29 Intel Corporation Face Recognition Method, Apparatus, and Computer-Readable Recording Medium for Executing the Method
KR20160042646A (en) * 2014-10-10 2016-04-20 인하대학교 산학협력단 Method of Recognizing Faces
CN106295480A (en) * 2015-06-09 2017-01-04 上海戏剧学院 Multi-orientation Face identification interactive system
CN105117463A (en) * 2015-08-24 2015-12-02 北京旷视科技有限公司 Information processing method and information processing device
US9971933B1 (en) * 2017-01-09 2018-05-15 Ulsee Inc. Facial image screening method and face recognition system thereof
WO2019042195A1 (en) * 2017-08-31 2019-03-07 杭州海康威视数字技术股份有限公司 Method and device for recognizing identity of human target
CN111199029A (en) * 2018-11-16 2020-05-26 株式会社理光 Face recognition device and face recognition method
CN111310512A (en) * 2018-12-11 2020-06-19 杭州海康威视数字技术股份有限公司 User identity authentication method and device
CN110287880A (en) * 2019-06-26 2019-09-27 西安电子科技大学 A kind of attitude robust face identification method based on deep learning
CN110427849A (en) * 2019-07-23 2019-11-08 深圳前海达闼云端智能科技有限公司 Face pose determination method and device, storage medium and electronic equipment
CN110647865A (en) * 2019-09-30 2020-01-03 腾讯科技(深圳)有限公司 Face gesture recognition method, device, equipment and storage medium
CN111160307A (en) * 2019-12-31 2020-05-15 帷幄匠心科技(杭州)有限公司 Face recognition method and face recognition card punching system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Dwi Ana Ratna Wati et al.: "Design of face detection and recognition system for smart home security application", 2017 2nd International Conferences on Information Technology, Information Systems and Electrical Engineering, pages 342-347 *
John Wright et al.: "Implicit elastic matching with random projections for pose-variant face recognition", 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 1502-1509 *
Zhaocui Han et al.: "Face Recognition with Integrating Multiple Cues", J. Sign. Process. Syst., pages 391-404 *
杜杏菁 et al.: "Research on pose and expression face recognition based on the Candide-3 model" (in Chinese), Computer Engineering and Design, vol. 33, no. 3, pages 1017-1021 *
程福运: "Face recognition based on deep learning" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529073A (en) * 2020-12-07 2021-03-19 北京百度网讯科技有限公司 Model training method, attitude estimation method and apparatus, and electronic device
CN112528858A (en) * 2020-12-10 2021-03-19 北京百度网讯科技有限公司 Training method, device, equipment, medium and product of human body posture estimation model
CN113160475A (en) * 2021-04-21 2021-07-23 深圳前海微众银行股份有限公司 Access control method, device, equipment and computer readable storage medium
CN113822287A (en) * 2021-11-19 2021-12-21 苏州浪潮智能科技有限公司 Image processing method, system, device and medium
CN113822287B (en) * 2021-11-19 2022-02-22 苏州浪潮智能科技有限公司 Image processing method, system, device and medium
US12118771B2 (en) 2021-11-19 2024-10-15 Suzhou Metabrain Intelligent Technology Co., Ltd. Method and system for processing image, device and medium
WO2023124040A1 (en) * 2021-12-31 2023-07-06 深圳须弥云图空间科技有限公司 Facial recognition method and apparatus
CN114550088A (en) * 2022-02-22 2022-05-27 北京城建设计发展集团股份有限公司 Multi-camera fused passenger identification method and system and electronic equipment
CN114565814A (en) * 2022-02-25 2022-05-31 平安国际智慧城市科技股份有限公司 Feature detection method and device and terminal equipment
CN115564387A (en) * 2022-10-19 2023-01-03 常州瀚森科技股份有限公司 Digital intelligent operation management method and system for industrial park under digital economic condition
CN117058738A (en) * 2023-08-07 2023-11-14 深圳市华谕电子科技信息有限公司 Remote face detection and recognition method and system for mobile law enforcement equipment
CN117058738B (en) * 2023-08-07 2024-05-03 深圳市华谕电子科技信息有限公司 Remote face detection and recognition method and system for mobile law enforcement equipment

Also Published As

Publication number Publication date
CN112001932B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN112001932B (en) Face recognition method, device, computer equipment and storage medium
CN110555481B (en) Portrait style recognition method, device and computer readable storage medium
Lu et al. Dense and sparse reconstruction error based saliency descriptor
Zhang et al. A fine-grained image categorization system by cellet-encoded spatial pyramid modeling
Deng et al. M3 CSR: Multi-view, multi-scale and multi-component cascade shape regression
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
CN111950329A (en) Target detection and model training method and device, computer equipment and storage medium
CN112364827B (en) Face recognition method, device, computer equipment and storage medium
CN110399799A (en) Image recognition and the training method of neural network model, device and system
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
Parashar et al. Deep learning pipelines for recognition of gait biometrics with covariates: a comprehensive review
Anila et al. Simple and fast face detection system based on edges
Dubey et al. Interactive Biogeography Particle Swarm Optimization for Content Based Image Retrieval
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN111898561A (en) Face authentication method, device, equipment and medium
CN110175500B (en) Finger vein comparison method, device, computer equipment and storage medium
CN115188031A (en) Fingerprint identification method, computer program product, storage medium and electronic device
Juang et al. Stereo-camera-based object detection using fuzzy color histograms and a fuzzy classifier with depth and shape estimations
CN112241680A (en) Multi-mode identity authentication method based on vein similar image knowledge migration network
Hu et al. An effective head pose estimation approach using Lie Algebrized Gaussians based face representation
CN118038303A (en) Identification image processing method, device, computer equipment and storage medium
Ying et al. Dynamic random regression forests for real-time head pose estimation
Chen et al. Multi‐directional saliency metric learning for person re‐identification
Herlambang et al. Cloud-based architecture for face identification with deep learning using convolutional neural network
CN113762249A (en) Image attack detection and image attack detection model training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant