WO2023124040A1 - Face recognition method and apparatus - Google Patents

Face recognition method and apparatus - Download PDF

Info

Publication number
WO2023124040A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
feature
recognized
pose
features
Prior art date
Application number
PCT/CN2022/107827
Other languages
English (en)
French (fr)
Inventor
黄泽元
Original Assignee
深圳须弥云图空间科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳须弥云图空间科技有限公司
Publication of WO2023124040A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions

Definitions

  • the present disclosure relates to the technical field of data processing, and in particular to a face recognition method and device.
  • Face recognition is an emerging biometric technology and a cutting-edge research topic in the international scientific and technological community, with broad development prospects.
  • Most existing face detection techniques perform well only on faces captured under ideal conditions; in applications involving complex environments or cross-scene operation, their generalization ability and performance degrade markedly.
  • In particular, when the face pose changes rapidly and drastically, the recognition accuracy of existing methods still falls far short of practical requirements and calls for further research and improvement.
  • In view of this, the embodiments of the present disclosure provide a face recognition method, apparatus, computer device, and computer-readable storage medium, to address the problem in the prior art of inaccurate face recognition results when the face pose changes rapidly and drastically.
  • A first aspect of the embodiments of the present disclosure provides a face recognition method, which includes: acquiring a face image to be recognized; extracting face pose features and face features of the image; generating target face features and face pose information of the image according to those features; and, if the face pose information satisfies a preset condition, determining the user recognition result of the image according to its target face features.
  • the second aspect of the embodiments of the present disclosure provides a face recognition device, which includes:
  • an image acquisition unit configured to acquire a face image to be recognized;
  • a feature extraction unit configured to extract the face pose features and face features of the face image to be recognized;
  • an information generation unit configured to generate, according to the face pose features and the face features, the target face features of the face image to be recognized, the face pose information, and the score vector corresponding to the face pose information;
  • a result determination unit configured to determine, if the face pose information and the score vector corresponding to the face pose information satisfy preset conditions, the user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
  • a third aspect of the embodiments of the present disclosure provides a computer device, including a memory, a processor, and a computer program stored in the memory and operable on the processor, and the processor implements the steps of the above method when executing the computer program.
  • a fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method are implemented.
  • Compared with the prior art, the embodiments of the present disclosure have the following beneficial effects: a face image to be recognized can first be acquired; its face pose features and face features can then be extracted; next, the target face features and face pose information of the image are generated from the face pose features and face features; and, if the face pose information satisfies the preset condition, the user recognition result of the image is determined according to its target face features.
  • Because the face pose features reflect the spatial information of the face, including its structural, edge, and angle features, while the face features reflect attributes such as skin color and texture, fusing the two merges their detailed information; generating the target face features and face pose information of the image from the fused features therefore improves the precision of the determined target face features and face pose information, and in turn the accuracy of the user recognition result determined from them.
  • FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present disclosure;
  • FIG. 2 is a flowchart of a face recognition method provided by an embodiment of the present disclosure;
  • FIG. 3 is a block diagram of a face recognition apparatus provided by an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram of a computer device provided by an embodiment of the present disclosure.
  • The present invention provides a face recognition method in which a face image to be recognized is first acquired; its face pose features and face features are then extracted; next, the target face features and face pose information of the image are generated from those features; and, if the face pose information satisfies a preset condition, the user recognition result of the image is determined according to its target face features.
  • Because the face pose features reflect the spatial information of the face, including its structural, edge, and angle features, while the face features reflect attributes such as skin color and texture, fusing the two to generate the target face features and face pose information of the image improves the precision of what is determined, and thereby the accuracy of the user recognition result determined from the target face features and face pose information.
  • this embodiment of the present invention can be applied to the application scenario shown in FIG. 1 .
  • a terminal device 1 and a server 2 may be included.
  • The terminal device 1 may be hardware or software. When the terminal device 1 is hardware, it may be any electronic device with an image acquisition function that supports communication with the server 2, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers; when the terminal device 1 is software, it may be installed in such an electronic device.
  • the terminal device 1 may be implemented as multiple software or software modules, or may be implemented as a single software or software module, which is not limited in this embodiment of the present disclosure.
  • Server 2 may be a server providing various services, for example a background server that receives requests sent by terminal devices that have established a communication connection with it, analyzes those requests, and generates processing results.
  • the server 2 may be one server, or a server cluster composed of several servers, or a cloud computing service center, which is not limited in this embodiment of the present disclosure.
  • The server 2 may be hardware or software. When the server 2 is hardware, it may be any electronic device providing various services for the terminal device 1. When the server 2 is software, it may be multiple software programs or software modules providing various services for the terminal device 1, or a single software program or software module doing so, which is not limited in the embodiments of the present disclosure.
  • the terminal device 1 and the server 2 can be connected through a network for communication.
  • the network can be a wired network connected by coaxial cable, twisted pair and optical fiber, or a wireless network that can realize the interconnection of various communication devices without wiring, for example, Bluetooth (Bluetooth), Near Field Communication (Near Field Communication, NFC), infrared (Infrared), etc., which are not limited in this embodiment of the present disclosure.
  • Specifically, the user can input the face image to be recognized through the terminal device 1, and the terminal device 1 sends the face image to be recognized to the server 2.
  • Server 2 first extracts the face pose features and face features of the face image to be recognized; it can then generate the target face features and face pose information of the image from those features; if the face pose information satisfies the preset condition, it can determine the user recognition result of the image according to the image's target face features, and return that result to the terminal device 1 so that the terminal device 1 can present it to the user.
  • In this way, the face pose features and face features are fused, meaning their detailed information is merged, and the target face features and face pose information of the face image to be recognized are generated from the fused features, which improves the precision of the determined target face features and face pose information and thereby the accuracy of the user recognition result determined from them.
  • The specific types, numbers, and combinations of the terminal device 1, the server 2, and the network may be adjusted according to the actual requirements of the application scenario, which is not limited in the embodiments of the present disclosure.
  • FIG. 2 is a flowchart of a face recognition method provided by an embodiment of the present disclosure.
  • a face recognition method in FIG. 2 may be executed by the terminal device or the server in FIG. 1 .
  • the face recognition method includes:
  • S201: Acquire a face image to be recognized.
  • the face image to be recognized can be understood as an image that requires face recognition.
  • The face image to be recognized may be captured by a surveillance camera installed at a fixed location, captured by a mobile terminal device, or read from a storage device in which images are pre-stored.
  • S202: Extract the face pose features and face features of the face image to be recognized.
  • After the face image to be recognized is acquired, in order to accurately recognize profile faces in it, its face pose features and face features must first be extracted. The face pose features reflect the spatial information of the face and may include, for example, its structural, edge, and angle features; the face features may reflect semantic information such as skin color, texture, age, illumination, and ethnicity.
  • S203: Generate the target face features and face pose information of the face image to be recognized according to the face pose features and the face features.
  • the face pose feature and the face feature can be fused first, so that the face pose feature can be used to supplement the information of the face feature, so as to enrich the information of the face feature. Then, according to the face pose features and face features, the target face features and face pose information of the face image to be recognized can be generated, so that the accuracy of the determined target face features and face pose information can be improved.
  • the target face feature can be understood as a feature vector including face detail information (such as texture information, skin color information, etc.).
  • the face pose information can include the yaw angle, pitch angle, and roll angle of the face, and the face pose information can reflect the face's steering angle and steering magnitude.
  • S204: If the face pose information satisfies a preset condition, determine the user recognition result of the face image to be recognized according to its target face features.
  • If the face is turned to the side or lowered too far, the pose angles of the face image will be large, leaving little facial information usable for recognition and making the face easy to attack. Therefore, whether the face pose information satisfies the preset condition must be judged first. In one implementation, the preset condition may be that the yaw angle is less than or equal to a preset yaw angle threshold, the pitch angle is less than or equal to a preset pitch angle threshold, and the roll angle is less than or equal to a preset roll angle threshold.
  • the user recognition result of the face image to be recognized can be determined according to the target face features of the face image to be recognized.
  • Several pieces of preset user information may be configured in advance, each with its corresponding preset user face features.
  • The similarities between the target face features of the face image and each preset user's face features can first be determined; for example, the vector distance can be measured with the Euclidean distance or cosine distance, and the similarity determined from that distance. The user information corresponding to the preset user face features with the highest similarity is then used as the user recognition result of the face image to be recognized.
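  • This matching step lends itself to a short sketch. The following Python snippet is a minimal, hypothetical illustration (the feature-store layout and the choice of cosine similarity are assumptions; the patent only requires some vector distance):

```python
import numpy as np

def identify_user(target_feature: np.ndarray,
                  enrolled: dict[str, np.ndarray]) -> str:
    """Return the enrolled user whose preset face feature is most similar
    to the target face feature, using cosine similarity."""
    target = target_feature / np.linalg.norm(target_feature)
    best_user, best_sim = None, -1.0
    for user_id, feature in enrolled.items():
        sim = float(target @ (feature / np.linalg.norm(feature)))
        if sim > best_sim:
            best_user, best_sim = user_id, sim
    return best_user
```

  • A Euclidean distance would serve equally well here; the text treats the concrete vector distance as an implementation detail.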
  • Compared with the prior art, the embodiments of the present disclosure have the following beneficial effects: a face image to be recognized can first be acquired; its face pose features and face features can then be extracted; next, the target face features and face pose information of the image are generated from the face pose features and face features; and, if the face pose information satisfies the preset condition, the user recognition result of the image is determined according to its target face features.
  • Because the face pose features reflect the spatial information of the face, including its structural, edge, and angle features, while the face features reflect attributes such as skin color and texture, fusing the two merges their detailed information; generating the target face features and face pose information from the fused features therefore improves the precision of the determined target face features and face pose information, and in turn the accuracy of the user recognition result.
  • the face pose features may include N face pose features
  • the face features may include N face features
  • S202 may include the following steps:
  • S202a: For the first face feature, input the face image to be recognized into the first convolutional layer to obtain the first face feature;
  • S202b: For the first face pose feature, input the face image to be recognized into the first residual block to obtain the first face pose feature;
  • S202c: For the i-th face pose feature, input the (i-1)-th face pose feature into the i-th residual block to obtain the i-th face pose feature;
  • S202d: For the i-th face feature, input the (i-1)-th face pose feature and the (i-1)-th face feature into the i-th convolutional layer to obtain the i-th face feature, where i is greater than or equal to 2 and less than or equal to N, and N and i are both positive integers.
  • The face pose feature extraction model may include at least N residual blocks (the first residual block, the second residual block, ..., the N-th residual block) connected in cascade.
  • In one implementation, the face pose feature extraction model may include at least four residual blocks, composed of 2, 3, and 2 residual networks respectively; each residual network may contain two convolutional layers (conv), two batch-normalization layers (bn), and two hyperbolic-tangent activation functions (tanh), connected as conv+bn+tanh+conv+bn+tanh, and the output feature maps of these three residual blocks have 64, 128, and 256 channels respectively. It should be noted that the hyperbolic tangent is used as the activation function so that every feature computation takes values in (-1, 1), which benefits the subsequent pose calculations.
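  • As a concrete illustration, a single residual network of the kind described above could look like the following PyTorch sketch (the kernel size and the exact wiring of the skip connection are not stated in the text and are assumptions):

```python
import torch
import torch.nn as nn

class PoseResidualUnit(nn.Module):
    """One residual network: conv+bn+tanh+conv+bn+tanh with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.Tanh(),  # tanh keeps every feature in (-1, 1) for the pose computation
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual (skip) connection
```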
  • It can be understood that the face image to be recognized is input into the first residual block to obtain the first face pose feature; the first face pose feature is input into the second residual block to obtain the second face pose feature; and so on.
  • For the i-th face pose feature, the (i-1)-th face pose feature is input into the i-th residual block to obtain the i-th face pose feature, where i is greater than or equal to 2 and less than or equal to N, and N and i are both positive integers.
  • The face feature extraction model may include at least N convolutional layers (the first convolutional layer, the second convolutional layer, ..., the N-th convolutional layer) connected in cascade.
  • The face feature extraction model can be an IResNet-50 network, and each convolutional layer can include one convolution operator with a 3x3 kernel and 192 channels and one convolution operator with a 1x1 kernel and 128 channels.
  • For the first face feature, the face image to be recognized is input into the first convolutional layer to obtain the first face feature.
  • The first face pose feature (for example of dimension (28, 28, 64)) and the first face feature (for example of dimension (28, 28, 128)) are input into the second convolutional layer; for example, the two features can be fused first and the fused feature then input into the second convolutional layer to obtain the second face feature, and so on.
  • In general, the (i-1)-th face pose feature and the (i-1)-th face feature are input into the i-th convolutional layer to obtain the i-th face feature, where i is greater than or equal to 2 and less than or equal to N, and N and i are both positive integers.
  • The reason the face pose features are used as a supplement and input, together with the face features, into the convolutional layer that computes the next face feature is that pose processing attends more to spatial information and extracts the structural, edge, and angle features of the face relatively completely, whereas face feature extraction must account for semantic information such as age, illumination, ethnicity, skin color, and texture, and therefore suffers certain deficiencies and semantic confusion in spatial structure.
  • Adding the face pose features extracted by the pose feature extraction model into the face feature extraction model thus effectively supplements the information processing of the face features.
  • The face pose feature extraction model has fewer residual blocks and fewer channels than the face feature extraction model has convolutional layers and channels, because the pose model handles relatively simple semantic information and does not require heavy computation.
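  • Structurally, the two cascaded branches can be sketched as follows (fusion by channel concatenation and matching spatial sizes are assumptions; the text only states that the pose features supplement the face features):

```python
import torch
import torch.nn as nn

class DualBranchExtractor(nn.Module):
    """Cascade of pose residual blocks and face convolutional layers,
    where each face layer also consumes the previous pose feature."""
    def __init__(self, pose_blocks: list[nn.Module], face_convs: list[nn.Module]):
        super().__init__()
        assert len(pose_blocks) == len(face_convs)
        self.pose_blocks = nn.ModuleList(pose_blocks)
        self.face_convs = nn.ModuleList(face_convs)

    def forward(self, image: torch.Tensor):
        pose = self.pose_blocks[0](image)   # first face pose feature
        face = self.face_convs[0](image)    # first face feature
        for block, conv in zip(self.pose_blocks[1:], self.face_convs[1:]):
            fused = torch.cat([pose, face], dim=1)  # pose supplements the face info
            face = conv(fused)              # i-th face feature
            pose = block(pose)              # i-th face pose feature
        return pose, face
```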
  • the step of "generating the target face features of the face image to be recognized according to the face pose features and face features" in S203 may include the following steps:
  • the facial feature extraction model also includes an N+1th convolutional layer, and the Nth facial pose feature and the Nth facial feature can be input into the N+1th convolutional layer to obtain a face to be recognized The target face features of the image.
  • the step of "generating face pose information of the face image to be recognized according to face pose features and face features" in S203 may include the following steps:
  • Step a: Generate, from the face pose features, the attention maps corresponding to the face pose features.
  • all face pose features may be down-sampled first, so that the dimensions of all face pose features are the same. Then, each face pose feature is input into the attention model to obtain the attention map corresponding to the face pose feature.
  • the attention model includes a convolution operator with a convolution kernel size of 1x1 and a channel number of 1, and a sigmoid function.
  • For example, for a face pose feature of dimension (28, 28, 64), two convolutions with a 3x3 kernel, stride 2, and 64 channels are first applied, reducing the dimension to (7, 7, 64); the 1x1 convolution and sigmoid described above are then applied to obtain the corresponding attention map of dimension (7, 7, 1), whose three dimensions can finally be collapsed into one.
  • Step b: Generate, from the face features, the attention maps corresponding to the face features.
  • all facial features may be down-sampled first, so that the dimensions of all facial features are the same. Then, each face feature is input into the attention model to obtain the attention map corresponding to the face feature.
  • the attention model includes a convolution operator with a convolution kernel size of 1x1 and a channel number of 1, and a sigmoid function.
  • For example, for a face feature of dimension (28, 28, 64), two convolutions with a 3x3 kernel, stride 2, and 64 channels are first applied, reducing the dimension to (7, 7, 64); the 1x1 convolution and sigmoid are then applied to obtain the corresponding attention map of dimension (7, 7, 1), whose three dimensions are finally collapsed into one.
  • The dimension of the final attention map is (7, 7, 1).
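  • The attention model described in steps a and b can be sketched directly (the padding values are assumptions, chosen so that two stride-2 convolutions map 28x28 down to 7x7):

```python
import torch
import torch.nn as nn

class AttentionMap(nn.Module):
    """Downsample with two stride-2 3x3 convolutions, then apply a
    1-channel 1x1 convolution followed by a sigmoid."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
        )
        self.attn = nn.Conv2d(64, 1, kernel_size=1)  # 1x1 conv, 1 output channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.down(x)                   # e.g. (N, C, 28, 28) -> (N, 64, 7, 7)
        a = torch.sigmoid(self.attn(x))    # (N, 1, 7, 7) attention map
        return a.flatten(1)                # collapse the three dimensions into one
```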
  • Step c: Obtain a full-space attention map from the attention maps corresponding to the face pose features, the attention maps corresponding to the face features, and the N-th face pose feature.
  • a two-stream attention map may be generated first according to the attention map corresponding to the face pose feature and the attention map corresponding to the face feature.
  • a deep and shallow attention map may be generated based on the attention maps corresponding to all facial features.
  • the full-spatial attention map can be obtained according to the dual-stream attention map, deep-shallow attention map, and Nth face pose features.
  • Step d: Generate the face pose information of the face image to be recognized from the full-space attention map.
  • the face pose information of the face image to be recognized and the score vector corresponding to the face pose information can be generated according to the full spatial attention map.
  • The score vector corresponding to the face pose information is used to evaluate the richness of the overall face information (the discriminative information contained in the face picture): the higher the score vector, the richer the overall face information; conversely, the lower the score vector, the poorer it is.
  • Because the full-space attention map combines multiple forms of full-space attention information, as well as deep and shallow poses (the deep-shallow attention map) and the overall feature information of the face (the N-th face pose feature), the face pose information of the face image to be recognized and the corresponding score vector generated from it are more accurate.
  • Accordingly, the step of determining, if the face pose information satisfies the preset condition, the user recognition result of the face image to be recognized according to its target face features may include:
  • if the face pose information and the score vector corresponding to the face pose information satisfy the preset conditions, determining the user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
  • The preset condition can be that the yaw angle is less than or equal to the preset yaw angle threshold, the pitch angle is less than or equal to the preset pitch angle threshold, the roll angle is less than or equal to the preset roll angle threshold, and the score vector corresponding to the face pose information is greater than or equal to the preset score. It should be noted that, because a score vector below the preset score means the face is easy to attack (for example, easy to unlock with a photo of an impersonated user), the score vector corresponding to the face pose information is required to be greater than or equal to the preset score.
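  • This check is straightforward to express; the threshold values below are placeholders, since the text leaves the concrete values to the implementation:

```python
def pose_ok(yaw: float, pitch: float, roll: float, score: float,
            yaw_t: float = 30.0, pitch_t: float = 30.0,
            roll_t: float = 30.0, score_t: float = 0.5) -> bool:
    """Preset condition: all pose angles within their thresholds
    and the score at or above the preset score."""
    return (abs(yaw) <= yaw_t and abs(pitch) <= pitch_t
            and abs(roll) <= roll_t and score >= score_t)
```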
  • It should be noted that the above embodiments can be applied to a face recognition model, whose training process can be described as follows. N class centers may be introduced (one positive class center and N-1 negative class centers), i.e., N vectors.
  • These vectors are dot-multiplied with the (normalized) face vector T to obtain N values, which represent the similarities x between the current face and the N class centers.
  • The usual training approach applies a softmax operation to these N values of x and then computes the cross entropy, but such a decision boundary is inaccurate and the training is inefficient. Therefore, in this embodiment, the similarity between T and the positive class center can be set to x+, from which a decision value related to the pose and the score is subtracted. Subtracting a larger decision value means a smaller decision boundary for the face features. With yaw angle y, pitch angle p, roll angle r, score s, and decision value m, the update is x+ = x+ - m.
  • Here f is a function, and m0 and m1 are hyperparameters (for example m0 = 0.2 and m1 = 0.2); the explicit formulas for m and f are given in the patent's equation images and are not reproduced in the text.
  • The formula also contains a value i that determines which faces are currently given a larger decision value and a smaller decision boundary.
  • the larger m is, the tighter the decision boundary is, and the face features should be closer to the center of the class, resulting in a larger gradient.
  • When i is set to 0, the above formula gives faces with small yaw, pitch, and roll angles and a high score a larger decision value, a smaller decision boundary, and a larger gradient;
  • as i gradually increases, the formula gives faces with larger yaw, pitch, and roll angles and a lower score a larger decision value, a smaller decision boundary, and a larger gradient. That is to say, when i is 0, the network mainly trains on frontal faces and assigns gradient to them; as i gradually increases, the network mainly trains on profile faces and assigns gradient to them.
  • The training decision scheme for the face recognition model is: when the model starts training, i is set to 0; the model loss then gradually decreases. As the model's accuracy on the validation set improves, i can be raised gradually.
  • i can be increased by 0.1 each time, for example from 0 to 0.1 and from 0.1 to 0.2, while the accuracy on the validation set is continuously observed. Once the accuracy is found to decrease, i should be reduced by 0.1 and, after training for a period of time, raised back toward the original value. If raising it back to the original value three times fails to improve the accuracy further, it is reduced by 0.1 and training ends once the model has fitted. The spatial distribution of faces obtained at this point is reasonable: faces with small angles lie near the class center and faces with large angles at the class edge.
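  • A rough sketch of this schedule follows. The exact bookkeeping (what counts as one attempt to raise i back) is interpreted loosely, and train_for_a_while and val_accuracy are hypothetical hooks standing in for the training and evaluation loops, which the text does not specify:

```python
def schedule_i(train_for_a_while, val_accuracy) -> float:
    """Adjust i during training: raise it while validation accuracy
    improves, step it back on drops, stop after three failed recoveries."""
    i, prev_acc, failed_recoveries = 0.0, 0.0, 0
    while failed_recoveries < 3:
        train_for_a_while(i)
        acc = val_accuracy()
        if acc >= prev_acc:
            i += 0.1                 # accuracy improving: raise i by 0.1
        else:
            i -= 0.1                 # accuracy dropped: step i back by 0.1
            failed_recoveries += 1   # count failed attempts to climb back
        prev_acc = acc
    return i - 0.1                   # final reduction, then train until fitted
```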
  • FIG. 3 is a schematic diagram of a face recognition apparatus provided by an embodiment of the present disclosure. As shown in FIG. 3, the face recognition apparatus includes:
  • an image acquisition unit 301 configured to acquire a face image to be recognized;
  • a feature extraction unit 302 configured to extract the face pose features and face features of the face image to be recognized;
  • an information generation unit 303 configured to generate, according to the face pose features and the face features, the target face features of the face image to be recognized, the face pose information, and the score vector corresponding to the face pose information;
  • a result determination unit 304 configured to determine, if the face pose information and the score vector corresponding to the face pose information satisfy the preset conditions, the user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
  • the face pose features include N face pose features, and the face features include N face features;
  • the feature extraction unit 302 is configured to:
  • for the first face feature, input the face image to be recognized into the first convolutional layer to obtain the first face feature;
  • for the first face pose feature, input the face image to be recognized into the first residual block to obtain the first face pose feature;
  • for the i-th face pose feature, input the (i-1)-th face pose feature into the i-th residual block to obtain the i-th face pose feature;
  • for the i-th face feature, input the (i-1)-th face pose feature and the (i-1)-th face feature into the i-th convolutional layer to obtain the i-th face feature, where i is greater than or equal to 2 and less than or equal to N, and N and i are both positive integers.
  • Optionally, the information generation unit 303 is configured to: input the N-th face pose feature and the N-th face feature into the (N+1)-th convolutional layer to obtain the target face features of the face image to be recognized.
  • Optionally, the information generation unit 303 is configured to:
  • generate, from the face pose features, the attention maps corresponding to the face pose features;
  • generate, from the face features, the attention maps corresponding to the face features;
  • obtain a full-space attention map from the attention maps corresponding to the face pose features, the attention maps corresponding to the face features, and the N-th face pose feature;
  • generate the face pose information of the face image to be recognized from the full-space attention map.
  • Optionally, the information generation unit 303 is specifically configured to:
  • generate a two-stream attention map from the attention map corresponding to the face pose features and the attention map corresponding to the face features;
  • generate a deep-shallow attention map from the attention maps corresponding to the face features;
  • obtain the full-space attention map from the two-stream attention map, the deep-shallow attention map, and the N-th face pose feature.
  • the information generating unit 303 is specifically used for:
  • generate, from the full-space attention map, the face pose information of the face image to be recognized and the score vector corresponding to the face pose information;
  • the result determination unit 304 is configured to:
  • if the face pose information and the score vector corresponding to the face pose information satisfy the preset conditions, determine the user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
  • the face pose information includes yaw angle, pitch angle, and roll angle;
  • the preset condition is that the yaw angle is less than or equal to the preset yaw angle threshold, the pitch angle is less than or equal to the preset pitch angle threshold, the roll angle is less than or equal to the preset roll angle threshold, and the score vector corresponding to the face pose information is greater than or equal to the preset score.
  • the result determining unit 304 is configured to:
  • determine the similarities between the target face features of the face image to be recognized and each preset user's face features, and use the user information corresponding to the preset user face features with the highest similarity as the user recognition result of the face image to be recognized.
  • The technical solution provided by the embodiments of the present disclosure is a face recognition apparatus comprising: an image acquisition unit for acquiring a face image to be recognized; a feature extraction unit for extracting the face pose features and face features of the face image to be recognized; an information generation unit for generating, from the face pose features and face features, the target face features, face pose information, and the score vector corresponding to the face pose information of the image; and a result determination unit for determining, if the face pose information and the score vector corresponding to it satisfy the preset conditions, the user recognition result of the face image to be recognized according to its target face features.
  • Because the face pose features reflect the spatial information of the face, including its structural, edge, and angle features, while the face features reflect attributes such as skin color and texture, fusing the two merges their detailed information; generating the target face features and face pose information from the fused features therefore improves the precision of the determined target face features and face pose information, and in turn the accuracy of the user recognition result determined from them.
  • FIG. 4 is a schematic diagram of a computer device 4 provided by an embodiment of the present disclosure.
  • The computer device 4 of this embodiment includes a processor 401, a memory 402, and a computer program 403 stored in the memory 402 and runnable on the processor 401.
  • When the processor 401 executes the computer program 403, the steps in the foregoing method embodiments are implemented; alternatively, when the processor 401 executes the computer program 403, the functions of the modules/units in the foregoing apparatus embodiments are implemented.
  • the computer program 403 can be divided into one or more modules/units, and one or more modules/units are stored in the memory 402 and executed by the processor 401 to complete the present disclosure.
  • One or more modules/units may be a series of computer program instruction segments capable of accomplishing specific functions, and the instruction segments are used to describe the execution process of the computer program 403 in the computer device 4 .
  • the computer device 4 can be a computer device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the computer device 4 may include, but is not limited to, a processor 401 and a memory 402 .
  • FIG. 4 is only an example of the computer device 4 and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components; for example, the computer device may also include input and output devices, network access devices, a bus, and so on.
  • The processor 401 can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the memory 402 may be an internal storage unit of the computer device 4 , for example, a hard disk or a memory of the computer device 4 .
  • The memory 402 can also be an external storage device of the computer device 4, for example a plug-in hard disk, smart media card (SMC), secure digital (SD) card, or flash card equipped on the computer device 4.
  • the memory 402 may also include both an internal storage unit of the computer device 4 and an external storage device.
  • the memory 402 is used to store computer programs and other programs and data required by the computer equipment.
  • the memory 402 can also be used to temporarily store data that has been output or will be output.
  • the disclosed apparatus/computer equipment and methods may be implemented in other ways.
  • The apparatus/computer-device embodiments described above are only illustrative; for example, the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, and some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If an integrated module/unit is implemented as a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • The present disclosure may implement all or part of the processes of the above method embodiments by instructing the relevant hardware through a computer program.
  • The computer program can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments.
  • a computer program may include computer program code, which may be in source code form, object code form, executable file, or some intermediate form or the like.
  • The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in a computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A face recognition method and apparatus. The method fuses face pose features with face features, that is, it fuses the detailed information of the two, and generates the target face features and face pose information of the face image to be recognized from the fused features, thereby improving the precision of the determined target face features and face pose information and, in turn, the accuracy of the user recognition result determined from them.

Description

Face recognition method and apparatus
TECHNICAL FIELD
The present disclosure relates to the technical field of data processing, and in particular to a face recognition method and apparatus.
BACKGROUND
A face recognition system is an emerging biometric technology and a cutting-edge topic tackled in the international scientific and technological field, with broad development prospects. Most existing face detection techniques perform well only on faces captured under ideal conditions; in applications involving complex environments or cross-scene operation, their generalization ability and performance are poor. In real application scenarios in particular, especially when the face pose changes rapidly and drastically, the recognition accuracy of existing methods still falls far short of practical requirements and needs further research and improvement.
SUMMARY
In view of this, the embodiments of the present disclosure provide a face recognition method, apparatus, computer device, and computer-readable storage medium, to address the problem in the prior art of inaccurate face recognition results when the face pose changes rapidly and drastically.
A first aspect of the embodiments of the present disclosure provides a face recognition method, including:
acquiring a face image to be recognized;
extracting face pose features and face features of the face image to be recognized;
generating target face features and face pose information of the face image to be recognized according to the face pose features and the face features;
if the face pose information satisfies a preset condition, determining a user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
A second aspect of the embodiments of the present disclosure provides a face recognition apparatus, including:
an image acquisition unit configured to acquire a face image to be recognized;
a feature extraction unit configured to extract face pose features and face features of the face image to be recognized;
an information generation unit configured to generate, according to the face pose features and the face features, target face features of the face image to be recognized, face pose information, and a score vector corresponding to the face pose information;
a result determination unit configured to determine, if the face pose information and the score vector corresponding to the face pose information satisfy preset conditions, a user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
A third aspect of the embodiments of the present disclosure provides a computer device including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
Compared with the prior art, the embodiments of the present disclosure have the following beneficial effects. A face image to be recognized can first be acquired; its face pose features and face features can then be extracted; next, target face features and face pose information of the image are generated from the face pose features and face features; and, if the face pose information satisfies a preset condition, the user recognition result of the image is determined according to its target face features. In this embodiment, because the face pose features reflect the spatial information of the face, including its structural, edge, and angle features, while the face features reflect attributes such as skin color and texture, fusing the two merges their detailed information; generating the target face features and face pose information of the image from the fused features therefore improves the precision of the determined target face features and face pose information, and in turn the accuracy of the user recognition result determined from them.
BRIEF DESCRIPTION OF THE DRAWINGS
To explain the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the present disclosure; a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present disclosure;
FIG. 2 is a flowchart of a face recognition method provided by an embodiment of the present disclosure;
FIG. 3 is a block diagram of a face recognition apparatus provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a computer device provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, to provide a thorough understanding of the embodiments of the present disclosure. It should be clear to those skilled in the art, however, that the present disclosure can also be practiced in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description.
A face recognition method and apparatus according to embodiments of the present disclosure are described in detail below with reference to the drawings.
In the prior art, most existing face detection techniques perform well only on faces captured under ideal conditions; in applications involving complex environments or cross-scene operation, their generalization ability and performance are poor. In real application scenarios in particular, especially when the face pose changes rapidly and drastically, the recognition accuracy of existing methods still falls far short of practical requirements and needs further research and improvement.
To solve the above problems, the present invention provides a face recognition method. In this method, a face image to be recognized can first be acquired; its face pose features and face features can then be extracted; next, target face features and face pose information of the image are generated from the face pose features and face features; and, if the face pose information satisfies a preset condition, the user recognition result of the image is determined according to its target face features. In this embodiment, because the face pose features reflect the spatial information of the face, including its structural, edge, and angle features, while the face features reflect attributes such as skin color and texture, fusing the two and generating the target face features and face pose information of the image improves the precision of the determined target face features and face pose information, and thereby the accuracy of the user recognition result determined from them.
By way of example, an embodiment of the present invention can be applied to the application scenario shown in FIG. 1, which may include a terminal device 1 and a server 2.
The terminal device 1 may be hardware or software. When the terminal device 1 is hardware, it may be any electronic device with an image acquisition function that supports communication with the server 2, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers; when the terminal device 1 is software, it may be installed in such an electronic device. The terminal device 1 may be implemented as multiple software programs or software modules, or as a single software program or software module, which is not limited in the embodiments of the present disclosure. The server 2 may be a server providing various services, for example a background server that receives requests sent by terminal devices that have established a communication connection with it, analyzes those requests, and generates processing results. The server 2 may be a single server, a server cluster composed of several servers, or a cloud computing service center, which is not limited in the embodiments of the present disclosure.
It should be noted that the server 2 may be hardware or software. When the server 2 is hardware, it may be any electronic device providing various services for the terminal device 1. When the server 2 is software, it may be multiple software programs or software modules providing various services for the terminal device 1, or a single software program or software module doing so, which is not limited in the embodiments of the present disclosure.
The terminal device 1 and the server 2 may be communicatively connected through a network. The network may be a wired network using coaxial cable, twisted pair, or optical fiber, or a wireless network that interconnects communication devices without wiring, for example Bluetooth, Near Field Communication (NFC), or infrared, which is not limited in the embodiments of the present disclosure.
Specifically, the user can input a face image to be recognized through the terminal device 1, and the terminal device 1 sends the face image to be recognized to the server 2. The server 2 first extracts the face pose features and face features of the image; it can then generate the target face features and face pose information of the image from those features; if the face pose information satisfies the preset condition, it can determine the user recognition result of the image according to the image's target face features, and return that result to the terminal device 1 so that the terminal device 1 can present it to the user. In this way, the face pose features and face features are fused, that is, their detailed information is merged, and the target face features and face pose information of the image are generated from the fused features, which improves the precision of the determined target face features and face pose information and thereby the accuracy of the user recognition result determined from them.
It should be noted that the specific types, numbers, and combinations of the terminal device 1, the server 2, and the network can be adjusted according to the actual requirements of the application scenario, which is not limited in the embodiments of the present disclosure.
It should be noted that the above application scenario is shown merely to facilitate understanding of the present disclosure; the embodiments of the present disclosure are not limited in this respect and can be applied in any applicable scenario.
FIG. 2 is a flowchart of a face recognition method provided by an embodiment of the present disclosure. The face recognition method of FIG. 2 may be executed by the terminal device or the server of FIG. 1. As shown in FIG. 2, the face recognition method includes:
S201: Acquire a face image to be recognized.
In this embodiment, the face image to be recognized can be understood as an image on which face recognition is to be performed. As an example, it may be captured by a surveillance camera installed at a fixed location, captured by a mobile terminal device, or read from a storage device in which images are pre-stored.
S202: Extract face pose features and face features of the face image to be recognized.
After the face image to be recognized is acquired, in order to accurately recognize profile faces in it, its face pose features and face features must first be extracted. The face pose features reflect the spatial information of the face and may include, for example, its structural, edge, and angle features; the face features may reflect semantic information such as skin color, texture, age, illumination, and ethnicity.
S203: Generate target face features and face pose information of the face image to be recognized according to the face pose features and the face features.
In this embodiment, the face pose features and face features may first be fused, so that the pose features supplement and enrich the information carried by the face features. The target face features and face pose information of the image can then be generated from the face pose features and face features, improving the precision of the determined target face features and face pose information. The target face features can be understood as a feature vector containing facial detail information (such as texture and skin color information). The face pose information may include the yaw, pitch, and roll angles of the face and reflects the direction and magnitude of the face's turn.
S204: If the face pose information satisfies a preset condition, determine a user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
If the face is turned to the side or lowered too far, the pose angles of the face image will be large, leaving little facial information usable for recognition and making the face easy to attack (for example, easy to unlock with a photo of an impersonated user). Therefore, in this embodiment, whether the face pose information satisfies the preset condition must be judged first. In one implementation, the preset condition may be that the yaw angle is less than or equal to a preset yaw angle threshold, the pitch angle is less than or equal to a preset pitch angle threshold, and the roll angle is less than or equal to a preset roll angle threshold.
If the face pose information satisfies the preset condition, the user recognition result of the image can be determined according to its target face features. As an example, several pieces of preset user information may be configured in advance, each with its corresponding preset user face features. In this embodiment, the similarities between the target face features of the image and each preset user's face features can first be determined; for example, the vector distance can be measured with the Euclidean distance or cosine distance, and the similarity determined from that distance. The user information corresponding to the preset user face features with the highest similarity is then used as the user recognition result of the face image to be recognized.
Compared with the prior art, the embodiments of the present disclosure have the following beneficial effects. A face image to be recognized can first be acquired; its face pose features and face features extracted; target face features and face pose information generated from them; and, if the face pose information satisfies the preset condition, the user recognition result determined from the target face features. In this embodiment, because the face pose features reflect the spatial information of the face, including its structural, edge, and angle features, while the face features reflect attributes such as skin color and texture, fusing the two merges their detailed information; generating the target face features and face pose information from the fused features therefore improves the precision of the determined target face features and face pose information, and in turn the accuracy of the user recognition result.
Next, an implementation of S202, that is, how to extract the face pose features and face features of the face image to be recognized, is described. In this embodiment, the face pose features may include N face pose features and the face features may include N face features, and S202 may include the following steps:
S202a: For the first face feature, input the face image to be recognized into the first convolutional layer to obtain the first face feature;
S202b: For the first face pose feature, input the face image to be recognized into the first residual block to obtain the first face pose feature;
S202c: For the i-th face pose feature, input the (i-1)-th face pose feature into the i-th residual block to obtain the i-th face pose feature;
S202d: For the i-th face feature, input the (i-1)-th face pose feature and the (i-1)-th face feature into the i-th convolutional layer to obtain the i-th face feature, where i is greater than or equal to 2 and less than or equal to N, and N and i are both positive integers.
In this embodiment, two models may be provided: a face feature extraction model and a face pose feature extraction model. The face pose feature extraction model may include at least N residual blocks (the first residual block, the second residual block, ..., the N-th residual block) connected in cascade. In one implementation, the face pose feature extraction model may include at least four residual blocks, composed of 2, 3, and 2 residual networks respectively; each residual network may contain two convolutional layers (conv), two batch-normalization layers (bn), and two hyperbolic-tangent activation functions (tanh), with the specific connection structure conv+bn+tanh+conv+bn+tanh, and the output feature maps of these three residual blocks have 64, 128, and 256 channels respectively. It should be noted that the hyperbolic tangent is used as the activation function so that every feature computation takes values in (-1, 1), which benefits the subsequent pose computation. It can be understood that the face image to be recognized is input into the first residual block to obtain the first face pose feature; the first face pose feature is input into the second residual block to obtain the second face pose feature; and so on: for the i-th face pose feature, the (i-1)-th face pose feature is input into the i-th residual block to obtain the i-th face pose feature, where i is greater than or equal to 2 and less than or equal to N, and N and i are both positive integers.
The face feature extraction model may include at least N convolutional layers (the first convolutional layer, the second convolutional layer, ..., the N-th convolutional layer) connected in cascade. The face feature extraction model may be an IResNet-50 network, and each convolutional layer may include one convolution operator with a 3x3 kernel and 192 channels and one convolution operator with a 1x1 kernel and 128 channels. In this embodiment, for the first face feature, the face image to be recognized is input into the first convolutional layer to obtain the first face feature; the first face pose feature (for example of dimension (28, 28, 64)) and the first face feature (for example of dimension (28, 28, 128)) are input into the second convolutional layer (for example, the two features can be fused first and the fused feature then input into the second convolutional layer) to obtain the second face feature; and so on: the (i-1)-th face pose feature and the (i-1)-th face feature are input into the i-th convolutional layer to obtain the i-th face feature, where i is greater than or equal to 2 and less than or equal to N, and N and i are both positive integers. It should be emphasized that the pose features are fed, as a supplement, together with the face features into the convolutional layer that computes the next face feature because pose processing attends more to spatial information and extracts the structural, edge, and angle features of the face relatively completely, whereas face feature extraction must account for semantic information such as age, illumination, ethnicity, skin color, and texture and therefore suffers certain deficiencies and semantic confusion in spatial structure; adding the pose features extracted by the pose feature extraction model into the face feature extraction model thus effectively supplements the information processing of the face features.
It should be noted that the face pose feature extraction model has fewer residual blocks and fewer channels than the face feature extraction model has convolutional layers and channels, because the pose model handles relatively simple semantic information and does not require heavy computation.
Next, an implementation of S203, that is, how to generate the target face features and face pose information of the face image to be recognized, is described. In this embodiment, the step in S203 of "generating the target face features of the face image to be recognized according to the face pose features and the face features" may include the following step:
inputting the N-th face pose feature and the N-th face feature into the (N+1)-th convolutional layer to obtain the target face features of the face image to be recognized.
In this embodiment, the face feature extraction model further includes an (N+1)-th convolutional layer, and the N-th face pose feature and the N-th face feature can be input into it to obtain the target face features of the face image to be recognized.
In this embodiment, the step in S203 of "generating the face pose information of the face image to be recognized according to the face pose features and the face features" may include the following steps:
Step a: Generate, from the face pose features, the attention maps corresponding to the face pose features.
In this embodiment, all face pose features may first be downsampled so that they share the same dimensions. Each face pose feature is then input into an attention model to obtain its corresponding attention map. The attention model includes a convolution operator with a 1x1 kernel and 1 channel followed by a sigmoid function. For example, for a face pose feature of dimension (28, 28, 64), two convolutions with a 3x3 kernel, stride 2, and 64 channels are first applied, reducing the dimension to (7, 7, 64); the 1x1 convolution and sigmoid above are then applied to obtain the corresponding attention map of dimension (7, 7, 1), whose three dimensions can then be collapsed into one, the final attention map having dimension (7, 7, 1).
Step b: Generate, from the face features, the attention maps corresponding to the face features.
In this embodiment, all face features may first be downsampled so that they share the same dimensions. Each face feature is then input into the attention model to obtain its corresponding attention map. The attention model includes a convolution operator with a 1x1 kernel and 1 channel followed by a sigmoid function. For example, for a face feature of dimension (28, 28, 64), two convolutions with a 3x3 kernel, stride 2, and 64 channels are first applied, reducing the dimension to (7, 7, 64); the 1x1 convolution and sigmoid are then applied to obtain the corresponding attention map of dimension (7, 7, 1), whose three dimensions can then be collapsed into one, the final attention map having dimension (7, 7, 1).
Step c: Obtain a full-space attention map from the attention maps corresponding to the face pose features, the attention maps corresponding to the face features, and the N-th face pose feature.
In this embodiment, a two-stream attention map may first be generated from the attention map corresponding to a face pose feature and the attention map corresponding to a face feature. For example, the i-th two-stream attention map may be generated from the attention map of the i-th face pose feature and the attention map of the i-th face feature; specifically, the i-th two-stream attention map D_i can be computed as D_i = reshape([A, B]^T W_1), where A is the attention map corresponding to the face feature, B is the attention map corresponding to the face pose feature, and W_1 is a learnable parameter matrix of dimension (7x7x2, qx7x7); each row of W_1 represents the association values between one spatial point and the other points, and since this association can take many forms, q forms of association learning are introduced; reshape() is a function that transforms a matrix into a matrix of specified dimensions.
Then, a deep-shallow attention map may be generated from the attention maps corresponding to the face features. For example, a deep-shallow attention map can be generated from the attention maps corresponding to all face features; specifically, the deep-shallow attention map E can be computed as E = reshape([A_1, A_2, ..., A_N]^T W_2), where reshape() is a function that transforms a matrix into a matrix of specified dimensions, A_i is the attention map corresponding to the i-th face feature, N is the number of face features, and W_2 is a learnable parameter matrix of dimension (7*7*2, q'*q); the parameter q' can be set to 4 and also plays a partitioning role here, and E has dimension (q', q).
Next, the full-space attention map may be obtained from the two-stream attention maps, the deep-shallow attention map, and the N-th face pose feature. For example, the full-space attention map R can be computed as R = (E^T D_1 + E^T D_2 + ... + E^T D_N)^T P_N, where E is the deep-shallow attention map, D_i is the i-th two-stream attention map (1 <= i <= N), and P_N is the N-th face pose feature.
Step d: Generate the face pose information of the face image to be recognized from the full-space attention map.
In this embodiment, the face pose information of the image and the score vector corresponding to the face pose information can be generated from the full-space attention map. The score vector corresponding to the face pose information is used to evaluate the richness of the overall face information (the discriminative information contained in the face picture): the higher the score vector, the richer the overall face information; conversely, the lower the score vector, the poorer it is. Specifically, the face pose information O of the face image to be recognized can be computed as O = sigmoid((relu(R^T W_3))^T W_4), where R is the full-space attention map and W_3 and W_4 are parameter matrices of dimensions (256, 128) and (128, 1) respectively.
It can be understood that, because the full-space attention map fuses multiple forms of full-space attention information, as well as deep and shallow poses (the deep-shallow attention map) and the overall feature information of the face (the N-th face pose feature), the face pose information of the face image to be recognized and the corresponding score vector generated from it are more accurate.
Accordingly, the step of determining, if the face pose information satisfies the preset condition, the user recognition result of the face image to be recognized according to its target face features may include:
if the face pose information and the score vector corresponding to the face pose information satisfy the preset conditions, determining the user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
The preset condition may be that the yaw angle is less than or equal to a preset yaw angle threshold, the pitch angle is less than or equal to a preset pitch angle threshold, the roll angle is less than or equal to a preset roll angle threshold, and the score vector corresponding to the face pose information is greater than or equal to a preset score. It should be noted that, because a score vector below the preset score means the face is easy to attack (for example, easy to unlock with a photo of an impersonated user), the score vector corresponding to the face pose information is required to be greater than or equal to the preset score.
It should be noted that the above embodiments can be applied to a face recognition model, whose training process can be described as follows:
In this embodiment, N class centers can be introduced (one positive class center and N-1 negative class centers), i.e., N vectors. These vectors are dot-multiplied with the (normalized) face vector T to obtain N values, which represent the similarities x between the current face and the N class centers. The usual training approach applies a softmax operation to these N values of x and then computes the cross entropy, but such a decision boundary is inaccurate and such training is inefficient. Therefore, in this embodiment, the similarity between T and the positive class center can be set to x+, from which a decision value related to the pose and the score is subtracted; subtracting a larger decision value means a smaller decision boundary for the face features. With yaw angle y, pitch angle p, roll angle r, score s, and decision value m, the update is:
x+ = x+ - m
where m is determined by a function f of y, p, r, s, and a value i, with hyperparameters m0 and m1 (for example m0 = 0.2, m1 = 0.2). (The explicit formulas for m and f are given in the equation images PCTCN2022107827-appb-000001 and PCTCN2022107827-appb-000002 and are not reproduced in this text.)
The formula also contains a value i that determines which faces are currently given a larger decision value and a smaller decision boundary. In general, the larger m is, the tighter the decision boundary, the closer the face features should be to the class center, and the larger the resulting gradient. When i is set to 0, the formula gives faces with small yaw, pitch, and roll angles and a high score a larger decision value, a smaller decision boundary, and a larger gradient; as i gradually increases, it gives faces with larger yaw, pitch, and roll angles and a lower score a larger decision value, a smaller decision boundary, and a larger gradient. That is to say, when i takes the value 0, the network mainly trains on frontal faces and assigns them gradient; as i gradually increases, the network mainly trains on profile faces and assigns them gradient.
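The mechanics of this margin can be sketched in Python. The closed form of f is only given in the patent's equation images, so the f below is a hypothetical stand-in that merely reproduces the described behavior (at i = 0 the margin peaks for frontal, high-score faces; as i grows it shifts toward large-pose, low-score faces):

```python
import torch
import torch.nn.functional as F

def margin_loss(x: torch.Tensor, positive_idx: int,
                y: float, p: float, r: float, s: float, i: float,
                m0: float = 0.2, m1: float = 0.2) -> torch.Tensor:
    """Cross entropy over class-center similarities after subtracting a
    pose- and score-dependent margin m from the positive similarity x+."""
    # Hypothetical stand-in for f: pose_term is near 0 for a frontal,
    # high-score face and near 1 for an extreme-pose, low-score face.
    pose_term = (abs(y) + abs(p) + abs(r)) / (3 * 90.0) + (1.0 - s)
    pose_term = min(max(pose_term, 0.0), 1.0)
    m = m0 + m1 * (1.0 - abs(pose_term - i))  # margin peaks where pose_term ~ i
    logits = x.clone()
    logits[positive_idx] = logits[positive_idx] - m  # x+ <- x+ - m
    target = torch.tensor([positive_idx])
    return F.cross_entropy(logits.unsqueeze(0), target)
```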
The training decision scheme of the face recognition model is as follows: when the model starts training, i is set to 0, and the model loss gradually decreases. As the model's accuracy on the validation set improves, i can be raised gradually, by 0.1 each time (for example from 0 to 0.1, from 0.1 to 0.2, and so on), while the accuracy on the validation set is continuously observed. Once the accuracy is found to decrease, i should be reduced by 0.1 and, after training for a period of time, raised back toward the original value. If raising it back to the original value three times fails to improve the accuracy further, it is reduced by 0.1 and training ends once the model has fitted. The spatial distribution of faces obtained at this point is reasonable: faces with small angles lie near the class center and faces with large angles at the class edge.
Because the spatial distribution is reasonable, a better feature representation of a face under a large pose can be obtained during inference, and the three pose angles and the score can be read off directly. This makes face comparison more flexible: if the pose angles of both images in a comparison are too large, the comparison can be abandoned.
All of the optional technical solutions above can be combined arbitrarily to form optional embodiments of the present disclosure, which are not described one by one here.
The following are apparatus embodiments of the present disclosure, which can be used to execute the method embodiments of the present disclosure. For details not disclosed in the apparatus embodiments, refer to the method embodiments of the present disclosure.
FIG. 3 is a schematic diagram of a face recognition apparatus provided by an embodiment of the present disclosure. As shown in FIG. 3, the face recognition apparatus includes:
an image acquisition unit 301 configured to acquire a face image to be recognized;
a feature extraction unit 302 configured to extract face pose features and face features of the face image to be recognized;
an information generation unit 303 configured to generate, according to the face pose features and the face features, target face features of the face image to be recognized, face pose information, and a score vector corresponding to the face pose information;
a result determination unit 304 configured to determine, if the face pose information and the score vector corresponding to the face pose information satisfy preset conditions, a user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
Optionally, the face pose features include N face pose features and the face features include N face features; the feature extraction unit 302 is configured to:
for the first face feature, input the face image to be recognized into the first convolutional layer to obtain the first face feature;
for the first face pose feature, input the face image to be recognized into the first residual block to obtain the first face pose feature;
for the i-th face pose feature, input the (i-1)-th face pose feature into the i-th residual block to obtain the i-th face pose feature;
for the i-th face feature, input the (i-1)-th face pose feature and the (i-1)-th face feature into the i-th convolutional layer to obtain the i-th face feature, where i is greater than or equal to 2 and less than or equal to N, and N and i are both positive integers.
Optionally, the information generation unit 303 is configured to:
input the N-th face pose feature and the N-th face feature into the (N+1)-th convolutional layer to obtain the target face features of the face image to be recognized.
Optionally, the information generation unit 303 is configured to:
generate, from the face pose features, the attention maps corresponding to the face pose features;
generate, from the face features, the attention maps corresponding to the face features;
obtain a full-space attention map from the attention maps corresponding to the face pose features, the attention maps corresponding to the face features, and the N-th face pose feature;
generate the face pose information of the face image to be recognized from the full-space attention map.
Optionally, the information generation unit 303 is specifically configured to:
generate a two-stream attention map from the attention map corresponding to the face pose features and the attention map corresponding to the face features;
generate a deep-shallow attention map from the attention maps corresponding to the face features;
obtain the full-space attention map from the two-stream attention map, the deep-shallow attention map, and the N-th face pose feature.
Optionally, the information generation unit 303 is specifically configured to:
generate, from the full-space attention map, the face pose information of the face image to be recognized and the score vector corresponding to the face pose information;
correspondingly, the result determination unit 304 is configured to:
if the face pose information and the score vector corresponding to the face pose information satisfy the preset conditions, determine the user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
Optionally, the face pose information includes a yaw angle, a pitch angle, and a roll angle; the preset condition is that the yaw angle is less than or equal to a preset yaw angle threshold, the pitch angle is less than or equal to a preset pitch angle threshold, the roll angle is less than or equal to a preset roll angle threshold, and the score vector corresponding to the face pose information is greater than or equal to a preset score.
Optionally, the result determination unit 304 is configured to:
determine the similarities between the target face features of the face image to be recognized and each preset user's face features;
use the user information corresponding to the preset user face features with the highest similarity as the user recognition result of the face image to be recognized.
The technical solution provided by the embodiments of the present disclosure is a face recognition apparatus comprising: an image acquisition unit for acquiring a face image to be recognized; a feature extraction unit for extracting the face pose features and face features of the face image to be recognized; an information generation unit for generating, from the face pose features and face features, the target face features, face pose information, and score vector corresponding to the face pose information of the image; and a result determination unit for determining, if the face pose information and the score vector corresponding to it satisfy the preset conditions, the user recognition result of the image according to its target face features. In this embodiment, because the face pose features reflect the spatial information of the face, including its structural, edge, and angle features, while the face features reflect attributes such as skin color and texture, fusing the two merges their detailed information; generating the target face features and face pose information from the fused features therefore improves the precision of the determined target face features and face pose information, and in turn the accuracy of the user recognition result.
It should be understood that the step numbers in the above embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present disclosure.
FIG. 4 is a schematic diagram of a computer device 4 provided by an embodiment of the present disclosure. As shown in FIG. 4, the computer device 4 of this embodiment includes a processor 401, a memory 402, and a computer program 403 stored in the memory 402 and runnable on the processor 401. When the processor 401 executes the computer program 403, the steps in the above method embodiments are implemented; alternatively, when the processor 401 executes the computer program 403, the functions of the modules/units in the above apparatus embodiments are implemented.
Exemplarily, the computer program 403 may be divided into one or more modules/units, which are stored in the memory 402 and executed by the processor 401 to complete the present disclosure. The one or more modules/units may be a series of computer program instruction segments capable of accomplishing specific functions, used to describe the execution process of the computer program 403 in the computer device 4.
The computer device 4 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The computer device 4 may include, but is not limited to, the processor 401 and the memory 402. Those skilled in the art will understand that FIG. 4 is merely an example of the computer device 4 and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components; for example, the computer device may also include input/output devices, network access devices, buses, and so on.
The processor 401 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 402 may be an internal storage unit of the computer device 4, for example its hard disk or memory. The memory 402 may also be an external storage device of the computer device 4, for example a plug-in hard disk, smart media card (SMC), secure digital (SD) card, or flash card equipped on the computer device 4. Further, the memory 402 may include both an internal storage unit and an external storage device of the computer device 4. The memory 402 is used to store the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been or will be output.
Those skilled in the art will clearly understand that, for convenience and brevity of description, only the above division into functional units and modules is used as an example; in practical applications, the functions can be assigned to different functional units and modules as needed, that is, the internal structure of the apparatus can be divided into different functional units or modules to accomplish all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, exist physically separately, or be integrated two or more to a unit; the integrated units may be implemented in hardware or as software functional units. In addition, the specific names of the functional units and modules are only for ease of distinguishing them and do not limit the protection scope of the present disclosure. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the method embodiments, which is not repeated here.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the apparatus/computer-device embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through interfaces, and indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, exist physically separately, or be integrated two or more to a unit. The integrated units may be implemented in hardware or as software functional units.
If the integrated module/unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present disclosure may implement all or part of the processes of the above method embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program may include computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in a computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunication signals according to legislation and patent practice.
The above embodiments are only intended to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of the technical features; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure.

Claims (11)

  • 1. A face recognition method, characterized in that the method comprises:
    acquiring a face image to be recognized;
    extracting face pose features and face features of the face image to be recognized;
    generating target face features and face pose information of the face image to be recognized according to the face pose features and the face features;
    if the face pose information satisfies a preset condition, determining a user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
  • 2. The method according to claim 1, characterized in that the face pose features comprise N face pose features and the face features comprise N face features; and the extracting of the face pose features and face features of the face image to be recognized comprises:
    for the first face feature, inputting the face image to be recognized into a first convolutional layer to obtain the first face feature;
    for the first face pose feature, inputting the face image to be recognized into a first residual block to obtain the first face pose feature;
    for the i-th face pose feature, inputting the (i-1)-th face pose feature into an i-th residual block to obtain the i-th face pose feature;
    for the i-th face feature, inputting the (i-1)-th face pose feature and the (i-1)-th face feature into an i-th convolutional layer to obtain the i-th face feature, wherein i is greater than or equal to 2 and less than or equal to N, and N and i are both positive integers.
  • 3. The method according to claim 2, characterized in that the generating of the target face features of the face image to be recognized according to the face pose features and the face features comprises:
    inputting the N-th face pose feature and the N-th face feature into an (N+1)-th convolutional layer to obtain the target face features of the face image to be recognized.
  • 4. The method according to claim 2, characterized in that the generating of the face pose information of the face image to be recognized according to the face pose features and the face features comprises:
    generating, according to the face pose features, attention maps corresponding to the face pose features;
    generating, according to the face features, attention maps corresponding to the face features;
    obtaining a full-space attention map according to the attention maps corresponding to the face pose features, the attention maps corresponding to the face features, and the N-th face pose feature;
    generating the face pose information of the face image to be recognized according to the full-space attention map.
  • 5. The method according to claim 4, characterized in that the obtaining of the full-space attention map according to the attention maps corresponding to the face pose features, the attention maps corresponding to the face features, and the N-th face pose feature comprises:
    generating a two-stream attention map according to the attention map corresponding to the face pose features and the attention map corresponding to the face features;
    generating a deep-shallow attention map according to the attention maps corresponding to the face features;
    obtaining the full-space attention map according to the two-stream attention map, the deep-shallow attention map, and the N-th face pose feature.
  • 6. The method according to claim 4, characterized in that the generating of the face pose information of the face image to be recognized according to the full-space attention map comprises:
    generating, according to the full-space attention map, the face pose information of the face image to be recognized and a score vector corresponding to the face pose information;
    correspondingly, the determining, if the face pose information satisfies the preset condition, of the user recognition result of the face image to be recognized according to the target face features of the face image to be recognized comprises:
    if the face pose information and the score vector corresponding to the face pose information satisfy the preset conditions, determining the user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
  • 7. The method according to claim 6, characterized in that the face pose information comprises a yaw angle, a pitch angle, and a roll angle; and the preset condition is that the yaw angle is less than or equal to a preset yaw angle threshold, the pitch angle is less than or equal to a preset pitch angle threshold, the roll angle is less than or equal to a preset roll angle threshold, and the score vector corresponding to the face pose information is greater than or equal to a preset score.
  • 8. The method according to claim 1, characterized in that the determining of the user recognition result of the face image to be recognized according to the target face features of the face image to be recognized comprises:
    determining similarities between the target face features of the face image to be recognized and each preset user's face features;
    using the user information corresponding to the preset user face features with the highest similarity as the user recognition result of the face image to be recognized.
  • 9. A face recognition apparatus, characterized in that the apparatus comprises:
    an image acquisition unit configured to acquire a face image to be recognized;
    a feature extraction unit configured to extract face pose features and face features of the face image to be recognized;
    an information generation unit configured to generate, according to the face pose features and the face features, target face features of the face image to be recognized, face pose information, and a score vector corresponding to the face pose information;
    a result determination unit configured to determine, if the face pose information and the score vector corresponding to the face pose information satisfy preset conditions, a user recognition result of the face image to be recognized according to the target face features of the face image to be recognized.
  • 10. A computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor implements the steps of the method according to claim 1 when executing the computer program.
  • 11. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to claim 1.
PCT/CN2022/107827 2021-12-31 2022-07-26 Face recognition method and apparatus WO2023124040A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111656450.2 2021-12-31
CN202111656450.2A CN114330565A (zh) 2021-12-31 2021-12-31 Face recognition method and apparatus

Publications (1)

Publication Number Publication Date
WO2023124040A1 (zh) 2023-07-06

Family

ID=81018550

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/107827 WO2023124040A1 (zh) Face recognition method and apparatus

Country Status (2)

Country Link
CN (1) CN114330565A (zh)
WO (1) WO2023124040A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114330565A (zh) Face recognition method and apparatus
CN116091896B (zh) Method and system for identifying the production area of fangfeng medicinal herbs based on an IResNet model network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898412A (zh) * Face recognition method, apparatus, electronic device, and medium
CN112001932A (zh) * Face recognition method, apparatus, computer device, and storage medium
CN112418074A (zh) * Self-attention-based coupled-pose face recognition method
CN114330565A (zh) Face recognition method and apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597427A (zh) * Deep-learning-based ship bridge identity recognition method
CN116597427B (zh) Deep-learning-based ship bridge identity recognition method

Also Published As

Publication number Publication date
CN114330565A (zh) 2022-04-12

Similar Documents

Publication Publication Date Title
WO2023124040A1 (zh) Face recognition method and apparatus
US11487995B2 (en) Method and apparatus for determining image quality
CN108898086B (zh) Video image processing method and apparatus, computer-readable medium, and electronic device
WO2020199693A1 (zh) Face recognition method, apparatus, and device for large poses
WO2022161286A1 (zh) Image detection method, model training method, device, medium, and program product
WO2017088432A1 (zh) Image recognition method and apparatus
WO2020024484A1 (zh) Method and apparatus for outputting data
WO2023035531A1 (zh) Text image super-resolution reconstruction method and related device
CN109271930B (zh) Micro-expression recognition method, apparatus, and storage medium
US20230095182A1 (en) Method and apparatus for extracting biological features, device, medium, and program product
WO2021164269A1 (zh) Attention-mechanism-based disparity map acquisition method and apparatus
CN111950570B (zh) Target image extraction method, neural network training method, and apparatus
CN111091075A (zh) Face recognition method and apparatus, electronic device, and storage medium
WO2023173646A1 (zh) Expression recognition method and apparatus
WO2020244151A1 (zh) Image processing method, apparatus, terminal, and storage medium
WO2021042544A1 (zh) Descreening-model-based face verification method, apparatus, computer device, and storage medium
CN112614110B (zh) Method, apparatus, and terminal device for evaluating image quality
CN112488054A (zh) Face recognition method, apparatus, terminal device, and storage medium
CN112348008A (zh) Certificate information recognition method, apparatus, terminal device, and storage medium
CN111814811A (zh) Image information extraction method, training method and apparatus, medium, and electronic device
CN114997365A (zh) Knowledge distillation method, apparatus, terminal device, and storage medium for image data
CN113763313A (zh) Text image quality detection method, apparatus, medium, and electronic device
CN111507421A (zh) Video-based emotion recognition method and apparatus
CN113255586B (zh) Face anti-spoofing method based on RGB image and IR image alignment and related device
CN113221830B (zh) Super-resolution liveness recognition method, system, terminal, and storage medium