CN116246317A - Face recognition processing method, device, computer equipment and storage medium - Google Patents

Info

Publication number
CN116246317A
CN116246317A (application CN202310001648.XA)
Authority
CN
China
Prior art keywords
angle
facial
target user
feature
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310001648.XA
Other languages
Chinese (zh)
Inventor
郝蛟
柳乐怡
邓彬
李浩然
张宗包
王冬
陈栋
刘岩
王子滔
王新
詹隽
佘伊伦
许伯阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Power Supply Bureau Co Ltd
Original Assignee
Shenzhen Power Supply Bureau Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Power Supply Bureau Co Ltd filed Critical Shenzhen Power Supply Bureau Co Ltd
Priority to CN202310001648.XA
Publication of CN116246317A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to a face recognition processing method, a face recognition processing device, computer equipment and a storage medium. The method comprises the following steps: taking the acquisition angle of the first facial information of the target user as a first angle, and extracting the first angle facial features of the target user from the first facial information; performing similarity evaluation on the first angle facial features and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user; when the first angle score is lower than the score threshold, collecting second facial information for the target user from a second angle different from the first angle to obtain second angle facial features; and carrying out weighted fusion on the first angle score and a second angle score obtained based on the second angle facial features to obtain a comprehensive score of the target user, judging that the target user meets the face recognition condition when the comprehensive score is not lower than the score threshold, and carrying out face recognition processing on the target user. By adopting the method, the success rate of face recognition can be improved.

Description

Face recognition processing method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of facial recognition technologies, and in particular, to a facial recognition processing method, a facial recognition processing device, a computer device, and a storage medium.
Background
With the development of biometric recognition technology, facial recognition technology, which features unobtrusive acquisition, non-contact operation, and long-distance recognition, has been widely applied to identity authentication.
In the prior art, a camera or video camera is generally used to collect an image or video stream containing a face; facial feature information is extracted from the image or video stream to perform face recognition, and identity information is confirmed based on the recognition result, thereby completing identity authentication.
However, with the face recognition methods in the prior art, if the user's pose deviates, a large difference arises between the collected facial information and the pre-stored facial information, resulting in a low success rate of face recognition.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a face recognition processing method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve the success rate of face recognition.
In a first aspect, the present application provides a facial recognition processing method. The method comprises the following steps:
acquiring first facial information collected for a target user, taking an acquisition angle of the first facial information as a first angle, and extracting first angle facial features of the target user from the first facial information;
performing similarity evaluation on the first angle facial features and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user;
when the first angle score is lower than the score threshold, obtaining second angle facial features of the target user based on second facial information acquired from a second angle, wherein the second angle is different from the first angle;
and carrying out weighted fusion on the first angle score and a second angle score obtained based on the second angle facial features to obtain a comprehensive score of the target user, and judging that the target user meets the facial recognition condition when the comprehensive score is not lower than a score threshold value, and carrying out facial recognition processing on the target user.
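The two-stage decision above can be sketched in Python. This is a minimal illustration rather than the patented implementation: the per-angle weights, the 0-100 score scale, and the threshold value are assumptions chosen to match the embodiments described later in the text.

```python
def weighted_fusion(score1: float, score2: float, w1: float, w2: float) -> float:
    """Fuse two angle scores into a comprehensive score, normalizing the
    configured per-angle weights so the result stays on the 0-100 scale."""
    return (score1 * w1 + score2 * w2) / (w1 + w2)


def passes_recognition(score1: float, score2: float,
                       w1: float = 0.5, w2: float = 0.3,   # assumed weights
                       threshold: float = 80.0) -> bool:    # assumed threshold
    """Accept on the first angle alone if it clears the threshold; otherwise
    accept only if the weighted comprehensive score does."""
    if score1 >= threshold:
        return True
    return weighted_fusion(score1, score2, w1, w2) >= threshold
```

With these assumed weights, a first-angle score of 75 that fails on its own can still pass once a strong second-angle score (for example 95) is fused in.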
In one embodiment, before acquiring the first facial information collected for the target user, the method includes:
determining a plurality of identifiable users that satisfy the facial recognition condition;
for each identifiable user, acquiring facial features of the identifiable user corresponding to each of a plurality of acquisition angles;
for each acquisition angle, taking the facial feature corresponding to the acquisition angle as the reference facial feature of that angle, so as to obtain the reference facial features of the identifiable user at each of the plurality of acquisition angles;
a facial feature library is constructed based on a respective plurality of reference facial features for each identifiable user.
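The library construction above can be sketched as a nested mapping from identifiable user to acquisition angle. The angle labels and the use of plain vectors as features are my assumptions for illustration; the text only specifies front, left, and right face angles.

```python
ANGLES = ("front", "left", "right")  # assumed labels for the capture angles


def build_feature_library(users_features: dict) -> dict:
    """Build a facial feature library keyed by identifiable user and then by
    acquisition angle; each per-angle feature becomes that user's reference
    facial feature for the angle."""
    library = {}
    for user_id, per_angle in users_features.items():
        library[user_id] = {angle: list(feature)
                            for angle, feature in per_angle.items()
                            if angle in ANGLES}
    return library
```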
In one embodiment, each extracted facial feature includes a plurality of feature points; the step of performing similarity evaluation on the first angle facial features and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user comprises:
taking the reference facial features with the same acquisition angle as the first angle in the facial feature library as reference features to be compared;
for each reference feature to be compared, performing similarity evaluation between the plurality of feature points in the first angle facial features and the plurality of feature points in the reference feature to be compared, to obtain a feature point similarity score between the first angle facial features and the reference feature to be compared;
and taking the highest feature point similarity score in the feature point similarity scores as a first angle score of the target user.
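The best-match scoring above can be sketched as follows. The text does not fix a similarity measure, so cosine similarity mapped onto a 0-100 scale is used here as a common stand-in, not as the patented measure.

```python
import math


def cosine_similarity(a, b):
    """Toy similarity between two feature vectors (stand-in measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def first_angle_score(query, same_angle_references):
    """Score the query feature against every reference captured at the same
    angle and keep the highest score, mapped onto a 0-100 scale, together
    with the matching user."""
    best_user, best_score = None, 0.0
    for user_id, reference in same_angle_references.items():
        score = max(0.0, cosine_similarity(query, reference)) * 100.0
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user, best_score
```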
In one embodiment, after it is judged that the target user meets the face recognition condition when the comprehensive score is not lower than the score threshold and face recognition processing is performed on the target user, the method includes:
taking the first angle facial feature and the second angle facial feature of the target user respectively as newly added facial features of the target user;
for each newly added facial feature, taking the reference facial feature in the facial feature library of the target user that has the same acquisition angle as the newly added facial feature as the reference facial feature to be supplemented for that newly added facial feature;
screening out, from the plurality of feature points of the newly added facial feature, the difference feature points that differ from the feature points of the corresponding reference facial feature to be supplemented;
and, according to the acquisition angle corresponding to each newly added facial feature, taking the difference feature points corresponding to the newly added facial feature as auxiliary facial features of the corresponding acquisition angle, and adding the auxiliary facial features to the facial feature library of the target user.
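The difference-point screening above can be sketched as below. Modeling a facial feature as a mapping from feature-point name to value, and the numeric tolerance, are my assumptions for illustration.

```python
def difference_points(new_feature: dict, reference: dict,
                      tolerance: float = 1e-6) -> dict:
    """Screen out the feature points of a newly added facial feature that
    differ from the same-named points of the reference feature to be
    supplemented; the surviving points form an auxiliary facial feature."""
    return {name: value for name, value in new_feature.items()
            if name not in reference
            or abs(reference[name] - value) > tolerance}
```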
In one embodiment, the facial feature library of the target user stores auxiliary facial features of the target user and reference facial features of the target user, and each auxiliary facial feature comprises at least one difference feature point; after the auxiliary facial features are added to the facial feature library of the target user, the method includes:
for each acquisition angle, acquiring the total data quantity of auxiliary facial features corresponding to the acquisition angle of the target user from a facial feature library of the target user;
taking each acquisition angle whose total data quantity exceeds the data quantity threshold as an acquisition angle to be adjusted, and comparing the difference feature points in the plurality of auxiliary facial features corresponding to the acquisition angle to be adjusted;
for each auxiliary facial feature corresponding to the acquisition angle to be adjusted, determining the total number of its difference feature points that differ in the comparison result;
and deleting the auxiliary facial features whose total number does not satisfy the number condition, and updating the reference facial feature of the target user at the acquisition angle to be adjusted based on the auxiliary facial feature with the smallest total number.
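One plausible reading of the pruning rule above can be sketched for a single acquisition angle. The thresholds and the "number condition" are not specified in the text, so the limits below are placeholders.

```python
def prune_auxiliary_features(aux_features: list, max_total: int,
                             max_diff_points: int):
    """For one acquisition angle: once the stored auxiliary features
    outnumber max_total, drop those with too many difference points and
    promote the one with the fewest as the new reference candidate.
    Each auxiliary feature is a dict of difference points."""
    if len(aux_features) <= max_total:
        return aux_features, None          # below threshold: nothing to do
    kept = [f for f in aux_features if len(f) <= max_diff_points]
    new_reference = min(kept, key=len) if kept else None
    return kept, new_reference
```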
In one embodiment, after it is judged that the target user meets the face recognition condition when the comprehensive score is not lower than the score threshold and face recognition processing is performed on the target user, the method further includes:
detecting dynamic behavior of the target user within a preset validity period;
and when the target user meets the dynamic behavior detection condition within the validity period, carrying out identity authentication on the target user based on the identity information obtained by performing face recognition processing on the target user.
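The validity-window check above can be sketched as a polling loop. The window length, polling interval, and the `detect_behavior` callback are placeholders; the text does not specify which dynamic behaviors (e.g. blinking or head movement) are detected.

```python
import time


def authenticate(identity: str, detect_behavior, valid_seconds: float = 10.0):
    """Poll a dynamic-behavior (liveness) check within a validity window and
    return the recognized identity only if the check succeeds in time."""
    deadline = time.monotonic() + valid_seconds
    while time.monotonic() < deadline:
        if detect_behavior():
            return identity
        time.sleep(0.05)  # polling interval is an arbitrary choice
    return None
```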
In a second aspect, the present application further provides a facial recognition processing device. The device comprises:
the facial feature extraction module is used for acquiring first facial information collected for a target user, taking the acquisition angle of the first facial information as a first angle, and extracting first angle facial features of the target user from the first facial information;
the score obtaining module is used for performing similarity evaluation on the first angle facial features and a plurality of reference facial features in the facial feature library to obtain a first angle score of the target user;
the second angle acquisition module is used for acquiring second angle facial features of the target user based on second facial information acquired from a second angle when the first angle score is lower than a score threshold, wherein the second angle is different from the first angle;
and the comprehensive score obtaining module is used for carrying out weighted fusion on the first angle score and the second angle score obtained based on the second angle facial features to obtain the comprehensive score of the target user, and when the comprehensive score is not lower than the score threshold, judging that the target user meets the facial recognition condition, and carrying out facial recognition processing on the target user.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the following steps:
acquiring first facial information collected for a target user, taking an acquisition angle of the first facial information as a first angle, and extracting first angle facial features of the target user from the first facial information;
performing similarity evaluation on the first angle facial features and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user;
when the first angle score is lower than the score threshold, obtaining second angle facial features of the target user based on second facial information acquired from a second angle, wherein the second angle is different from the first angle;
and carrying out weighted fusion on the first angle score and a second angle score obtained based on the second angle facial features to obtain a comprehensive score of the target user, and judging that the target user meets the facial recognition condition when the comprehensive score is not lower than a score threshold value, and carrying out facial recognition processing on the target user.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the following steps:
acquiring first facial information collected for a target user, taking an acquisition angle of the first facial information as a first angle, and extracting first angle facial features of the target user from the first facial information;
performing similarity evaluation on the first angle facial features and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user;
when the first angle score is lower than the score threshold, obtaining second angle facial features of the target user based on second facial information acquired from a second angle, wherein the second angle is different from the first angle;
and carrying out weighted fusion on the first angle score and a second angle score obtained based on the second angle facial features to obtain a comprehensive score of the target user, and judging that the target user meets the facial recognition condition when the comprehensive score is not lower than a score threshold value, and carrying out facial recognition processing on the target user.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring first facial information collected for a target user, taking an acquisition angle of the first facial information as a first angle, and extracting first angle facial features of the target user from the first facial information;
performing similarity evaluation on the first angle facial features and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user;
when the first angle score is lower than the score threshold, obtaining second angle facial features of the target user based on second facial information acquired from a second angle, wherein the second angle is different from the first angle;
and carrying out weighted fusion on the first angle score and a second angle score obtained based on the second angle facial features to obtain a comprehensive score of the target user, and judging that the target user meets the facial recognition condition when the comprehensive score is not lower than a score threshold value, and carrying out facial recognition processing on the target user.
According to the above face recognition processing method, apparatus, computer device, storage medium, and computer program product, the acquisition angle of the collected first facial information is taken as a first angle, and the first angle facial features of the target user are extracted from the first facial information. Similarity evaluation is performed on the first angle facial features and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user. When the first angle score is lower than the score threshold, second facial information of the target user is collected from a second angle different from the first angle to obtain second angle facial features. The first angle score and a second angle score obtained based on the second angle facial features are then weighted and fused to obtain a comprehensive score of the target user; when the comprehensive score is not lower than the score threshold, the target user is judged to meet the face recognition condition, and face recognition processing is performed on the target user. In the whole process, when the facial information at the first angle does not meet the face recognition condition, facial information of the target user at other angles is collected, which avoids recognition failure caused by the target user's pose deviating at a certain angle; comprehensively considering a plurality of angles reduces the influence of pose deviation at any single angle and thus improves the success rate of face recognition.
Drawings
FIG. 1 is a diagram of an application environment for a face recognition processing method in one embodiment;
FIG. 2 is a flow chart of a face recognition processing method in one embodiment;
FIG. 3 is a flowchart of a face recognition processing method according to another embodiment;
FIG. 4 is a block diagram of a face recognition processing device in one embodiment;
fig. 5 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The facial recognition processing method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The server 104 may acquire first facial information acquired by the target user through the terminal 102 of the target user, take an acquisition angle of the first facial information as a first angle, extract a first angle facial feature of the target user from the first facial information, evaluate similarity between the first angle facial feature and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user, acquire second facial information of the target user from a second angle different from the first angle through the terminal 102 of the target user when the first angle score is lower than a score threshold, extract a second angle facial feature of the target user, perform weighted fusion on the first angle score and the second angle score obtained based on the second angle facial feature to obtain a comprehensive score of the target user, and determine that the target user meets a facial recognition condition when the comprehensive score is not lower than the score threshold, and perform facial recognition processing on the target user. The terminal 102 may be, but not limited to, a variety of personal computers, notebook computers, smart phones, tablet computers, and other devices equipped with a facial information collecting function. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a face recognition processing method is provided, and the method is applied to the server in fig. 1 for illustration, and includes the following steps:
step 202, acquiring first facial information acquired for a target user, taking an acquisition angle of the first facial information as a first angle, and extracting first angle facial features of the target user from the first facial information.
The collection angles may specifically include a front face angle, a left face angle, and a right face angle.
Optionally, after receiving a face recognition request initiated by the target user through the terminal, the server may acquire first facial information of the target user through a facial information acquisition device of the terminal, identify the acquisition angle of the first facial information, take the acquisition angle of the first facial information as the first angle, and extract the first angle facial features of the target user from the first facial information. The acquisition angle of the first facial information may be one of a front face angle, a left face angle, and a right face angle.
For example, when the server recognizes that the first facial information is the front face information of the target user, the front face angle may be set as the first angle, and when the server recognizes that the first facial information is the left face/right face information of the target user, the server may set the left face/right face angle as the first angle.
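The text does not specify how the server identifies the acquisition angle; a common approach estimates head yaw from facial landmarks and bins it. A minimal sketch of that mapping follows, where the sign convention (negative yaw = turned left) and the 15-degree front band are assumptions.

```python
def classify_capture_angle(yaw_degrees: float,
                           front_limit: float = 15.0) -> str:
    """Map an estimated head yaw to one of the three capture angles used in
    the text: 'front' within the assumed front band, otherwise the side the
    head is turned toward."""
    if abs(yaw_degrees) <= front_limit:
        return "front"
    return "left" if yaw_degrees < 0 else "right"
```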
And 204, performing similarity evaluation on the first angle facial features and a plurality of reference facial features in the facial feature library to obtain a first angle score of the target user.
The first angle score may specifically be a value between 0 and 100.
Optionally, the server may evaluate the similarity between the first angle facial features and each of the plurality of reference facial features in the facial feature library, screen out the reference facial feature with the highest similarity to the first angle facial features, and take the similarity score of that reference facial feature as the first angle score of the target user. The higher the similarity between the first angle facial features and a reference facial feature, the higher the score.
And step 206, obtaining a second angle facial feature of the target user based on the second facial information acquired from the second angle when the first angle score is lower than the score threshold, wherein the second angle is different from the first angle.
The scoring threshold value can be configured according to an actual application scene and is in a numerical form.
Optionally, when the score of the first angle is lower than the score threshold, the server may determine that the face information of the first angle of the target user does not meet the face recognition condition, select a second angle from the collection angles different from the first angle, and obtain second face information collected from the second angle for the target user, so as to obtain the second angle facial feature of the target user from the second face information.
Illustratively, taking the case that the first angle is the front face angle as an example, when the first facial information does not satisfy the face recognition condition, the server may select either the left face angle or the right face angle as the second angle.
Illustratively, taking the first angle being the left face angle as an example, the server may take the right face angle as the second angle.
Optionally, when the first angle score is not lower than the score threshold, the server may determine that the first facial information of the target user satisfies the face recognition condition, and perform face recognition processing on the target user to determine the identity information of the target user.
And step 208, carrying out weighted fusion on the first angle score and a second angle score obtained based on the second angle facial features to obtain a comprehensive score of the target user, and judging that the target user meets the facial recognition condition when the comprehensive score is not lower than a score threshold value, and carrying out facial recognition processing on the target user.
Each acquisition angle has a corresponding weight, and the weight of each acquisition angle can be configured according to the actual application scenario. Taking the case that the collection angles include a front face angle, a left face angle, and a right face angle as an example, in this embodiment, since the number of feature points collected at the front face angle is generally greater than the number collected at the left or right face angle, the configured weight of the front face angle is greater than that of the left and right face angles.
Optionally, when the first angle score is lower than the score threshold, the server may select a second angle; for the identifiable user corresponding to the first angle score, the server may obtain the reference facial feature of that identifiable user at the second angle, compare it with the second angle facial features of the target user, and take the resulting score as the second angle score. Then, the server may perform weighted fusion on the first angle score and the second angle score according to the weights corresponding to the first angle and the second angle, to obtain the comprehensive score of the target user, compare the comprehensive score with the pre-configured score threshold, and, when the comprehensive score is not lower than the score threshold, judge that the target user meets the face recognition condition and perform face recognition processing on the target user to determine the identity information of the target user.
Illustratively, take the first angle as the front face angle and the second angle as the left face angle, and suppose there is a reference facial feature of identifiable user X at the front face angle whose similarity to the first angle facial features of the target user is the highest. The server may obtain, through similarity evaluation, the score between that reference facial feature of user X and the first angle facial features, and take the obtained score as the first angle score. When the first angle score is lower than the score threshold and the second angle is the left face angle, the server may obtain the reference facial feature of user X at the left face angle, evaluate its similarity to the second angle facial features of the target user, and take the obtained score as the second angle score, so as to perform weighted fusion of the first angle score and the second angle score to obtain the comprehensive score.
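The second-angle step for candidate user X can be sketched as below: the second-angle feature is scored against the same candidate's reference at that angle, then fused with the first-angle score. The similarity function, the 0.6/0.4 weights, and the worked numbers are illustrative assumptions.

```python
def second_angle_score(library: dict, candidate: str, second_angle: str,
                       second_feature, similarity) -> float:
    """Score the target's second-angle feature against the second-angle
    reference of the SAME candidate matched at the first angle, on a
    0-100 scale."""
    reference = library[candidate][second_angle]
    return similarity(second_feature, reference) * 100.0


# Worked numbers (weights assumed): a front score of 75 fused with a
# left score of 100 gives 0.6 * 75 + 0.4 * 100 = 85.
library = {"X": {"left": [0.0, 1.0]}}
dot = lambda a, b: sum(x * y for x, y in zip(a, b))  # toy similarity
left_score = second_angle_score(library, "X", "left", [0.0, 1.0], dot)
comprehensive = 0.6 * 75 + 0.4 * left_score
```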
In the face recognition processing method above, the acquisition angle of the collected first facial information is taken as a first angle, and the first angle facial features of the target user are extracted from the first facial information. Similarity evaluation is performed on the first angle facial features and a plurality of reference facial features in the facial feature library to obtain a first angle score of the target user. When the first angle score is lower than the score threshold, second facial information of the target user is collected from a second angle different from the first angle to obtain second angle facial features. The first angle score and a second angle score obtained based on the second angle facial features are then weighted and fused to obtain a comprehensive score of the target user; when the comprehensive score is not lower than the score threshold, the target user is judged to meet the face recognition condition, and face recognition processing is performed on the target user. In the whole process, when the facial information at the first angle does not meet the face recognition condition, facial information of the target user at other angles is collected, which avoids recognition failure caused by the target user's pose deviating at a certain angle; comprehensively considering a plurality of angles reduces the influence of pose deviation at any single angle and thus improves the success rate of face recognition.
In one embodiment, before acquiring the first facial information collected for the target user, the method includes:
determining a plurality of identifiable users that satisfy the facial recognition condition;
for each identifiable user, acquiring facial features of the identifiable user corresponding to each of a plurality of acquisition angles;
for each acquisition angle, taking the facial feature corresponding to the acquisition angle as the reference facial feature of that acquisition angle, so as to obtain the reference facial features of the identifiable user at each of the plurality of acquisition angles;
a facial feature library is constructed based on a respective plurality of reference facial features for each identifiable user.
Each reference facial feature has a corresponding acquisition angle, and the facial feature library comprises the facial feature libraries of the plurality of identifiable users.
Alternatively, the server may determine in advance identity information of a plurality of identifiable users satisfying the face recognition condition, and after obtaining authorization of each identifiable user, the server may collect face information of the identifiable user from a plurality of collection angles for each identifiable user to obtain facial features of the identifiable user corresponding to each of the plurality of collection angles. Further, for each collection angle, the server may use the facial feature corresponding to the collection angle as a reference facial feature of the collection angle to obtain reference facial features of the identifiable user at each of the collection angles, and construct a facial feature library of each identifiable user based on each of the plurality of reference facial features of each identifiable user to obtain a final facial feature library.
In this embodiment, a face feature library is constructed by a plurality of reference face features of each of a plurality of identifiable users to perform face recognition on a target user based on the reference face features stored in the face feature library.
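The enrollment flow above can be sketched as follows; the data layout (a flat dictionary keyed by user and acquisition angle) is an illustrative assumption, not a structure specified by the patent.

```python
def build_feature_library(enrollments):
    """enrollments: {user_id: {angle: feature}}, captured for each
    identifiable user, at each acquisition angle, after authorization.
    Each captured feature becomes the reference facial feature for its
    (user, angle) pair in the final library."""
    library = {}
    for user_id, per_angle in enrollments.items():
        for angle, feature in per_angle.items():
            library[(user_id, angle)] = feature
    return library
```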
In one embodiment, a plurality of feature points are included in each extracted facial feature; performing similarity evaluation on the first angle facial features and a plurality of reference facial features in a facial feature library, and obtaining a first angle score of the target user comprises:
taking the reference facial features with the same acquisition angle as the first angle in the facial feature library as reference features to be compared;
for each reference feature to be compared, performing similarity evaluation between the plurality of feature points in the first angle facial feature and the plurality of feature points in the reference feature to be compared, to obtain a feature point similarity score between the first angle facial feature and the reference feature to be compared;
and taking the highest feature point similarity score in the feature point similarity scores as a first angle score of the target user.
The reference features to be compared comprise the respective reference facial features of the plurality of identifiable users at the same acquisition angle as the first angle; that is, the reference features to be compared include a plurality of reference facial features.
Alternatively, for a facial feature library storing a plurality of identifiable user's respective reference facial features, the server may use, as the reference features to be compared, reference facial features having the same collection angle as the first angle in the facial feature library. For each reference feature to be compared, the server may evaluate the similarity between a plurality of feature points in the first angular facial feature and a plurality of feature points in the reference feature to be compared, where the higher the similarity between the feature points in the first angular facial feature and the feature points in the reference feature to be compared, the higher the feature point similarity score between the first angular facial feature and the reference feature to be compared. After obtaining the feature point similarity scores between the first angle facial features and the reference features to be compared, the server may use the highest feature point similarity score among the feature point similarity scores as the first angle score of the target user.
Illustratively, taking the first angle as the front face angle, when a reference facial feature of identifiable user X exists at the front face angle and the similarity score between that reference facial feature and the first angle facial feature of the target user is the highest, the server may preliminarily determine that the target user is most likely identifiable user X, that is, that the face recognition condition is most likely satisfied between the target user and identifiable user X. The server may then take the feature point similarity score between the target user and identifiable user X as the first angle score of the target user.
Optionally, when the first angle score of the target user is not lower than the score threshold, the server may determine that the target user satisfies the face recognition condition, that is, that the identity information of the identifiable user matches the identity information of the target user. Further, the server may authenticate the identity of the target user based on the identity information of the identifiable user.
Optionally, still taking the case where the reference facial feature of identifiable user X at the first angle has the highest similarity score with the first angle facial feature of the target user, when the first angle score of the target user is lower than the score threshold, the server may preliminarily determine that the face recognition condition is not satisfied between identifiable user X and the target user at the first angle. To consider multiple angles comprehensively, the server may collect second facial information of the target user from a second angle different from the first angle and obtain the second angle facial feature of the target user. Then, the similarity between the reference facial feature of identifiable user X at the second angle and the second angle facial feature of the target user is evaluated to obtain the second angle score of the target user. Finally, the first angle score and the second angle score are weighted and fused to obtain the comprehensive score of the target user, and whether the face recognition condition is satisfied between the target user and identifiable user X is judged based on the comprehensive score.
In this embodiment, by acquiring the first angle score of the target user and comparing the first angle score with the score threshold, it can be determined whether the face recognition condition is satisfied between the identifiable user corresponding to the first angle score and the target user at the same acquisition angle as the first angle.
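A hedged sketch of this matching step: here each facial feature is simplified to a single numeric vector, and cosine similarity stands in for the patent's feature-point-by-feature-point evaluation; both simplifications are assumptions for illustration only.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def first_angle_score(library, probe_feature, first_angle):
    """Restrict the library to reference features captured at the same
    angle as the probe, then keep the best-matching user and its score."""
    candidates = {
        uid: feat
        for (uid, angle), feat in library.items()
        if angle == first_angle
    }
    best_user = max(candidates, key=lambda uid: cosine(candidates[uid], probe_feature))
    return best_user, cosine(candidates[best_user], probe_feature)
```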
In one embodiment, when the composite score is not lower than the score threshold, determining that the target user satisfies the face recognition condition, after performing the face recognition processing on the target user, includes:
the first angle facial feature and the second angle facial feature of the target user are respectively used as newly added facial features of the target user;
for each newly added facial feature, taking the reference facial feature in the facial feature library of the target user whose acquisition angle is the same as that of the newly added facial feature as the reference facial feature to be supplemented for the newly added facial feature;
screening out difference feature points which have differences with feature points of corresponding reference facial features to be supplemented from a plurality of feature points of the newly added facial features;
and taking the difference feature points corresponding to the newly added facial features as auxiliary facial features of the corresponding acquisition angles according to the acquisition angles corresponding to the newly added facial features, and adding the auxiliary facial features to a facial feature library of the target user.
Optionally, the server may respectively use the first angle facial feature and the second angle facial feature of the target user as new facial features of the target user, for each new facial feature, use a reference facial feature corresponding to the same collection angle as the new facial feature in a facial feature library of the target user as a reference facial feature to be supplemented of the new facial feature, and compare a plurality of feature points in the new facial feature with a plurality of feature points in the corresponding reference facial feature to be supplemented, so as to screen out a difference feature point having a difference with a feature point of the corresponding reference facial feature to be supplemented from the plurality of feature points of the new facial feature. And then, taking the difference feature points corresponding to the newly added facial features as auxiliary facial features of the corresponding acquisition angles according to the acquisition angles corresponding to the newly added facial features, and adding the auxiliary facial features to a facial feature library of the target user so as to supplement the facial feature library of the target user.
For example, suppose the face recognition condition is satisfied between identifiable user X and the target user, so that the facial feature library of identifiable user X serves as the facial feature library of the target user, which stores the reference facial features of the target user corresponding to the front face, left face and right face angles. Taking the first angle as the front face angle, the server may take the first angle facial feature of the target user as the newly added facial feature of the target user at the front face angle, and take the reference facial feature at the front face angle in the facial feature library of the target user as the reference facial feature to be supplemented. Then, for the front face angle, the server may compare the plurality of feature points in the newly added facial feature with the plurality of feature points in the corresponding reference facial feature to be supplemented, so as to obtain a plurality of difference feature points from the newly added facial feature and thereby the auxiliary facial feature corresponding to the front face angle. In a similar manner, the server may obtain the auxiliary facial features corresponding to each of the first angle and the second angle.
Only feature points with differences from corresponding reference facial features to be supplemented in the newly added facial features are recorded in the auxiliary facial features, and feature points without differences are not recorded, so that the data volume in a facial feature library can be reduced.
Optionally, the server does not need to process the second angle when the first angle score is not lower than the score threshold, and therefore only the auxiliary facial features of the first angle are added to the facial feature library of the target user when the first angle score is not lower than the score threshold.
In this embodiment, after the target user satisfies the face recognition condition, the auxiliary facial features of the target user at the first angle and the second angle are added to the facial feature library of the target user, so that the purpose of data supplementation to the facial feature library of the target user can be achieved.
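The screening of difference feature points might be sketched as follows, assuming each facial feature is a mapping from feature-point names to numeric values and that a "difference" means the values diverge beyond a tolerance; both assumptions are illustrative, not specified by the patent.

```python
def difference_feature_points(new_feature, reference_feature, tol=1e-6):
    """Keep only the feature points of the newly added facial feature that
    differ from the corresponding points of the reference facial feature;
    matching points are dropped, shrinking the stored auxiliary feature."""
    return {
        name: value
        for name, value in new_feature.items()
        if name not in reference_feature or abs(value - reference_feature[name]) > tol
    }
```

Storing only the differing points is what keeps the data volume of the facial feature library low, as the embodiment notes.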
In one embodiment, the facial feature library of the target user stores auxiliary facial features of the target user and reference facial features of the target user; each auxiliary facial feature comprises at least one difference feature point; after adding the auxiliary facial features to the target user's facial feature library, it includes:
for each acquisition angle, acquiring the total data quantity of auxiliary facial features corresponding to the acquisition angle of the target user from a facial feature library of the target user;
Taking the acquisition angle of which the total data quantity exceeds the data quantity threshold value as an acquisition angle to be adjusted, and comparing difference feature points in a plurality of auxiliary facial features corresponding to the acquisition angle to be adjusted;
for each auxiliary facial feature corresponding to the acquisition angle to be adjusted, determining, as its comparison result, the total number of difference feature points in the other auxiliary facial features that differ from its own difference feature points;
and deleting the auxiliary facial features of which the total number does not meet the number condition, and updating the reference facial features of the target user at the acquisition angle to be adjusted based on the auxiliary facial features of which the total number is the least.
The data amount threshold may be configured according to the actual application scenario. An auxiliary facial feature fails to satisfy the number condition when its corresponding total number exceeds a total number threshold, where the total number threshold may likewise be configured according to the actual application scenario.
Optionally, after adding the auxiliary facial features of the target user to the facial feature library of the target user to supplement its data, for each acquisition angle, the server may obtain from the facial feature library the total data amount of the auxiliary facial features of the target user corresponding to that acquisition angle, and take an acquisition angle whose total data amount exceeds the data amount threshold as an acquisition angle to be adjusted. The server may then compare the difference feature points across the plurality of auxiliary facial features corresponding to the acquisition angle to be adjusted and, for each such auxiliary facial feature, count the difference feature points in the other auxiliary facial features that differ from its own, taking this count as the total number for that auxiliary facial feature. Finally, the server deletes the auxiliary facial features whose total number does not satisfy the number condition, and updates the reference facial feature of the target user at the acquisition angle to be adjusted based on the auxiliary facial feature with the smallest total number.
For example, consider the facial feature library of a target user who has satisfied the face recognition condition 4 times, where the total data amount of the auxiliary facial features corresponding to the front face angle exceeds the data amount threshold (so the front face angle is the acquisition angle to be adjusted). Assume that after the first successful recognition, difference feature points A1, A2 and A3 were recorded in auxiliary facial feature A added for the front face angle; after the second, difference feature points B1 and B2 were recorded in auxiliary facial feature B; after the third, difference feature point C1 was recorded in auxiliary facial feature C; and after the fourth, difference feature points D1 and D2 were recorded in auxiliary facial feature D. For auxiliary facial feature A, the server may compare difference feature points and screen out, from B1, B2, C1, D1 and D2, those that differ from A1, A2 and A3, to obtain the total number a corresponding to auxiliary facial feature A. For auxiliary facial feature B, the server may screen out, from A1, A2, A3, C1, D1 and D2, the difference feature points that differ from B1 and B2, to obtain the total number b corresponding to auxiliary facial feature B. In a similar manner, the server may obtain the total number c for auxiliary facial feature C and the total number d for auxiliary facial feature D.
Further, taking the total number threshold as O, with a > O > b > d, the server may determine that auxiliary facial feature A corresponding to the first recognition does not satisfy the number condition; that is, the difference feature points recorded in auxiliary facial feature A are not accurate enough, differing substantially from the difference feature points recorded in the other auxiliary facial features. Accordingly, the server may delete auxiliary facial feature A to reduce the amount of auxiliary facial feature data stored for the front face angle in the facial feature library of the target user. The server may then update the reference facial feature of the target user at the front face angle based on auxiliary facial feature D corresponding to the fourth recognition, obtaining an updated reference facial feature of the target user at the front face angle.
In this embodiment, considering that the facial features of the target user are affected by factors such as age and change, by setting a data amount threshold, the reference facial features corresponding to each acquisition angle in the facial feature library of the target user can be updated, so that the accuracy and success rate of facial recognition are improved.
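A minimal sketch of this pruning-and-update step, under the assumption that each auxiliary facial feature is a set of difference-feature-point identifiers and that an auxiliary feature's "total number" counts the points recorded in the other auxiliary features that it does not share; both representations are illustrative:

```python
def prune_and_pick(aux_features, total_threshold):
    """aux_features: {feature_id: set of difference-feature-point names}.
    Delete auxiliary features whose total number exceeds the threshold,
    and return the surviving feature with the smallest total number,
    which is used to update the reference facial feature."""
    totals = {
        fid: sum(
            len(other - points)                 # points elsewhere not shared with fid
            for other_fid, other in aux_features.items()
            if other_fid != fid
        )
        for fid, points in aux_features.items()
    }
    kept = {fid: pts for fid, pts in aux_features.items() if totals[fid] <= total_threshold}
    best = min(kept, key=lambda fid: totals[fid])
    return kept, best
```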
In one embodiment, when the composite score is not lower than the score threshold, it is determined that the target user meets the face recognition condition, and after performing the face recognition processing on the target user, the method further includes:
In a preset effective time, detecting dynamic behaviors of a target user;
and when the target user meets the dynamic behavior detection condition within the effective time, carrying out identity authentication on the target user based on the identity information obtained by carrying out face recognition processing on the target user.
The effective time may be configured according to the actual application scenario, and may, for example, be set to 60 to 150 seconds. Dynamic behavior detection includes, but is not limited to, liveness detection and voiceprint detection.
Optionally, after the face recognition processing is performed on the target user to obtain the identity information of the target user, the server may randomly generate a dynamic behavior instruction, and perform dynamic behavior detection on the target user based on the randomly generated dynamic behavior instruction in a preset effective time. When the dynamic behavior of the target user in the effective time is consistent with the dynamic behavior instruction, judging that the target user meets the dynamic behavior detection condition, and carrying out identity authentication on the target user based on the identity information obtained by carrying out face recognition processing on the target user.
Illustratively, dynamic behavior instructions in liveness detection include, but are not limited to, "nod", "blink" and "shake the head", and these actions may be randomly combined. Dynamic behavior instructions in voiceprint detection include, but are not limited to, "recite the randomly generated numbers", so as to prevent others from using pre-recorded audio.
In this embodiment, through dynamic behavior detection, it can be ensured that the target user is himself performing face recognition, and other people are prevented from performing face recognition by using multi-angle pictures/prerecorded videos of the target user, so that the security of identity authentication can be further improved.
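The randomized challenge flow could be sketched as follows; the specific action list, the two-action combination, and the 90-second window are illustrative assumptions (the window falls inside the 60-150 second range mentioned above).

```python
import random

LIVENESS_ACTIONS = ["nod", "blink", "shake head"]

def issue_challenge(rng=random):
    """Randomly combine liveness actions so a pre-recorded video of the
    target user cannot match the instruction."""
    return rng.sample(LIVENESS_ACTIONS, k=2)

def verify_challenge(challenge, observed_actions, issued_at, now, valid_seconds=90):
    """The target passes only if the observed actions match the issued
    instruction, in order, within the effective time window."""
    within_time = (now - issued_at) <= valid_seconds
    return within_time and observed_actions == challenge
```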
In another embodiment, as shown in fig. 3, there is provided a flowchart of another face recognition processing method, mainly comprising the steps of:
step 302, determining a plurality of identifiable users meeting the face recognition condition, and, for each identifiable user, acquiring the facial features of the identifiable user corresponding to each of a plurality of acquisition angles;
step 304, regarding each acquisition angle, taking the facial feature corresponding to the acquisition angle as the reference facial feature of the acquisition angle, obtaining the reference facial feature of each identifiable user at a plurality of acquisition angles, and constructing a facial feature library based on each identifiable user's respective plurality of reference facial features;
step 306, acquiring first facial information acquired by a target user, taking an acquisition angle of the first facial information as a first angle, extracting first angle facial features of the target user from the first facial information, and taking reference facial features with the same acquisition angle as the first angle in a facial feature library as reference features to be compared;
Step 308, for each reference feature to be compared, performing similarity evaluation on a plurality of feature points in the first angle facial feature and a plurality of feature points in the reference feature to be compared to obtain feature point similarity scores between the first angle facial feature and the reference feature to be compared;
step 310, taking the highest feature point similarity score among the feature point similarity scores as a first angle score of the target user, and obtaining a second angle facial feature of the target user based on second facial information acquired from a second angle when the first angle score is lower than a score threshold, wherein the second angle is different from the first angle;
step 312, the first angle score and the second angle score obtained based on the second angle facial features are subjected to weighted fusion to obtain a comprehensive score of the target user, and when the comprehensive score is not lower than a score threshold, the target user is judged to meet the facial recognition condition, and facial recognition processing is carried out on the target user;
step 314, taking the first angle facial feature and the second angle facial feature of the target user respectively as newly added facial features of the target user, and, for each newly added facial feature, taking the reference facial feature in the facial feature library of the target user with the same acquisition angle as the newly added facial feature as the reference facial feature to be supplemented for the newly added facial feature;
Step 316, screening out difference feature points having differences between feature points of the newly added facial features and corresponding reference facial features to be supplemented from a plurality of feature points of the newly added facial features, taking the difference feature points corresponding to the newly added facial features as auxiliary facial features of corresponding acquisition angles according to acquisition angles corresponding to the newly added facial features, and adding the auxiliary facial features to a facial feature library of a target user;
step 318, for each acquisition angle, acquiring the total data amount of the auxiliary facial features corresponding to the acquisition angle of the target user from the facial feature library of the target user, taking the acquisition angle of which the total data amount exceeds the data amount threshold as the acquisition angle to be adjusted, and comparing the difference feature points in the plurality of auxiliary facial features corresponding to the acquisition angle to be adjusted;
step 320, for each auxiliary facial feature corresponding to the acquisition angle to be adjusted, determining as its comparison result the total number of difference feature points in the other auxiliary facial features that differ from its own, deleting the auxiliary facial features whose total number does not meet the number condition, and updating the reference facial feature of the target user at the acquisition angle to be adjusted based on the auxiliary facial feature with the smallest total number;
And step 322, performing dynamic behavior detection on the target user within a preset effective time, and performing identity authentication on the target user based on the identity information obtained by performing facial recognition processing on the target user when the target user meets the dynamic behavior detection condition within the effective time.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a facial recognition processing device for realizing the above related facial recognition processing method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiment of one or more face recognition processing devices provided below may refer to the limitation of the face recognition processing method hereinabove, and will not be repeated here.
In one embodiment, as shown in fig. 4, there is provided a face recognition processing apparatus including: a facial feature extraction module 402, a score acquisition module 404, a second angle acquisition module 406, and a composite score acquisition module 408, wherein:
the facial feature extraction module 402 is configured to obtain first facial information acquired by a target user, take an acquisition angle of the first facial information as a first angle, and extract a first angle facial feature of the target user from the first facial information;
the score obtaining module 404 is configured to evaluate similarity between the first angular facial feature and a plurality of reference facial features in the facial feature library, so as to obtain a first angular score of the target user;
A second angle obtaining module 406, configured to obtain, when the first angle score is lower than the score threshold, a second angle facial feature of the target user based on second facial information collected from the second angle for the target user, where the second angle is different from the first angle;
and the comprehensive score obtaining module 408 is configured to perform weighted fusion on the first angle score and the second angle score obtained based on the second angle facial feature, obtain a comprehensive score of the target user, and determine that the target user meets the facial recognition condition when the comprehensive score is not lower than the score threshold, and perform facial recognition processing on the target user.
In one embodiment, the facial recognition processing device further includes a facial feature library construction module, where the facial feature library construction module is configured to determine a plurality of identifiable users satisfying facial recognition conditions, acquire, for each identifiable user, facial features corresponding to the identifiable user at a plurality of collection angles, and for each collection angle, take the facial features corresponding to the collection angle as reference facial features of the collection angles, obtain reference facial features of the identifiable user at the plurality of collection angles, and then construct a facial feature library based on the plurality of reference facial features of each identifiable user.
In one embodiment, a plurality of feature points are included in each extracted facial feature. The scoring module is further configured to use the reference facial features with the same collection angle as the first angle in the facial feature library as reference features to be compared, perform similarity evaluation on a plurality of feature points in the first angle facial features and a plurality of feature points in the reference features to be compared for each reference feature to be compared, obtain feature point similarity scores between the first angle facial features and the reference features to be compared, and then use the highest feature point similarity score in the feature point similarity scores as the first angle score of the target user.
In one embodiment, the facial recognition processing device further includes an auxiliary facial feature adding module, where the auxiliary facial feature adding module is configured to use the first angular facial feature and the second angular facial feature of the target user as newly added facial features of the target user, and for each newly added facial feature, use, in a facial feature library of the target user, a reference facial feature corresponding to the newly added facial feature and having the same collection angle as a reference facial feature to be supplemented of the newly added facial feature, and then screen out a difference feature point having a difference between feature points of the corresponding reference facial feature to be supplemented from a plurality of feature points of the newly added facial feature, and finally, according to the collection angle corresponding to the newly added facial feature, use the difference feature point corresponding to the newly added facial feature as an auxiliary facial feature of the corresponding collection angle, and add the auxiliary facial feature to the facial feature library of the target user.
In one embodiment, the facial feature library of the target user stores auxiliary facial features of the target user, each of which includes at least one difference feature point, and reference facial features of the target user. The face recognition processing device further comprises a reference facial feature updating module, wherein the reference facial feature updating module is used for acquiring the total data quantity of the auxiliary facial features corresponding to the acquisition angles of the target user from a facial feature library of the target user according to each acquisition angle, comparing the acquisition angles with the total data quantity exceeding a data quantity threshold value as acquisition angles to be adjusted, comparing the difference feature points in the plurality of auxiliary facial features corresponding to the acquisition angles to be adjusted, determining that the comparison result is the total number of different difference feature points for each auxiliary facial feature corresponding to the acquisition angles to be adjusted, finally deleting the auxiliary facial features with the total number not meeting the number condition, and updating the reference facial features of the target user in the acquisition angles to be adjusted based on the auxiliary facial features with the minimum total number.
In one embodiment, the facial recognition processing device further includes an identity authentication module, where the identity authentication module is configured to perform dynamic behavior detection on the target user within a preset effective time, and perform identity authentication on the target user based on identity information obtained by performing facial recognition processing on the target user when the target user satisfies a dynamic behavior detection condition within the effective time.
Each of the modules in the above facial recognition processing device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 5. The computer device includes a processor, a memory, an input/output (I/O) interface, and a communication interface. The processor, the memory, and the input/output interface are connected by a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store facial recognition processing data. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a facial recognition processing method.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of a portion of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory and a processor. The memory stores a computer program, and the processor implements the steps of the method embodiments described above when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product is provided, including a computer program that, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that the user information (including, but not limited to, user device information and user personal information) and data (including, but not limited to, data used for analysis, stored data, and displayed data) referred to in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will appreciate that all or part of the processes of the methods described above may be implemented by a computer program stored on a non-volatile computer-readable storage medium, and that the computer program, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided in the present application may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided in the present application may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided in the present application may be, but are not limited to, general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, and data processing logic devices based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments represent only a few implementations of the present application, and their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the patent application. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A face recognition processing method, the method comprising:
acquiring first facial information collected from a target user, taking the acquisition angle of the first facial information as a first angle, and extracting first angle facial features of the target user from the first facial information;
performing similarity evaluation on the first angle facial features and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user;
when the first angle score is lower than a score threshold, obtaining a second angle facial feature of the target user based on second facial information collected from the target user at a second angle, wherein the second angle is different from the first angle;
and performing weighted fusion on the first angle score and a second angle score obtained based on the second angle facial feature to obtain a comprehensive score of the target user, and when the comprehensive score is not lower than the score threshold, determining that the target user satisfies a facial recognition condition and performing facial recognition processing on the target user.
2. The method of claim 1, wherein before the acquiring the first facial information collected from the target user, the method comprises:
determining a plurality of identifiable users that satisfy the facial recognition conditions;
for each identifiable user, acquiring facial features of the identifiable user corresponding to each of a plurality of acquisition angles;
for each acquisition angle, taking the facial feature corresponding to the acquisition angle as the reference facial feature of the acquisition angle to obtain the reference facial feature of the identifiable user at each of a plurality of acquisition angles;
And constructing a facial feature library based on the respective plurality of reference facial features of each identifiable user.
3. The method of claim 1, wherein each extracted facial feature includes a plurality of feature points;
the step of evaluating the similarity between the first angle facial feature and the plurality of reference facial features in the facial feature library to obtain a first angle score of the target user includes:
taking the reference facial features with the same acquisition angle as the first angle in the facial feature library as reference features to be compared;
for each reference feature to be compared, performing similarity evaluation on the plurality of feature points in the first angle facial feature and the plurality of feature points in the reference feature to be compared, to obtain a feature point similarity score between the first angle facial feature and the reference feature to be compared;
and taking the highest feature point similarity score in the feature point similarity scores as a first angle score of the target user.
4. The method according to claim 1, wherein after the determining that the target user satisfies the facial recognition condition when the comprehensive score is not lower than the score threshold and the performing facial recognition processing on the target user, the method comprises:
using the first angle facial feature and the second angle facial feature of the target user respectively as newly added facial features of the target user;
for each newly added facial feature, taking a reference facial feature which corresponds to the newly added facial feature and has the same acquisition angle as the newly added facial feature in a facial feature library of the target user as a reference facial feature to be supplemented of the newly added facial feature;
screening out difference feature points which have differences with feature points of corresponding reference facial features to be supplemented from a plurality of feature points of the newly added facial features;
and taking the difference feature points corresponding to the newly added facial features as auxiliary facial features of corresponding acquisition angles according to the acquisition angles corresponding to the newly added facial features, and adding the auxiliary facial features to a facial feature library of the target user.
5. The method according to claim 4, wherein the auxiliary facial features of the target user and the reference facial features of the target user are stored in the facial feature library of the target user; each auxiliary facial feature comprises at least one difference feature point;
after the adding of the auxiliary facial feature to the facial feature library of the target user, the method comprises:
for each acquisition angle, acquiring, from the facial feature library of the target user, the total data quantity of the auxiliary facial features corresponding to the acquisition angle;
taking an acquisition angle whose total data quantity exceeds a data quantity threshold as an acquisition angle to be adjusted, and comparing the difference feature points in the plurality of auxiliary facial features corresponding to the acquisition angle to be adjusted;
for each auxiliary facial feature corresponding to the acquisition angle to be adjusted, determining, as the comparison result, the total number of its difference feature points that differ from those of the other auxiliary facial features;
and deleting the auxiliary facial features of which the total number does not meet the number condition, and updating the reference facial features of the target user at the acquisition angle to be adjusted based on the auxiliary facial features of which the total number is the least.
6. The method according to claim 1, wherein after the determining that the target user satisfies the facial recognition condition when the comprehensive score is not lower than the score threshold and the performing facial recognition processing on the target user, the method further comprises:
detecting dynamic behaviors of the target user within a preset effective time;
and when the target user satisfies the dynamic behavior detection condition within the effective time, performing identity authentication on the target user based on the identity information obtained by performing facial recognition processing on the target user.
7. A facial recognition processing apparatus, the apparatus comprising:
the facial feature extraction module is used for acquiring first facial information acquired by a target user, taking the acquisition angle of the first facial information as a first angle, and extracting first angle facial features of the target user from the first facial information;
the scoring obtaining module is used for evaluating the similarity between the first angle facial features and a plurality of reference facial features in a facial feature library to obtain a first angle score of the target user;
a second angle obtaining module, configured to obtain a second angle facial feature of the target user based on second facial information collected from the target user at a second angle when the first angle score is lower than a score threshold, where the second angle is different from the first angle;
and the comprehensive score obtaining module is used for carrying out weighted fusion on the first angle score and a second angle score obtained based on the second angle facial features to obtain a comprehensive score of the target user, and when the comprehensive score is not lower than the score threshold, judging that the target user meets the facial recognition condition, and carrying out facial recognition processing on the target user.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when executing the computer program.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
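The two-stage scoring flow of claim 1 can be sketched as follows (an illustrative sketch only; the threshold, the fusion weights, and the callback supplying the second angle score are hypothetical, and the patent does not fix their values):

```python
def facial_recognition_score(first_score, second_score_fn,
                             threshold=0.8, w1=0.5, w2=0.5):
    """Accept on the first-angle score alone if it meets the threshold;
    otherwise capture a second angle and accept on the weighted fusion."""
    if first_score >= threshold:
        return True, first_score  # first angle alone satisfies the condition
    # First angle insufficient: obtain a score from a second, different angle.
    second_score = second_score_fn()
    # Weighted fusion of the two angle scores into a comprehensive score.
    comprehensive = w1 * first_score + w2 * second_score
    return comprehensive >= threshold, comprehensive
```

Note that the fusion lets a moderately low first-angle score still pass when the second-angle view matches well, which is the point of the multi-angle scheme.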
CN202310001648.XA 2023-01-03 2023-01-03 Face recognition processing method, device, computer equipment and storage medium Pending CN116246317A (en)

Publications (1)

Publication Number Publication Date
CN116246317A true CN116246317A (en) 2023-06-09



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination