CN111242105A - User identification method, device and equipment - Google Patents


Publication number
CN111242105A
Authority
CN
China
Prior art keywords
user
target
face feature
maximum value
determining
Prior art date
Legal status
Pending
Application number
CN202010329567.9A
Other languages
Chinese (zh)
Inventor
赵宏伟
刘贺
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010329567.9A
Publication of CN111242105A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Abstract

Embodiments of this specification disclose a user identification method, apparatus, and device. The scheme includes: obtaining a face feature vector of a user to be identified at a target application, the face feature vector being obtained by performing feature extraction on a face image of the user to be identified collected by a target device; determining, from the authenticated users of the target application, a first target user whose reserved face feature vector has a similarity to the face feature vector greater than a first threshold, and determining a historical user of the target device as a second target user; and identifying the user to be identified according to the first target user and the second target user.

Description

User identification method, device and equipment
Technical Field
The present application relates to the field of face recognition technologies, and in particular, to a user recognition method, apparatus, and device.
Background
With the development of computer technology and optical imaging technology, user identification based on face recognition is becoming increasingly popular. In practice, some users look similar, so a user to be identified may be mistakenly identified as another user with a similar appearance, harming users' rights and interests.
In summary, providing a more accurate user identification method has become an urgent technical problem.
Disclosure of Invention
The embodiments of this specification provide a user identification method, apparatus, and device to improve the accuracy of user identification.
In order to solve the above technical problem, the embodiments of the present specification are implemented as follows:
an embodiment of the present specification provides a user identification method, including:
obtaining a face feature vector of a user to be identified at a target application, the face feature vector being vector data obtained by performing feature extraction on a face image of the user to be identified collected by a target device;
determining a first target user from the authenticated users of the target application, where a first similarity between the reserved face feature vector of the first target user and the face feature vector is greater than a first threshold;
determining a second target user from the authenticated users of the target application, where the second target user is a historical user of the target device; and
identifying the user to be identified according to the first target user and the second target user.
An embodiment of this specification provides a user identification apparatus, including:
an acquisition module, configured to acquire a face feature vector of a user to be identified at a target application, the face feature vector being vector data obtained by performing feature extraction on a face image of the user to be identified collected by a target device;
a first determining module, configured to determine a first target user from the authenticated users of the target application, where a first similarity between the reserved face feature vector of the first target user and the face feature vector is greater than a first threshold;
a second determining module, configured to determine a second target user from the authenticated users of the target application, where the second target user is a historical user of the target device; and
an identification module, configured to identify the user to be identified according to the first target user and the second target user.
An embodiment of the present specification provides a user identification device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
obtain a face feature vector of a user to be identified at a target application, the face feature vector being vector data obtained by performing feature extraction on a face image of the user to be identified collected by a target device;
determine a first target user from the authenticated users of the target application, where a first similarity between the reserved face feature vector of the first target user and the face feature vector is greater than a first threshold;
determine a second target user from the authenticated users of the target application, where the second target user is a historical user of the target device; and
identify the user to be identified according to the first target user and the second target user.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
After the face feature vector of the user to be identified at the target application on the target device is obtained, a first target user, whose reserved face feature vector has a similarity to the face feature vector greater than a first threshold, is determined from the authenticated users of the target application, and the historical users of the target device are determined as second target users from those authenticated users. Identifying the user to be identified according to both the first target user, determined by the preset threshold, and the second target users, who are associated with the target device used by the user to be identified, enables personalized face comparison for the user to be identified, reduces the false recognition rate, and improves the accuracy of user identification.
Drawings
To illustrate the embodiments of the present specification or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic overall scheme flow diagram of a user identification method in an embodiment of the present specification;
fig. 2 is a schematic flowchart of a user identification method provided in an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a user identification apparatus corresponding to fig. 2 provided in an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a user identification device corresponding to fig. 2 provided in an embodiment of the present specification.
Detailed Description
To make the objects, technical solutions, and advantages of one or more embodiments of the present specification clearer, the technical solutions are described below completely and in detail with reference to specific embodiments of the present specification and the accompanying drawings. Clearly, the described embodiments are only some, not all, of the embodiments of the present specification. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of one or more embodiments of the present specification.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
In the prior art, face recognition technology has been widely applied in scenarios that require user identification, such as application login, risk verification, and face-scan payment. Typically, a threshold is preset, and when the similarity between the reserved face feature vector of an authenticated user of the target application and the face feature vector of the user to be identified is greater than that threshold, the user to be identified is determined to be that authenticated user.
Multiple users with similar appearances may exist at the target application. For example, when user A and user B are identical twins, their appearances will typically be similar, so when user A performs user identification, there is a possibility that user A is erroneously identified as user B, affecting the accuracy of user identification.
To overcome the defects of the prior art, this scheme provides the following embodiments:
Fig. 1 is a schematic flowchart of the overall scheme of a user identification method in an embodiment of the present specification. As shown in fig. 1, a user 101 to be identified may perform a user identification operation at a target application on a device 102. The device 102 may collect a face image 103 of the user 101 and extract a face feature vector of the user 101 from the face image 103. After the server 104 of the target application obtains this face feature vector, it determines, based on authenticated-user data pre-stored in a database 105, an authenticated user whose reserved face feature vector has a similarity to the vector greater than a first threshold as a first target user, and determines a historical user of the device 102 as a second target user. Personalized face comparison is then performed for the user to be identified based on the reserved face feature vectors of the first and second target users, which helps reduce the false recognition rate and improve identification accuracy.
Next, a user identification method provided in an embodiment of the specification will be specifically described with reference to the accompanying drawings:
fig. 2 is a flowchart illustrating a user identification method according to an embodiment of the present disclosure. From the viewpoint of a program, the execution subject of the flow may be a program loaded on the application client or the application server.
As shown in fig. 2, the process may include the following steps:
step 202: the method comprises the steps of obtaining a face feature vector of a user to be recognized at a target application, wherein the face feature vector is vector data obtained by performing feature extraction on a face image of the user to be recognized, which is acquired by target equipment.
In this specification, the target device is loaded with the target application. When the user to be identified performs a user identification operation based on face recognition at the target application on the target device, the target device may collect a face image of the user to be identified, so that the subsequent identification operation can be performed based on the face feature vector extracted from that image. The user identification operation may include various operations that involve identifying the user, such as login, payment, and unlocking.
In practical applications, the target device may send the collected face image of the user to be identified to the application server (i.e., the server of the target application), which then extracts the face feature vector from it. Alternatively, because the face image belongs to the private data of the user to be identified, the target device may itself extract the face feature vector from the face image and send only the extracted vector to the application server. This avoids transmitting the face image and thus protects its privacy and security.
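The on-device option described above can be sketched in Python as follows. The embedding model here is a stand-in (a real deployment would run a trained face-embedding network), and the function names and the 128-dimension vector size are illustrative assumptions, not details from the application:

```python
import numpy as np

def extract_face_feature(face_image: np.ndarray, embed_dim: int = 128) -> np.ndarray:
    """Stand-in for a trained face-embedding model: maps a face image to a
    fixed-length, L2-normalised feature vector. This stub derives the vector
    from a seeded RNG purely so the example is self-contained."""
    seed = int(face_image.sum()) % (2**32)
    vec = np.random.default_rng(seed).standard_normal(embed_dim)
    return vec / np.linalg.norm(vec)  # normalise so cosine similarity is a dot product

# Privacy-preserving option: only the vector leaves the device, not the image.
image = np.zeros((112, 112, 3))        # placeholder face image
feature = extract_face_feature(image)  # this is what gets sent to the server
```

Sending only the vector means the raw biometric image never crosses the network, which is the privacy rationale given above.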
Step 204: determining a first target user from the authenticated users of the target application, where a first similarity between the reserved face feature vector of the first target user and the face feature vector is greater than a first threshold.
In this embodiment of the specification, a registered account at the target application generally corresponds to an authenticated user, that is, a user who has completed the user authentication operation for that registered account. The server of the target application (i.e., the application server) may store the reserved face feature vector of each authenticated user, in order to identify whether the actual user of a registered account is that account's authenticated user. The reserved face feature vector of an authenticated user may be feature vector data extracted from a face image of the authenticated user himself or herself.
In this embodiment of the specification, a generally applicable first threshold may be preset. When the similarity between the reserved face feature vector of an authenticated user of the target application and the face feature vector of the user to be identified is greater than the first threshold, it can generally be considered highly likely that the user to be identified and that authenticated user (i.e., the first target user) are the same user.
In this embodiment of the specification, when the application server determines the first target user, it generally does not compare the user to be identified against every authenticated user of the target application. Instead, the first authenticated user found whose reserved face feature vector has a similarity to the face feature vector of the user to be identified greater than the first threshold is determined as the first target user. Consequently, when several authenticated users of the target application look similar to the user to be identified, the user may be erroneously identified as another authenticated user with a similar appearance, resulting in a high false recognition rate. The false recognition rate is a common index for evaluating face recognition accuracy: false recognition rate = (number of attempts that passed face recognition but matched the wrong identity) / (total number of attempts that passed face recognition).
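The false recognition rate formula above can be made concrete with a short sketch; the figures below are invented purely for illustration:

```python
def false_recognition_rate(passed: int, misidentified: int) -> float:
    """Among attempts that passed face recognition, the fraction that were
    matched to the wrong identity (misidentified / passed)."""
    return misidentified / passed if passed else 0.0

# Hypothetical figures: 10,000 recognitions passed, 3 matched the wrong user.
frr = false_recognition_rate(passed=10_000, misidentified=3)  # 0.0003
```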
Step 206: determining a second target user from the authenticated users of the target application, where the second target user is a historical user of the target device.
In the embodiments of this specification, users who have used the same device generally have some relationship. For example, family members may share a smartphone, and users in the same area typically share the public devices in that area. The appearances of family members, such as siblings or parents and children, are often similar, and users in the same area may also share traits such as brown eyes, black hair, or a high nose bridge. Therefore, after the first target user is determined using the generally applicable first threshold, the historical users of the target device used by the user to be identified can be taken as second target users, in order to further determine whether a user better matching the user to be identified exists. This improves the accuracy of the user identification result.
Step 208: identifying the user to be identified according to the first target user and the second target user.
In this embodiment of the specification, when identifying the user to be identified according to the first target user and the second target user, if a second target user looks even more similar to the user to be identified, i.e., the similarity between that second target user's reserved face feature vector and the user's face feature vector is higher than the first similarity, that second target user may be determined as the user to be identified, improving the accuracy of the identification result. Alternatively, when a second target user's similarity is close to the first similarity, it may be impossible to determine accurately whether the user to be identified is the second target user or the first target user, so face recognition for the user to be identified may fail. This reduces the risk of passing face recognition under the wrong identity, i.e., reduces the false recognition rate.
It should be understood that the order of some steps in the method described in one or more embodiments of the present disclosure may be interchanged according to actual needs, or some steps may be omitted or deleted.
According to the method in fig. 2, the user to be identified is identified according to the first target user determined based on the first threshold and the historical user (i.e., the second target user) of the target device used by the user to be identified, so that personalized face identification comparison of the user to be identified can be realized, the user misidentification rate can be reduced, and the accuracy of user identification can be improved.
Based on the method of fig. 2, the present specification also provides some specific embodiments of the method, which are described below.
In the embodiments of this specification, the first target user may be determined in various ways. For example, implementation one uses one-to-one face comparison to determine the first target user, while implementation two uses one-to-many face comparison.
Implementation one
When the user to be identified performs a user identification operation on a specified account at the target application, step 202, obtaining the face feature vector of the user to be identified at the target application, may specifically include:
acquiring a user identification request for the specified account of the target application, where the user identification request carries the face feature vector of the user to be identified.
Correspondingly, step 204: determining a first target user from the authenticated users of the target application may specifically include:
and acquiring the reserved human face feature vector of the authenticated user of the specified account.
And calculating the similarity between the reserved human face feature vector of the authenticated user of the specified account and the human face feature vector of the user to be identified.
And when the similarity is larger than a first threshold value, determining the authenticated user of the specified account as a first target user.
In this embodiment of the specification, when a user identification request for a specified account is acquired, if the similarity between the face feature vector of the user to be identified and the reserved face feature vector of the authenticated user of the specified account is greater than the first threshold, the user to be identified is likely the authenticated user of that account; the authenticated user of the specified account may therefore be determined as the first target user, and the subsequent identification process executed. If the similarity is less than or equal to the first threshold, the user to be identified is unlikely to be the authenticated user of the specified account, so a result indicating that face recognition of the user to be identified has failed can be generated directly and the identification process ended.
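Implementation one can be sketched as follows. Cosine similarity and the 0.8 threshold are illustrative assumptions (the application does not fix a similarity measure or a threshold value), as are the function and status names:

```python
import numpy as np

def one_to_one_identify(probe: np.ndarray, reserved: np.ndarray,
                        first_threshold: float = 0.8):
    """One-to-one comparison: the probe vector of the user to be identified
    vs the reserved vector of the specified account's authenticated user.
    Returns a status plus the computed similarity."""
    sim = float(np.dot(probe, reserved) /
                (np.linalg.norm(probe) * np.linalg.norm(reserved)))
    if sim > first_threshold:
        return "first_target_user", sim   # continue the identification flow
    return "recognition_failed", sim      # end the process immediately

probe = np.array([1.0, 0.0, 0.0])
status, sim = one_to_one_identify(probe, np.array([0.99, 0.1, 0.0]))
```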
Implementation two
When the user to be identified does not perform user identification operation on the specified account at the target application, step 204: determining a first target user from the authenticated users of the target application may specifically include:
and determining the similarity between the reserved human face feature vector of the target application authentication user and the human face feature vector of the user to be identified one by one.
And determining the authentication user corresponding to the similarity which is determined for the first time and is greater than a first threshold value as a first target user. Or, the authenticated user corresponding to the maximum value in the similarity greater than the first threshold is determined as a first target user.
In this embodiment of the specification, when a user performs user identification without providing specified account information at the target application, for example, paying by face scan at a self-service checkout device without entering account information, or using a face-scan login function without entering account information, the application server may adopt a one-to-many face comparison and take the first authenticated user found whose reserved face feature vector has a similarity to the face feature vector of the user to be identified greater than the first threshold as the first target user. This approach does not need to traverse all authenticated users of the target application and is therefore efficient. Alternatively, all authenticated users of the target application can be traversed, and the authenticated user corresponding to the maximum similarity above the first threshold, i.e., the authenticated user whose appearance is most similar to the user to be identified, determined as the first target user, which helps improve identification accuracy.
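Both one-to-many strategies described above (stop at the first hit vs. scan for the maximum) can be sketched together; the similarity measure, threshold value, and all names are illustrative assumptions:

```python
import numpy as np

def one_to_many_identify(probe, reserved_vectors, first_threshold=0.8,
                         strategy="first_hit"):
    """One-to-many comparison over authenticated users' reserved vectors.
    strategy="first_hit": return the first user above the threshold (fast).
    strategy="best":      scan everyone, return the maximum above the
                          threshold (more accurate). Returns (user_id, sim),
    or (None, -1.0) if no one exceeds the threshold."""
    best_user, best_sim = None, -1.0
    for user_id, vec in reserved_vectors.items():
        sim = float(np.dot(probe, vec) /
                    (np.linalg.norm(probe) * np.linalg.norm(vec)))
        if sim > first_threshold:
            if strategy == "first_hit":
                return user_id, sim
            if sim > best_sim:
                best_user, best_sim = user_id, sim
    return best_user, best_sim

probe = np.array([1.0, 0.0])
db = {"alice": np.array([0.9, 0.1]), "bob": np.array([0.95, 0.05])}
hit = one_to_many_identify(probe, db, strategy="best")
```

With `strategy="best"` the scan prefers the closest match even when an earlier user already cleared the threshold, which is the accuracy/efficiency trade-off noted above.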
In this embodiment of the specification, step 206: determining a second target user from the authenticated users of the target application may specifically include:
determining, from the authenticated users of the target application, the authenticated users who used the target application on the target device within a preset time period before the face feature vector was obtained.
In the embodiments of this specification, historical users of the target device may exist among the authenticated users of the target application. For example, when a user logs in to a registered account of the target application on the target device, or successfully performs a face-scan payment operation on the target device, the authenticated user of that registered account (or of the account debited during the face-scan payment) becomes a historical user of the target device.
In this embodiment of the specification, determining, from the authenticated users of the target application, those who used the target application on the target device within the preset time period before the face feature vector was obtained may specifically include:
and acquiring a request message which is sent by the target device within the preset time period and aims at the specified account of the target application.
And determining an authenticated user of the specified account of the target application based on the request message.
In this embodiment of the specification, when a user interacts with the target application on the target device, for example, logging in, paying, or browsing page information, the target device generally sends a request message to the application server (i.e., the server of the target application). The application server may determine the specified account of the target application corresponding to the request message, so as to respond to the user's operation based on the data and resources of that account.
In practical applications, the request message sent by the target device may include one or more of: an identifier of the target device, an identifier of the specified account, an identifier of the authenticated user corresponding to the specified account, an event type, a request time, and so on. The application server may therefore determine the historical users of the target device based on the request messages received from the target device within the preset time period, thereby obtaining the second target users. When many historical users of the target device are determined, they may be ranked by the request time of the corresponding request messages, from most recent to least recent, and the first N taken as second target users, reducing the computation of the subsequent face recognition step.
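The windowing, ranking, and top-N truncation described above can be sketched as follows; the request-log field names, the 24-hour window, and N = 5 are illustrative assumptions:

```python
from datetime import datetime, timedelta

def recent_device_users(request_log, device_id, now, window_hours=24, top_n=5):
    """Determine second target users: authenticated users who used the target
    application on this device within the preset time window, ordered from
    most recent request, deduplicated, and truncated to the top N."""
    hits = [r for r in request_log
            if r["device_id"] == device_id
            and now - r["request_time"] <= timedelta(hours=window_hours)]
    hits.sort(key=lambda r: r["request_time"], reverse=True)
    seen, users = set(), []
    for r in hits:
        if r["user_id"] not in seen:  # keep each user once, most recent first
            seen.add(r["user_id"])
            users.append(r["user_id"])
    return users[:top_n]

now = datetime(2020, 4, 24, 12, 0)
log = [
    {"device_id": "d1", "user_id": "u1", "request_time": now - timedelta(hours=1)},
    {"device_id": "d1", "user_id": "u2", "request_time": now - timedelta(hours=2)},
    {"device_id": "d1", "user_id": "u1", "request_time": now - timedelta(hours=30)},
    {"device_id": "d2", "user_id": "u3", "request_time": now - timedelta(hours=1)},
]
second_targets = recent_device_users(log, "d1", now)
```

Note how the 30-hour-old request and the request from a different device both fall outside the candidate set.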
In this embodiment of the specification, the determined historical-user information of the target device may be stored in a database, to be retrieved when the actual user of the target device later needs to be identified. In practice, whenever the application server receives a request message for the target application sent by the target device, it can update the stored historical-user information of that device, keeping the information current and thereby improving the timeliness of the determined second target users.
In this embodiment, before step 208, the method may further include: for each user among the second target users other than the first target user, calculating a second similarity between that user's reserved face feature vector and the face feature vector of the user to be identified.
Correspondingly, step 208: identifying the user to be identified according to the first target user and the second target user may specifically include: identifying the user to be identified according to the first similarity and all the second similarities.
In the embodiment of the present specification, the second target users may or may not include the first target user. For example, when the first target user did not use the target application on the target device within the preset time period, the second target users do not include the first target user; otherwise, they do.
The similarity between the reserved face feature vector of the first target user and the face feature vector of the user to be identified (i.e., the first similarity) has already been determined. Therefore, second similarities need to be calculated only for the second target users other than the first target user, and the authenticated users corresponding to the first similarity and the second similarities are all different.
In this embodiment, there are various ways to identify the user to be identified according to the first similarity and all the second similarities.
First implementation
Identifying the user to be identified according to the first similarity and all the second similarities may specifically include:
and determining the difference between the maximum value and the second maximum value of the first similarity and all the second similarities.
And judging whether the difference between the maximum value and the second maximum value is greater than a second threshold value or not to obtain a first judgment result.
And when the first judgment result shows that the difference between the maximum value and the second maximum value is greater than the second threshold value, determining the user to be identified as the authentication user corresponding to the maximum value.
And when the first judgment result shows that the difference between the maximum value and the second maximum value is less than or equal to the second threshold value, generating a result which shows that the face recognition of the user to be recognized fails.
In this implementation manner, if the difference between the maximum value and the second maximum value among the first similarity and all the second similarities is greater than the second threshold, it indicates that the first target user and the second target user contain only one authenticated user whose appearance is similar to that of the user to be identified, so the authenticated user corresponding to the maximum value can be determined to be the user to be identified.
If the difference between the maximum value and the second maximum value among the first similarity and all the second similarities is less than or equal to the second threshold, it indicates that the first target user and the second target user contain a plurality of authenticated users whose appearance is similar to that of the user to be identified, and it is impossible to distinguish which authenticated user is the same person as the user to be identified. The user to be identified can then fall back on another identification method (for example, entering a password or a fingerprint) or perform the face recognition operation again. This implementation can reduce the user misidentification rate.
For example, assume that user A and user B are identical twins, and that both user A and user B have used the target application on the target device within the preset time period. If the user to be identified is user A and the determined first target user is user B, the first similarity corresponding to the first target user is 95%. Under the existing scheme, the user to be identified, user A, would be misidentified as user B.
In the embodiment of the present specification, since the historical users of the target device (i.e., the second target user) also include user A, the second similarities include a similarity with a value of 95.1%. If all the other second similarities are less than 95% and the second threshold is 5%, then the difference between the maximum value 95.1% and the second maximum value 95% among the first similarity and all the second similarities is 0.1%, which is less than the second threshold 5%, so this face recognition of the user to be identified fails. Misidentifying user A as user B is thereby avoided, and the user misidentification rate is reduced.
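The first implementation — accept the top-ranked authenticated user only when its lead over the runner-up exceeds the second threshold — can be sketched as follows. This is a minimal illustration; the function and variable names are hypothetical.

```python
def identify_by_margin(similarities, second_threshold):
    """similarities maps each candidate authenticated user (the first target
    user plus the other users in the second target user) to its similarity
    with the user to be identified. Returns the matched user when the gap
    between the maximum and the second maximum exceeds the threshold, and
    None when recognition fails because the gap is too small."""
    ranked = sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)
    (best_user, best), (_, runner_up) = ranked[0], ranked[1]
    if best - runner_up > second_threshold:
        return best_user
    return None  # ambiguous: fall back to password, fingerprint, or retry
```

With the twin example above, the similarities {A: 95.1%, B: 95%} and a 5% second threshold give a 0.1% margin, so recognition fails rather than misidentifying user A as user B.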
In this embodiment of the present specification, an authenticated user may have a plurality of reserved face feature vectors. To improve user identification accuracy, before determining the user to be identified as the authenticated user corresponding to the maximum value, the method may further include:
And acquiring the plurality of reserved face feature vectors of the authenticated user corresponding to the maximum value.
And judging whether the similarities between each of the plurality of reserved face feature vectors and the face feature vector of the user to be identified are all greater than a third threshold value, to obtain a second judgment result.
The determining the user to be identified as the authenticated user corresponding to the maximum value specifically includes:
and when the second judgment result shows that the similarity between the plurality of reserved face feature vectors and the face feature vector of the user to be identified is greater than the third threshold, determining the user to be identified as the authenticated user corresponding to the maximum value.
Second implementation
The identifying the user to be identified according to the first similarity and all the second similarities may specifically include:
and determining the difference between the maximum value and the second largest value of the first similarity and all the second similarities and the maximum value.
And inputting the difference between the maximum value and the second maximum value and the maximum value into a logistic regression model to obtain a predicted value output by the logistic regression model, wherein the predicted value represents the probability of passing the face recognition of the user to be recognized at this time.
And judging whether the predicted value is larger than a fourth threshold value or not to obtain a third judgment result.
And when the third judgment result shows that the predicted value is greater than the fourth threshold value, determining the user to be identified as the authentication user corresponding to the maximum value.
And when the third judgment result shows that the predicted value is less than or equal to the fourth threshold, generating a result showing that the face recognition of the user to be recognized fails.
In the embodiment of the present specification, a logistic regression model may be used to determine the probability that the user to be identified passes the identification, and the logistic regression model may be trained on sample data. For example, when a sample user performs user identification on a sample device, the first target user corresponding to the sample user may first be determined, and the historical users of the sample device within the preset time period may be determined as the second target user. The first similarity and the second similarities corresponding to the first target user and the second target user are then determined and arranged from largest to smallest, and the maximum value among them, together with the difference between the maximum value and the second maximum value, is extracted as the sample data required for training the logistic regression model. If the first target user and the second target user include the actual authenticated user corresponding to the sample user, and only one authenticated user among them (i.e., that actual authenticated user) matches the appearance of the sample user, the output label of the sample data may be set to passing the user identification; otherwise, the output label may be set to failing the user identification.
In this embodiment, the input information of the logistic regression model may further include the top N values among the first similarity and all the second similarities, and the differences between adjacent ones of those top N similarities. Accordingly, the sample data used to train the logistic regression model may also include these types of data. Richer input information improves the nonlinear expressive capability of the logistic regression model, and thus the accuracy of the predicted value it outputs.
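A sketch of the feature construction and scoring for the second implementation, with the logistic regression reduced to a plain sigmoid over a weight vector. The weights would come from training on the labeled sample data described above; all names and the zero-padding convention are assumptions for illustration, not the specification's implementation.

```python
import math

def build_features(similarities, top_n=3):
    """Assemble the model inputs named in the text: the maximum value, the
    gap between maximum and second maximum, the top-N similarities, and the
    gaps between adjacent ones among them. Lists shorter than N are
    zero-padded (an assumption)."""
    s = sorted(similarities, reverse=True)
    s = (s + [0.0] * top_n)[:top_n]
    gaps = [s[i] - s[i + 1] for i in range(top_n - 1)]
    return [s[0], s[0] - s[1]] + s + gaps

def predict_pass_probability(features, weights, bias):
    """Logistic regression prediction: sigmoid(w . x + b), interpreted as
    the probability that this face recognition passes."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

The predicted value would then be compared against the fourth threshold as described above.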
In this embodiment of the present specification, the first similarity and all the second similarities may contain a plurality of tied maximum values, that is, the first target user and the second target user contain a plurality of authenticated users whose appearance matches that of the user to be identified equally well; in that case, the identity of the user to be identified cannot be determined. To reduce the misidentification rate, before determining the difference between the maximum value and the second maximum value among the first similarity and all the second similarities, the method may further include:
and judging whether the first similarity and all the second similarities contain a plurality of maximum values to obtain a fourth judgment result.
And when the fourth judgment result shows that the first similarity and all the second similarities comprise a plurality of maximum values, generating a result showing that the face recognition of the user to be recognized fails.
And when the fourth judgment result shows that the first similarity and all the second similarities do not contain a plurality of maximum values, determining the difference between the maximum value and the second maximum value in the first similarity and all the second similarities.
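The pre-check for tied maxima described in the three steps above might look like the following; the epsilon tolerance is an assumption, added because the similarities are floating-point values.

```python
def has_tied_maximum(similarities, eps=1e-9):
    """True when more than one candidate attains the maximum similarity,
    in which case the identity of the user to be identified cannot be
    determined and this face recognition fails."""
    top = max(similarities)
    return sum(1 for s in similarities if abs(s - top) <= eps) > 1
```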
Based on the same idea, the embodiment of the present specification further provides an apparatus corresponding to the above method. Fig. 3 is a schematic structural diagram of a user identification apparatus corresponding to fig. 2 according to an embodiment of the present disclosure. As shown in fig. 3, the apparatus may include:
an obtaining module 302, configured to obtain a face feature vector of a user to be identified at a target application, where the face feature vector is vector data obtained by performing feature extraction on a face image of the user to be identified, where the face image is acquired by a target device.
A first determining module 304, configured to determine a first target user from the authenticated users of the target application, where a first similarity between the reserved face feature vector of the first target user and the face feature vector is greater than a first threshold.
A second determining module 306, configured to determine a second target user from the authenticated users of the target application, where the second target user is a historical user of the target device.
An identifying module 308, configured to identify the user to be identified according to the first target user and the second target user.
In the apparatus in fig. 3, the identification module 308 identifies the user to be identified according to the first target user determined based on the first threshold and the historical user (i.e., the second target user) of the target device used by the user to be identified, so that personalized face identification comparison of the user to be identified can be realized, which is beneficial to reducing the user misidentification rate and improving the accuracy of user identification.
The embodiments of this specification also provide some specific implementations based on the apparatus of fig. 3, which are described below.
Optionally, the second determining module 306 may include:
a first determining unit, configured to determine, from the authenticated users of the target application, the authenticated user who has used the target application on the target device within a preset time period before the face feature vector is acquired.
The first determining unit may be specifically configured to:
and acquiring a request message which is sent by the target device within the preset time period and aims at the specified account of the target application.
And determining an authenticated user of the specified account of the target application based on the request message.
The apparatus in fig. 3, may further include:
and the calculation module is used for calculating a second similarity between the reserved human face feature vectors of other users and the human face feature vector of the user to be recognized aiming at other users except the first target user in the second target user.
The identification module 308 may be specifically configured to:
and identifying the user to be identified according to the first similarity and all the second similarities.
In this embodiment, the identifying module 308 may include:
and the second determining unit is used for determining the difference between the maximum value and the second maximum value in the first similarity and all the second similarities.
And the first judgment unit is used for judging whether the difference between the maximum value and the second maximum value is greater than a second threshold value or not to obtain a first judgment result.
And the third determining unit is used for determining the user to be identified as the authenticated user corresponding to the maximum value when the first judgment result shows that the difference between the maximum value and the second maximum value is greater than the second threshold value.
And the first result generation unit is used for generating a result which represents that the face recognition of the user to be recognized fails when the first judgment result represents that the difference between the maximum value and the second maximum value is less than or equal to the second threshold value.
The identification module 308 may further include:
and the acquisition unit is used for acquiring a plurality of background-left face feature vectors of the authenticated user corresponding to the maximum value.
And the second judging unit is used for judging whether the similarity between the plurality of bottom-left face feature vectors and the face feature vector of the user to be identified is larger than a third threshold value to obtain a second judging result.
The third determining unit may be specifically configured to:
and when the second judgment result shows that the similarity between the plurality of reserved face feature vectors and the face feature vector of the user to be identified is greater than the third threshold, determining the user to be identified as the authenticated user corresponding to the maximum value.
The identification module 308 may also include:
a fourth determining unit, configured to determine a difference between a maximum value and a second largest value among the first similarity and all the second similarities, and the maximum value.
And the prediction unit is used for inputting the difference between the maximum value and the second maximum value and the maximum value into a logistic regression model to obtain a prediction value output by the logistic regression model, wherein the prediction value represents the probability of passing the face recognition of the user to be recognized at this time.
And the third judging unit is used for judging whether the predicted value is greater than a fourth threshold value or not to obtain a third judging result.
And the fifth determining unit is used for determining the user to be identified as the authenticated user corresponding to the maximum value when the third judgment result shows that the predicted value is greater than the fourth threshold value.
And the second result generation unit is used for generating a result representing that the face recognition of the user to be recognized fails when the third judgment result represents that the predicted value is less than or equal to the fourth threshold.
The identifying module 308 may further include:
and the fourth judging unit is used for judging whether the first similarity and all the second similarities contain a plurality of maximum values to obtain a fourth judging result.
A third result generating unit, configured to generate a result indicating that the face recognition of the user to be recognized fails when the fourth determination result indicates that the first similarity and all the second similarities include multiple maximum values;
the second determining unit or the fourth determining unit may be configured to determine a difference between a second largest value and the largest value in the first similarity and all the second similarities, when the fourth determination result indicates that the first similarity and all the second similarities do not include the largest values.
Based on the same idea, the embodiment of the present specification further provides a device corresponding to the above method.
Fig. 4 is a schematic structural diagram of a user identification device corresponding to fig. 2 provided in an embodiment of the present specification. As shown in fig. 4, the apparatus 400 may include:
at least one processor 410; and
a memory 430 communicatively coupled to the at least one processor 410; wherein
the memory 430 stores instructions 420 executable by the at least one processor 410 to enable the at least one processor 410 to:
acquire a face feature vector of a user to be recognized at a target application, wherein the face feature vector is vector data obtained by performing feature extraction on a face image of the user to be recognized acquired by a target device;
determining a first target user from the authentication users of the target application, wherein the first similarity between the reserved human face feature vector of the first target user and the human face feature vector is greater than a first threshold value;
determining a second target user from the authentication users of the target application, wherein the second target user is a historical user of the target equipment;
and identifying the user to be identified according to the first target user and the second target user.
The device in fig. 4 identifies the user to be identified according to the first target user determined based on the first threshold and the historical user (i.e., the second target user) of the target device used by the user to be identified, so that personalized face identification comparison of the user to be identified can be realized, the user misidentification rate can be reduced, and the accuracy of user identification can be improved.
The embodiments in the present specification are described in a progressive manner; for the same and similar parts among the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the user identification apparatus shown in fig. 3 is basically similar to the method embodiment, its description is brief, and for relevant points reference may be made to the corresponding description of the method embodiment.
In the 1990s, an improvement in a technology could be clearly distinguished as either an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement in a method flow). However, as technology has advanced, many of today's method-flow improvements can be regarded as direct improvements in hardware circuit structure. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement in a method flow cannot be realized with hardware entity modules. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must likewise be written in a particular programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), with VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog being the most commonly used at present.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logical method flow can be readily obtained merely by slightly logic-programming the method flow into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic for the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functionality in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered structures within the hardware component. Or even the means for performing the functions may be regarded as both software modules for performing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (18)

1. A user identification method, comprising:
acquiring a face feature vector of a user to be recognized at a target application, wherein the face feature vector is vector data obtained by performing feature extraction on a face image of the user to be recognized acquired by a target device;
determining a first target user from the authentication users of the target application, wherein the first similarity between the reserved human face feature vector of the first target user and the human face feature vector is greater than a first threshold value;
determining a second target user from the authentication users of the target application, wherein the second target user is a historical user of the target equipment;
and identifying the user to be identified according to the first target user and the second target user.
2. The method according to claim 1, wherein the determining a second target user from the authenticated users of the target application specifically comprises:
and determining the authenticated user who uses the target application on the target equipment within a preset time period before the face feature vector is obtained from the authenticated users of the target application.
3. The method according to claim 2, wherein the determining, from the authenticated users of the target application, the authenticated user who has used the target application on the target device within a preset time period before the face feature vector is obtained specifically includes:
acquiring a request message which is sent by the target device in the preset time period and aims at a specified account of the target application;
and determining an authenticated user of the specified account of the target application based on the request message.
4. The method according to claim 1, wherein the obtaining of the face feature vector of the user to be recognized at the target application specifically includes:
acquiring a user identification request aiming at a designated account of a target application, which is sent by target equipment, wherein the user identification request carries a face feature vector of a user to be identified;
the determining a first target user from the authenticated users of the target application specifically includes:
obtaining a reserved human face feature vector of the authenticated user of the specified account;
calculating the similarity between the reserved human face feature vector of the authenticated user of the specified account and the human face feature vector of the user to be identified;
and when the similarity is larger than a first threshold value, determining the authenticated user of the specified account as a first target user.
5. The method according to claim 1, wherein the determining a first target user from the authenticated users of the target application specifically comprises:
determining, one by one, the similarity between the reserved face feature vectors of the authenticated users of the target application and the face feature vector of the user to be identified;
and determining, as the first target user, the authenticated user corresponding to the first computed similarity that is greater than a first threshold value.
6. The method of claim 1, wherein before identifying the user to be identified according to the first target user and the second target user, the method further comprises:
for each user in the second target user other than the first target user, calculating a second similarity between the reserved face feature vector of that user and the face feature vector of the user to be identified;
the identifying the user to be identified according to the first target user and the second target user specifically includes:
and identifying the user to be identified according to the first similarity and all the second similarities.
7. The method according to claim 6, wherein the identifying the user to be identified according to the first similarity and all the second similarities specifically comprises:
determining the difference between the maximum value and the second maximum value in the first similarity and all the second similarities;
judging whether the difference between the maximum value and the second maximum value is greater than a second threshold value or not to obtain a first judgment result;
when the first judgment result shows that the difference between the maximum value and the second maximum value is larger than the second threshold value, determining the user to be identified as the authentication user corresponding to the maximum value;
and when the first judgment result shows that the difference between the maximum value and the second maximum value is less than or equal to the second threshold value, generating a result which shows that the face recognition of the user to be recognized fails.
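The margin test of claim 7 can be sketched as follows; the list representation of the candidate similarities and the returned index are illustrative assumptions:

```python
def identify_by_margin(similarities, second_threshold):
    """Claim 7: accept the candidate with the highest similarity only when
    its lead over the runner-up exceeds the second threshold; otherwise this
    face recognition attempt fails.

    `similarities` holds the first similarity plus all second similarities,
    one entry per candidate authenticated user."""
    ordered = sorted(similarities, reverse=True)
    maximum, second_largest = ordered[0], ordered[1]
    if maximum - second_largest > second_threshold:
        # Index of the authenticated user corresponding to the maximum value.
        return similarities.index(maximum)
    return None  # difference too small: recognition fails
```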
8. The method of claim 7, before determining the user to be identified as the authenticated user corresponding to the maximum value, further comprising:
obtaining a plurality of reserved face feature vectors of the authenticated user corresponding to the maximum value;
judging whether the similarity between the plurality of reserved face feature vectors and the face feature vector of the user to be identified is larger than a third threshold value to obtain a second judgment result;
the determining the user to be identified as the authenticated user corresponding to the maximum value specifically includes:
and when the second judgment result shows that the similarity between the plurality of reserved face feature vectors and the face feature vector of the user to be identified is greater than the third threshold, determining the user to be identified as the authenticated user corresponding to the maximum value.
9. The method according to claim 6, wherein the identifying the user to be identified according to the first similarity and all the second similarities specifically comprises:
determining the difference between the maximum value and the second largest value among the first similarity and all the second similarities, and the maximum value;
inputting the difference between the maximum value and the second maximum value and the maximum value into a logistic regression model to obtain a predicted value output by the logistic regression model, wherein the predicted value represents the probability of passing the face recognition of the user to be recognized;
judging whether the predicted value is larger than a fourth threshold value or not to obtain a third judgment result;
when the third judgment result shows that the predicted value is greater than the fourth threshold value, determining the user to be identified as the authentication user corresponding to the maximum value;
and when the third judgment result shows that the predicted value is less than or equal to the fourth threshold, generating a result showing that the face recognition of the user to be recognized fails.
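The logistic-regression variant of claim 9 feeds the margin (maximum minus second largest value) and the maximum itself to a trained model. A minimal sketch follows; the weight values, bias, and the 0.5 decision threshold are hypothetical, since the claims only name the two input features:

```python
import math

# Hypothetical trained parameters; claim 9 fixes the features, not the model.
W_MARGIN, W_MAX, BIAS = 8.0, 5.0, -6.0
FOURTH_THRESHOLD = 0.5  # illustrative decision threshold

def pass_probability(maximum, second_largest):
    """Predicted probability that this face recognition attempt passes,
    from a logistic regression over (margin, maximum)."""
    z = W_MARGIN * (maximum - second_largest) + W_MAX * maximum + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def passes(maximum, second_largest, threshold=FOURTH_THRESHOLD):
    # Claim 9: identify the user as the candidate with the maximum value
    # only when the predicted value exceeds the fourth threshold.
    return pass_probability(maximum, second_largest) > threshold
```

A clear winner with a wide margin yields a high probability, while two near-equal similarities push the prediction toward failure.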
10. The method according to any one of claims 7-9, before determining the difference between the maximum and the second largest of the first similarity and all of the second similarities, further comprising:
judging whether the first similarity and all the second similarities contain a plurality of maximum values to obtain a fourth judgment result;
when the fourth judgment result shows that the first similarity and all the second similarities comprise a plurality of maximum values, generating a result showing that the face recognition of the user to be recognized fails;
and when the fourth judgment result shows that the first similarity and all the second similarities do not contain a plurality of maximum values, determining the difference between the maximum value and the second maximum value in the first similarity and all the second similarities.
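The tie guard of claim 10 can be sketched in a few lines; the list representation is an illustrative assumption:

```python
def has_ambiguous_maximum(similarities):
    """Claim 10: if the largest similarity occurs more than once, no single
    best candidate exists and the recognition attempt fails outright,
    without computing the max/second-max difference."""
    return similarities.count(max(similarities)) > 1
```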
11. A user identification device comprising:
an acquisition module, used for acquiring a face feature vector of a user to be identified at a target application, wherein the face feature vector is vector data obtained by performing feature extraction on a face image of the user to be identified acquired by a target device;
a first determining module, used for determining a first target user from the authenticated users of the target application, wherein a first similarity between the reserved face feature vector of the first target user and the face feature vector is greater than a first threshold value;
a second determining module, configured to determine a second target user from the authenticated users of the target application, where the second target user is a historical user of the target device;
and the identification module is used for identifying the user to be identified according to the first target user and the second target user.
12. The apparatus of claim 11, the second determining module comprising:
a first determining unit, configured to determine, from the authenticated users of the target application, the authenticated user who has used the target application on the target device within a preset time period before the face feature vector is acquired.
13. The apparatus according to claim 12, wherein the first determining unit is specifically configured to:
acquiring a request message, sent by the target device within the preset time period, for a specified account of the target application;
and determining an authenticated user of the specified account of the target application based on the request message.
14. The apparatus of claim 11, further comprising:
a calculation module, used for calculating, for each user in the second target user other than the first target user, a second similarity between the reserved face feature vector of that user and the face feature vector of the user to be identified;
the identification module is specifically configured to:
and identifying the user to be identified according to the first similarity and all the second similarities.
15. The apparatus of claim 14, the identification module comprising:
a second determining unit, configured to determine a difference between a maximum value and a second largest value of the first similarity and all the second similarities;
the first judgment unit is used for judging whether the difference between the maximum value and the second maximum value is greater than a second threshold value or not to obtain a first judgment result;
a third determining unit, configured to determine, when the first determination result indicates that the difference between the maximum value and the second largest value is larger than the second threshold, the user to be identified as the authenticated user corresponding to the maximum value;
and the first result generation unit is used for generating a result which represents that the face recognition of the user to be recognized fails when the first judgment result represents that the difference between the maximum value and the second maximum value is less than or equal to the second threshold value.
16. The apparatus of claim 15, the identification module further comprising:
the obtaining unit is used for obtaining a plurality of reserved face feature vectors of the authenticated user corresponding to the maximum value;
the second judging unit is used for judging whether the similarity between the plurality of reserved face feature vectors and the face feature vector of the user to be identified is larger than a third threshold value to obtain a second judgment result;
the third determining unit is specifically configured to:
and when the second judgment result shows that the similarity between the plurality of reserved face feature vectors and the face feature vector of the user to be identified is greater than the third threshold, determining the user to be identified as the authenticated user corresponding to the maximum value.
17. The apparatus of claim 14, the identification module comprising:
a fourth determining unit, configured to determine a difference between a maximum value and a second largest value among the first similarity and all the second similarities, and the maximum value;
the prediction unit is used for inputting the difference between the maximum value and the second maximum value and the maximum value into a logistic regression model to obtain a predicted value output by the logistic regression model, wherein the predicted value represents the probability of passing the face recognition of the user to be recognized at this time;
the third judgment unit is used for judging whether the predicted value is larger than a fourth threshold value or not to obtain a third judgment result;
a fifth determining unit, configured to determine, when the third determination result indicates that the predicted value is greater than the fourth threshold, the user to be identified as the authenticated user corresponding to the maximum value;
and the second result generation unit is used for generating a result which represents that the face recognition of the user to be recognized fails when the third judgment result represents that the predicted value is less than or equal to the fourth threshold.
18. A user identification device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
obtaining a face feature vector of a user to be recognized at a target application, wherein the face feature vector is vector data obtained by performing feature extraction on a face image of the user to be recognized acquired by a target device;
determining a first target user from the authenticated users of the target application, wherein a first similarity between the reserved face feature vector of the first target user and the face feature vector is greater than a first threshold value;
determining a second target user from the authenticated users of the target application, wherein the second target user is a historical user of the target device;
and identifying the user to be identified according to the first target user and the second target user.
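The independent claim combines an account-level match with the device's usage history before deciding. A self-contained end-to-end sketch follows; the data shapes, the pluggable `similarity` callable, and the tie-breaking details are assumptions not fixed by the claims:

```python
def identify_user(query_vec, reserved_vecs, device_history_users,
                  similarity, first_threshold, second_threshold):
    """Sketch of the claimed flow: the first target user is an authenticated
    user whose reserved vector matches the query; the second target users are
    the device's historical users; recognition succeeds only when one
    candidate clearly dominates."""
    # First target user: any authenticated user whose reserved face feature
    # vector is similar enough to the query vector (first threshold).
    scores = {u: similarity(v, query_vec) for u, v in reserved_vecs.items()}
    first = next((u for u, s in scores.items() if s > first_threshold), None)
    if first is None:
        return None
    # Candidate pool: first target user plus the device's historical users.
    candidates = {first} | (device_history_users & set(reserved_vecs))
    ranked = sorted((scores[u], u) for u in candidates)
    if len(ranked) == 1:
        return ranked[-1][1]
    (s2, _), (s1, best) = ranked[-2], ranked[-1]
    # Accept only if the best candidate's lead exceeds the second threshold.
    return best if s1 - s2 > second_threshold else None
```

With scalar "vectors" for brevity, a wide margin identifies the user while a narrow one fails, mirroring claims 6-7.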
CN202010329567.9A 2020-04-24 2020-04-24 User identification method, device and equipment Pending CN111242105A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010329567.9A CN111242105A (en) 2020-04-24 2020-04-24 User identification method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010329567.9A CN111242105A (en) 2020-04-24 2020-04-24 User identification method, device and equipment

Publications (1)

Publication Number Publication Date
CN111242105A true CN111242105A (en) 2020-06-05

Family

ID=70865623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010329567.9A Pending CN111242105A (en) 2020-04-24 2020-04-24 User identification method, device and equipment

Country Status (1)

Country Link
CN (1) CN111242105A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200070A (en) * 2020-10-09 2021-01-08 支付宝(杭州)信息技术有限公司 User identification and service processing method, device, equipment and medium
CN112968819A (en) * 2021-01-18 2021-06-15 珠海格力电器股份有限公司 Household appliance control method and device based on TOF

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107517434A (en) * 2017-07-31 2017-12-26 上海斐讯数据通信技术有限公司 A kind of method and system of Weight-detecting device identification user
CN110059560A (en) * 2019-03-18 2019-07-26 阿里巴巴集团控股有限公司 The method, device and equipment of recognition of face
CN110490026A (en) * 2018-05-14 2019-11-22 阿里巴巴集团控股有限公司 The methods, devices and systems of identifying object

Similar Documents

Publication Publication Date Title
CN110570200B (en) Payment method and device
CN109087106B (en) Wind control model training and wind control method, device and equipment for recognizing fraudulent use of secondary number-paying account
CN111401272B (en) Face feature extraction method, device and equipment
WO2021031528A1 (en) Method, apparatus, and device for identifying operation user
CN115862088A (en) Identity recognition method and device
CN111159697B (en) Key detection method and device and electronic equipment
CN111523431A (en) Face recognition method, device and equipment
CN111242105A (en) User identification method, device and equipment
CN113468520A (en) Data intrusion detection method applied to block chain service and big data server
CN112215613A (en) Password verification method, device, equipment and medium
CN110033092B (en) Data label generation method, data label training device, event recognition method and event recognition device
CN113177795A (en) Identity recognition method, device, equipment and medium
CN113674318A (en) Target tracking method, device and equipment
CN112836612B (en) Method, device and system for user real-name authentication
CN115358777A (en) Advertisement putting processing method and device of virtual world
CN112597459A (en) Identity verification method and device
CN110705439B (en) Information processing method, device and equipment
US10776472B2 (en) Authentication and authentication mode determination method, apparatus, and electronic device
CN114445207A (en) Tax administration system based on digital RMB
CN113837006A (en) Face recognition method and device, storage medium and electronic equipment
CN111784352A (en) Authentication risk identification method and device and electronic equipment
CN111291645A (en) Identity recognition method and device
CN111989693A (en) Biometric identification method and device
CN113239851B (en) Privacy image processing method, device and equipment based on privacy protection
CN109165488B (en) Identity authentication method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200605)