CN115455393A - User identity authentication method and device and server - Google Patents

User identity authentication method and device and server

Info

Publication number
CN115455393A
Authority
CN
China
Prior art keywords
target
expression
target expression
video
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211142989.0A
Other languages
Chinese (zh)
Inventor
杨徵穹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202211142989.0A priority Critical patent/CN115455393A/en
Publication of CN115455393A publication Critical patent/CN115455393A/en
Pending legal-status Critical Current

Classifications

    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/174 Facial expression recognition
    • G06V40/40 Spoof detection, e.g. liveness detection

Abstract

This specification provides a user identity authentication method, apparatus, and server for the field of network security. Based on the method, after a target data processing request initiated by a target user is received, the target expression type currently to be verified is determined; a target expression video acquisition request is sent to the target terminal according to the target expression type; the target expression video fed back by the target terminal is received, and whether the expression type of the facial expression in the target expression video is the target expression type is detected; in the case that it is, living body detection is performed on the character object in the target expression video using an improved local binary operator; and in the case that the living body detection passes, the identity of the target user is verified according to the target expression video. In this way, media-based risk attacks during identity authentication can be detected efficiently and accurately, and detection error is reduced.

Description

User identity authentication method and device and server
Technical Field
The present specification belongs to the technical field of network security, and in particular, to a user identity authentication method, apparatus, and server.
Background
The development and popularization of face recognition technology have brought convenience to users' lives and work, but have also introduced risks. For example, an attacker may impersonate another person during face recognition by presenting a photo or video containing that person's face; such risk attacks threaten the data security of others.
Therefore, a user identity authentication method that can efficiently and accurately identify such media-based risk attacks is needed.
Disclosure of Invention
This specification provides a user identity authentication method, apparatus, and server that can efficiently and accurately detect media-based risk attacks during identity authentication, reduce detection error, and better protect users' data security.
The present specification provides a user identity authentication method, which is applied to a server, and the method comprises the following steps:
receiving and responding to a target data processing request initiated by a target user through a target terminal, and determining the target expression type currently to be verified;
sending a target expression video acquisition request to a target terminal according to the target expression type;
receiving a target expression video fed back by a target terminal, and detecting whether the expression type of the facial expression in the target expression video is the target expression type;
under the condition that the expression type of the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on a character object in the target expression video by using an improved local binary operator;
and under the condition that the living body detection of the character object in the target expression video is determined to pass, performing identity verification on the target user according to the target expression video.
In one embodiment, determining the target expression type to be currently verified includes:
randomly determining a preset expression type from a plurality of preset expression types as a target expression type; wherein the preset expression types include: blink, smile, nod, open mouth, shake head.
In one embodiment, detecting whether the expression type of the facial expression in the target expression video is the target expression type includes:
extracting one or more frames of images from the target expression video as key frame images according to a preset extraction rule;
according to the key frame image, performing expression type recognition to obtain a corresponding expression type recognition result;
and determining whether the expression type of the facial expression in the target expression video is the target expression type according to the expression type identification result.
In one embodiment, after determining whether the expression type of the facial expression in the target expression video is the target expression type according to the expression type recognition result, the method further includes:
determining that the target expression video has risks under the condition that the expression type of the facial expression in the target expression video is not the target expression type; generating corresponding first-type prompt information;
and sending the first type prompt information to a target terminal.
In one embodiment, the live body detection of the human object in the target expression video by using the improved local binary operator comprises the following steps:
converting the key frame image into a corresponding key frame gray image;
determining a threshold parameter of each local sub-region in the key frame gray level image by using an improved local binary operator;
determining improved local binary pattern parameters of pixel points in each local sub-region in the key frame image according to the threshold parameters of each local sub-region by using an improved local binary operator;
and carrying out living body detection on the character object in the target expression video according to the improved local binary pattern parameters of the pixel points in each local sub-area in the key frame image.
In one embodiment, determining an improved local binary pattern parameter of a pixel point in each local sub-region in a key frame image according to a threshold parameter of each local sub-region by using an improved local binary operator, includes:
determining improved local binary pattern parameters of a current pixel point in a current local sub-area by using an improved local binary operator according to the following mode:
determining the gray value of the current pixel point and the gray value of the adjacent pixel point in the preset neighborhood of the current pixel point;
comparing the sum of the gray value of the current pixel point and the threshold parameter of the current local subregion with the gray value of the adjacent pixel point respectively to obtain a corresponding comparison result;
and determining a corresponding array as an improved local binary pattern parameter of the current pixel point in the current local sub-area according to the comparison result.
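The per-pixel computation described above can be sketched as follows. This is a minimal illustration of the idea, assuming an 8-neighbour 3×3 window and a bit-packed code; the patent itself only specifies that the centre gray value plus the sub-region threshold is compared with each neighbour.

```python
import numpy as np

def improved_lbp_code(gray: np.ndarray, y: int, x: int, t: float) -> int:
    """Improved local-binary-pattern code for the pixel at (y, x).

    Unlike the classic operator, the centre gray value is offset by the
    per-sub-region threshold parameter t before comparison: a neighbour
    contributes a 1 only if it exceeds gray[y, x] + t.
    """
    center = float(gray[y, x]) + t
    # 8 neighbours of the 3x3 window, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if float(gray[y + dy, x + dx]) > center:
            code |= 1 << bit
    return code
```

With t = 0 this reduces to the classic local binary operator; a positive t makes the comparison stricter, suppressing codes produced by small gray-value fluctuations.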
In one embodiment, the live body detection of the human object in the target expression video according to the improved local binary pattern parameters of the pixel points in each local sub-region in the key frame image includes:
according to the improved local binary pattern parameters of the pixel points in each local sub-region in the key frame image, a distribution histogram of a plurality of local binary pattern parameters for each local sub-region is constructed;
combining the distribution histograms of the local binary pattern parameters to obtain a feature histogram for the key frame image;
and according to the characteristic histogram, carrying out living body detection on the human object in the target expression video.
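The histogram construction in the steps above can be sketched as follows; the 256-bin layout and the normalisation are illustrative assumptions, since the text does not fix either.

```python
import numpy as np

def region_histogram(codes: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Distribution histogram of the improved-LBP codes of one local sub-region."""
    hist, _ = np.histogram(codes.ravel(), bins=n_bins, range=(0, n_bins))
    return hist.astype(np.float64) / max(codes.size, 1)  # normalise per region

def feature_histogram(region_codes) -> np.ndarray:
    """Concatenate the per-region distribution histograms into one
    feature histogram for the key frame image."""
    return np.concatenate([region_histogram(c) for c in region_codes])
```

The concatenated vector preserves the spatial layout of the sub-regions, which is what makes it usable for distinguishing live faces from flat media in a downstream classifier.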
In one embodiment, after the live body detection of the human object in the target expression video according to the feature histogram, the method further includes:
determining that the target expression video has risks under the condition that the living body detection of the character object in the target expression video fails; generating corresponding second type prompt information;
and sending the second type of prompt information to a target terminal.
In one embodiment, the authentication of the target user according to the target expression video comprises the following steps:
extracting human face features from the key frame image;
performing feature matching on the face features and a face feature template of a target user to obtain a feature matching result;
and determining whether the target user identity authentication is passed or not according to the feature matching result.
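The feature-matching step can be sketched as follows. The patent does not name a matching metric or threshold, so cosine similarity and the 0.8 cut-off here are illustrative assumptions only.

```python
import numpy as np

# Illustrative threshold; the embodiment does not fix a concrete value.
MATCH_THRESHOLD = 0.8

def verify_identity(face_feature: np.ndarray, template: np.ndarray) -> bool:
    """Match the extracted face feature against the enrolled face feature
    template of the target user. Cosine similarity is an assumed metric."""
    denom = float(np.linalg.norm(face_feature) * np.linalg.norm(template))
    if denom == 0.0:
        return False
    return float(np.dot(face_feature, template)) / denom >= MATCH_THRESHOLD
```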
In one embodiment, the method further comprises:
and under the condition that the identity authentication of the target user is determined to pass according to the feature matching result, corresponding target data processing is carried out according to the target data processing request.
In one embodiment, the performing expression type recognition according to the key frame image to obtain a corresponding expression type recognition result includes:
and processing the key frame image by using a preset expression recognition model to obtain a corresponding expression type recognition result.
This specification also provides a user authentication apparatus, applied to a server, the apparatus including:
the receiving module is used for receiving and responding to a target data processing request initiated by a target user through a target terminal and determining the type of a target expression to be verified currently;
the sending module is used for sending a target expression video obtaining request to a target terminal according to the target expression type;
the first detection module is used for receiving a target expression video fed back by a target terminal and detecting whether the expression type of the facial expression in the target expression video is the target expression type;
the second detection module is used for carrying out living body detection on the character object in the target expression video by utilizing the improved local binary operator under the condition that the expression type of the facial expression in the target expression video is determined to be the target expression type;
and the verification module is used for performing identity verification on the target user according to the target expression video under the condition that the living body detection of the character object in the target expression video is determined to pass.
The present specification also provides a server comprising a processor and a memory for storing processor-executable instructions, the processor implementing the following steps when executing the instructions: receiving and responding to a target data processing request initiated by a target user through a target terminal, and determining the target expression type currently to be verified; sending a target expression video acquisition request to the target terminal according to the target expression type; receiving the target expression video fed back by the target terminal, and detecting whether the expression type of the facial expression in the target expression video is the target expression type; in the case that the expression type of the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on the character object in the target expression video by using an improved local binary operator; and in the case that the living body detection of the character object in the target expression video is determined to pass, performing identity verification on the target user according to the target expression video.
The present specification also provides a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the user authentication method.
The present specification also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps associated with the method of user authentication.
Based on the user identity authentication method, device and server provided by the specification, after a target data processing request initiated by a target user through a target terminal is received, a target expression type to be currently authenticated can be determined; sending a target expression video acquisition request to a target terminal according to the target expression type; after a target expression video fed back by a target terminal is received, whether the expression type of a facial expression in the target expression video is a target expression type can be detected; under the condition that the expression type of the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on the character object in the target expression video by using an improved local binary operator; and under the condition that the living body detection of the character object in the target expression video is determined to pass, performing identity verification on the target user according to the target expression video. Therefore, the risk attack behavior based on the medium in the user identity authentication process can be efficiently and accurately detected, the detection error is reduced, the accuracy and credibility of the user identity authentication are ensured, and the data security of the user can be well protected.
Drawings
In order to more clearly illustrate the embodiments of the present specification, the drawings needed for the embodiments will be briefly described below. The drawings in the following description show only some of the embodiments described in this specification; other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flowchart of a user authentication method according to an embodiment of the present disclosure;
FIG. 2 is a diagram illustrating an embodiment of a method for authenticating a user according to an embodiment of the present specification, in an example scenario;
FIG. 3 is a diagram illustrating an embodiment of a method for authenticating a user according to an embodiment of the present specification, in an example scenario;
FIG. 4 is a diagram illustrating an embodiment of a method for authenticating a user according to an embodiment of the present specification, in an example scenario;
FIG. 5 is a diagram illustrating an embodiment of a method for authenticating a user according to an embodiment of the present specification, in an example scenario;
FIG. 6 is a schematic structural component diagram of a server provided in an embodiment of the present description;
fig. 7 is a schematic structural component diagram of a user authentication apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present specification, not all of them. All other embodiments obtained by a person skilled in the art based on these embodiments without any inventive step shall fall within the scope of protection of the present specification.
Referring to fig. 1, an embodiment of the present specification provides a user authentication method, where the method is specifically applied to a server side. In specific implementation, the method may include the following:
S101: receiving and responding to a target data processing request initiated by a target user through a target terminal, and determining the target expression type currently to be verified;
S102: sending a target expression video acquisition request to the target terminal according to the target expression type;
S103: receiving the target expression video fed back by the target terminal, and detecting whether the expression type of the facial expression in the target expression video is the target expression type;
S104: in the case that the expression type of the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on the character object in the target expression video by using an improved local binary operator;
S105: in the case that the living body detection of the character object in the target expression video is determined to pass, performing identity verification on the target user according to the target expression video.
In some embodiments, referring to fig. 2, the user authentication method described above may be specifically applied to the server side.
The server may specifically include a background server that is applied to a network platform side and is capable of implementing functions such as data transmission and data processing. Specifically, the server may be, for example, an electronic device having data operation, storage functions, and network interaction functions. Alternatively, the server may also be a software program that runs in the electronic device and provides support for data processing, storage, and network interaction. In the present embodiment, the number of servers is not particularly limited. The server may specifically be one server, or may also be several servers, or a server cluster formed by several servers.
In this embodiment, when a target user needs to request a server to perform related target data processing, a corresponding target data processing request may be initiated to the server through a target terminal.
The target terminal may specifically include a front end that is applied to a target user side and is capable of implementing functions such as data acquisition and data transmission. Specifically, the target terminal may be an electronic device such as a desktop computer, a tablet computer, a notebook computer, and a mobile phone. Alternatively, the target terminal may be a software application capable of running in the electronic device. For example, it may be some APP running on a cell phone, etc.
In some embodiments, the target data processing request may be specifically understood as request data for requesting target data processing. The target data processing may be specifically understood as data processing that requires user authentication.
Specifically, for example, the target data processing request may be a transaction request, an account login request, a data query request, or the like.
Further, the target data processing request may also carry a user identifier of the target user. The user identifier may be specifically understood as identification information capable of indicating a target user, for example, a user name, a user number, a registered mobile phone number, and the like of the target user.
It should be noted that, in this specification, the information data related to the user is obtained and used on the premise that the user knows and agrees. Moreover, the acquisition, storage, use, processing and the like of the information data all conform to relevant regulations of national laws and regulations.
In some embodiments, the determining of the target expression type to be currently verified may include, in specific implementation, the following: randomly determining a preset expression type from a plurality of preset expression types as a target expression type; the preset expression type may specifically include: blinking, smiling, nodding, mouth opening, head shaking, etc.
Specifically, the server may generate a random number using the receiving time or serial number of the target data processing request by using the random number generator; then, carrying out remainder operation on the random number to obtain a corresponding remainder; and then taking the preset expression type indicated by the remainder as a randomly selected target expression type.
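The random selection just described can be sketched as follows. Seeding the generator from the request serial number and the draw range of 1,000,000 are illustrative assumptions; the expression list is the one given in the embodiment above.

```python
import random

# The five preset expression types from the embodiment above.
PRESET_EXPRESSIONS = ["blink", "smile", "nod", "open mouth", "shake head"]

def pick_target_expression(request_serial: int) -> str:
    """Generate a random number from the request's serial number (or
    receiving time), take its remainder modulo the number of preset
    types, and use the indicated preset type as the target expression."""
    rng = random.Random(request_serial)
    n = rng.randrange(1_000_000)
    return PRESET_EXPRESSIONS[n % len(PRESET_EXPRESSIONS)]
```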
Further, the server may generate a target expression video acquisition request corresponding to the target expression, and send the target expression video acquisition request to the target terminal. The target expression video acquisition request also carries corresponding guidance prompt information for guiding and prompting a target user to record and feed back the expression video conforming to the target expression type by using a target terminal.
Correspondingly, the target terminal receives and responds to the target expression video acquisition request, displays the guide prompt information to the target user so as to guide the target user to display the expression action of the target expression type, and records by using the target terminal so as to obtain the target expression video meeting the requirements of the server; and feeding back the target expression video to a server for detection.
Based on this embodiment, the server randomly selects one preset expression type as the target expression type to be verified. To a certain extent, this better prevents a user from falsely passing identity verification as another person by using a pre-prepared medium (such as an image or a video) containing that person's fixed expression.
In some embodiments, referring to fig. 3, the detecting whether the expression type of the facial expression in the target expression video is the target expression type may include the following steps:
s1: extracting one or more frames of images from the target expression video as key frame images according to a preset extraction rule;
s2: according to the key frame image, performing expression type recognition to obtain a corresponding expression type recognition result;
s3: and determining whether the expression type of the facial expression in the target expression video is the target expression type according to the expression type identification result.
In specific implementation, according to a preset extraction rule, the target expression video may be divided into three sub-videos, and the frame located at the middle position of each sub-video may be extracted, so as to obtain multiple frames of images as key frame images.
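The extraction rule above can be sketched as an index computation over the video's frame count; treating the segments as equal-length and integer division are illustrative assumptions.

```python
def middle_key_frame_indices(n_frames: int, n_segments: int = 3) -> list:
    """Divide the frame range into n_segments equal sub-videos and
    return the index of the middle frame of each, per the extraction
    rule described above."""
    seg = n_frames // n_segments
    return [i * seg + seg // 2 for i in range(n_segments)]
```

For a 30-frame video this yields frames 5, 15 and 25, one from each third of the clip.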
Furthermore, expression type recognition can be performed on the key frame image to determine whether the expression type of the facial expression in the target expression video is the target expression type.
In some embodiments, the performing expression type recognition according to the key frame image to obtain a corresponding expression type recognition result may include: and processing the key frame image by using a preset expression recognition model to obtain a corresponding expression type recognition result.
Specifically, under the condition that the key frame image comprises a plurality of frames of images, the plurality of frames of images in the key frame image can be spliced according to the time sequence to obtain a spliced expression image; processing the spliced expression images by using a preset expression recognition model to obtain corresponding expression type recognition results; and determining the expression type of the facial expression in the target expression video according to the expression type recognition result.
The preset expression recognition model can be specifically understood as a classification model which is trained in advance and can recognize the expression type of the face in the image based on the input image.
In some embodiments, the server may compare the expression type of the facial expression in the target expression video, as determined from the expression type recognition result, with the target expression type. If the two are the same, it can be determined that the expression of the facial expression in the target expression video is the target expression type, and subsequent data processing can be triggered. If the two are different, it can be determined that the expression of the facial expression in the target expression video is not the target expression type. In that case, it can be preliminarily judged that the current target user may only possess a pre-prepared medium containing a fixed expression and cannot provide a qualifying target expression video containing a facial expression of the randomly determined target expression type. It can therefore be determined that the target expression video carries the risk of falsely using the expression images or expression videos of a user other than the target user, and, in order to protect the data security of other users, subsequent data processing is not triggered.
In some embodiments, after determining whether the expression type of the facial expression in the target expression video is the target expression type according to the expression type recognition result, when the method is implemented, the following may be further included:
s1: determining that the target expression video has risks under the condition that the expression type of the facial expression in the target expression video is not the target expression type; generating corresponding first-type prompt information;
s2: and sending the first type of prompt information to a target terminal.
The first type of prompt information may specifically be used to prompt that the target expression video provided by the target user does not meet the requirements and that subsequent data processing cannot continue for the time being.
Further, the server can again randomly determine a new target expression type, and generate and send a target expression video acquisition request to the target terminal according to the new target expression type, so as to give the target user a second detection opportunity.
In some embodiments, referring to fig. 3 and fig. 4, when the living body detection is performed on the human object in the target expression video by using the improved local binary operator, the following steps may be included:
s1: converting the key frame image into a corresponding key frame gray image;
s2: determining a threshold parameter of each local sub-region in the key frame gray level image by using an improved local binary operator;
s3: determining improved local binary pattern parameters of pixel points in each local sub-region in the key frame image according to the threshold parameters of each local sub-region by using the improved local binary operator;
s4: and carrying out living body detection on the character object in the target expression video according to the improved local binary pattern parameters of the pixel points in each local sub-area in the key frame image.
The improved local binary operator can be specifically understood as an operator which, on the basis of the classic local binary operator, differentiates between different local sub-regions and introduces an adjustable threshold parameter for each local sub-region.
In some embodiments, the determining the threshold parameter of each local sub-region in the key frame gray scale map by using the improved local binary operator may include the following steps:
s1: dividing a plurality of local sub-regions in the key frame gray level image by using an improved local binary operator; wherein the plurality of local sub-regions are image regions of the same area size;
s2: determining the change degree of the gray value of each local subregion;
s3: and determining the threshold parameter of each local subregion according to the change degree of the gray value of the pixel point in each local subregion.
Specifically, for example, the improved local binary operator may be used to divide the key frame grayscale map, from top to bottom and from left to right, into image regions of a fixed 4*4 size, so as to obtain a plurality of local sub-regions. Here, 4 refers to 4 pixels.
In a specific implementation, the determining the change degree of the gray value of each local sub-region may include: calculating the variance of the gray value of the pixel point of each local subregion for each local subregion in the plurality of local subregions; and determining the change degree of the gray value of the pixel points in each local subregion according to the variance of the gray value of the pixel points in each local subregion.
In specific implementation, when determining the threshold parameter (which may be denoted as t) of each local sub-region according to the degree of change of the gray values of its pixel points, a relatively large value may be set as the threshold parameter for local sub-regions with a large degree of gray-value change, based on historical processing records and experience; for local sub-regions with a smaller degree of gray-value change, the threshold parameter may be set to a relatively small value, or even to 0.
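A minimal sketch of the fixed-grid division and variance-based threshold assignment described above, in Python with NumPy. The variance cutoff and the concrete threshold values are assumptions standing in for the historical records and experience the specification refers to:

```python
import numpy as np

def region_thresholds(gray: np.ndarray, block: int = 4,
                      t_high: float = 8.0, t_low: float = 0.0,
                      var_cutoff: float = 100.0) -> np.ndarray:
    """Split the grayscale key frame into fixed block x block sub-regions,
    scanned top to bottom and left to right, and assign each sub-region a
    threshold parameter t based on the variance of its gray values:
    large variance -> relatively large t, small variance -> small t (or 0).
    t_high, t_low and var_cutoff are illustrative values."""
    h, w = gray.shape
    rows, cols = h // block, w // block
    t = np.empty((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            sub = gray[i * block:(i + 1) * block, j * block:(j + 1) * block]
            # variance of the sub-region's gray values measures its change degree
            t[i, j] = t_high if sub.astype(float).var() >= var_cutoff else t_low
    return t
```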
Further, in order to determine the threshold parameter of each local sub-region in the key frame gray scale image more accurately, the determining the threshold parameter of each local sub-region in the key frame gray scale image by using the improved local binary operator may further include, in specific implementation, the following:
s1: determining the difference value of gray values between adjacent pixel points in the key frame gray image by using an improved local binary operator;
s2: dividing adjacent pixel points of which the difference values are smaller than a preset difference threshold into a local sub-area according to the difference values of the gray values between adjacent pixel points to obtain a plurality of local sub-areas;
s3: determining the change degree of the gray value of each local subregion;
s4: and determining the threshold parameter of each local subregion according to the change degree of the gray value of the pixel point in each local subregion.
In this way, the degrees of gray-value change of the pixel points within the same local sub-region are the same or similar, while the degrees of gray-value change of pixel points in different local sub-regions may differ.
Based on the above embodiment, different local sub-regions can be finely distinguished by using an improved local binary operator, and appropriate and accurate threshold parameters are set for each local sub-region respectively.
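The adaptive division above can be sketched as a flood fill that merges adjacent pixels whose gray-value difference is below the preset difference threshold. The concrete threshold value and the use of 4-neighbour adjacency are assumptions; the specification only speaks of "adjacent pixel points":

```python
import numpy as np
from collections import deque

def adaptive_regions(gray: np.ndarray, diff_thresh: int = 10) -> np.ndarray:
    """Group 4-adjacent pixels whose gray-value difference is below
    diff_thresh into the same local sub-region; return a label map where
    each region has its own integer label."""
    h, w = gray.shape
    labels = np.full((h, w), -1, dtype=int)
    g = gray.astype(int)
    region = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            # start a new region and flood-fill it
            q = deque([(sy, sx)])
            labels[sy, sx] = region
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                            and abs(g[y, x] - g[ny, nx]) < diff_thresh):
                        labels[ny, nx] = region
                        q.append((ny, nx))
            region += 1
    return labels
```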
In some embodiments, the determining, by using the improved local binary operator, the improved local binary pattern parameter of the pixel point in each local sub-region in the key frame image according to the threshold parameter of each local sub-region may include the following steps: determining improved local binary pattern parameters of a current pixel point in a current local sub-area by using an improved local binary operator according to the following mode:
s1: determining the gray value of the current pixel point and the gray value of the adjacent pixel point in the preset neighborhood of the current pixel point;
s2: comparing the sum of the gray value of the current pixel point and the threshold parameter of the current local sub-area with the gray value of the adjacent pixel point respectively to obtain corresponding comparison results;
s3: and determining a corresponding array as an improved local binary pattern parameter of the current pixel point in the current local sub-area according to the comparison result.
Specifically, for example, the preset neighborhood is a 3*3 pixel region centered on the current pixel point, where 3 refers to 3 pixels; see fig. 5. The current pixel point may be denoted as point 1, and the adjacent pixel points in its preset neighborhood may be denoted as points 2, 3, 4, 5, 6, 7, 8 and 9, respectively. First, the gray value of point 1 is added to the threshold parameter of the local sub-region where point 1 is located, so as to obtain a reference gray value; the reference gray value is then compared with points 2, 3, 4, 5, 6, 7, 8 and 9 in turn, from top to bottom and from left to right. When the reference gray value is smaller than the gray value of a pixel point, the data bit corresponding to that pixel point is marked as 1; conversely, when the reference gray value is greater than or equal to the gray value of the pixel point, the corresponding data bit is marked as 0. For example, if the reference gray value is smaller than the gray value of point 2, the first data bit (corresponding to point 2) may be marked as 1; if the reference gray value is greater than or equal to the gray value of point 3, the second data bit (corresponding to point 3) may be marked as 0; and so on, until an array of 8 data bits in sequence is obtained as the improved local binary pattern parameter of point 1, which may be written, for example, as: [10001110].
According to the mode, the improved local binary pattern parameters of each pixel point can be determined by utilizing the improved local binary operator.
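The per-pixel computation above can be sketched as follows, assuming the scan order of fig. 5 (top to bottom, left to right over the 3*3 neighborhood):

```python
import numpy as np

# Offsets of the 8 neighbours in a 3x3 neighbourhood, scanned top to bottom
# and left to right (points 2..9 in the specification's fig. 5).
NEIGHBOUR_OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
                     (0, -1),           (0, 1),
                     (1, -1),  (1, 0),  (1, 1)]

def improved_lbp(gray: np.ndarray, y: int, x: int, t: float) -> list:
    """Improved local binary pattern parameter of the pixel at (y, x):
    add the sub-region threshold t to the centre gray value to get the
    reference gray value, then compare it with each neighbour; a bit is 1
    when the reference value is smaller, 0 otherwise."""
    ref = float(gray[y, x]) + t
    return [1 if ref < gray[y + dy, x + dx] else 0
            for dy, dx in NEIGHBOUR_OFFSETS]
```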
Based on the above embodiment, the improved local binary operator fully takes into account the differing degrees of gray-value change across local sub-regions, and uses the corresponding threshold parameter when determining the improved local binary pattern parameters of the pixel points in each sub-region. This effectively suppresses the influence of image noise in the different sub-regions and reflects the texture detail features of the image more accurately; the improved local binary pattern parameters can therefore be used to perform living body detection accurately, reducing detection error and improving detection precision.
In some embodiments, referring to fig. 3, when the live detection is performed on the human object in the target expression video according to the improved local binary pattern parameter of the pixel point in each local sub-region in the key frame image, the specific implementation may include the following contents:
s1: according to the improved local binary pattern parameters of the pixel points in each local sub-region in the key frame image, a distribution histogram of a plurality of local binary pattern parameters for each local sub-region is constructed;
s2: combining the distribution histograms of the local binary pattern parameters to obtain a feature histogram for the key frame image;
s3: and according to the characteristic histogram, carrying out living body detection on the human object in the target expression video.
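The histogram construction and merging steps above can be sketched as follows, assuming each 8-bit pattern has already been packed into an integer in 0–255 and that one 256-bin distribution histogram is built per local sub-region:

```python
import numpy as np

def feature_histogram(patterns: dict) -> np.ndarray:
    """patterns maps each local sub-region id to the list of packed 8-bit
    improved-LBP codes (integers 0..255) of its pixels. Build one 256-bin
    distribution histogram per sub-region, then concatenate them into the
    feature histogram of the key frame image."""
    hists = []
    for region in sorted(patterns):
        hist, _ = np.histogram(patterns[region], bins=256, range=(0, 256))
        hists.append(hist)
    return np.concatenate(hists)
```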
In some embodiments, performing living body detection on the human object in the target expression video according to the feature histogram may include: calling a preset classification model to process the feature histogram to obtain a corresponding target classification result; and determining, according to the target classification result, whether the human object in the target expression video passes the living body detection. The preset classification model may be a living body classification detector.
In advance of specific implementation, the preset classification model can be obtained by training in the following way: acquiring sample data, and labeling, according to the facial expressions in the sample data, whether the human object in the sample data is a living body, so as to obtain labeled sample data; processing the labeled sample data with the improved local binary operator to obtain corresponding labeled feature histograms; constructing an initial binary classification model; and training the initial binary classification model with the labeled feature histograms to obtain a preset classification model that meets the accuracy requirement.
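The specification does not fix a model family for the binary classifier. As a purely illustrative stand-in, a minimal nearest-centroid model over labelled feature histograms might look like this; any real deployment would presumably use a properly trained detector:

```python
import numpy as np

class LivenessClassifier:
    """Illustrative binary liveness classifier: label 1 = living body,
    label 0 = non-living medium (e.g. a replayed recording). Classifies a
    feature histogram by its nearest class centroid."""

    def fit(self, histograms: np.ndarray, labels: np.ndarray):
        self.live_centroid = histograms[labels == 1].mean(axis=0)
        self.spoof_centroid = histograms[labels == 0].mean(axis=0)
        return self

    def predict(self, histogram: np.ndarray) -> int:
        d_live = np.linalg.norm(histogram - self.live_centroid)
        d_spoof = np.linalg.norm(histogram - self.spoof_centroid)
        return 1 if d_live <= d_spoof else 0
```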
In some embodiments, after performing live detection on the human object in the target expression video according to the feature histogram, when the method is implemented, the following may be further included:
s1: determining that the target expression video has risks under the condition that the living body detection of the character object in the target expression video fails; generating corresponding second type prompt information;
s2: and sending the second type of prompt information to a target terminal.
In this embodiment, when it is determined that the living body detection of the human object in the target expression video fails, it can be concluded that, with high probability, the target user is impersonating another user by presenting a medium containing that user's expression; that is, the target expression video provided by the target user carries a security risk. In this case, in order to protect the other user's data security, the server may refuse to respond to the target data processing request and reject the corresponding target data processing.
The second type of prompt information may specifically be used to prompt that the target expression video provided by the target user does not meet the requirement because the human object in it is not a living object, so that subsequent data processing cannot continue for the time being.
In some embodiments, the authentication of the target user according to the target expression video may be implemented specifically as follows:
s1: extracting human face features from the key frame images;
s2: performing feature matching on the face features and a face feature template of a target user to obtain a feature matching result;
s3: and determining whether the target user identity authentication is passed or not according to the feature matching result.
In specific implementation, the server can retrieve the user database of the platform according to the user identifier of the target user so as to obtain the face feature template of the target user; and then, carrying out feature matching on the face features extracted from the key frame images and a face feature template of the target user to obtain a corresponding feature matching result.
According to the feature matching result, the identity verification of the target user can be determined to be passed under the condition that the matching is determined to be successful. Conversely, in the case where it is determined that the matching fails, it may be determined that the target user authentication fails.
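The feature matching step can be sketched, for illustration, as cosine-similarity matching against the stored template; the specification does not fix the similarity metric, and the threshold below is an assumed value:

```python
import numpy as np

def match_face(features: np.ndarray, template: np.ndarray,
               sim_thresh: float = 0.9) -> bool:
    """Return True (authentication passes) when the cosine similarity
    between the extracted face features and the user's stored face
    feature template reaches the assumed threshold."""
    sim = float(np.dot(features, template) /
                (np.linalg.norm(features) * np.linalg.norm(template)))
    return sim >= sim_thresh
```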
In some embodiments, when the method is implemented, the method may further include: and under the condition that the identity authentication of the target user is determined to pass according to the feature matching result, corresponding target data processing is carried out according to the target data processing request. For example, settlement processing is performed on the order associated with the transaction request; for another example, the target terminal is made to log in to the platform account of the target user; for another example, related data is queried according to the data query request, and a query result is fed back to the target terminal and the like.
On the contrary, under the condition that the identity authentication of the target user is determined to be failed according to the characteristic matching result, the response to the target data processing request can be refused, and the target data processing is refused; meanwhile, prompt information about that the target user does not have the right to perform target data processing can be generated and sent.
As can be seen from the above, based on the user identity authentication method provided in the embodiments of the present specification, after receiving a target data processing request initiated by a target user through a target terminal, a target expression type to be currently authenticated may be determined first; sending a target expression video acquisition request to a target terminal according to the target expression type; after receiving the target expression video fed back by the target terminal, whether the expression type of the facial expression in the target expression video is the target expression type can be detected; under the condition that the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on the character object in the target expression video by using an improved local binary operator; and under the condition that the living body detection of the character object in the target expression video is determined to pass, performing identity verification on the target user according to the target expression video. Therefore, the risk attack behavior based on the medium in the identity authentication process can be efficiently and accurately detected, the detection error is reduced, and the data security of the user can be well protected.
Embodiments of the present specification further provide a server, including a processor and a memory for storing processor-executable instructions, where the processor, when implemented, may perform the following steps according to the instructions: receiving and responding a target data processing request initiated by a target user through a target terminal, and determining a target expression type to be verified currently; sending a target expression video acquisition request to a target terminal according to the target expression type; receiving a target expression video fed back by a target terminal, and detecting whether the expression type of the facial expression in the target expression video is the target expression type; under the condition that the expression type of the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on the character object in the target expression video by using the improved local binary operator; and under the condition that the living body detection of the character object in the target expression video is determined to pass, performing identity verification on the target user according to the target expression video.
In order to complete the above instructions more accurately, referring to fig. 6, the present specification further provides another specific server, which includes a network communication port 601, a processor 602 and a memory 603, connected by an internal bus so that they can carry out specific data interaction.
The network communication port 601 may be specifically configured to receive and respond to a target data processing request initiated by a target user through a target terminal, and determine a target expression type to be currently verified.
The processor 602 may be specifically configured to send a target expression video acquisition request to a target terminal according to the target expression type; receiving a target expression video fed back by a target terminal, and detecting whether the expression type of the facial expression in the target expression video is the target expression type; under the condition that the expression type of the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on a character object in the target expression video by using an improved local binary operator; and under the condition that the living body detection of the character object in the target expression video is determined to pass, performing identity verification on the target user according to the target expression video.
The memory 603 may be specifically configured to store a corresponding instruction program.
In this embodiment, the network communication port 601 may be a virtual port bound to different communication protocols, so that different data can be sent or received. For example, the network communication port may be a port responsible for web data communication, FTP data communication, or mail data communication. In addition, the network communication port can also be a physical communication interface or communication chip; for example, a wireless mobile network communication chip such as GSM or CDMA, a Wifi chip, or a Bluetooth chip.
In this embodiment, the processor 602 may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth. The description is not intended to be limiting.
In this embodiment, the memory 603 may include multiple layers, and in a digital system, the memory may be any memory as long as binary data can be stored; in an integrated circuit, a circuit without a physical form and with a storage function is also called a memory, such as a RAM, a FIFO and the like; in the system, the storage device in physical form is also called a memory, such as a memory bank, a TF card and the like.
The embodiment of the present specification further provides a computer-readable storage medium based on the user identity authentication method, where the computer-readable storage medium stores computer program instructions, and when the computer program instructions are executed, the computer program instructions implement: receiving and responding a target data processing request initiated by a target user through a target terminal, and determining a target expression type to be verified currently; sending a target expression video acquisition request to a target terminal according to the target expression type; receiving a target expression video fed back by a target terminal, and detecting whether the expression type of the facial expression in the target expression video is the target expression type; under the condition that the expression type of the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on a character object in the target expression video by using an improved local binary operator; and under the condition that the living body detection of the character object in the target expression video is determined to pass, performing identity verification on the target user according to the target expression video.
In this embodiment, the storage medium includes, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Cache (Cache), a Hard Disk Drive (HDD), or a Memory Card (Memory Card). The memory may be used to store computer program instructions. The network communication unit may be an interface for performing network connection communication, which is set in accordance with a standard prescribed by a communication protocol.
In this embodiment, the functions and effects specifically realized by the program instructions stored in the computer-readable storage medium can be explained in comparison with other embodiments, and are not described herein again.
Embodiments of the present specification further provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the following steps: receiving and responding a target data processing request initiated by a target user through a target terminal, and determining a target expression type to be verified currently; sending a target expression video acquisition request to a target terminal according to the target expression type; receiving a target expression video fed back by a target terminal, and detecting whether the expression type of the facial expression in the target expression video is the target expression type; under the condition that the expression type of the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on a character object in the target expression video by using an improved local binary operator; and under the condition that the living body detection of the character object in the target expression video is determined to pass, performing identity verification on the target user according to the target expression video.
Referring to fig. 7, in a software level, an embodiment of the present specification further provides a user identity authentication apparatus, which may specifically include the following structural modules:
the receiving module 701 may be specifically configured to receive and respond to a target data processing request initiated by a target user through a target terminal, and determine a target expression type to be currently verified;
a sending module 702, configured to send a target expression video obtaining request to a target terminal according to the target expression type;
the first detection module 703 may be specifically configured to receive a target expression video fed back by a target terminal, and detect whether an expression type of a facial expression in the target expression video is a target expression type;
the second detection module 704 may be specifically configured to perform living body detection on a human object in the target expression video by using an improved local binary operator when it is determined that the expression type of the facial expression in the target expression video is the target expression type;
the verification module 705 may be specifically configured to perform identity verification on the target user according to the target expression video when it is determined that the living body detection of the human object in the target expression video passes.
In some embodiments, when the receiving module 701 is implemented, the target expression type to be currently verified may be determined as follows: randomly selecting one preset expression type from a plurality of preset expression types as the target expression type; wherein the preset expression types include: blinking, smiling, nodding, opening the mouth, and shaking the head.
In some embodiments, when the first detection module 703 is implemented specifically, it may detect whether the expression type of the facial expression in the target expression video is the target expression type according to the following manner: extracting one or more frames of images from the target expression video as key frame images according to a preset extraction rule; according to the key frame image, performing expression type recognition to obtain a corresponding expression type recognition result; and determining whether the expression type of the facial expression in the target expression video is the target expression type according to the expression type identification result.
In some embodiments, after determining whether the expression type of the facial expression in the target expression video is the target expression type according to the expression type recognition result, the apparatus may be further configured to determine that the target expression video has a risk when determining that the expression type of the facial expression in the target expression video is not the target expression type; generating corresponding first-type prompt information; and sending the first type of prompt information to a target terminal.
In some embodiments, when the second detection module 704 is implemented, living body detection may be performed on a human object in a target expression video by using a modified local binary operator in the following manner: converting the key frame image into a corresponding key frame gray image; determining a threshold parameter of each local sub-region in the key frame gray level image by using an improved local binary operator; determining improved local binary pattern parameters of pixel points in each local sub-region in the key frame image according to the threshold parameters of each local sub-region by using the improved local binary operator; and carrying out living body detection on the character object in the target expression video according to the improved local binary pattern parameters of the pixel points in each local sub-area in the key frame image.
In some embodiments, when the second detecting module 704 is implemented, the improved local binary pattern parameter of the current pixel point in the current local sub-region may be determined as follows: determining the gray value of the current pixel point and the gray value of the adjacent pixel point in the preset neighborhood of the current pixel point; comparing the sum of the gray value of the current pixel point and the threshold parameter of the current local sub-area with the gray value of the adjacent pixel point respectively to obtain corresponding comparison results; and determining a corresponding array as an improved local binary pattern parameter of the current pixel point in the current local sub-area according to the comparison result.
In some embodiments, when the second detecting module 704 is implemented, the living body detection of the human object in the target expression video may be performed according to the improved local binary pattern parameters of the pixel points in each local sub-region in the key frame image in the following manner: according to the improved local binary pattern parameters of the pixel points in each local sub-region in the key frame image, a distribution histogram of a plurality of local binary pattern parameters for each local sub-region is constructed; combining the distribution histograms of the local binary pattern parameters to obtain a feature histogram for the key frame image; and according to the characteristic histogram, carrying out living body detection on the human object in the target expression video.
In some embodiments, after the living body detection is performed on the human object in the target expression video according to the feature histogram, the apparatus may be further configured to determine that the target expression video is at risk when it is determined that the living body detection of the human object in the target expression video fails; generating corresponding second-type prompt information; and sending the second type of prompt information to a target terminal.
In some embodiments, when the apparatus is implemented, the target user may be authenticated according to the target expression video in the following manner: extracting human face features from the key frame images; performing feature matching on the face features and a face feature template of a target user to obtain a feature matching result; and determining whether the target user identity authentication is passed or not according to the feature matching result.
In some embodiments, when the device is implemented, the device may be further configured to perform corresponding target data processing according to the target data processing request, when it is determined that the target user passes the authentication according to the feature matching result.
In some embodiments, when the device is implemented, the expression type recognition may be performed according to the key frame image in the following manner to obtain a corresponding expression type recognition result: and processing the key frame image by using a preset expression recognition model to obtain a corresponding expression type recognition result.
It should be noted that, the units, devices, modules, etc. illustrated in the above embodiments may be implemented by a computer chip or an entity, or implemented by a product with certain functions. For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. It is to be understood that, in implementing the present specification, functions of each module may be implemented in one or more pieces of software and/or hardware, or a module that implements the same function may be implemented by a combination of a plurality of sub-modules or sub-units, or the like. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
As can be seen from the above, based on the user authentication device provided in the embodiments of the present specification, after receiving a target data processing request initiated by a target user through a target terminal, a target expression type to be currently authenticated may be determined first; sending a target expression video acquisition request to a target terminal according to the target expression type; after receiving a target expression video fed back by a target terminal, whether the expression type of a facial expression in the target expression video is a target expression type can be detected; under the condition that the facial expression in the target expression video is determined to be the target expression type, carrying out living body detection on the character object in the target expression video by using an improved local binary operator; and under the condition that the living body detection of the character object in the target expression video is passed, performing identity verification on the target user according to the target expression video. Therefore, the risk attack behavior based on the medium in the identity verification process can be efficiently and accurately detected, the detection error is reduced, and the data security of the user can be well protected.
Although the present specification provides method steps as described in the examples or flowcharts, additional or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When implemented in practice, an apparatus or client product may execute sequentially or in parallel (e.g., in a parallel processor or multithreaded processing environment, or even in a distributed data processing environment) in accordance with the embodiments or methods depicted in the figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. The terms first, second, etc. are used to denote names, but not any particular order.
Those skilled in the art will also appreciate that, in addition to implementing a controller purely as computer-readable program code, the same functions can be implemented entirely in hardware by logically programming the method steps, so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for implementing the various functions may also be regarded as structures within the hardware component. Indeed, the means for implementing the various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The present specification may be described in the general context of computer-executable instructions, such as program modules, executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, classes, and the like that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer-readable storage media, including memory storage devices.
From the above description of the embodiments, those skilled in the art can clearly understand that the present specification can be implemented by means of software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the present specification may be embodied essentially in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disc, and which includes several instructions for causing a computer device (which may be a personal computer, a mobile terminal, a server, a network device, or the like) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the present specification.
The embodiments in the present specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. The specification is operational with numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices.
While the specification has been described by way of embodiments, those of ordinary skill in the art will appreciate that numerous variations and modifications of the specification are possible without departing from its spirit, and it is intended that the appended claims cover such variations and modifications.

Claims (15)

1. A user identity authentication method, applied to a server, the method comprising:
receiving and responding to a target data processing request initiated by a target user through a target terminal, and determining a target expression type to be verified currently;
sending a target expression video acquisition request to the target terminal according to the target expression type;
receiving a target expression video fed back by the target terminal, and detecting whether the expression type of the facial expression in the target expression video is the target expression type;
under the condition that it is determined that the expression type of the facial expression in the target expression video is the target expression type, performing living body detection on a human object in the target expression video by using an improved local binary operator;
and under the condition that it is determined that the living body detection of the human object in the target expression video passes, performing identity verification on the target user according to the target expression video.
2. The method of claim 1, wherein determining a current target expression type to be verified comprises:
randomly determining one preset expression type from a plurality of preset expression types as the target expression type; wherein the preset expression types include: blinking, smiling, nodding, opening the mouth, and shaking the head.
3. The method of claim 1, wherein detecting whether the expression type of the facial expression in the target expression video is the target expression type comprises:
extracting one or more frames of images from the target expression video as key frame images according to a preset extraction rule;
according to the key frame image, performing expression type recognition to obtain a corresponding expression type recognition result;
and determining whether the expression type of the facial expression in the target expression video is the target expression type according to the expression type identification result.
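The preset extraction rule in claim 3 is not further specified in this section; purely as an illustration, a simple uniform-sampling rule (the step size and the rule itself are assumptions, not the patented definition) might look like:

```python
def extract_key_frames(frames, step=10):
    """Extract one or more key frame images from the video frames by
    uniform sampling: keep every `step`-th frame, starting at frame 0.

    `frames` is the decoded frame sequence of the target expression video;
    the sampled frames then go to expression type recognition.
    """
    if step < 1:
        raise ValueError("step must be >= 1")
    return frames[::step]
```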
4. The method of claim 3, wherein after determining whether the expression type of the facial expression in the target expression video is the target expression type according to the expression type recognition result, the method further comprises:
determining that the target expression video presents a risk under the condition that the expression type of the facial expression in the target expression video is not the target expression type, and generating corresponding first-type prompt information;
and sending the first-type prompt information to the target terminal.
5. The method of claim 3, wherein performing living body detection on the human object in the target expression video by using the improved local binary operator comprises:
converting the key frame image into a corresponding key frame grayscale image;
determining a threshold parameter of each local sub-region in the key frame grayscale image by using the improved local binary operator;
determining, by using the improved local binary operator, improved local binary pattern parameters of the pixel points in each local sub-region of the key frame image according to the threshold parameter of each local sub-region;
and performing living body detection on the human object in the target expression video according to the improved local binary pattern parameters of the pixel points in each local sub-region of the key frame image.
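Claim 5 leaves open how the per-sub-region threshold parameter is derived; one plausible choice, stated here purely as an assumption for illustration, is a noise-scale statistic of the sub-region, such as the mean absolute deviation of its gray values:

```python
def region_threshold(gray_values):
    """A possible per-sub-region threshold parameter: the mean absolute
    deviation of the sub-region's gray values.

    Sub-regions with more texture variation then get a larger comparison
    threshold, damping spurious LBP responses to sensor noise. This
    statistic is an illustrative guess, not the patented definition.
    """
    mean = sum(gray_values) / len(gray_values)
    return sum(abs(v - mean) for v in gray_values) / len(gray_values)
```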
6. The method of claim 5, wherein determining improved local binary pattern parameters of pixel points in each local sub-region in the key frame image according to the threshold parameter of each local sub-region using an improved local binary operator comprises:
determining an improved local binary pattern parameter of a current pixel point in a current local sub-region by using an improved local binary operator according to the following mode:
determining the gray value of the current pixel point and the gray values of the adjacent pixel points in the preset neighborhood of the current pixel point;
comparing the sum of the gray value of the current pixel point and the threshold parameter of the current local sub-region with the gray values of the adjacent pixel points, respectively, to obtain corresponding comparison results;
and determining, according to the comparison results, a corresponding array as the improved local binary pattern parameter of the current pixel point in the current local sub-region.
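Expressed outside the claim language, one plausible reading of the comparison in claim 6 is the following sketch. The 8-neighborhood, the comparison direction, and the bit ordering are assumptions made for illustration; the claim itself fixes only that each neighbor is compared against the current pixel's gray value plus the sub-region's threshold parameter.

```python
def improved_lbp(gray, x, y, threshold):
    """Compute an improved-LBP bit array for pixel (x, y) of a 2-D
    grayscale image (list of rows), using its 8-neighborhood.

    Unlike classic LBP (bit = 1 if neighbor >= center), each neighbor is
    compared against the center gray value PLUS a per-sub-region
    threshold, which suppresses responses to noise-level differences.
    """
    center = gray[y][x]
    # 8 neighbors, clockwise from the top-left corner.
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    bits = []
    for dx, dy in offsets:
        neighbor = gray[y + dy][x + dx]
        # Comparison result: 1 if the neighbor exceeds center + threshold.
        bits.append(1 if neighbor >= center + threshold else 0)
    return bits
```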
7. The method of claim 5, wherein performing living body detection on the human object in the target expression video according to the improved local binary pattern parameters of the pixel points in the local sub-regions of the key frame image comprises:
constructing, for each local sub-region, a distribution histogram of the improved local binary pattern parameters according to the improved local binary pattern parameters of the pixel points in that local sub-region;
combining the distribution histograms of the local sub-regions to obtain a feature histogram for the key frame image;
and performing living body detection on the human object in the target expression video according to the feature histogram.
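A minimal sketch of the histogram step in claim 7 follows. It assumes (the patent does not say so here) that each pixel's improved-LBP bit array has been packed into an 8-bit integer code, and that the sub-region histograms are combined by simple concatenation:

```python
def region_histogram(codes, n_bins=256):
    """Distribution histogram of packed LBP codes (0..255) for one sub-region."""
    hist = [0] * n_bins
    for code in codes:
        hist[code] += 1
    return hist

def feature_histogram(region_codes):
    """Concatenate per-sub-region histograms into one feature vector.

    `region_codes` is a list of lists: the integer LBP codes of the
    pixels in each local sub-region of the key frame image. A downstream
    classifier can then decide live vs. spoof from this vector.
    """
    feature = []
    for codes in region_codes:
        feature.extend(region_histogram(codes))
    return feature
```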
8. The method of claim 7, wherein after the living body detection of the human object in the target expression video according to the feature histogram, the method further comprises:
determining that the target expression video presents a risk under the condition that the living body detection of the human object in the target expression video fails, and generating corresponding second-type prompt information;
and sending the second-type prompt information to the target terminal.
9. The method of claim 3, wherein authenticating the target user according to the target expression video comprises:
extracting human face features from the key frame image;
performing feature matching on the face features and a face feature template of a target user to obtain a feature matching result;
and determining whether the target user identity authentication is passed or not according to the feature matching result.
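The feature-matching step in claim 9 could, for example, be realized as a cosine-similarity comparison of the extracted face feature against the enrolled template. The claim fixes neither the metric nor the decision threshold, so both are assumptions in this sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identity_verified(face_feature, template, threshold=0.8):
    """Feature matching result: True if the face feature extracted from the
    key frame image matches the target user's face feature template.
    The 0.8 threshold is an arbitrary illustrative value.
    """
    return cosine_similarity(face_feature, template) >= threshold
```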
10. The method of claim 9, further comprising:
and under the condition that the identity authentication of the target user is determined to pass according to the feature matching result, corresponding target data processing is carried out according to the target data processing request.
11. The method of claim 3, wherein performing expression type recognition according to the key frame image to obtain a corresponding expression type recognition result comprises:
and processing the key frame image by using a preset expression recognition model to obtain a corresponding expression type recognition result.
12. A user authentication apparatus, applied to a server, the apparatus comprising:
the receiving module is used for receiving and responding to a target data processing request initiated by a target user through a target terminal, and determining a target expression type to be verified currently;
the sending module is used for sending a target expression video acquisition request to the target terminal according to the target expression type;
the first detection module is used for receiving a target expression video fed back by the target terminal, and detecting whether the expression type of the facial expression in the target expression video is the target expression type;
the second detection module is used for performing living body detection on a human object in the target expression video by using an improved local binary operator under the condition that it is determined that the expression type of the facial expression in the target expression video is the target expression type;
and the verification module is used for performing identity verification on the target user according to the target expression video under the condition that it is determined that the living body detection of the human object in the target expression video passes.
13. A server comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the method of any one of claims 1 to 11.
14. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the method of any one of claims 1 to 11.
15. A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
CN202211142989.0A 2022-09-20 2022-09-20 User identity authentication method and device and server Pending CN115455393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211142989.0A CN115455393A (en) 2022-09-20 2022-09-20 User identity authentication method and device and server


Publications (1)

Publication Number Publication Date
CN115455393A true CN115455393A (en) 2022-12-09

Family

ID=84303878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211142989.0A Pending CN115455393A (en) 2022-09-20 2022-09-20 User identity authentication method and device and server

Country Status (1)

Country Link
CN (1) CN115455393A (en)

Similar Documents

Publication Publication Date Title
US10664581B2 (en) Biometric-based authentication method, apparatus and system
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
CN113366487A (en) Operation determination method and device based on expression group and electronic equipment
CN107545241A (en) Neural network model is trained and biopsy method, device and storage medium
US11126827B2 (en) Method and system for image identification
US20150227946A1 (en) Generating barcode and authenticating based on barcode
US9202035B1 (en) User authentication based on biometric handwriting aspects of a handwritten code
CN110795714A (en) Identity authentication method and device, computer equipment and storage medium
CN109194689B (en) Abnormal behavior recognition method, device, server and storage medium
CN109600336A (en) Store equipment, identifying code application method and device
KR20190122206A (en) Identification methods and devices, electronic devices, computer programs and storage media
CN112418167A (en) Image clustering method, device, equipment and storage medium
CN112330331A (en) Identity verification method, device and equipment based on face recognition and storage medium
CN111738199B (en) Image information verification method, device, computing device and medium
CN111191207A (en) Electronic file control method and device, computer equipment and storage medium
CN114861241A (en) Anti-peeping screen method based on intelligent detection and related equipment thereof
CN114386013A (en) Automatic student status authentication method and device, computer equipment and storage medium
JP2015041307A (en) Collation device and collation method and collation system and computer program
CN111178455B (en) Image clustering method, system, device and medium
CN113032047A (en) Face recognition system application method, electronic device and storage medium
CN112633200A (en) Human face image comparison method, device, equipment and medium based on artificial intelligence
CN108596127B (en) Fingerprint identification method, identity verification method and device and identity verification machine
CN115906028A (en) User identity verification method and device and self-service terminal
CN115455393A (en) User identity authentication method and device and server
US10867022B2 (en) Method and apparatus for providing authentication using voice and facial data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination