CN111914769B - User validity determination method, device, computer readable storage medium and equipment - Google Patents

User validity determination method, device, computer readable storage medium and equipment

Info

Publication number
CN111914769B
CN111914769B (application CN202010783859.XA)
Authority
CN
China
Prior art keywords
similarity
feature vector
preset
real-time image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010783859.XA
Other languages
Chinese (zh)
Other versions
CN111914769A (en)
Inventor
田植良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010783859.XA priority Critical patent/CN111914769B/en
Publication of CN111914769A publication Critical patent/CN111914769A/en
Application granted granted Critical
Publication of CN111914769B publication Critical patent/CN111914769B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Abstract

The application provides a user validity determination method, a user validity determination device, a computer readable storage medium, and an electronic device, relating to the field of computer technology. The method includes: acquiring a real-time image corresponding to the current user and calculating a first similarity between the real-time image and a legal portrait; determining a boundary for distinguishing the portrait area from the background area in the real-time image; calculating a second similarity between the background area and a preset background; and determining the validity of the current user according to the first similarity and the second similarity. Implementing this technical scheme improves the accuracy of user validity determination.

Description

User validity determination method, device, computer readable storage medium and equipment
Technical Field
The present invention relates to the field of computer technology, and in particular, to a user validity determination method, a user validity determination device, a computer readable storage medium, and an electronic apparatus.
Background
With the rapid development of computer technology, the unlocking modes of mobile terminals are no longer limited to passwords and gestures, but have evolved to fingerprint and facial-feature unlocking. Generally, when a user needs to unlock a mobile terminal, lighting up the screen lets the fingerprint identification module or the camera module acquire the fingerprint or face information required for unlocking; the validity of that information is then verified, and if verification succeeds, the current user is judged to be a legal user.
However, both the fingerprint recognition and the face recognition unlocking methods carry a risk of misidentification, so legal user data stored in the mobile terminal is liable to be stolen. How to improve the accuracy of user validity determination has therefore become an urgent problem.
It should be noted that the information disclosed in the foregoing background section is only for enhancing understanding of the background of the present application and thus may include information that does not form the prior art that is already known to those of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide a user validity judging method, a user validity judging device, a computer readable storage medium and electronic equipment, which can improve the accuracy of user validity judgment.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
According to an aspect of the present application, there is provided a user validity determination method, including:
acquiring a real-time image corresponding to a current user and calculating a first similarity between the real-time image and a legal portrait;
determining a boundary for distinguishing a portrait area from a background area in the real-time image;
calculating a second similarity between the background area and a preset background;
and carrying out validity judgment on the current user according to the first similarity and the second similarity.
In one exemplary embodiment of the present application, calculating the first similarity between the real-time image and the legal portrait includes:
extracting features of the real-time image through a first neural network to obtain a first feature vector;
and obtaining a legal feature vector corresponding to the legal portrait, and calculating the first similarity between the legal feature vector and the first feature vector.
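The patent does not fix a similarity measure for comparing the two feature vectors; cosine similarity is one common choice. The sketch below uses illustrative, hand-picked vectors rather than output of a real network:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity in [-1, 1]; higher means more similar."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    return float(np.dot(a, b) / denom)

# Hypothetical feature vectors for the real-time image and the stored
# legal portrait (placeholders, not outputs of the first neural network).
live_vec = np.array([0.2, 0.9, 0.4])
legal_vec = np.array([0.25, 0.85, 0.35])
first_similarity = cosine_similarity(live_vec, legal_vec)
```

Any monotone similarity (negative Euclidean distance, dot product of normalized vectors) would serve the same role in the steps that follow.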
In an exemplary embodiment of the present application, a first neural network includes a convolution layer, an excitation layer, a pooling layer, and a full-connection layer, and feature extraction is performed on a real-time image through the first neural network to obtain a first feature vector, including:
extracting features of the real-time image through the convolution layer to obtain a first reference feature vector;
activating the first reference feature vector through the excitation layer to obtain a second reference feature vector;
sampling the second reference feature vector through the pooling layer to obtain a third reference feature vector;
and performing dimension reduction processing on the third reference feature vector through the full connection layer to obtain a first feature vector.
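The four-layer pipeline above (convolution, excitation, pooling, full connection) can be illustrated with a toy NumPy forward pass. The kernel, weights, and sizes are arbitrary placeholders, not the patent's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D convolution (cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x: np.ndarray) -> np.ndarray:   # excitation layer
    return np.maximum(x, 0.0)

def max_pool(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling (down-sampling)."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# Toy 8x8 "real-time image", 3x3 kernel, and a fully connected layer
# projecting down to a 4-dimensional first feature vector.
image = rng.random((8, 8))
kernel = rng.random((3, 3))
fc_weights = rng.random((4, 9))          # 3x3 pooled map flattens to 9

ref1 = conv2d(image, kernel)             # first reference feature (6x6)
ref2 = relu(ref1)                        # second reference (activated)
ref3 = max_pool(ref2)                    # third reference (3x3 after pooling)
first_feature_vector = fc_weights @ ref3.flatten()   # dimension reduction
```

Each intermediate array corresponds to one of the reference feature vectors named in the embodiment.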
In an exemplary embodiment of the present application, before determining the boundary for distinguishing the portrait area from the background area in the real-time image, the method further includes:
preprocessing the real-time image; wherein the preprocessing includes gray scale processing or binarization processing.
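A minimal sketch of the two preprocessing options. The BT.601 luma weights and the fixed threshold are conventional choices, not values specified by the patent:

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Gray scale processing: ITU-R BT.601 luma weights; input HxWx3 in [0, 255]."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray: np.ndarray, threshold: float = 127.5) -> np.ndarray:
    """Binarization processing: fixed-threshold map to {0, 1}."""
    return (gray > threshold).astype(np.uint8)

# Toy 2x2 frame with a single white pixel.
frame = np.zeros((2, 2, 3))
frame[0, 0] = [255, 255, 255]
gray = to_grayscale(frame)
mask = binarize(gray)
```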
In one exemplary embodiment of the present application, determining the boundary for distinguishing the portrait area from the background area in the real-time image includes:
extracting features of the real-time image through a second neural network to obtain a second feature vector;
classifying each pixel point in the real-time image according to the second feature vector to obtain class-one pixels belonging to the portrait area and class-two pixels belonging to the background area;
and determining the boundary for distinguishing the portrait area from the background area according to the class-one pixels and the class-two pixels, and marking the boundary.
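Given the per-pixel classification, the boundary can be recovered by marking portrait pixels that touch the background. The 4-neighbourhood rule below is one plausible realisation, not mandated by the patent:

```python
import numpy as np

def boundary_pixels(mask: np.ndarray) -> np.ndarray:
    """Mark class-one (portrait) pixels that have at least one
    class-two (background) neighbour in the 4-neighbourhood."""
    padded = np.pad(mask, 1, constant_values=0)
    # 1 where all four neighbours are also portrait pixels.
    all_neighbours = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                      padded[1:-1, :-2] & padded[1:-1, 2:])
    # Boundary = portrait pixel with at least one non-portrait neighbour.
    return mask & ~all_neighbours

# Toy 5x5 classification map: 1 = portrait pixel, 0 = background pixel.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1                 # 3x3 portrait block
boundary = boundary_pixels(mask)   # the ring around the block's interior
```

For the 3x3 block, the result marks the eight outer pixels of the block and leaves its centre unmarked.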
In an exemplary embodiment of the present application, after determining the boundary for distinguishing the portrait area from the background area according to the class-one pixels and the class-two pixels and performing boundary marking, the method further includes:
updating a sample set for training the second neural network according to the boundary markers;
and training a second neural network according to the updated sample set.
In an exemplary embodiment of the present application, the number of preset backgrounds is at least one, and calculating the second similarity between the background area and the preset background includes:
extracting features of the background area through a third neural network to obtain a third feature vector;
acquiring at least one preset feature vector corresponding to each preset background;
and calculating second similarity between each preset feature vector and the third feature vector.
In an exemplary embodiment of the present application, after performing feature extraction on the background area through a third neural network to obtain a third feature vector, the method further includes:
generating a sample image corresponding to a third feature vector through a third neural network;
and adjusting network parameters corresponding to the third neural network according to the sample image.
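One way to read this step is as an autoencoder-style update: the network that produced the third feature vector also regenerates a sample image from it, and the reconstruction error drives the parameter adjustment. The linear model, sizes, learning rate, and single SGD step below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "third neural network": encoder W produces the third feature
# vector, decoder V generates a sample image from it.
W = rng.normal(size=(4, 16)) * 0.1    # encoder: 16-pixel patch -> 4-dim vector
V = rng.normal(size=(16, 4)) * 0.1    # decoder: 4-dim vector -> 16-pixel patch

background = rng.random(16)           # flattened background-area patch
third_vec = W @ background            # third feature vector
sample_image = V @ third_vec          # generated sample image

# Adjust network parameters from the reconstruction error (one SGD step).
err = sample_image - background
loss_before = float(err @ err)
lr = 0.01
V -= lr * np.outer(err, third_vec)            # gradient of loss w.r.t. V
W -= lr * np.outer(V.T @ err, background)     # gradient w.r.t. W (using the
                                              # updated V; a simplification)
loss_after = float(np.sum((V @ (W @ background) - background) ** 2))
```

After the step the reconstruction loss shrinks, which is the sense in which the sample image is used to adjust the network parameters.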
In an exemplary embodiment of the present application, performing validity determination on the current user according to the first similarity and the second similarity includes:
selecting the highest target similarity from the plurality of second similarities;
acquiring a weight value corresponding to the first similarity and the target similarity;
calculating a weighted sum of the first similarity and the target similarity according to the weight value;
and judging the validity of the current user according to the weighted sum.
In an exemplary embodiment of the present application, making a validity determination for a current user based on a weighted sum includes:
if the weighted sum is greater than or equal to the preset weighted sum, judging the current user as a legal user;
if the weighted sum is smaller than the preset weighted sum, the current user is judged to be an illegal user.
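The fused decision above can be sketched as follows. The weights and the preset weighted sum (the acceptance threshold) are placeholders, since the patent leaves their concrete values to the implementation:

```python
def is_legitimate(first_sim: float, second_sims: list[float],
                  w_face: float = 0.7, w_background: float = 0.3,
                  threshold: float = 0.8) -> bool:
    """Weighted fusion of portrait and background similarity.

    `w_face`, `w_background`, and `threshold` are illustrative values.
    """
    target_sim = max(second_sims)   # highest second similarity
    weighted = w_face * first_sim + w_background * target_sim
    return weighted >= threshold    # legal user iff weighted sum passes

# A matching face in a known background passes ...
accept = is_legitimate(0.95, [0.4, 0.9])
# ... while a weaker face match in an unknown background does not.
reject = is_legitimate(0.85, [0.1, 0.2])
```

The second call illustrates the scheme's point: even a plausible face match can be rejected when no preset background resembles the scene.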
In an exemplary embodiment of the present application, after determining that the current user is a legal user, the method further includes:
if the target similarity is smaller than the preset similarity, updating the preset feature vector according to the background area.
In an exemplary embodiment of the present application, updating the preset feature vector according to the background area includes:
selecting a specific preset background with highest similarity with a background area from at least one preset background according to the vector similarity;
and carrying out average value calculation on the preset feature vector of the specific preset background and the third feature vector, and determining a calculation result as the preset feature vector after updating the specific preset background.
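A sketch of this update rule, assuming cosine similarity as the vector-similarity measure (the patent only requires some measure of vector similarity):

```python
import numpy as np

def update_preset_vector(preset_vecs: list, third_vec: np.ndarray) -> int:
    """Average the most similar stored background vector with the new one.

    Returns the index of the specific preset background that was updated.
    """
    sims = [float(np.dot(p, third_vec) /
                  (np.linalg.norm(p) * np.linalg.norm(third_vec)))
            for p in preset_vecs]
    idx = int(np.argmax(sims))                       # most similar preset
    preset_vecs[idx] = (preset_vecs[idx] + third_vec) / 2.0   # mean update
    return idx

# Two hypothetical preset background feature vectors.
presets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
new_background = np.array([0.9, 0.1])
updated = update_preset_vector(presets, new_background)
```

The averaging lets a preset background drift gradually toward the environments in which the legal user actually unlocks the device.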
According to an aspect of the present application, there is provided a user validity determination apparatus including an image acquisition unit, a similarity calculation unit, a boundary determination unit, and a validity determination unit, wherein:
the image acquisition unit is used for acquiring a real-time image corresponding to the current user;
the similarity calculation unit is used for calculating a first similarity between the real-time image and the legal portrait;
a boundary determination unit for determining a boundary for distinguishing a portrait area from a background area in the real-time image;
the similarity calculation unit is also used for calculating a second similarity between the background area and a preset background;
and the validity determination unit is used for judging the validity of the current user according to the first similarity and the second similarity.
In an exemplary embodiment of the present application, the similarity calculation unit calculates the first similarity between the real-time image and the legal portrait by:
extracting features of the real-time image through a first neural network to obtain a first feature vector;
and obtaining a legal feature vector corresponding to the legal portrait, and calculating the first similarity between the legal feature vector and the first feature vector.
In an exemplary embodiment of the present application, the first neural network includes a convolution layer, an excitation layer, a pooling layer, and a full connection layer, and the similarity calculation unit performs feature extraction on the real-time image through the first neural network to obtain a first feature vector, including:
extracting features of the real-time image through the convolution layer to obtain a first reference feature vector;
activating the first reference feature vector through the excitation layer to obtain a second reference feature vector;
sampling the second reference feature vector through the pooling layer to obtain a third reference feature vector;
and performing dimension reduction processing on the third reference feature vector through the full connection layer to obtain a first feature vector.
In an exemplary embodiment of the present application, the above apparatus further includes an image preprocessing unit, wherein:
an image preprocessing unit for preprocessing the real-time image before the boundary determination unit determines the boundary for distinguishing the portrait area and the background area in the real-time image; wherein the preprocessing includes gray scale processing or binarization processing.
In an exemplary embodiment of the present application, the boundary determining unit determines a boundary for distinguishing a portrait area from a background area in a real-time image, including:
extracting features of the real-time image through a second neural network to obtain a second feature vector;
classifying each pixel point in the real-time image according to the second feature vector to obtain class-one pixels belonging to the portrait area and class-two pixels belonging to the background area;
and determining the boundary for distinguishing the portrait area from the background area according to the class-one pixels and the class-two pixels, and marking the boundary.
In an exemplary embodiment of the present application, the apparatus further includes a sample updating unit and a network training unit, wherein:
the sample updating unit is used for updating a sample set for training the second neural network according to the boundary markers after the boundary determination unit determines the boundary for distinguishing the portrait area from the background area according to the class-one pixels and the class-two pixels and performs boundary marking;
and the network training unit is used for training the second neural network according to the updated sample set.
In an exemplary embodiment of the present application, the number of preset backgrounds is at least one, and the similarity calculating unit calculates a second similarity between the background area and the preset backgrounds, including:
extracting features of the background area through a third neural network to obtain a third feature vector;
acquiring at least one preset feature vector corresponding to each preset background;
and calculating second similarity between each preset feature vector and the third feature vector.
In an exemplary embodiment of the present application, the above apparatus further comprises a sample generation unit, wherein:
the sample generation unit is used for generating a sample image corresponding to the third feature vector through the third neural network after the similarity calculation unit performs feature extraction on the background area through the third neural network to obtain the third feature vector;
and the network training unit is also used for adjusting network parameters corresponding to the third neural network according to the sample image.
In an exemplary embodiment of the present application, the validity determination unit performs validity determination on the current user according to the first similarity and the second similarity, including:
selecting the highest target similarity from the plurality of second similarities;
acquiring a weight value corresponding to the first similarity and the target similarity;
calculating a weighted sum of the first similarity and the target similarity according to the weight value;
and judging the validity of the current user according to the weighted sum.
In an exemplary embodiment of the present application, the validity determination unit performs validity determination on the current user according to the weighted sum, including:
if the weighted sum is greater than or equal to the preset weighted sum, judging the current user as a legal user;
if the weighted sum is smaller than the preset weighted sum, the current user is judged to be an illegal user.
In an exemplary embodiment of the present application, the above apparatus further includes a vector update unit, wherein:
the vector updating unit is used for updating the preset feature vector according to the background area if the target similarity is smaller than the preset similarity after the validity determination unit determines that the current user is a legal user.
In an exemplary embodiment of the present application, the vector updating unit updates a preset feature vector according to a background area, including:
selecting a specific preset background with highest similarity with a background area from at least one preset background according to the vector similarity;
and carrying out average value calculation on the preset feature vector of the specific preset background and the third feature vector, and determining a calculation result as the preset feature vector after updating the specific preset background.
According to an aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any of the above via execution of the executable instructions.
According to an aspect of the present application, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods described above.
Exemplary embodiments of the present application may have some or all of the following benefits:
In the user validity determination method provided in an exemplary embodiment of the present application, a real-time image corresponding to the current user may be obtained and a first similarity between the real-time image and a legal portrait may be calculated; a boundary for distinguishing the portrait area from the background area is determined in the real-time image; a second similarity between the background area and a preset background is calculated; and the validity of the current user is determined according to the first similarity and the second similarity. Because this scheme combines the background similarity with the portrait similarity, the accuracy of user validity determination is improved, which in turn safeguards the data security of legal users and reduces the risk of their data being stolen.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a schematic diagram of an exemplary system architecture to which a user legitimacy determination method and a user legitimacy determination apparatus of embodiments of the present application may be applied;
FIG. 2 illustrates a schematic diagram of a computer system suitable for use in implementing embodiments of the present application;
FIG. 3 schematically illustrates a flow chart of a user legitimacy determination method according to one embodiment of the present application;
FIG. 4 schematically illustrates a real-time image contrast schematic before and after keypoint calibration according to one embodiment of the present application;
FIG. 5 schematically illustrates a structural schematic of a first neural network according to one embodiment of the present application;
FIG. 6 schematically illustrates a structural schematic of a second neural network according to one embodiment of the present application;
FIG. 7 schematically illustrates a structural schematic of a third neural network according to one embodiment of the present application;
FIG. 8 schematically illustrates a schematic diagram of a memory unit for storing preset feature vectors according to one embodiment of the present application;
FIG. 9 schematically illustrates a flow chart of a user legitimacy determination method according to one embodiment of the present application;
fig. 10 schematically shows a block diagram of the structure of the user validity determination apparatus in one embodiment according to the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known aspects have not been shown or described in detail to avoid obscuring aspects of the present application.
Furthermore, the drawings are only schematic illustrations of the present application and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 is a schematic diagram of a system architecture of an exemplary application environment to which a user validity determination method and a user validity determination apparatus according to an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of the terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others. The terminal devices 101, 102, 103 may be various electronic devices with display screens including, but not limited to, desktop computers, portable computers, smart phones, tablet computers, and the like. It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 105 may be a server cluster formed by a plurality of servers.
The user validity determination method provided in the embodiments of the present application is generally performed by the terminal device 101, 102 or 103, and accordingly, the user validity determination apparatus is generally provided in the terminal device 101, 102 or 103. However, as those skilled in the art will readily understand, the method may also be performed by the server 105, in which case the apparatus may be provided in the server 105; this exemplary embodiment is not particularly limited in this respect. For example, in an exemplary embodiment, the terminal device 101, 102 or 103 may acquire a real-time image corresponding to the current user and calculate a first similarity between the real-time image and the legal portrait; determine a boundary for distinguishing the portrait area from the background area in the real-time image; calculate a second similarity between the background area and a preset background; and determine the validity of the current user according to the first similarity and the second similarity.
Fig. 2 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
It should be noted that, the computer system 200 of the electronic device shown in fig. 2 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU) 201, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data required for the system operation are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other through a bus 204. An input/output (I/O) interface 205 is also connected to bus 204.
The following components are connected to the I/O interface 205: an input section 206 including a keyboard, a mouse, and the like; an output portion 207 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 208 including a hard disk or the like; and a communication section 209 including a network interface card such as a LAN card, a modem, and the like. The communication section 209 performs communication processing via a network such as the internet. The drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 210 as needed, so that a computer program read therefrom is installed into the storage section 208 as needed.
In particular, according to embodiments of the present application, the processes described below with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 209, and/or installed from the removable medium 211. The computer program, when executed by the Central Processing Unit (CPU) 201, performs the various functions defined in the methods and apparatus of the present application.
The method of the present application may be implemented based on artificial intelligence. Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is an integrated branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in ways similar to human intelligence; that is, it studies the design principles and implementation methods of intelligent machines so that machines can sense, reason, and make decisions. Artificial intelligence is a comprehensive discipline spanning a wide range of fields, covering both hardware-level and software-level techniques.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
Currently, mobile terminals support the following unlocking modes: 1. unlocking with an unlocking password preset by the user; 2. unlocking with a preset gesture; 3. unlocking with a pre-enrolled fingerprint; 4. unlocking by face recognition. Modes 1 and 2 carry a high risk of being cracked, and modes 3 and 4 carry a risk of misidentification. If an illegal user is identified as a legal user, the data security of the legal user is greatly threatened.
Based on the above-described problems, the present exemplary embodiment provides a user validity determination method. The user validity determination method may be applied to the server 105 or one or more of the terminal devices 101, 102, 103, which is not particularly limited in the present exemplary embodiment. Referring to fig. 3, the user validity determination method may include the following steps S310 to S340:
step S310: and acquiring a real-time image corresponding to the current user and calculating a first similarity between the real-time image and the legal portrait.
Step S320: boundaries for distinguishing portrait areas from background areas are determined in the real-time image.
Step S330: and calculating a second similarity between the background area and the preset background.
Step S340: and carrying out validity judgment on the current user according to the first similarity and the second similarity.
By implementing the method shown in fig. 3, the user validity judgment can be performed by combining background similarity with portrait similarity, which improves the accuracy of the judgment, safeguards the data security of legitimate users, and reduces the risk of their data being stolen.
Next, the above steps of the present exemplary embodiment will be described in more detail.
In step S310, a real-time image corresponding to the current user is acquired and a first similarity between the real-time image and the legal image is calculated.
Specifically, the real-time image may be an image acquired by a front camera and/or a rear camera; the legal portrait may be a pre-stored image containing the facial features of a legitimate user, which serve as that user's representation; the number of legal portraits may be one or more, which is not limited in the embodiments of the present application; the first similarity characterizes the degree of similarity between the real-time image and the legal portrait, and may also be interpreted as the probability that the current user is a legitimate user.
Additionally, optionally, before calculating the first similarity between the real-time image and the legal portrait, the method may further include: recognizing the face position in the real-time image and performing key point calibration according to the face position, so as to adjust the face pose in the real-time image to be consistent with the face pose in the legal portrait, which helps to improve the accuracy of the user validity judgment; the key points at least include the feature points that make up the facial features (eyes, nose, mouth, etc.).
Additionally, optionally, before calculating the first similarity between the real-time image and the legal image, the method may further include: if the number of the faces in the real-time image is detected to be larger than 1, selecting a target face from the plurality of faces according to the area of the area where the faces are located; furthermore, the real-time image may be cut according to the target face to exclude other faces other than the target face.
Referring to fig. 4, fig. 4 schematically illustrates a real-time image contrast diagram before and after keypoint calibration according to an embodiment of the present application. As shown in fig. 4, the real-time image 402 may be obtained by recognizing the face position in the real-time image and performing the key point calibration on the real-time image 401 according to the face position. Further, by calculating the first similarity between the real-time image 402 and the legitimate person image, the accuracy of the legitimacy determination can be improved.
In addition, optionally, acquiring a real-time image corresponding to the current user includes: when an unlocking request is detected, acquiring a real-time image corresponding to the current user; or when the online payment request is detected, acquiring a real-time image corresponding to the current user; or when the identity verification request is detected, acquiring a real-time image corresponding to the current user.
The method for acquiring the real-time image corresponding to the current user may specifically be: triggering the front camera to start, and shooting at least one real-time image. Specifically, if the number of real-time images is plural, the manner of calculating the first similarity between the real-time images and the legal image may be specifically: and selecting a target real-time image from at least one real-time image, and calculating a first similarity between the target real-time image and the legal image. Further, the mode of selecting the target real-time image from the at least one real-time image may specifically be: and calculating the definition corresponding to the at least one real-time image, and selecting a target real-time image from the at least one real-time image according to the definition. It should be noted that the first similarity may be a vector similarity, the vector similarity may be represented by a cosine distance or a euclidean distance, and the second similarity is the same.
Additionally, optionally, after calculating the first similarity between the real-time image and the legal portrait, the method may further include: detecting whether the first similarity is greater than a preset similarity; if so, proceeding to step S320; if not, feeding back prompt information indicating that the current user is an illegal user. Further optionally, if the first similarity is less than or equal to the preset similarity, the following operations may also be performed: outputting an interactive window, displaying at least one security question through the interactive window, and detecting the security answer input by the user for each security question; if a security answer matching its security question exists, the image library for storing legal portraits is updated according to the real-time image.
As an alternative embodiment, calculating the first similarity between the real-time image and the legal person image includes: extracting features of the real-time image through a first neural network to obtain a first feature vector; and obtaining legal feature vectors corresponding to the legal figures, and calculating first similarity between the legal feature vectors and the first feature vectors.
Specifically, the first neural network may be a neural network model based on a convolutional neural network (Convolutional Neural Network, CNN), and the second and third neural networks described below are likewise CNN-based; CNN is a kind of feedforward neural network. In addition, the first, second and third neural networks each have different network parameters, and the network parameters at least include weight values and bias terms. The legal feature vector corresponding to the legal portrait may be a pre-computed vector used as a representation of a plurality of facial features (such as eyes, nose, mouth, ears, etc.) in the legal portrait; the legal portrait is the portrait corresponding to a legitimate user. Similarly, the first feature vector is used to characterize a plurality of facial features in the real-time image.
Additionally, optionally, the manner of calculating the first similarity between the legal feature vector and the first feature vector may be: calculating the Euclidean distance between the legal feature vector and the first feature vector as the first similarity; or calculating the cosine distance between the legal feature vector and the first feature vector as the first similarity; or calculating the Tanimoto coefficient from the legal feature vector and the first feature vector to characterize the first similarity; or calculating the Pearson correlation coefficient from the legal feature vector and the first feature vector to characterize the first similarity; the embodiments of the present application are not limited in this respect.
Specifically, the euclidean distance is the true distance between two points in an m-dimensional space or the natural length of a vector, and the euclidean distance in two-dimensional and three-dimensional spaces is the true distance between two points; the pearson correlation coefficient is obtained by dividing the covariance by the standard deviation of the two variables; the cosine distance is a measure for measuring the difference between two individuals by taking the cosine value of the included angle of two vectors in the vector space; the Tanimoto coefficient is a generalized Jaccard similarity, and if x and y are both binary vectors, the Tanimoto coefficient is equivalent to the Jaccard Distance (Jaccard Distance), which is an index for measuring the difference between two sets.
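As an illustrative, non-authoritative sketch, the four similarity measures described above can be computed with NumPy as follows (the function names are our own, not from the patent):

```python
import numpy as np

def euclidean_distance(a, b):
    # True distance between two points in m-dimensional space.
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    # Cosine of the angle between the two vectors in vector space.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def tanimoto_coefficient(a, b):
    # Generalized Jaccard similarity; reduces to the Jaccard index for binary vectors.
    dot = float(np.dot(a, b))
    return dot / (float(np.dot(a, a)) + float(np.dot(b, b)) - dot)

def pearson_correlation(a, b):
    # Covariance divided by the product of the two standard deviations.
    return float(np.corrcoef(a, b)[0, 1])
```

For two proportional vectors such as (1, 2, 3) and (2, 4, 6), the cosine similarity and Pearson correlation are both 1, while the Euclidean distance and Tanimoto coefficient still register the difference in magnitude.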
Therefore, by implementing the alternative embodiment, the feature vector of the real-time image can be extracted through the neural network, so that legal user identification can be performed according to the feature vector, the identification accuracy is further improved, and the data security of the legal user is ensured.
As an optional embodiment, the first neural network includes a convolution layer, an excitation layer, a pooling layer, and a full-connection layer, and the feature extraction is performed on the real-time image through the first neural network to obtain a first feature vector, which includes: extracting features of the real-time image through the convolution layer to obtain a first reference feature vector; activating the first reference feature vector through the excitation layer to obtain a second reference feature vector; sampling the second reference feature vector through the pooling layer to obtain a third reference feature vector; and performing dimension reduction processing on the third reference feature vector through the full connection layer to obtain a first feature vector.
Specifically, the number of layers of the convolution layer, the excitation layer and the pooling layer included in the first neural network is not limited; in addition, each layer uses a different filter so as to extract image features with different emphases; the first reference feature vector, the second reference feature vector and the third reference feature vector may each be represented by a feature map. The excitation function in the excitation layer may be, for example, tanh(x) = 2σ(2x) − 1 (where σ is the sigmoid function σ(x) = 1/(1 + e^(−x))) or the ReLU function ReLU(z) = max(0, z).
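The identity tanh(x) = 2σ(2x) − 1 and the ReLU function can be checked numerically; a minimal sketch:

```python
import math

def sigmoid(x):
    # Sigmoid excitation function: s(x) = 1 / (1 + e^(-x)).
    return 1.0 / (1.0 + math.exp(-x))

def tanh_via_sigmoid(x):
    # tanh(x) = 2*sigmoid(2x) - 1, as stated in the text.
    return 2.0 * sigmoid(2.0 * x) - 1.0

def relu(z):
    # ReLU excitation function: ReLU(z) = max(0, z).
    return max(0.0, z)
```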
In addition, optionally, the method for obtaining the third reference feature vector by sampling the second reference feature vector by the pooling layer may specifically be: and carrying out maximum pooling or average pooling on the second reference feature vector through the pooling layer so as to realize sampling of the second reference feature vector, thereby obtaining a third reference feature vector.
In addition, optionally, the method for performing the dimension reduction processing on the third reference feature vector through the full connection layer to obtain the first feature vector may specifically be: the preset convolution kernel and the third reference feature vector are convolved through a plurality of neurons in the full connection layer to realize dimension reduction of the third reference feature vector, so that the first feature vector is obtained and the distributed features in the real-time image can be mapped to a sample marking space. For example, the dimension corresponding to the third reference feature vector may be 7×7×512, the number of neurons in the full connection layer may be 4096, and the preset convolution kernel 7×7×512×4096; the dimension corresponding to the first feature vector is then 1×4096.
For example, referring to fig. 5, fig. 5 schematically illustrates a schematic structural diagram of a first neural network according to an embodiment of the present application. As shown in fig. 5, the first neural network 500 may include a convolutional layer 501, an excitation layer 502, a pooling layer 503, and a fully-connected layer 504. Specifically, after inputting the real-time image into the convolution layer 501, the convolution layer 501 may perform feature extraction on the real-time image, obtain a first reference feature vector, and use the first reference feature vector as an input of the excitation layer 502; furthermore, the excitation layer 502 may perform activation processing on the first reference feature vector, to obtain a second reference feature vector, and use the second reference feature vector as an input of the pooling layer 503; furthermore, the pooling layer 503 may perform average pooling or global pooling on the second reference feature vector, so as to obtain a third reference feature vector, and use the third reference feature vector as an input of the full connection layer 504; furthermore, the full-connection layer 504 may perform dimension reduction processing on the third reference feature vector to obtain a first feature vector, and further determine, according to the first feature vector, a probability of including a facial feature in the real-time image, as a recognition result. Wherein the fully-connected layer 504 may include a plurality of neurons in one dimension for dimension reduction of the third reference feature vector. It should be noted that the structure shown in fig. 5 is only exemplary, and the number of the convolution layer 501, the excitation layer 502, the pooling layer 503, and the full-connection layer 504 is not limited in the practical application.
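The fully-connected dimension reduction above amounts to flattening the feature map and applying one matrix multiply. The sketch below is scaled down (7×7×8 → 16) from the 7×7×512 → 4096 example purely to keep the arrays small, and the random weights stand in for trained parameters:

```python
import numpy as np

# Scaled-down illustration of the full connection layer's dimension reduction
# (the text's example maps a 7x7x512 feature map to a 1x4096 vector; the
# arithmetic here is identical, just with smaller dimensions).
rng = np.random.default_rng(0)
third_reference = rng.standard_normal((7, 7, 8))       # third reference feature vector
weights = rng.standard_normal((7 * 7 * 8, 16)) * 0.1   # flattened "preset convolution kernel"
bias = np.zeros(16)

# Flatten the feature map to 1 x (7*7*8), then reduce to 1 x 16.
first_feature_vector = third_reference.reshape(1, -1) @ weights + bias
```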
It can be seen that, by implementing this alternative embodiment, feature vectors for characterizing facial features can be determined by extracting features of a real-time image, so that legal user determination can be facilitated according to the feature vectors, so as to improve the accuracy of determination.
In step S320, a boundary for distinguishing a portrait area from a background area is determined in the real-time image.
Specifically, the real-time image is composed of a portrait area and a background area. In addition, optionally, after determining the boundary for distinguishing the portrait area from the background area in the real-time image, the method may further include: changing the boundary pixel values according to the boundary determination result so as to emphasize or highlight the displayed boundary; the manner of changing the boundary pixel values may be: changing the values of the R, G, B and alpha channels of the boundary pixel points, where alpha represents the transparency of a boundary pixel point and R, G, B represent the values of the red, green and blue channels respectively.
In addition, optionally, after determining the boundary for distinguishing the portrait area and the background area in the real-time image, the method may further include: marking the real-time image according to the boundary judgment result, and outputting the marked real-time image; wherein the marked real-time image is used for highlighting the boundary for distinguishing the portrait area and the background area.
As an alternative embodiment, before determining the boundary for distinguishing the portrait area from the background area in the real-time image, the method further includes: preprocessing the real-time image; wherein the preprocessing includes gray scale processing or binarization processing.
Specifically, the preprocessing may further include noise processing, morphological opening and closing operations, and the like, which are not limited in the embodiments of the present application. Optionally, the noise processing of the real-time image may specifically be: determining, through a preset filter, the neighborhood pixel mean or weighted mean of each pixel point as that pixel point's output, and traversing each pixel point in the real-time image to denoise it. Alternatively, the noise processing may specifically be: processing the real-time image by median filtering to denoise it.
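A minimal sketch of the median-filtering denoising option described above, together with a simple binarization helper (the neighborhood-mean variant is analogous; all names here are illustrative):

```python
import numpy as np

def median_filter(img, k=3):
    # Denoise a grayscale image by replacing each pixel with the median of
    # its k x k neighborhood; edges are handled with reflection padding.
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def binarize(img, threshold=128):
    # Binarization preprocessing: pixels at or above the threshold become 255.
    return np.where(img >= threshold, 255, 0).astype(img.dtype)
```

A single salt-noise pixel (e.g. a 255 inside a uniform region of 100s) is removed by the median filter because eight of the nine neighborhood values are 100.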
Therefore, by implementing the alternative embodiment, the accuracy of feature extraction can be improved through preprocessing the real-time image, so that the accuracy of user legitimacy judgment can be improved.
As an alternative embodiment, determining the boundary for distinguishing the portrait area from the background area in the real-time image includes: extracting features of the real-time image through a second neural network to obtain a second feature vector; classifying each pixel point in the real-time image according to the second feature vector to obtain first-class pixels belonging to the portrait area and second-class pixels belonging to the background area; and determining the boundary for distinguishing the portrait area from the background area according to the first-class pixels and the second-class pixels, and marking the boundary.
In particular, the second feature vector is used to highlight the boundary in the real-time image, and the dimensions of the second feature vector and the first feature vector may be the same or different.
In addition, optionally, the manner of classifying each pixel point in the real-time image according to the second feature vector to obtain the first-class pixels belonging to the portrait area and the second-class pixels belonging to the background area may specifically be: inputting the second feature vector into a support vector machine (SVM), so that the support vector machine classifies all pixel points in the real-time image to obtain the first-class pixels belonging to the portrait area and the second-class pixels belonging to the background area. It should be noted that the support vector machine method is based on the VC-dimension theory of statistical learning and the principle of structural risk minimization; it seeks the best trade-off between model complexity and learning ability given limited sample information, so as to obtain the best generalization ability. The support vector machine may be used to solve two-class, multi-class and regression problems, and its decision function can be expressed as a functional expression.
In addition, optionally, the manner of determining the boundary for distinguishing the portrait area and the background area according to the first-class pixels and the second-class pixels and performing boundary marking may be as follows: and determining boundary pixel points in the real-time image according to the first-class pixels and the second-class pixels and marking the boundary pixel points.
Referring to fig. 6, fig. 6 schematically illustrates a structural diagram of a second neural network according to an embodiment of the present application. As shown in fig. 6, the second neural network 610 may include at least: a convolution layer 613, a pooling layer 612 and a full connection layer 611. Specifically, the real-time image may be input into the second neural network 610; the convolution layer 613 convolves the real-time image, that is, extracts features, and the convolution result is input into the pooling layer 612, which performs global pooling/average pooling on it; the full connection layer 611 then performs dimension reduction on the sampling result, and the feature vector obtained by dimension reduction is input into the support vector machine 620, so that the support vector machine 620 may classify each pixel point in the real-time image according to that feature vector to obtain a classification result, that is, the first-class pixels belonging to the portrait area and the second-class pixels belonging to the background area. Further, the boundary for distinguishing the portrait area from the background area can be determined and marked based on the first-class pixels and the second-class pixels.
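The SVM classifier itself is not reproduced here; assuming pixel classification has already produced a portrait/background mask, the subsequent boundary determination and marking step could be sketched as follows (a simple 4-connectivity rule, not the patent's specific method):

```python
import numpy as np

def mark_boundary(mask):
    # mask: 1 for first-class (portrait) pixels, 0 for second-class
    # (background) pixels. A boundary pixel is defined here as a portrait
    # pixel with at least one 4-connected background neighbor.
    h, w = mask.shape
    boundary = np.zeros_like(mask, dtype=bool)
    for i in range(h):
        for j in range(w):
            if mask[i, j] != 1:
                continue
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] == 0:
                    boundary[i, j] = True
                    break
    return boundary
```

On a 3×3 block of portrait pixels inside a 5×5 background, this marks the 8-pixel ring as the boundary and leaves the interior pixel unmarked.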
Therefore, by implementing the alternative embodiment, the background area can be obtained by judging the boundary area, so that the similarity comparison of the background area is facilitated, and the accuracy of the legitimacy judgment of the user is further improved.
As an alternative embodiment, after determining the boundary for distinguishing the portrait area and the background area according to the first-class pixels and the second-class pixels and making the boundary marking, the method further includes: updating a sample set for training the second neural network according to the boundary markers; and training a second neural network according to the updated sample set.
Specifically, the sample set includes a plurality of noted sample images, and training the second neural network through the noted sample images can improve accuracy of feature extraction. Wherein, the way to update the sample set for training the second neural network according to the boundary markers may be: the boundary markers are processed on the real-time image to obtain a new sample image, and the new sample image is added into a sample set for training the second neural network to realize updating of the sample set.
It can be seen that implementing this alternative embodiment, the accuracy of the computation of the second feature vector can be improved by updating the sample set, thereby improving the accuracy of the boundary marking.
In step S330, a second similarity between the background area and the preset background is calculated.
Specifically, the number of preset backgrounds may be one or more, which is not limited in the embodiments of the present application. In addition, the method may further include the following steps: after a successful verification of the user's validity is detected, acquiring an environment image at regular intervals (once per unit time) until a screen-off operation, shutdown operation or the like is detected; the environment image may be acquired through a front camera and/or a rear camera; furthermore, the background library for storing preset backgrounds may be updated according to the environment images, so as to improve the accuracy of subsequent second-similarity calculations, improve the user experience and safeguard data security.
As an alternative embodiment, the number of preset backgrounds is at least one, and calculating the second similarity between the background area and the preset backgrounds includes: extracting features of the background area through a third neural network to obtain a third feature vector; acquiring at least one preset feature vector corresponding to each preset background; and calculating second similarity between each preset feature vector and the third feature vector.
Specifically, the third neural network may be configured to perform feature extraction on a background area in the real-time image, obtain a third feature vector, and generate a sample image according to the third feature vector, so as to train the third neural network according to the sample image. In addition, the preset feature vector may be used as a representation of the preset background.
In addition, optionally, the method for obtaining the preset feature vector corresponding to the at least one preset background respectively may specifically be: at least one preset background is read from the storage unit, and at least one preset feature vector corresponding to the preset background is read. Specifically, the manner of reading at least one preset background from the storage unit and reading the preset feature vectors corresponding to the at least one preset background respectively may specifically be: and determining the category (such as supermarkets, home, schools and the like) to which the background area belongs, and reading preset feature vectors corresponding to all preset backgrounds in the category.
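A hedged sketch of computing the second similarity between the third feature vector and each preset feature vector read from the storage unit; cosine similarity is used here as one of the options the text allows, and the function names are illustrative:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def second_similarities(third_vector, preset_vectors):
    # One second similarity per preset feature vector; the largest of these
    # later serves as the target similarity in step S340.
    return [cosine(third_vector, p) for p in preset_vectors]
```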
Therefore, by implementing the alternative embodiment, the validity of the user can be further verified through extracting the characteristics of the background area, so that the data security is guaranteed.
As an optional embodiment, after performing feature extraction on the background area through a third neural network to obtain a third feature vector, the method further includes: generating a sample image corresponding to a third feature vector through a third neural network; and adjusting network parameters corresponding to the third neural network according to the sample image.
In particular, the sample image may be used to train a third neural network. Optionally, the generating, by the third neural network, the sample image corresponding to the third feature vector may specifically be: and performing further feature extraction on the third feature vector through a convolution layer in the third neural network, performing global pooling/average pooling on the feature extraction result through a pooling layer, and performing dimension reduction on the pooling result through a full-connection layer, so as to generate a sample image.
In addition, optionally, the method for adjusting the network parameters corresponding to the third neural network according to the sample image may specifically be: calculating a loss function between the sample image and the marked real-time image; furthermore, the network parameters corresponding to the third neural network may be adjusted according to the loss function until the loss function is less than a preset loss value. The loss function may be: a regression loss function such as squared error loss, absolute error loss or Huber loss; a classification loss function such as binary cross entropy or hinge loss; a multi-class loss function such as multi-class cross entropy loss; or the KL divergence (Kullback-Leibler divergence loss); the embodiments of the present application are not limited in this respect.
Referring to fig. 7, fig. 7 schematically illustrates a structural schematic diagram of a third neural network according to an embodiment of the present application. As shown in fig. 7, the third neural network 700 may include: convolution layer 701, pooling layer 702, full connection layer 703, convolution layer 704, pooling layer 705, and full connection layer 706. Specifically, the marked real-time image may be input into the third neural network 700, so that the convolution layer 701 performs feature extraction on the marked real-time image, and then the pooling layer 702 may perform global pooling/average pooling on the feature extraction result, and then the full connection layer 703 may perform dimension reduction on the pooling result, so as to obtain a third feature vector for characterizing the background area. Further, the convolution layer 704 may perform further feature extraction on the third feature vector, and further, perform global pooling/average pooling on the feature extraction result through the pooling layer 705, and further, perform dimension reduction on the pooled result through the full connection layer 706, so as to generate a sample image for training the third neural network 700.
Referring to fig. 8, fig. 8 schematically illustrates a schematic diagram of a memory unit for storing a preset feature vector according to an embodiment of the present application, based on a third feature vector obtained by the third neural network shown in fig. 7. As shown in fig. 8, the storage unit 800 is configured to store a preset feature vector 1 801, a preset feature vector 2 802, preset feature vectors 3, … …, and a preset feature vector n 804, where n is a positive integer greater than or equal to 4. It should be noted that, the preset feature vector 1 801, the preset feature vector 2 802, the preset feature vectors 3 803 and … …, and the preset feature vector n 804 correspond to different preset backgrounds, respectively. Specifically, when the third feature vector for characterizing the background area is acquired, all or part of the preset feature vector in the storage unit 800 may be read. Further, a second similarity between the read preset feature vector and the third feature vector, respectively, may be calculated.
Therefore, by implementing the alternative embodiment, the parameter adjustment can be performed on the third neural network according to the background feature extraction for the user validity judgment every time, so as to improve the feature extraction accuracy of the third neural network and improve the accuracy of the user validity judgment.
In step S340, a validity determination is made for the current user based on the first similarity and the second similarity.
As an alternative embodiment, the determining of validity of the current user according to the first similarity and the second similarity includes: selecting the highest target similarity from the plurality of second similarities; acquiring a weight value corresponding to the first similarity and the target similarity; calculating a weighted sum of the first similarity and the target similarity according to the weight value; and judging the validity of the current user according to the weighted sum.
Specifically, the target similarity represents the maximum similarity between the background area and the preset backgrounds; the weight values may be preset values for balancing the relative weights of the first similarity and the target similarity in the weighted sum. For example, the manner of calculating the weighted sum of the first similarity and the target similarity according to the weight values may specifically be: given a first similarity of 0.8 with weight 0.7 and a target similarity of 0.9 with weight 0.3, calculating the weighted sum 0.8 × 0.7 + 0.9 × 0.3 = 0.83.
In addition, optionally, the weighted sum of the first similarity and the target similarity may be calculated according to the weight value corresponding to the first similarity and/or the target similarity. Further, the validity judgment based on the weighted sum may specifically be: normalizing the weighted sum of the first similarity and the target similarity by the sum of the weights to obtain a weighted average, and then performing the validity judgment on the current user according to that weighted average.
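The weighted-sum calculation in the example above can be expressed directly (the 0.7/0.3 weights come from the example, not mandated values):

```python
def weighted_sum(first_similarity, target_similarity, w_first=0.7, w_target=0.3):
    # Weighted combination of the portrait similarity (first similarity) and
    # the best background similarity (target similarity).
    return first_similarity * w_first + target_similarity * w_target

def is_legitimate(first_similarity, target_similarity, preset_weighted_sum=0.8):
    # Decision rule: a weighted sum at or above the preset threshold means
    # the current user is judged legitimate. The 0.8 threshold is an
    # illustrative assumption, not a value from the patent.
    return weighted_sum(first_similarity, target_similarity) >= preset_weighted_sum

score = weighted_sum(0.8, 0.9)  # 0.8*0.7 + 0.9*0.3 = 0.83
```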
Therefore, by implementing the optional embodiment, the validity of the current user can be judged according to the portrait similarity and the background similarity, and compared with the mode of judging the validity only through the portrait similarity in the prior art, the judging accuracy of the method is higher, the data security of the legal user can be ensured, and the illegal user is prevented from stealing the data after identity authentication through illegal means.
As an alternative embodiment, making a validity determination for the current user based on the weighted sum, comprises: if the weighted sum is greater than or equal to the preset weighted sum, judging the current user as a legal user; if the weighted sum is smaller than the preset weighted sum, the current user is judged to be an illegal user.
Specifically, the preset weighted sum serves as the decision threshold for judging a legitimate user. In addition, optionally, after determining that the current user is an illegal user, the method may further include: outputting prompt information indicating that identity authentication has failed. Further, the method may also include: if the number of identity authentication failures per unit time is higher than a preset number, disabling the identity authentication function for a preset time period and sending an anomaly notification to at least one trusted device, so as to alert the legitimate user; a trusted device (such as a wristband or a tablet computer) is a preset device that has established a connection with the terminal.
Therefore, by implementing the alternative embodiment, whether the current user is a legal user can be judged according to the preset weighted sum, so that the judgment accuracy of the user legitimacy can be improved.
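The weighted-sum decision described above can be sketched as follows. The weight values (0.6/0.4) and the decision threshold (0.7) are illustrative assumptions, not values given by this application:

```python
def legality_decision(first_sim, target_sim,
                      w_face=0.6, w_bg=0.4, threshold=0.7):
    """Combine the portrait similarity (first_sim) and the best
    background similarity (target_sim) into a weighted sum and compare
    it against a preset weighted sum acting as the decision threshold.
    All three constants are assumptions for illustration."""
    weighted = w_face * first_sim + w_bg * target_sim
    return weighted >= threshold

# A close face match in a familiar background passes; a weak match fails.
```

Any monotone combination rule could be substituted here; the application only requires that the decision be made from the weighted sum against a preset value.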
As an optional embodiment, after determining that the current user is a legal user, the method further includes: if the target similarity is smaller than the preset similarity, updating the preset feature vector according to the background area.
Specifically, if the target similarity is smaller than the preset similarity, the background area is indicated as a very common background, and since the current user is determined as a legal user, the background area in the real-time image can be added to the database, so that the preset background is enriched, and the accuracy of the user validity determination is improved.
It can be seen that implementing this alternative embodiment enables a continuous update of the preset feature vector to continuously improve the accuracy of the user legitimacy determination.
As an alternative embodiment, updating the preset feature vector according to the background area includes: selecting, from the at least one preset background according to vector similarity, the specific preset background with the highest similarity to the background area; computing the mean of the preset feature vector of the specific preset background and the third feature vector, and determining the calculation result as the updated preset feature vector of the specific preset background.
Specifically, the specific preset background may be represented by an annotated image. In addition, optionally, the mean value calculation on the preset feature vector of the specific preset background and the third feature vector may specifically be: determining the preset feature vector of the specific preset background, and computing an element-wise mean of that preset feature vector and the third feature vector, so that the dimension of the updated preset feature vector is unchanged.
It can be seen that by implementing this alternative embodiment, the preset feature vectors can be updated continuously without increasing the number of preset feature vectors in the storage unit, so that the occupation of storage resources can be reduced, thereby being beneficial to optimizing the resource utilization rate.
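The in-place update described above can be sketched as follows; cosine similarity is assumed as the vector similarity measure, which this application does not fix:

```python
import numpy as np

def update_preset_vector(preset_vectors, third_vector):
    """Replace the stored background vector most similar to the new
    background vector with the element-wise mean of the two, keeping
    both the vector dimension and the number of stored vectors
    unchanged. Cosine similarity is an assumed measure."""
    sims = [float(p @ third_vector
                  / (np.linalg.norm(p) * np.linalg.norm(third_vector)))
            for p in preset_vectors]
    idx = int(np.argmax(sims))
    preset_vectors[idx] = (preset_vectors[idx] + third_vector) / 2.0
    return idx
```

Because the update is a mean rather than an append, the storage footprint of the preset backgrounds stays constant, which matches the resource-utilization point made above.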
Fig. 9 schematically shows a flow chart of a user validity determination method according to an embodiment of the present application. As shown in fig. 9, the user validity determination method includes steps S900 to S928, wherein:
Step S900: acquiring a real-time image corresponding to the current user, and performing feature extraction on the real-time image through a first neural network to obtain a first feature vector.
Step S902: obtaining the legal feature vector corresponding to the legal portrait, and calculating a first similarity between the legal feature vector and the first feature vector.
Step S904: performing feature extraction on the real-time image through a second neural network to obtain a second feature vector.
Step S906: classifying each pixel point in the real-time image according to the second feature vector to obtain first-class pixels belonging to the portrait area and second-class pixels belonging to the background area.
Step S908: determining the boundary for distinguishing the portrait area from the background area according to the first-class pixels and the second-class pixels, and marking the boundary.
Step S910: performing feature extraction on the background area through a third neural network to obtain a third feature vector.
Step S912: obtaining at least one preset feature vector, each corresponding to a preset background.
Step S914: calculating a second similarity between each preset feature vector and the third feature vector.
Step S916: generating a sample image corresponding to the third feature vector through the third neural network, and adjusting the network parameters of the third neural network according to the sample image.
Step S918: selecting the highest target similarity from the plurality of second similarities.
Step S920: acquiring the weight values corresponding to the first similarity and the target similarity, and calculating a weighted sum of the first similarity and the target similarity according to the weight values; if the weighted sum is greater than or equal to the preset weighted sum, executing step S922; if the weighted sum is smaller than the preset weighted sum, executing step S924.
Step S922: determining that the current user is a legal user, and further executing step S926.
Step S924: determining that the current user is an illegal user.
Step S926: if the target similarity is smaller than the preset similarity, selecting the specific preset background with the highest similarity to the background area from the at least one preset background according to vector similarity.
Step S928: computing the element-wise mean of the preset feature vector of the specific preset background and the third feature vector, and determining the calculation result as the updated preset feature vector of the specific preset background.
It should be noted that steps S900 to S928 correspond to the steps and embodiments shown in fig. 3; for their specific implementation, refer to the description of fig. 3, which is not repeated here.
Therefore, by implementing the method shown in fig. 9, the user validity determination can be performed by combining the background similarity with the portrait similarity, which improves the accuracy of the determination, safeguards the data security of legal users, and reduces the risk of their data being stolen.
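Steps S902 to S928 (after the neural-network feature extraction) can be condensed into one hedged sketch. Cosine similarity, the weight values, and both thresholds are assumptions for illustration only:

```python
import numpy as np

def cosine(a, b):
    # Assumed similarity measure; the application leaves it unspecified.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def decide_user(face_vec, legal_vec, bg_vec, preset_bg_vecs,
                w_face=0.6, w_bg=0.4,
                decision_threshold=0.7, update_threshold=0.9):
    """Illustrative end-to-end flow: first similarity (S902), second
    similarities and target selection (S914/S918), weighted-sum
    decision (S920-S924), and conditional update of the matched preset
    background vector (S926/S928)."""
    first_sim = cosine(face_vec, legal_vec)                       # S902
    second_sims = [cosine(bg_vec, p) for p in preset_bg_vecs]     # S914
    idx = int(np.argmax(second_sims))                             # S918
    target_sim = second_sims[idx]
    weighted = w_face * first_sim + w_bg * target_sim             # S920
    legal = weighted >= decision_threshold                        # S922/S924
    if legal and target_sim < update_threshold:                   # S926
        preset_bg_vecs[idx] = (preset_bg_vecs[idx] + bg_vec) / 2  # S928
    return legal, weighted
```

The sketch assumes the feature vectors of steps S900 to S910 have already been computed by the three networks.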
Further, in this example embodiment, a user validity determination apparatus is also provided. Referring to fig. 10, the user validity determination apparatus 1000 may include: an image acquisition unit 1001, a similarity calculation unit 1002, a boundary determination unit 1003, and a validity determination unit 1004, wherein:
an image acquisition unit 1001, configured to acquire a real-time image corresponding to a current user;
a similarity calculating unit 1002, configured to calculate a first similarity between the real-time image and the legal portrait;
a boundary determination unit 1003 for determining a boundary for distinguishing a portrait area and a background area in a real-time image;
The similarity calculating unit 1002 is further configured to calculate a second similarity between the background area and a preset background;
the validity determining unit 1004 is configured to perform validity determination on the current user according to the first similarity and the second similarity.
Therefore, by implementing the device shown in fig. 10, the user validity determination can be performed by combining the background similarity with the portrait similarity, which improves the accuracy of the determination, safeguards the data security of legal users, and reduces the risk of their data being stolen.
In an exemplary embodiment of the present application, the similarity calculating unit 1002 calculates the first similarity between the real-time image and the legal portrait, including:
extracting features of the real-time image through a first neural network to obtain a first feature vector;
and obtaining the legal feature vector corresponding to the legal portrait, and calculating the first similarity between the legal feature vector and the first feature vector.
Therefore, by implementing the alternative embodiment, the feature vector of the real-time image can be extracted through the neural network, so that legal user identification can be performed according to the feature vector, the identification accuracy is further improved, and the data security of the legal user is ensured.
In an exemplary embodiment of the present application, the first neural network includes a convolution layer, an excitation layer, a pooling layer, and a full-connection layer, and the similarity calculating unit 1002 performs feature extraction on the real-time image through the first neural network to obtain a first feature vector, including:
extracting features of the real-time image through the convolution layer to obtain a first reference feature vector;
activating the first reference feature vector through the excitation layer to obtain a second reference feature vector;
sampling the second reference feature vector through the pooling layer to obtain a third reference feature vector;
and performing dimension reduction processing on the third reference feature vector through the full connection layer to obtain a first feature vector.
It can be seen that, by implementing this alternative embodiment, feature vectors for characterizing facial features can be determined by extracting features of a real-time image, so that legal user determination can be facilitated according to the feature vectors, so as to improve the accuracy of determination.
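A minimal NumPy sketch of the four-layer extraction described above (convolution, excitation, pooling, full connection). ReLU excitation, 2x2 max pooling, and a single-channel valid convolution are assumptions, since the application does not fix these choices:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution over a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def extract_first_feature(img, kernel, fc_weights):
    ref1 = conv2d(img, kernel)      # convolution layer -> first reference vector
    ref2 = np.maximum(ref1, 0.0)    # excitation layer (ReLU assumed) -> second
    h, w = ref2.shape               # pooling layer (2x2 max assumed) -> third
    ref3 = ref2[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).max(axis=(1, 3))
    return fc_weights @ ref3.ravel()  # full connection: dimension reduction
```

In practice the kernel and fully connected weights would be learned; fixed arrays are used here only so the data flow through the four layers is visible.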
In an exemplary embodiment of the present application, the above apparatus further includes an image preprocessing unit (not shown), wherein:
an image preprocessing unit for preprocessing a real-time image before the boundary determination unit 1003 determines a boundary for distinguishing a portrait area and a background area in the real-time image; wherein the preprocessing includes gray scale processing or binarization processing.
Therefore, by implementing the alternative embodiment, the accuracy of feature extraction can be improved through preprocessing the real-time image, so that the accuracy of user legitimacy judgment can be improved.
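The preprocessing step (gray scale processing or binarization processing) can be sketched as follows; the luminance weights and the binarization threshold are conventional choices, not values specified by this application:

```python
import numpy as np

def preprocess(rgb, binarize=False, threshold=128):
    """Convert an H x W x 3 image to grayscale using the common
    ITU-R 601 luminance weights (an assumption); optionally binarize
    against a fixed threshold (also an assumption)."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    if binarize:
        return (gray >= threshold).astype(np.uint8) * 255
    return gray
```

Either output form reduces the input to one channel before boundary determination, which is what lets the later feature extraction operate on a simpler signal.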
In an exemplary embodiment of the present application, the boundary determination unit 1003 determines a boundary for distinguishing a portrait area and a background area in a real-time image, including:
extracting features of the real-time image through a second neural network to obtain a second feature vector;
classifying each pixel point in the real-time image according to the second feature vector to obtain first-class pixels belonging to the portrait area and second-class pixels belonging to the background area;
and determining the boundary for distinguishing the portrait area from the background area according to the first-class pixels and the second-class pixels, and marking the boundary.
Therefore, by implementing the alternative embodiment, the background area can be obtained by judging the boundary area, so that the similarity comparison of the background area is facilitated, and the accuracy of the legitimacy judgment of the user is further improved.
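Once every pixel has been classified, the boundary marking reduces to finding pixels whose neighbour carries the other class label. The following sketch assumes the per-pixel class map is already available (1 = portrait, 0 = background) and uses 4-neighbour connectivity, which the application does not specify:

```python
import numpy as np

def mark_boundary(mask):
    """Flag pixels that have at least one 4-neighbour of the other
    class; these pixels form the portrait/background boundary."""
    boundary = np.zeros_like(mask, dtype=bool)
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(mask, (di, dj), axis=(0, 1))
        diff = mask != shifted
        # np.roll wraps around; discard the wrapped edge comparisons
        if di == 1:  diff[0, :] = False
        if di == -1: diff[-1, :] = False
        if dj == 1:  diff[:, 0] = False
        if dj == -1: diff[:, -1] = False
        boundary |= diff
    return boundary
```

The resulting boolean map can be drawn onto the real-time image as the boundary marker, and also serves as the label source when the sample set for the second neural network is updated.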
In an exemplary embodiment of the present application, the apparatus further includes a sample updating unit (not shown) and a network training unit (not shown), wherein:
a sample updating unit configured to update a sample set for training the second neural network according to the boundary markers after the boundary determining unit 1003 determines the boundary for distinguishing the portrait region and the background region according to the first class pixels and the second class pixels and performs the boundary markers;
And the network training unit is used for training the second neural network according to the updated sample set.
It can be seen that implementing this alternative embodiment, the accuracy of the computation of the second feature vector can be improved by updating the sample set, thereby improving the accuracy of the boundary marking.
In an exemplary embodiment of the present application, the number of preset backgrounds is at least one, and the similarity calculating unit 1002 calculates a second similarity between the background area and the preset backgrounds, including:
extracting features of the background area through a third neural network to obtain a third feature vector;
acquiring at least one preset feature vector corresponding to each preset background;
and calculating second similarity between each preset feature vector and the third feature vector.
Therefore, by implementing the alternative embodiment, the validity of the user can be further verified through extracting the characteristics of the background area, so that the data security is guaranteed.
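The second-similarity computation and the selection of the target similarity can be sketched as follows, assuming cosine similarity as the measure (the application leaves the measure unspecified):

```python
import numpy as np

def background_target_similarity(third_vec, preset_vecs):
    """Compute a second similarity between the third feature vector and
    each preset background vector, and return the highest one together
    with the index of the matching preset background."""
    sims = [float(third_vec @ p
                  / (np.linalg.norm(third_vec) * np.linalg.norm(p)))
            for p in preset_vecs]
    idx = int(np.argmax(sims))
    return sims[idx], idx
```

The returned target similarity feeds the weighted-sum decision, and the returned index identifies the specific preset background that would be updated after a successful determination.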
In an exemplary embodiment of the present application, the above apparatus further comprises a sample generation unit (not shown), wherein:
a sample generation unit, configured to generate, through the third neural network, a sample image corresponding to the third feature vector after the similarity calculation unit 1002 performs feature extraction on the background area through the third neural network to obtain the third feature vector;
And the network training unit is also used for adjusting network parameters corresponding to the third neural network according to the sample image.
Therefore, by implementing the alternative embodiment, the parameter adjustment can be performed on the third neural network according to the background feature extraction for the user validity judgment every time, so as to improve the feature extraction accuracy of the third neural network and improve the accuracy of the user validity judgment.
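One way to read "generating a sample image from the third feature vector and adjusting network parameters according to it" is a reconstruction-style (autoencoder-like) update. The following linear encoder/decoder with a squared-error gradient step is only a minimal stand-in for the third neural network, not the application's actual architecture:

```python
import numpy as np

def autoencoder_step(x, W_enc, W_dec, lr=0.01):
    """Encode the background patch x into a feature vector, decode it
    back into a 'sample image', and nudge both weight matrices down the
    squared reconstruction error. Returns the pre-update loss."""
    z = W_enc @ x                      # third feature vector
    x_hat = W_dec @ z                  # generated sample image
    err = x_hat - x                    # reconstruction error
    W_dec -= lr * np.outer(err, z)     # adjust decoder parameters
    W_enc -= lr * np.outer(W_dec.T @ err, x)  # adjust encoder parameters
    return float(err @ err)
```

Repeating this step each time a background is processed makes the reconstruction error shrink, which is the sense in which the feature extraction accuracy of the third network improves with use.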
In an exemplary embodiment of the present application, the validity determination unit 1004 performs validity determination on the current user according to the first similarity and the second similarity, including:
selecting the highest target similarity from the plurality of second similarities;
acquiring a weight value corresponding to the first similarity and the target similarity;
calculating a weighted sum of the first similarity and the target similarity according to the weight value;
and judging the validity of the current user according to the weighted sum.
Therefore, by implementing this optional embodiment, the validity of the current user can be determined from both the portrait similarity and the background similarity. Compared with the prior-art approach of determining validity through portrait similarity alone, this method is more accurate, ensures the data security of the legal user, and prevents an illegal user from stealing data after passing identity authentication by illegal means.
In an exemplary embodiment of the present application, the validity determination unit 1004 performs validity determination on the current user according to the weighted sum, including:
if the weighted sum is greater than or equal to the preset weighted sum, judging the current user as a legal user;
if the weighted sum is smaller than the preset weighted sum, the current user is judged to be an illegal user.
Therefore, by implementing the alternative embodiment, whether the current user is a legal user can be judged according to the preset weighted sum, so that the judgment accuracy of the user legitimacy can be improved.
In an exemplary embodiment of the present application, the above apparatus further includes a vector update unit (not shown), wherein:
the vector updating unit is configured to update the preset feature vector according to the background area if the target similarity is smaller than the preset similarity after the validity determining unit 1004 determines that the current user is a valid user.
It can be seen that implementing this alternative embodiment enables a continuous update of the preset feature vector to continuously improve the accuracy of the user legitimacy determination.
In an exemplary embodiment of the present application, the vector updating unit updates a preset feature vector according to a background area, including:
Selecting a specific preset background with highest similarity with a background area from at least one preset background according to the vector similarity;
and carrying out average value calculation on the preset feature vector of the specific preset background and the third feature vector, and determining a calculation result as the preset feature vector after updating the specific preset background.
It can be seen that by implementing this alternative embodiment, the preset feature vectors can be updated continuously without increasing the number of preset feature vectors, so that the occupation of storage resources can be reduced, which is beneficial to optimizing the resource utilization rate.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Since each functional module of the user validity determination apparatus according to the exemplary embodiment of the present application corresponds to a step of the exemplary embodiment of the user validity determination method described above, for details not disclosed in the embodiment of the apparatus of the present application, reference is made to the embodiment of the user validity determination method described above.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
As yet another aspect, the present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the methods provided in the various alternative implementations described above.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (13)

1. A user validity determination method, comprising:
acquiring a real-time image corresponding to a current user and calculating a first similarity between the real-time image and a legal portrait;
determining a boundary for distinguishing a portrait area from a background area in the real-time image;
calculating a second similarity between a third feature vector corresponding to the background area and a preset feature vector of a preset background; the number of the preset backgrounds is at least one;
Carrying out validity judgment on the current user according to the first similarity and the second similarity;
after the current user is judged to be a legal user, selecting the highest target similarity from a plurality of second similarities, and selecting a specific preset background with the highest similarity with the background area from at least one preset background according to vector similarity if the target similarity is smaller than the preset similarity;
determining preset feature vectors of a specific preset background, carrying out one-to-one corresponding mean value calculation on the preset feature vectors of the specific preset background and elements in a third feature vector so that the dimension of the updated preset feature vector is unchanged, and determining a calculation result as the updated preset feature vector of the specific preset background.
2. The method of claim 1, wherein calculating a first similarity between the real-time image and the legal portrait comprises:
extracting features of the real-time image through a first neural network to obtain a first feature vector;
and obtaining a legal feature vector corresponding to the legal portrait, and calculating the first similarity between the legal feature vector and the first feature vector.
3. The method of claim 2, wherein the first neural network comprises a convolution layer, an excitation layer, a pooling layer, and a full-connection layer, and wherein performing feature extraction on the real-time image through the first neural network to obtain the first feature vector comprises:
extracting features of the real-time image through the convolution layer to obtain a first reference feature vector;
activating the first reference feature vector through the excitation layer to obtain a second reference feature vector;
sampling the second reference feature vector through the pooling layer to obtain a third reference feature vector;
and performing dimension reduction processing on the third reference feature vector through the full connection layer to obtain the first feature vector.
4. The method of claim 1, wherein prior to determining the boundary in the real-time image that distinguishes between portrait areas and background areas, the method further comprises:
preprocessing the real-time image; wherein the preprocessing includes gray scale processing or binarization processing.
5. The method of claim 1, wherein determining a boundary in the real-time image that distinguishes between portrait areas and background areas comprises:
Extracting features of the real-time image through a second neural network to obtain a second feature vector;
classifying each pixel point in the real-time image according to the second feature vector to obtain first-class pixels belonging to the portrait area and second-class pixels belonging to the background area;
and determining boundaries for distinguishing the portrait area and the background area according to the first class pixels and the second class pixels and marking the boundaries.
6. The method of claim 5, wherein after performing boundary determination and boundary marking on the real-time image based on the first-class pixels and the second-class pixels, the method further comprises:
updating a sample set for training the second neural network according to the boundary markers;
training the second neural network according to the updated sample set.
7. The method of claim 1, wherein calculating a second similarity between a third feature vector corresponding to the background region and a preset feature vector of a preset background comprises:
extracting features of the background area through a third neural network to obtain a third feature vector;
acquiring at least one preset feature vector corresponding to each preset background;
And calculating second similarity between each preset feature vector and the third feature vector.
8. The method of claim 7, wherein after the feature extraction of the background region by the third neural network, the method further comprises:
generating, by the third neural network, a sample image corresponding to the third feature vector;
and adjusting network parameters corresponding to the third neural network according to the sample image.
9. The method of claim 7, wherein the determining of the validity of the current user based on the first similarity and the second similarity comprises:
selecting the highest target similarity from the plurality of second similarities;
acquiring weight values corresponding to the first similarity and the target similarity;
calculating a weighted sum of the first similarity and the target similarity according to the weight value;
and carrying out validity judgment on the current user according to the weighted sum.
10. The method of claim 9, wherein making a validity determination for the current user based on the weighted sum comprises:
If the weighted sum is greater than or equal to a preset weighted sum, judging that the current user is a legal user;
and if the weighted sum is smaller than the preset weighted sum, judging that the current user is an illegal user.
11. A user validity determination apparatus, comprising:
the image acquisition unit is used for acquiring a real-time image corresponding to the current user;
the similarity calculation unit is used for calculating first similarity between the real-time image and the legal portrait;
a boundary determination unit configured to determine a boundary for distinguishing a portrait area from a background area in the real-time image;
the similarity calculation unit is further configured to calculate a second similarity between a third feature vector corresponding to the background area and a preset feature vector of a preset background; the number of the preset backgrounds is at least one;
the validity judging unit is used for judging the validity of the current user according to the first similarity and the second similarity;
the device further comprises a vector updating unit, wherein the vector updating unit is used for selecting the highest target similarity from the plurality of second similarities after the legitimacy judging unit judges that the current user is a legal user, and selecting a specific preset background with highest similarity with the background area from the at least one preset background according to the vector similarity if the target similarity is smaller than the preset similarity; determining preset feature vectors of a specific preset background, carrying out one-to-one corresponding mean value calculation on the preset feature vectors of the specific preset background and elements in a third feature vector so that the dimension of the updated preset feature vector is unchanged, and determining a calculation result as the updated preset feature vector of the specific preset background.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any of claims 1-10.
13. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-10 via execution of the executable instructions.
CN202010783859.XA 2020-08-06 2020-08-06 User validity determination method, device, computer readable storage medium and equipment Active CN111914769B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010783859.XA CN111914769B (en) 2020-08-06 2020-08-06 User validity determination method, device, computer readable storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010783859.XA CN111914769B (en) 2020-08-06 2020-08-06 User validity determination method, device, computer readable storage medium and equipment

Publications (2)

Publication Number Publication Date
CN111914769A CN111914769A (en) 2020-11-10
CN111914769B true CN111914769B (en) 2024-01-26

Family

ID=73288328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010783859.XA Active CN111914769B (en) 2020-08-06 2020-08-06 User validity determination method, device, computer readable storage medium and equipment

Country Status (1)

Country Link
CN (1) CN111914769B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112906671B (en) * 2021-04-08 2024-03-15 平安科技(深圳)有限公司 Method and device for identifying false face-examination picture, electronic equipment and storage medium
CN115115843B (en) * 2022-06-02 2023-08-22 马上消费金融股份有限公司 Data processing method and device

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427421A (en) * 2015-11-16 2016-03-23 苏州市公安局虎丘分局 Entrance guard control method based on face recognition
CN106295522A (en) * 2016-07-29 2017-01-04 武汉理工大学 A kind of two-stage anti-fraud detection method based on multi-orientation Face and environmental information
CN106960180A (en) * 2017-02-24 2017-07-18 深圳市普波科技有限公司 A kind of intelligent control method, apparatus and system
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 A kind of face identification method and device
CN107862247A (en) * 2017-10-13 2018-03-30 平安科技(深圳)有限公司 A kind of human face in-vivo detection method and terminal device
CN108229362A (en) * 2017-12-27 2018-06-29 杭州悉尔科技有限公司 A kind of binocular recognition of face biopsy method based on access control system
CN108629305A (en) * 2018-04-27 2018-10-09 朱旭辉 A kind of face recognition method
CN108734057A (en) * 2017-04-18 2018-11-02 北京旷视科技有限公司 The method, apparatus and computer storage media of In vivo detection
CN108830062A (en) * 2018-05-29 2018-11-16 努比亚技术有限公司 Face identification method, mobile terminal and computer readable storage medium
CN108875484A (en) * 2017-09-22 2018-11-23 北京旷视科技有限公司 Face unlocking method, device and system and storage medium for mobile terminal
CN108960145A (en) * 2018-07-04 2018-12-07 北京蜂盒科技有限公司 Facial image detection method, device, storage medium and electronic equipment
CN109035299A (en) * 2018-06-11 2018-12-18 平安科技(深圳)有限公司 Method for tracking target, device, computer equipment and storage medium
KR101954763B1 (en) * 2017-09-04 2019-03-06 동국대학교 산학협력단 Face recognition access control apparatus and operation method thereof
CN109635625A (en) * 2018-10-16 2019-04-16 平安科技(深圳)有限公司 Smart identity checking method, equipment, storage medium and device
CN110097570A (en) * 2019-04-30 2019-08-06 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN110163884A (en) * 2019-05-17 2019-08-23 温州大学 A kind of single image dividing method based on full connection deep learning neural network
CN110751041A (en) * 2019-09-19 2020-02-04 平安科技(深圳)有限公司 Certificate authenticity verification method, system, computer equipment and readable storage medium
WO2020038136A1 (en) * 2018-08-24 2020-02-27 深圳前海达闼云端智能科技有限公司 Facial recognition method and apparatus, electronic device and computer-readable medium
CN110889377A (en) * 2019-11-28 2020-03-17 深圳市丰巢科技有限公司 Method and device for identifying abnormality of advertising object, server device and storage medium
CN110969067A (en) * 2018-09-30 2020-04-07 北京小米移动软件有限公司 User registration and authentication method and device
CN111242097A (en) * 2020-02-27 2020-06-05 腾讯科技(深圳)有限公司 Face recognition method and device, computer readable medium and electronic equipment
CN111488943A (en) * 2020-04-16 2020-08-04 上海芯翌智能科技有限公司 Face recognition method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
EchoFace: Acoustic Sensor-Based Media Attack Detection for Face Authentication; Huangxun Chen et al.; IEEE Internet of Things Journal; vol. 7, no. 3; pp. 2152-2159 *
A deep-learning-based face verification system for mobile terminals; Liu Cheng et al.; Computer and Modernization; no. 2; pp. 107-111 *
An intelligent protection system based on liveness detection; Niu Hongchuang; China Master's Theses Full-text Database, Information Science and Technology; no. 7 (2020); I138-985 *
Research on palmprint preprocessing and development of a palmprint authentication system for Android terminals; Chen Qi; China Master's Theses Full-text Database, Information Science and Technology; no. 11 (2018); I138-72 *
Research on liveness detection methods for face authentication; Zheng Xinyang; China Master's Theses Full-text Database, Information Science and Technology; no. 5 (2019); I138-152 *

Also Published As

Publication number Publication date
CN111914769A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN111241989B (en) Image recognition method and device and electronic equipment
CN108491805B (en) Identity authentication method and device
WO2022161286A1 (en) Image detection method, model training method, device, medium, and program product
CN111475797A (en) Method, device and equipment for generating confrontation image and readable storage medium
CN109284683A (en) Feature extraction and matching and template renewal for biological identification
WO2018082011A1 (en) Living fingerprint recognition method and device
CN106203387A (en) Face verification method and system
CN111914769B (en) User validity determination method, device, computer readable storage medium and equipment
CN110163111A (en) Method, apparatus of calling out the numbers, electronic equipment and storage medium based on recognition of face
CN113435583A (en) Countermeasure generation network model training method based on federal learning and related equipment thereof
CN113792853B (en) Training method of character generation model, character generation method, device and equipment
CN112560753A (en) Face recognition method, device and equipment based on feature fusion and storage medium
CN113792659B (en) Document identification method and device and electronic equipment
US11321553B2 (en) Method, device, apparatus and storage medium for facial matching
US20220004652A1 (en) Providing images with privacy label
Kuznetsov et al. Biometric authentication using convolutional neural networks
Aizi et al. Remote multimodal biometric identification based on the fusion of the iris and the fingerprint
CN114067394A (en) Face living body detection method and device, electronic equipment and storage medium
Szczepanik et al. Security lock system for mobile devices based on fingerprint recognition algorithm
CN112819486A (en) Method and system for identity certification
CN112541446A (en) Biological feature library updating method and device and electronic equipment
CN111353139A (en) Continuous authentication method and device, electronic equipment and storage medium
CN117079336B (en) Training method, device, equipment and storage medium for sample classification model
CN110096954B (en) Fingerprint identification method based on neural network
Kumar et al. A brief review of image quality enhancement techniques based multi-modal biometric fusion systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant