CN111914769A - User validity judging method, device, computer readable storage medium and equipment

Info

Publication number: CN111914769A (application); CN111914769B (granted)
Application number: CN202010783859.XA
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: similarity, feature vector, real-time image, preset
Inventor: 田植良 (Tian Zhiliang)
Assignee (original and current): Tencent Technology (Shenzhen) Co., Ltd.
Legal status: Granted; Active

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
                    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
                        • G06F 21/31 User authentication
                            • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
                            • G06V 40/161 Detection; Localisation; Normalisation
                            • G06V 40/168 Feature extraction; Face representation
                                • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships


Abstract

The application provides a user validity determination method and apparatus, a computer-readable storage medium, and an electronic device, relating to the field of computer technologies. The method acquires a real-time image corresponding to the current user and calculates a first similarity between the real-time image and a legal portrait; determines, in the real-time image, a boundary for distinguishing the portrait region from the background region; calculates a second similarity between the background region and a preset background; and judges the validity of the current user according to the first similarity and the second similarity. The technical scheme thus improves the accuracy of user validity determination.

Description

User validity judging method, device, computer readable storage medium and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a user validity determination method, a user validity determination apparatus, a computer-readable storage medium, and an electronic device.
Background
With the rapid development of computing technology, mobile terminals are no longer unlocked only by passwords or gestures; they can also be unlocked by fingerprints and facial features. Generally, when a user needs to unlock the mobile terminal, the screen is lit so that the fingerprint identification module or the camera module can collect the fingerprint or facial information required for unlocking. The validity of that information is then verified, and if the verification succeeds, the current user is judged to be a legal user.
However, both fingerprint identification and face identification carry a risk of misidentification, which in turn risks exposing the legal user's data stored in the mobile terminal to theft. How to improve the accuracy of user validity determination has therefore become a problem that urgently needs to be solved.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present application and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
The present application aims to provide a user validity determination method, a user validity determination apparatus, a computer-readable storage medium, and an electronic device, which can improve the accuracy of user validity determination.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of the present application, there is provided a user validity determination method including:
acquiring a real-time image corresponding to a current user and calculating a first similarity between the real-time image and a legal portrait;
determining a boundary for distinguishing a portrait region from a background region in a real-time image;
calculating a second similarity between the background area and a preset background;
and judging the validity of the current user according to the first similarity and the second similarity.
In an exemplary embodiment of the present application, calculating a first similarity between a live image and a legal portrait includes:
performing feature extraction on the real-time image through a first neural network to obtain a first feature vector;
and acquiring a legal feature vector corresponding to the legal portrait, and calculating a first similarity between the legal feature vector and the first feature vector.
In an exemplary embodiment of the present application, the first neural network includes a convolutional layer, an excitation layer, a pooling layer, and a fully connected layer, and performing feature extraction on the real-time image through the first neural network to obtain the first feature vector includes:
performing feature extraction on the real-time image through the convolutional layer to obtain a first reference feature vector;
activating the first reference feature vector through the excitation layer to obtain a second reference feature vector;
sampling the second reference feature vector through the pooling layer to obtain a third reference feature vector;
and performing dimensionality reduction on the third reference feature vector through the fully connected layer to obtain the first feature vector.
In an exemplary embodiment of the present application, before determining the boundary for distinguishing the portrait area from the background area in the real-time image, the method further includes:
preprocessing a real-time image; wherein the preprocessing comprises gray processing or binarization processing.
In an exemplary embodiment of the present application, determining a boundary for distinguishing a portrait area from a background area in a real-time image includes:
performing feature extraction on the real-time image through a second neural network to obtain a second feature vector;
classifying each pixel point in the real-time image according to the second feature vector to obtain first-type pixels belonging to the portrait area and second-type pixels belonging to the background area;
and determining, according to the first-type pixels and the second-type pixels, a boundary for distinguishing the portrait area from the background area, and marking the boundary.
In an exemplary embodiment of the present application, after determining a boundary for distinguishing the portrait area from the background area according to the first type of pixels and the second type of pixels and performing boundary marking, the method further includes:
updating a sample set used for training a second neural network according to the boundary markers;
and training the second neural network according to the updated sample set.
In an exemplary embodiment of the present application, the number of the preset backgrounds is at least one, and the calculating a second similarity between the background area and the preset background includes:
extracting the features of the background area through a third neural network to obtain a third feature vector;
acquiring preset characteristic vectors corresponding to at least one preset background respectively;
and calculating a second similarity between each preset feature vector and the third feature vector.
In an exemplary embodiment of the present application, after feature extraction is performed on the background region through a third neural network to obtain a third feature vector, the method further includes:
generating a sample image corresponding to the third feature vector through a third neural network;
and adjusting network parameters corresponding to the third neural network according to the sample image.
In an exemplary embodiment of the present application, the determining the validity of the current user according to the first similarity and the second similarity includes:
selecting the highest target similarity from the plurality of second similarities;
acquiring a first similarity and a weight value corresponding to the target similarity;
calculating a weighted sum of the first similarity and the target similarity according to the weight value;
and judging the validity of the current user according to the weighted sum.
In an exemplary embodiment of the present application, the determining the validity of the current user according to the weighted sum includes:
if the weighted sum is greater than or equal to a preset weighted sum, judging that the current user is a legal user;
and if the weighted sum is smaller than the preset weighted sum, judging that the current user is an illegal user.
In an exemplary embodiment of the present application, after determining that the current user is a legal user, the method further includes:
and if the target similarity is smaller than the preset similarity, updating the preset feature vector according to the background area.
In an exemplary embodiment of the present application, updating the preset feature vector according to the background area includes:
selecting, according to the vector similarity, a specific preset background with the highest similarity to the background region from the at least one preset background;
and performing mean value calculation on the preset feature vector and the third feature vector of the specific preset background, and determining a calculation result as the updated preset feature vector of the specific preset background.
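For concreteness, the weighted-sum decision and the preset-feature-vector update described above can be sketched in Python as follows; the weight values, the preset weighted sum, and all names are illustrative assumptions rather than values fixed by the application:

```python
import numpy as np

def judge_validity(first_similarity, second_similarities,
                   w1=0.6, w2=0.4, preset_weighted_sum=0.8):
    """Hypothetical weighted-sum rule: combine the first similarity with the
    highest second similarity and compare against a preset weighted sum."""
    target_similarity = max(second_similarities)
    weighted_sum = w1 * first_similarity + w2 * target_similarity
    return weighted_sum >= preset_weighted_sum, target_similarity

def update_preset_vector(preset_vector, third_vector):
    """Mean of the stored preset feature vector and the new third feature vector,
    used as the updated preset feature vector of the specific preset background."""
    return (np.asarray(preset_vector) + np.asarray(third_vector)) / 2.0
```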
According to an aspect of the present application, there is provided a user validity determination device including an image acquisition unit, a similarity calculation unit, a boundary determination unit, and a validity determination unit, wherein:
the image acquisition unit is used for acquiring a real-time image corresponding to a current user;
the similarity calculation unit is used for calculating a first similarity between the real-time image and the legal portrait;
a boundary determination unit for determining a boundary for distinguishing a portrait region from a background region in a real-time image;
the similarity calculation unit is also used for calculating a second similarity between the background area and the preset background;
and the validity determination unit is used for judging the validity of the current user according to the first similarity and the second similarity.
In an exemplary embodiment of the present application, the similarity calculation unit calculates a first similarity between the real-time image and the legal portrait, including:
performing feature extraction on the real-time image through a first neural network to obtain a first feature vector;
and acquiring a legal feature vector corresponding to the legal portrait, and calculating a first similarity between the legal feature vector and the first feature vector.
In an exemplary embodiment of the present application, the first neural network includes a convolutional layer, an excitation layer, a pooling layer, and a fully connected layer, and the similarity calculation unit performs feature extraction on the real-time image through the first neural network to obtain the first feature vector, including:
performing feature extraction on the real-time image through the convolutional layer to obtain a first reference feature vector;
activating the first reference feature vector through the excitation layer to obtain a second reference feature vector;
sampling the second reference feature vector through the pooling layer to obtain a third reference feature vector;
and performing dimensionality reduction on the third reference feature vector through the fully connected layer to obtain the first feature vector.
In an exemplary embodiment of the present application, the apparatus further includes an image preprocessing unit, wherein:
the image preprocessing unit is used for preprocessing the real-time image before the boundary determination unit determines the boundary for distinguishing the portrait area from the background area in the real-time image; the preprocessing includes gray processing or binarization processing.
In an exemplary embodiment of the present application, the boundary determination unit determines a boundary for distinguishing the portrait area from the background area in the real-time image, including:
performing feature extraction on the real-time image through a second neural network to obtain a second feature vector;
classifying each pixel point in the real-time image according to the second feature vector to obtain first-type pixels belonging to the portrait area and second-type pixels belonging to the background area;
and determining, according to the first-type pixels and the second-type pixels, a boundary for distinguishing the portrait area from the background area, and marking the boundary.
In an exemplary embodiment of the present application, the apparatus further includes a sample updating unit and a network training unit, wherein:
the sample updating unit is used for updating the sample set used for training the second neural network according to the boundary markers, after the boundary determination unit determines the boundary for distinguishing the portrait area from the background area according to the first-type pixels and the second-type pixels and marks the boundary;
and the network training unit is used for training the second neural network according to the updated sample set.
In an exemplary embodiment of the present application, the number of the preset backgrounds is at least one, and the similarity calculation unit calculates a second similarity between the background area and the preset background, including:
extracting the features of the background area through a third neural network to obtain a third feature vector;
acquiring preset characteristic vectors corresponding to at least one preset background respectively;
and calculating a second similarity between each preset feature vector and the third feature vector.
In an exemplary embodiment of the present application, the apparatus further includes a sample generation unit, wherein:
the sample generation unit is used for generating, through the third neural network, a sample image corresponding to the third feature vector after the similarity calculation unit performs feature extraction on the background area through the third neural network to obtain the third feature vector;
and the network training unit is also used for adjusting the network parameters corresponding to the third neural network according to the sample image.
In an exemplary embodiment of the present application, the validity determination unit performs validity determination on the current user according to the first similarity and the second similarity, including:
selecting the highest target similarity from the plurality of second similarities;
acquiring a first similarity and a weight value corresponding to the target similarity;
calculating a weighted sum of the first similarity and the target similarity according to the weight value;
and judging the validity of the current user according to the weighted sum.
In an exemplary embodiment of the present application, the validity determination unit performs validity determination on the current user according to the weighted sum, including:
if the weighted sum is greater than or equal to a preset weighted sum, judging that the current user is a legal user;
and if the weighted sum is smaller than the preset weighted sum, judging that the current user is an illegal user.
In an exemplary embodiment of the present application, the apparatus further includes a vector updating unit, wherein:
and the vector updating unit is used for updating the preset feature vector according to the background area if, after the validity determination unit judges that the current user is a legal user, the target similarity is smaller than the preset similarity.
In an exemplary embodiment of the present application, the updating the preset feature vector according to the background region by the vector updating unit includes:
selecting, according to the vector similarity, a specific preset background with the highest similarity to the background region from the at least one preset background;
and performing mean value calculation on the preset feature vector and the third feature vector of the specific preset background, and determining a calculation result as the updated preset feature vector of the specific preset background.
According to an aspect of the present application, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any one of the above via execution of the executable instructions.
According to an aspect of the application, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the method of any of the above.
The exemplary embodiments of the present application may have some or all of the following advantages:
in the user validity determination method provided in an exemplary embodiment of the present application, a real-time image corresponding to the current user is acquired and a first similarity between the real-time image and a legal portrait is calculated; a boundary for distinguishing the portrait region from the background region is determined in the real-time image; a second similarity between the background region and a preset background is calculated; and the validity of the current user is determined according to the first similarity and the second similarity. On the one hand, user validity can thus be judged by combining the background similarity with the portrait similarity, which improves the accuracy of the user validity determination. On the other hand, the improved accuracy of the validity determination safeguards the data security of legal users and reduces the risk of their data being stolen.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic diagram illustrating an exemplary system architecture to which a user validity determination method and a user validity determination apparatus according to an embodiment of the present application may be applied;
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present application;
FIG. 3 schematically illustrates a flow chart of a user legitimacy determination method according to an embodiment of the present application;
FIG. 4 schematically illustrates a comparison of a real-time image before and after key point calibration according to one embodiment of the present application;
FIG. 5 schematically illustrates a structural schematic of a first neural network according to one embodiment of the present application;
FIG. 6 schematically shows a structural schematic of a second neural network according to one embodiment of the present application;
FIG. 7 schematically illustrates a structural schematic of a third neural network according to one embodiment of the present application;
FIG. 8 schematically shows a storage unit for storing preset feature vectors according to an embodiment of the present application;
FIG. 9 schematically illustrates a flow diagram of a user legitimacy determination method according to one embodiment of the present application;
FIG. 10 schematically shows a block diagram of a user validity determination apparatus according to one embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present application.
Furthermore, the drawings are merely schematic illustrations of the present application and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 is a schematic diagram illustrating a system architecture of an exemplary application environment to which a user validity determination method and a user validity determination apparatus according to an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to desktop computers, portable computers, smart phones, tablet computers, and the like. It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The user validity determination method provided by the embodiment of the present application is generally executed by the terminal device 101, 102, or 103, and accordingly, the user validity determination apparatus is generally disposed in the terminal device 101, 102, or 103. However, it is easily understood by those skilled in the art that the user validity determining method provided in the embodiment of the present application may also be executed by the server 105, and accordingly, a user validity determining apparatus may also be disposed in the server 105, which is not particularly limited in the exemplary embodiment. For example, in an exemplary embodiment, the terminal device 101, 102, or 103 may obtain a real-time image corresponding to a current user and calculate a first similarity between the real-time image and a legal portrait; determining a boundary for distinguishing a portrait region from a background region in a real-time image; calculating a second similarity between the background area and a preset background; and judging the validity of the current user according to the first similarity and the second similarity.
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 200 of the electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU)201 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data necessary for system operation are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to bus 204.
The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse, and the like; an output section 207 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 208 including a hard disk and the like; and a communication section 209 including a network interface card such as a LAN card, a modem, or the like. The communication section 209 performs communication processing via a network such as the internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 210 as necessary, so that a computer program read out therefrom is installed into the storage section 208 as necessary.
In particular, according to embodiments of the present application, the processes described below with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. When executed by the Central Processing Unit (CPU) 201, the computer program performs the various functions defined in the methods and apparatus of the present application.

The method of the present application may be implemented based on artificial intelligence. Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions. It is a comprehensive discipline spanning a wide range of fields, covering both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes sensors, dedicated AI chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning.
Currently, a mobile terminal can be unlocked in the following ways: 1. with an unlocking password preset by the user; 2. with a preset gesture; 3. with a pre-recorded fingerprint; 4. through face recognition technology. Modes 1 and 2 carry a relatively high risk of being cracked, while modes 3 and 4 carry a risk of misidentification. If an illegal user is identified as a legal user, the data security of the legal user is severely threatened.
In view of the above, the present exemplary embodiment provides a user validity determination method. The user validity determination method may be applied to the server 105, or may be applied to one or more of the terminal devices 101, 102, and 103, which is not particularly limited in this exemplary embodiment. Referring to fig. 3, the user validity determination method may include the following steps S310 to S340:
step S310: and acquiring a real-time image corresponding to the current user and calculating a first similarity between the real-time image and a legal portrait.
Step S320: a boundary is determined in the real-time image for distinguishing the portrait area from the background area.
Step S330: and calculating a second similarity between the background area and the preset background.
Step S340: and judging the validity of the current user according to the first similarity and the second similarity.
By implementing the method shown in fig. 3, user validity can be judged by combining the background similarity with the portrait similarity, which improves the accuracy of the user validity determination. In addition, the improved accuracy of the validity determination safeguards the data security of legal users and reduces the risk of their data being stolen.
The above steps of the present exemplary embodiment will be described in more detail below.
In step S310, a real-time image corresponding to the current user is obtained and a first similarity between the real-time image and a legal portrait is calculated.
Specifically, the real-time image may be an image collected by a front camera and/or a rear camera. The legal portrait may be a pre-stored image containing legal facial features, which serve as the representation of a legal user; the number of legal portraits may be one or more, and the embodiments of the present application are not limited in this respect. The first similarity may be used to represent the degree of similarity between the real-time image and the legal portrait, and may also be used to represent the probability that the current user is a legal user.
In addition, optionally, before the first similarity between the real-time image and the legal portrait is calculated, the method may further include: identifying the face position in the real-time image, and performing key point calibration according to the face position so as to adjust the face posture in the real-time image to be consistent with the face posture in the legal portrait, which improves the accuracy of the user validity determination; the key points include at least the feature points that constitute the five sense organs.

In addition, optionally, before the first similarity between the real-time image and the legal portrait is calculated, the method may further include: if the number of faces detected in the real-time image is greater than 1, selecting a target face from the multiple faces according to the area of the region where each face is located; the real-time image can then be cropped around the target face so as to exclude the other faces.

Referring to fig. 4, fig. 4 schematically illustrates a comparison of a real-time image before and after key point calibration according to an embodiment of the present application. As shown in fig. 4, a real-time image 402 can be obtained by identifying the face position in the real-time image 401 and performing key point calibration according to that position. Calculating the first similarity between the real-time image 402 and the legal portrait then improves the accuracy of the validity determination.
In addition, optionally, the obtaining of the real-time image corresponding to the current user includes: when an unlocking request is detected, acquiring a real-time image corresponding to a current user; or when an online payment request is detected, acquiring a real-time image corresponding to the current user; or when the identity authentication request is detected, acquiring a real-time image corresponding to the current user.
The real-time image corresponding to the current user may be acquired by triggering the front camera to start and shooting at least one real-time image. If multiple real-time images are captured, the first similarity may be calculated by selecting a target real-time image from the at least one real-time image and calculating the first similarity between the target real-time image and the legal portrait. The target real-time image may in turn be selected by calculating the definition (sharpness) of each real-time image and choosing according to that definition. It should be noted that the first similarity may be a vector similarity, which may be expressed as a cosine distance or a Euclidean distance; the same applies to the second similarity.
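As one possibility (the application does not fix how the definition is computed), the sharpness of each captured image could be scored by the variance of the Laplacian, a common heuristic; a minimal sketch assuming OpenCV:

```python
import cv2

def sharpness(image_bgr):
    """Variance of the Laplacian as a simple definition (sharpness) score."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_target_image(real_time_images):
    """Pick the sharpest of the captured real-time images as the target."""
    return max(real_time_images, key=sharpness)
```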
In addition, optionally, after the first similarity between the real-time image and the legal portrait is calculated, the method may further include: detecting whether the first similarity is greater than a preset similarity; if so, executing step S320; if not, feeding back prompt information indicating that the current user is not a legal user. Further optionally, if the first similarity is less than or equal to the preset similarity, the following operations may also be performed: outputting an interactive window, displaying at least one security question through the interactive window, and detecting the answer input by the user for each security question; if the answers match the security questions, updating the image library that stores the legal portraits according to the real-time image.
As an alternative embodiment, calculating a first similarity between the real-time image and the legal portrait includes: performing feature extraction on the real-time image through a first neural network to obtain a first feature vector; and acquiring a legal feature vector corresponding to the legal portrait, and calculating a first similarity between the legal feature vector and the first feature vector.
Specifically, the first neural network may be a model based on a Convolutional Neural Network (CNN), as may the second and third neural networks described below; a CNN is a type of feedforward neural network. The network parameters of the first, second, and third neural networks differ from one another, and the network parameters include at least weight values and bias terms. The legal feature vector corresponding to the legal portrait may be a pre-computed vector used as the representation of a plurality of facial features (e.g., eyes, nose, mouth, ears) in the legal portrait; likewise, the first feature vector serves as the representation of a plurality of facial features in the real-time image.
In addition, optionally, the first similarity between the legal feature vector and the first feature vector may be calculated as the Euclidean distance between the two vectors; or as the cosine distance between them; or the Tanimoto coefficient may be computed from the legal feature vector and the first feature vector to characterize the similarity; or the Pearson correlation coefficient may be computed for the same purpose. The embodiments of the present application are not limited in this respect.

Specifically, the Euclidean distance is the true distance between two points in m-dimensional space, or the natural length of a vector; in two- and three-dimensional space it is the actual distance between the two points. The Pearson correlation coefficient is the covariance of two variables divided by the product of their standard deviations. The cosine distance measures the difference between two individuals by the cosine of the angle between their vectors in a vector space. The Tanimoto coefficient is a generalized Jaccard similarity; if x and y are binary vectors, the Tanimoto coefficient is equivalent to the Jaccard similarity coefficient, whose complement, the Jaccard distance, is an index for measuring the difference between two sets.
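A minimal NumPy sketch of the four similarity measures named above (vector contents are placeholders):

```python
import numpy as np

def euclidean(a, b):
    """True distance between two points in m-dimensional space."""
    return np.linalg.norm(a - b)

def cosine_similarity(a, b):
    """Cosine of the angle between the two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def tanimoto(a, b):
    """Generalized Jaccard similarity; equals the Jaccard coefficient for binary vectors."""
    return np.dot(a, b) / (np.dot(a, a) + np.dot(b, b) - np.dot(a, b))

def pearson(a, b):
    """Covariance divided by the product of the standard deviations."""
    return np.corrcoef(a, b)[0, 1]
```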
Therefore, by implementing the optional embodiment, the feature vector of the real-time image can be extracted through the neural network, so that the legal user can be identified according to the feature vector, the identification accuracy is further improved, and the data safety of the legal user is guaranteed.
As an optional embodiment, the first neural network includes a convolutional layer, an excitation layer, a pooling layer, and a fully connected layer, and performing feature extraction on the real-time image through the first neural network to obtain the first feature vector includes: performing feature extraction on the real-time image through the convolutional layer to obtain a first reference feature vector; activating the first reference feature vector through the excitation layer to obtain a second reference feature vector; sampling the second reference feature vector through the pooling layer to obtain a third reference feature vector; and performing dimensionality reduction on the third reference feature vector through the fully connected layer to obtain the first feature vector.
Specifically, the embodiments of the present application do not limit the number of layers of the convolutional, excitation, and pooling layers included in the first neural network. In addition, the filters of each layer differ, so that image features with different emphases are extracted. The first, second, and third reference feature vectors may each be represented by a feature map. The excitation function of the excitation layer may be the sigmoid function σ(x) = 1 / (1 + e^(−x)), the hyperbolic tangent tanh(x) = 2σ(2x) − 1, or the ReLU function σ(z) = max(0, z).
In addition, optionally, the sampling processing on the second reference feature vector through the pooling layer to obtain the third reference feature vector may specifically be: and performing maximum pooling or average pooling on the second reference feature vector through a pooling layer to realize sampling of the second reference feature vector, thereby obtaining a third reference feature vector.
In addition, optionally, the dimensionality reduction of the third reference feature vector through the fully connected layer to obtain the first feature vector may specifically be: convolving a preset convolution kernel with the third reference feature vector through a plurality of neurons in the fully connected layer, thereby reducing the dimension of the third reference feature vector to obtain the first feature vector, so that the distributed features in the real-time image are mapped to the sample label space. For example, the dimension of the third reference feature vector may be 7 × 7 × 512, the fully connected layer may have 4096 neurons, and the preset convolution kernel may be 7 × 7 × 512 × 4096, so the dimension of the first feature vector is 1 × 4096.
For example, referring to fig. 5, fig. 5 schematically illustrates a structural diagram of a first neural network according to an embodiment of the present application. As shown in fig. 5, the first neural network 500 may include a convolutional layer 501, an excitation layer 502, a pooling layer 503, and a fully connected layer 504. Specifically, after the real-time image is input, the convolutional layer 501 performs feature extraction on it to obtain a first reference feature vector, which serves as the input of the excitation layer 502. The excitation layer 502 activates the first reference feature vector to obtain a second reference feature vector, which serves as the input of the pooling layer 503. The pooling layer 503 performs average pooling or global pooling on the second reference feature vector to obtain a third reference feature vector, which serves as the input of the fully connected layer 504. Finally, the fully connected layer 504 performs dimensionality reduction on the third reference feature vector to obtain the first feature vector, from which the probability that the real-time image contains the facial features can be determined as the identification result. The fully connected layer 504 may include a plurality of one-dimensional neurons for reducing the dimension of the third reference feature vector. It should be noted that the structure shown in fig. 5 is only an exemplary illustration; in practical applications, the numbers of convolutional layers 501, excitation layers 502, pooling layers 503, and fully connected layers 504 are not limited by the present application.
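A minimal PyTorch sketch of a network following the structure of fig. 5; the channel counts, kernel sizes, and output dimension are assumptions, since the application does not fix them:

```python
import torch
import torch.nn as nn

class FirstNeuralNetwork(nn.Module):
    """Convolution -> excitation (ReLU) -> pooling -> fully connected, as in FIG. 5."""
    def __init__(self, feature_dim=4096):
        super().__init__()
        self.conv = nn.Conv2d(3, 512, kernel_size=3, padding=1)  # convolutional layer 501
        self.excite = nn.ReLU()                                  # excitation layer 502
        self.pool = nn.AdaptiveAvgPool2d((7, 7))                 # pooling layer 503
        self.fc = nn.Linear(512 * 7 * 7, feature_dim)            # fully connected layer 504

    def forward(self, live_image):
        x = self.conv(live_image)   # first reference feature vector (feature map)
        x = self.excite(x)          # second reference feature vector
        x = self.pool(x)            # third reference feature vector
        return self.fc(torch.flatten(x, 1))  # first feature vector, 1 x 4096 per image
```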
Therefore, by implementing the optional embodiment, the feature vector used for representing the facial features can be determined by extracting the features of the real-time image, so that the legal user judgment can be favorably carried out according to the feature vector, and the judgment accuracy is improved.
In step S320, a boundary for distinguishing the portrait area from the background area is determined in the real-time image.
Specifically, the real-time image is composed of a portrait area and a background area. In addition, optionally, after the boundary for distinguishing the portrait area from the background area is determined in the real-time image, the method may further include: changing the pixel values along the boundary according to the boundary determination result, so that the boundary is displayed prominently (highlighted). The pixel values may be changed by altering the values of the R, G, B, and alpha channels of the boundary pixel points, where alpha represents the transparency of a boundary pixel point and R, G, B represent the values of the red, green, and blue channels.
In addition, optionally, after determining a boundary for distinguishing the portrait area from the background area in the real-time image, the method may further include: marking the real-time image according to the boundary judgment result, and outputting the marked real-time image; wherein the marked real-time image is used for highlighting the boundary for distinguishing the portrait area from the background area.
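As an illustration of the pixel-value change described above, the following sketch marks the boundary pixel points in an RGBA image; the highlight color is an arbitrary assumption:

```python
import numpy as np

def highlight_boundary(image_rgba, boundary_mask):
    """Set R, G, B, and alpha of the boundary pixel points so the boundary is displayed prominently.

    image_rgba:    H x W x 4 uint8 array
    boundary_mask: H x W boolean array, True at boundary pixel points
    """
    marked = image_rgba.copy()
    marked[boundary_mask] = (255, 0, 0, 255)  # opaque red boundary (illustrative choice)
    return marked
```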
As an alternative embodiment, before determining the boundary for distinguishing the portrait area from the background area in the real-time image, the method further includes: preprocessing a real-time image; wherein the preprocessing comprises gray processing or binarization processing.
Specifically, the preprocessing may further include noise processing, morphological opening/closing operations, and the like; the embodiments of the present application are not limited in this respect. Optionally, noise processing may be performed on the real-time image by determining, through a preset filter, the neighborhood mean or weighted mean of each pixel point as that pixel point's output, traversing every pixel point in the real-time image to denoise it. Alternatively, the real-time image may be denoised by median filtering.
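A sketch of these preprocessing and denoising options with OpenCV (the threshold and kernel sizes are assumptions; in practice one of the denoising paths would typically be chosen):

```python
import cv2

def preprocess(live_image_bgr):
    gray = cv2.cvtColor(live_image_bgr, cv2.COLOR_BGR2GRAY)       # gray processing
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)  # binarization
    mean_filtered = cv2.blur(gray, (3, 3))     # neighborhood mean denoising
    median_filtered = cv2.medianBlur(gray, 3)  # median filtering denoising
    return binary, mean_filtered, median_filtered
```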
Therefore, by implementing the optional embodiment, the accuracy of feature extraction can be improved by preprocessing the real-time image, and the accuracy of user validity judgment can be improved.
As an alternative embodiment, determining a boundary for distinguishing the portrait region from the background region in the real-time image includes: performing feature extraction on the real-time image through a second neural network to obtain a second feature vector; classifying each pixel point in the real-time image according to the second feature vector to obtain first-type pixels belonging to the portrait area and second-type pixels belonging to the background area; and determining, according to the first-type pixels and the second-type pixels, a boundary for distinguishing the portrait area from the background area, and marking the boundary.
Specifically, the second feature vector is used to highlight the boundary in the real-time image, and the dimensions of the second feature vector and the first feature vector may be the same or different.
In addition, optionally, the pixel points in the real-time image may be classified according to the second feature vector by inputting the second feature vector into a support vector machine (SVM), so that the SVM classifies each pixel point into first-type pixels belonging to the portrait area and second-type pixels belonging to the background area. It should be noted that the support vector machine method is based on the VC-dimension theory of statistical learning and the structural risk minimization principle; given limited sample information, it seeks the best compromise between model complexity and learning ability in order to obtain the best generalization ability. A support vector machine can be expressed as a functional expression and can be used to solve binary classification, multi-class classification, and regression problems.
In addition, optionally, determining and marking the boundary according to the first-type and second-type pixels may specifically be: determining the boundary pixel points in the real-time image according to the first-type pixels and the second-type pixels, and marking those boundary pixel points.
Referring to fig. 6, fig. 6 schematically shows a structural diagram of a second neural network according to an embodiment of the present application. As shown in fig. 6, the second neural network 610 may include at least a convolutional layer 613, a pooling layer 612, and a fully connected layer 611. Specifically, the real-time image may be input into the second neural network 610; the convolutional layer 613 convolves the real-time image (i.e., performs feature extraction) and passes the result to the pooling layer 612, which performs global or average pooling on it. The fully connected layer 611 then reduces the dimension of the sampled result, and the resulting feature vector is input to the support vector machine 620, which classifies each pixel point in the real-time image according to that vector, yielding the classification results: first-type pixels belonging to the portrait area and second-type pixels belonging to the background area. The boundary for distinguishing the portrait area from the background area can then be determined from the first-type and second-type pixels, and the boundary marked.
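The classification stage might be sketched as follows, with scikit-learn's SVC standing in for the support vector machine 620 and per-pixel feature rows standing in for slices of the second feature vector; the neighborhood-based boundary extraction is an assumption, since the application does not spell out how the boundary pixel points are derived from the two pixel classes:

```python
from scipy.ndimage import binary_dilation
from sklearn.svm import SVC

def classify_pixels(per_pixel_features, labeled_features, labels, height, width):
    """per_pixel_features: (H*W, D) rows taken from the second feature vector, one per pixel.
    labeled_features / labels: a small training set (0 = portrait pixel, 1 = background pixel)."""
    svm = SVC(kernel="rbf")
    svm.fit(labeled_features, labels)
    pixel_classes = svm.predict(per_pixel_features).reshape(height, width)

    portrait = pixel_classes == 0    # first-type pixels
    background = pixel_classes == 1  # second-type pixels
    # A pixel whose neighborhood touches both classes is treated as a boundary pixel point
    boundary_mask = binary_dilation(portrait) & binary_dilation(background)
    return portrait, background, boundary_mask
```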
Therefore, by implementing the optional embodiment, the background area can be determined for the determination area of the boundary, so that the similarity comparison of the background area is facilitated, and the accuracy of the user validity determination is further improved.
As an alternative embodiment, after determining the boundary for distinguishing the portrait area from the background area according to the first-type pixel and the second-type pixel and performing boundary marking, the method further includes: updating a sample set used for training a second neural network according to the boundary markers; and training the second neural network according to the updated sample set.
Specifically, the sample set comprises a plurality of labeled sample images, and training the second neural network on them improves the accuracy of feature extraction. The sample set used for training the second neural network may be updated according to the boundary markers by taking the boundary-marked real-time image as a new sample image and adding it to the sample set.
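A minimal sketch of the sample-set update, assuming the sample set is a list of (image, boundary mask) pairs and `train_second_network` is a stand-in for an ordinary supervised training loop:

```python
def update_sample_set(sample_set, marked_live_image, boundary_mask):
    """Append the boundary-marked real-time image as a new labeled sample."""
    sample_set.append((marked_live_image, boundary_mask))
    return sample_set

# second_network = train_second_network(second_network, sample_set)  # retrain on the updated set
```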
It can be seen that, by implementing this alternative embodiment, the calculation accuracy of the second feature vector can be improved by updating the sample set, thereby improving the boundary labeling accuracy.
In step S330, a second similarity between the background area and the preset background is calculated.
Specifically, the number of preset backgrounds may be one or more; the embodiments of the present application are not limited. In addition, the method may further include: after the user validity verification succeeds, collecting an environment image at unit-time intervals until a screen-off operation, power-off operation, or the like is detected; the environment image may be collected by a front camera and/or a rear camera. The background library storing the preset backgrounds can then be updated according to the environment images, which improves the subsequent calculation accuracy of the second similarity, improves the user experience, and safeguards data security.
As an alternative embodiment, the number of the preset backgrounds is at least one, and the calculating the second similarity between the background area and the preset background includes: extracting the features of the background area through a third neural network to obtain a third feature vector; acquiring preset characteristic vectors corresponding to at least one preset background respectively; and calculating a second similarity between each preset feature vector and the third feature vector.
Specifically, the third neural network may be configured to perform feature extraction on a background region in the real-time image to obtain a third feature vector, and generate a sample image according to the third feature vector, so as to train the third neural network according to the sample image. In addition, the preset feature vector can be used as a representation of the preset background.
In addition, optionally, the preset feature vectors corresponding to the at least one preset background may be obtained by reading, from the storage unit, the at least one preset background together with its corresponding preset feature vector. More specifically, the category of the background region (e.g., supermarket, home, school) may be determined first, and then the preset feature vectors of all preset backgrounds under that category read.
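For illustration, assuming the storage unit is organized as a mapping from background category to stored preset feature vectors (the layout and vector size are assumptions), the second similarities could be computed as in this sketch:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical storage unit: background category -> stored preset feature vectors
storage_unit = {
    "home":   [np.random.rand(256) for _ in range(3)],
    "office": [np.random.rand(256) for _ in range(2)],
}

def second_similarities(third_feature_vector, category):
    """Second similarity between the third feature vector and each preset feature vector."""
    return [cosine(third_feature_vector, preset) for preset in storage_unit[category]]

sims = second_similarities(np.random.rand(256), "home")
```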
Therefore, by implementing the optional embodiment, the legality of the user can be further verified by extracting the features of the background area, and the data security can be further guaranteed.
As an optional embodiment, after feature extraction is performed on the background region by using a third neural network to obtain a third feature vector, the method further includes: generating a sample image corresponding to the third feature vector through a third neural network; and adjusting network parameters corresponding to the third neural network according to the sample image.
In particular, the sample images may be used to train the third neural network. Optionally, the sample image corresponding to the third feature vector may be generated by the third neural network as follows: further feature extraction is performed on the third feature vector through a convolutional layer in the third neural network, global pooling/average pooling is then applied to the feature extraction result through a pooling layer, and dimensionality reduction is finally performed on the pooling result through a fully connected layer, thereby generating the sample image.
In addition, optionally, the network parameters corresponding to the third neural network may be adjusted according to the sample image as follows: a loss function is calculated between the sample image and the marked real-time image, and the network parameters of the third neural network are then adjusted according to the loss function until the loss falls below a preset loss value. The loss function may be a regression loss (such as squared error loss, absolute error loss, or Huber loss), a binary classification loss (such as binary cross-entropy or hinge loss), a multi-class loss (such as multi-class cross-entropy), or KL divergence (Kullback–Leibler divergence loss), which is not limited in the embodiments of the present application.
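A minimal sketch of this parameter adjustment might look as follows, assuming a PyTorch model `third_net` (a hypothetical stand-in) that maps the marked real-time image to a reconstructed sample image of the same resolution; mean squared error stands in for whichever loss function is chosen:

```python
import torch
import torch.nn as nn

def train_until_threshold(third_net, marked_image, preset_loss_value=1e-3,
                          max_steps=1000):
    """Adjust the network parameters until the loss between the generated
    sample image and the marked real-time image falls below a preset value."""
    criterion = nn.MSELoss()  # stand-in for the unspecified loss choice
    optimizer = torch.optim.Adam(third_net.parameters(), lr=1e-4)
    for _ in range(max_steps):
        sample_image = third_net(marked_image)  # assumed to match input shape
        loss = criterion(sample_image, marked_image)
        if loss.item() < preset_loss_value:
            break  # loss is smaller than the preset loss function value
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return third_net
```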
Referring to fig. 7, fig. 7 schematically illustrates a structural diagram of a third neural network according to an embodiment of the present application. As shown in fig. 7, the third neural network 700 may include: convolutional layer 701, pooling layer 702, fully connected layer 703, convolutional layer 704, pooling layer 705, and fully connected layer 706. Specifically, the marked real-time image may be input into the third neural network 700, so that the convolutional layer 701 performs feature extraction on the marked real-time image; the pooling layer 702 may then perform global pooling/average pooling on the feature extraction result, and the fully connected layer 703 may reduce the dimensionality of the pooling result, thereby obtaining the third feature vector for representing the background region. Further, the convolutional layer 704 may perform further feature extraction on the third feature vector, the pooling layer 705 may perform global pooling/average pooling on the feature extraction result, and the fully connected layer 706 may reduce the dimensionality of the pooled result, thereby generating a sample image for training the third neural network 700.
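The Fig. 7 stack could be sketched loosely in PyTorch as below. The channel counts, feature dimension, and output resolution are illustrative assumptions, and reshaping the third feature vector into a small spatial map before convolutional layer 704 is one plausible reading of the figure rather than a detail given in the text:

```python
import torch
import torch.nn as nn

class ThirdNetwork(nn.Module):
    """Loose sketch of the Fig. 7 stack: convolution -> pooling -> fully
    connected producing the third feature vector (layers 701-703), then
    convolution -> pooling -> fully connected regenerating a sample image
    (layers 704-706). All sizes are illustrative assumptions."""

    def __init__(self, feat_dim=128, img_size=32):
        super().__init__()
        self.img_size = img_size
        # Encoder half (layers 701-703).
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.pool1 = nn.AdaptiveAvgPool2d(8)        # global/average pooling
        self.fc1 = nn.Linear(16 * 8 * 8, feat_dim)  # dimension reduction
        # Decoder half (layers 704-706).
        self.conv2 = nn.Conv2d(feat_dim, 16, kernel_size=3, padding=1)
        self.pool2 = nn.AdaptiveAvgPool2d(8)
        self.fc2 = nn.Linear(16 * 8 * 8, 3 * img_size * img_size)

    def forward(self, marked_image):
        h = self.pool1(torch.relu(self.conv1(marked_image)))
        third_vector = self.fc1(h.flatten(1))       # third feature vector
        # One plausible way to let a conv layer act on the vector: spread it
        # over a small spatial grid first.
        g = third_vector.view(-1, third_vector.size(1), 1, 1)
        g = g.expand(-1, -1, 8, 8).contiguous()
        g = self.pool2(torch.relu(self.conv2(g)))
        sample_image = self.fc2(g.flatten(1))
        return third_vector, sample_image.view(-1, 3, self.img_size, self.img_size)

net = ThirdNetwork()
vec, img = net(torch.randn(1, 3, 64, 64))
print(vec.shape, img.shape)  # torch.Size([1, 128]) torch.Size([1, 3, 32, 32])
```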
Referring to fig. 8, fig. 8 schematically illustrates a storage unit for storing preset feature vectors according to an embodiment of the present application, based on the third feature vector obtained by the third neural network shown in fig. 7. As shown in fig. 8, the storage unit 800 is configured to store a preset feature vector 1 (801), a preset feature vector 2 (802), a preset feature vector 3 (803), …, and a preset feature vector n (804), where n is a positive integer greater than or equal to 4. It should be noted that the preset feature vectors 801 to 804 respectively correspond to different preset backgrounds. Specifically, when the third feature vector for characterizing the background region is acquired, all or some of the preset feature vectors in the storage unit 800 may be read, and a second similarity between each of the read preset feature vectors and the third feature vector may then be calculated.
Therefore, by implementing the optional embodiment, the parameter of the third neural network can be adjusted according to the background feature extraction of the user validity judgment each time, so that the feature extraction accuracy of the third neural network is improved, and the accuracy of the user validity judgment is improved.
In step S340, a validity determination is performed on the current user according to the first similarity and the second similarity.
As an optional embodiment, determining the validity of the current user according to the first similarity and the second similarity includes: selecting the highest target similarity from the plurality of second similarities; acquiring weight values corresponding to the first similarity and the target similarity; calculating a weighted sum of the first similarity and the target similarity according to the weight values; and judging the validity of the current user according to the weighted sum.
Specifically, the target similarity represents the maximum similarity between the background area and any preset background, and the weight values are preset values for balancing the first similarity and the target similarity in the weighted sum. For example, the weighted sum of the first similarity and the target similarity may be calculated as follows: given the weight value 0.7 corresponding to the first similarity 0.8 and the weight value 0.3 corresponding to the target similarity 0.9, the weighted sum is 0.8 × 0.7 + 0.9 × 0.3 = 0.83.
In addition, optionally, the weighted sum of the first similarity and the target similarity may be calculated according to the weight values corresponding to the first similarity and/or the target similarity. Further, the validity determination according to the weighted sum may proceed as follows: the weighted sum is averaged over the first similarity and the target similarity to obtain a weighted mean, and the validity of the current user is then determined according to that weighted mean.
Therefore, by implementing this optional embodiment, the validity of the current user can be determined from both the portrait similarity and the background similarity. Compared with prior-art methods that rely on the portrait similarity alone, this yields higher determination accuracy, safeguards the data security of legal users, and prevents illegal users from stealing data after passing identity authentication through illegitimate means.
As an alternative embodiment, determining the validity of the current user according to the weighted sum includes: if the weighted sum is greater than or equal to a preset weighted sum, judging that the current user is a legal user; and if the weighted sum is smaller than the preset weighted sum, judging that the current user is an illegal user.
Specifically, the preset weighted sum serves as the decision threshold for recognizing a legal user. In addition, optionally, after the current user is determined to be an illegal user, the method may further include the following step: outputting prompt information indicating that identity authentication has failed. Further, the method may also include the following steps: if the number of identity authentication failures per unit time exceeds a preset number, disabling the identity authentication function for a preset duration and sending an abnormality notification to at least one trusted device, so as to alert the legal user; the trusted device (e.g., a bracelet, a tablet computer, etc.) is a preset device connected with the terminal.
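Putting the weighted-sum decision together, a minimal sketch (with illustrative weights and an illustrative preset weighted sum, all hypothetical values) might read:

```python
def validity_by_weighted_sum(first_sim, second_sims, w_first=0.7, w_target=0.3,
                             preset_weighted_sum=0.75):
    """Pick the highest second similarity as the target similarity, form the
    weighted sum, and compare it against the preset weighted sum."""
    target_sim = max(second_sims)  # highest of the second similarities
    weighted = w_first * first_sim + w_target * target_sim
    return weighted >= preset_weighted_sum, weighted, target_sim

# Using the figures from the example above: 0.8 * 0.7 + 0.9 * 0.3 = 0.83.
is_legal, weighted, target = validity_by_weighted_sum(0.8, [0.4, 0.9, 0.6])
print(is_legal, round(weighted, 2))  # True 0.83
```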
Therefore, whether the current user is a legal user can be determined by comparing the weighted sum with the preset weighted sum, which can improve the accuracy of the user validity determination.
As an optional embodiment, after determining that the current user is a valid user, the method further includes: and if the target similarity is smaller than the preset similarity, updating the preset feature vector according to the background area.
Specifically, if the target similarity is smaller than the preset similarity, it indicates that the background area is not a commonly used background, and since the current user is determined as a valid user, the background area in the real-time image can be added to the database, so that the preset background is enriched, and the accuracy of determining the validity of the user is improved.
Therefore, by implementing the optional embodiment, the preset feature vector can be continuously updated, so that the accuracy of the user validity judgment is continuously improved.
As an alternative embodiment, updating the preset feature vector according to the background area includes: selecting, from the at least one preset background according to vector similarity, a specific preset background with the highest similarity to the background region; and performing mean value calculation on the preset feature vector of the specific preset background and the third feature vector, and determining the calculation result as the updated preset feature vector of the specific preset background.
Specifically, the specific preset background may be represented by an annotated image. In addition, optionally, the mean value calculation on the preset feature vector of the specific preset background and the third feature vector may be performed as follows: the preset feature vector of the specific preset background is determined, and the mean of that vector and the third feature vector is computed element by element in one-to-one correspondence, so that the dimensionality of the updated preset feature vector remains unchanged.
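A sketch of this element-wise mean update, under the assumption that the vectors are stored as NumPy arrays of equal dimension:

```python
import numpy as np

def update_preset_vector(preset_vec, third_vec):
    """Element-wise mean of the matched preset feature vector and the third
    feature vector; dimensionality is preserved, so storage use is unchanged."""
    preset_vec = np.asarray(preset_vec, dtype=float)
    third_vec = np.asarray(third_vec, dtype=float)
    assert preset_vec.shape == third_vec.shape, "vectors must share one dimension"
    return (preset_vec + third_vec) / 2.0

# Example: the updated vector replaces the old preset in place.
print(update_preset_vector([0.9, 0.1, 0.3], [0.7, 0.3, 0.5]))  # [0.8 0.2 0.4]
```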
Therefore, by implementing the optional embodiment, the continuous updating of the preset feature vectors can be realized without increasing the number of the preset feature vectors in the storage unit, so that the occupation of storage resources can be reduced, and the optimization of the resource utilization rate is facilitated.
Referring to fig. 9, fig. 9 schematically shows a flowchart of a user validity determination method according to an embodiment of the present application. As shown in fig. 9, the user validity determination method includes steps S900 to S928, wherein:
step S900: and acquiring a real-time image corresponding to the current user, and performing feature extraction on the real-time image through a first neural network to obtain a first feature vector.
Step S902: and acquiring a legal feature vector corresponding to the legal portrait, and calculating a first similarity between the legal feature vector and the first feature vector.
Step S904: and performing feature extraction on the real-time image through a second neural network to obtain a second feature vector.
Step S906: and classifying each pixel point in the real-time image according to the second feature vector to obtain a class of pixels belonging to the portrait area and a class of pixels belonging to the background area.
Step S908: and determining a boundary for distinguishing the portrait area from the background area according to the first-class pixels and the second-class pixels and marking the boundary.
Step S910: and performing feature extraction on the background area through a third neural network to obtain a third feature vector.
Step S912: and acquiring preset characteristic vectors corresponding to at least one preset background respectively.
Step S914: and calculating a second similarity between each preset feature vector and the third feature vector.
Step S916: and generating a sample image corresponding to the third feature vector through a third neural network, and adjusting network parameters corresponding to the third neural network according to the sample image.
Step S918: and selecting the highest target similarity from the plurality of second similarities.
Step S920: acquiring the weight values corresponding to the first similarity and the target similarity, and calculating a weighted sum of the first similarity and the target similarity according to the weight values; if the weighted sum is greater than or equal to the preset weighted sum, executing step S922; if the weighted sum is smaller than the preset weighted sum, executing step S924.
Step S922: the current user is determined to be a valid user, and step S926 is performed.
Step S924: and judging the current user as an illegal user.
Step S926: if the target similarity is smaller than the preset similarity, selecting, from the at least one preset background according to vector similarity, a specific preset background with the highest similarity to the background region.
Step S928: and performing mean value calculation on the preset feature vector and the third feature vector of the specific preset background, and determining a calculation result as the updated preset feature vector of the specific preset background.
It should be noted that steps S900 to S928 correspond to the steps and the embodiment shown in fig. 3, and for the specific implementation of steps S900 to S928, please refer to the steps and the embodiment shown in fig. 3, which is not described herein again.
Therefore, by implementing the method shown in fig. 9, the user validity determination can be performed by combining the background similarity and the portrait similarity, so that the accuracy of the user validity determination is improved. In addition, the accuracy can be judged based on the legality of the user, the data security of the legal user is guaranteed, and the risk that the data of the legal user is stolen is reduced.
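For orientation, the overall flow of steps S900 to S928 could be sketched as a single routine as below; `first_net`, `second_net`, `third_net`, and the cosine measure are all hypothetical stand-ins for the three neural networks and the unspecified similarity function, and the numeric thresholds are illustrative:

```python
import numpy as np

def judge_user_validity(real_time_image, legal_vec, presets,
                        first_net, second_net, third_net,
                        w=(0.7, 0.3), preset_weighted_sum=0.75,
                        preset_similarity=0.6):
    """Hypothetical orchestration of steps S900-S928."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # S900-S902: portrait similarity.
    first_vec = first_net(real_time_image)
    first_sim = cosine(first_vec, legal_vec)

    # S904-S910: boundary marking and background feature extraction.
    background_area = second_net(real_time_image)
    third_vec = third_net(background_area)

    # S912-S918: second similarities and the target similarity.
    second_sims = {name: cosine(vec, third_vec) for name, vec in presets.items()}
    target_name, target_sim = max(second_sims.items(), key=lambda kv: kv[1])

    # S920-S924: weighted-sum decision.
    weighted = w[0] * first_sim + w[1] * target_sim
    if weighted < preset_weighted_sum:
        return False  # S924: illegal user

    # S926-S928: enrich the preset vectors with an uncommon legal background.
    if target_sim < preset_similarity:
        presets[target_name] = (presets[target_name] + third_vec) / 2.0
    return True  # S922: legal user
```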
Further, in the present exemplary embodiment, a user validity determination apparatus is also provided. Referring to fig. 10, the user validity determination apparatus 1000 may include: an image acquisition unit 1001, a similarity calculation unit 1002, a boundary determination unit 1003, and a validity determination unit 1004, wherein:
an image obtaining unit 1001 configured to obtain a real-time image corresponding to a current user;
a similarity calculation unit 1002, configured to calculate a first similarity between the real-time image and a legal portrait;
a boundary determination unit 1003 for determining a boundary for distinguishing a portrait area from a background area in a real-time image;
the similarity calculation unit 1002 is further configured to calculate a second similarity between the background region and the preset background;
a validity determination unit 1004, configured to perform validity determination on the current user according to the first similarity and the second similarity.
Therefore, by implementing the device shown in fig. 10, the user validity determination can be performed by combining the background similarity and the portrait similarity, so that the accuracy of the user validity determination is improved. In addition, the accuracy can be judged based on the legality of the user, the data security of the legal user is guaranteed, and the risk that the data of the legal user is stolen is reduced.
In an exemplary embodiment of the present application, the similarity calculation unit 1002 calculates the first similarity between the real-time image and the legal portrait, including:
performing feature extraction on the real-time image through a first neural network to obtain a first feature vector;
and acquiring a legal feature vector corresponding to the legal portrait, and calculating a first similarity between the legal feature vector and the first feature vector.
Therefore, by implementing the optional embodiment, the feature vector of the real-time image can be extracted through the neural network, so that the legal user can be identified according to the feature vector, the identification accuracy is further improved, and the data safety of the legal user is guaranteed.
In an exemplary embodiment of the present application, the first neural network includes a convolutional layer, an excitation layer, a pooling layer, and a fully connected layer, and the similarity calculation unit 1002 performs feature extraction on the real-time image through the first neural network to obtain the first feature vector, including:
performing feature extraction on the real-time image through the convolutional layer to obtain a first reference feature vector;
activating the first reference characteristic vector through the excitation layer to obtain a second reference characteristic vector;
sampling the second reference characteristic vector through the pooling layer to obtain a third reference characteristic vector;
and performing dimensionality reduction processing on the third reference characteristic vector through the full connection layer to obtain a first characteristic vector.
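A compact sketch of this four-stage extraction (convolution, excitation, pooling, fully connected dimension reduction), with illustrative layer sizes that are assumptions rather than details from the embodiment:

```python
import torch
import torch.nn as nn

first_network = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),  # first reference feature vector
    nn.ReLU(),                                   # excitation -> second reference
    nn.AdaptiveAvgPool2d(4),                     # sampling -> third reference
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 128),                  # dimension reduction -> first feature vector
)

first_vector = first_network(torch.randn(1, 3, 64, 64))
print(first_vector.shape)  # torch.Size([1, 128])
```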
Therefore, by implementing the optional embodiment, the feature vector used for representing the facial features can be determined by extracting the features of the real-time image, so that the legal user judgment can be favorably carried out according to the feature vector, and the judgment accuracy is improved.
In an exemplary embodiment of the present application, the apparatus further includes an image preprocessing unit (not shown), wherein:
an image preprocessing unit configured to preprocess the real-time image before the boundary determination unit 1003 determines the boundary for distinguishing the portrait area from the background area in the real-time image; wherein the preprocessing comprises grayscale processing or binarization processing.
Therefore, by implementing the optional embodiment, the accuracy of feature extraction can be improved by preprocessing the real-time image, and the accuracy of user validity judgment can be improved.
In an exemplary embodiment of the present application, the boundary determining unit 1003 determines a boundary for distinguishing the portrait area from the background area in the real-time image, including:
performing feature extraction on the real-time image through a second neural network to obtain a second feature vector;
classifying each pixel point in the real-time image according to the second feature vector to obtain a class of pixels belonging to a portrait area and a class of pixels belonging to a background area;
and determining a boundary for distinguishing the portrait area from the background area according to the first-class pixels and the second-class pixels and marking the boundary.
Therefore, by implementing the optional embodiment, the background area can be determined for the determination area of the boundary, so that the similarity comparison of the background area is facilitated, and the accuracy of the user validity determination is further improved.
In an exemplary embodiment of the present application, the apparatus further includes a sample updating unit (not shown) and a network training unit (not shown), wherein:
a sample updating unit, configured to update the sample set used for training the second neural network according to the boundary markers after the boundary determination unit 1003 determines the boundary for distinguishing the portrait area from the background area according to the first-class pixels and the second-class pixels and marks the boundary;
and the network training unit is used for training the second neural network according to the updated sample set.
It can be seen that, by implementing this alternative embodiment, the calculation accuracy of the second feature vector can be improved by updating the sample set, thereby improving the boundary labeling accuracy.
In an exemplary embodiment of the present application, the number of the preset backgrounds is at least one, and the similarity calculation unit 1002 calculates a second similarity between the background area and the preset background, including:
extracting the features of the background area through a third neural network to obtain a third feature vector;
acquiring preset characteristic vectors corresponding to at least one preset background respectively;
and calculating a second similarity between each preset feature vector and the third feature vector.
Therefore, by implementing the optional embodiment, the legality of the user can be further verified by extracting the features of the background area, and the data security can be further guaranteed.
In an exemplary embodiment of the present application, the apparatus further includes a sample generation unit (not shown), wherein:
the sample generation unit is configured to, after the similarity calculation unit 1002 performs feature extraction on the background region through the third neural network to obtain the third feature vector, generate a sample image corresponding to the third feature vector through the third neural network;
and the network training unit is also used for adjusting the network parameters corresponding to the third neural network according to the sample image.
Therefore, by implementing the optional embodiment, the parameter of the third neural network can be adjusted according to the background feature extraction of the user validity judgment each time, so that the feature extraction accuracy of the third neural network is improved, and the accuracy of the user validity judgment is improved.
In an exemplary embodiment of the present application, the validity determination unit 1004 performs validity determination on the current user according to the first similarity and the second similarity, including:
selecting the highest target similarity from the plurality of second similarities;
acquiring weight values corresponding to the first similarity and the target similarity;
calculating a weighted sum of the first similarity and the target similarity according to the weight value;
and judging the validity of the current user according to the weighted sum.
Therefore, by implementing this optional embodiment, the validity of the current user can be determined from both the portrait similarity and the background similarity. Compared with prior-art methods that rely on the portrait similarity alone, this yields higher determination accuracy, safeguards the data security of legal users, and prevents illegal users from stealing data after passing identity authentication through illegitimate means.
In an exemplary embodiment of the present application, the validity determination unit 1004 performs validity determination on the current user according to the weighted sum, including:
if the weighted sum is more than or equal to the preset weighted sum, judging that the current user is a legal user;
and if the weighted sum is smaller than the preset weighted sum, judging that the current user is an illegal user.
Therefore, whether the current user is a legal user can be determined by comparing the weighted sum with the preset weighted sum, which can improve the accuracy of the user validity determination.
In an exemplary embodiment of the present application, the apparatus further includes a vector updating unit (not shown), wherein:
and a vector updating unit, configured to update the preset feature vector according to the background area if the target similarity is smaller than the preset similarity after the validity determining unit 1004 determines that the current user is a valid user.
Therefore, by implementing the optional embodiment, the preset feature vector can be continuously updated, so that the accuracy of the user validity judgment is continuously improved.
In an exemplary embodiment of the present application, the updating the preset feature vector according to the background region by the vector updating unit includes:
selecting, from the at least one preset background according to vector similarity, a specific preset background with the highest similarity to the background region;
and performing mean value calculation on the preset feature vector and the third feature vector of the specific preset background, and determining a calculation result as the updated preset feature vector of the specific preset background.
Therefore, by implementing the optional embodiment, the preset feature vectors can be continuously updated without increasing the number of the preset feature vectors, so that the occupation of storage resources can be reduced, and the optimization of the resource utilization rate is facilitated.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Since each functional module of the user validity determination apparatus of the exemplary embodiment of the present application corresponds to the steps of the exemplary embodiment of the user validity determination method described above, for details that are not disclosed in the embodiment of the apparatus of the present application, please refer to the embodiment of the user validity determination method described above of the present application.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
As yet another aspect, the present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. A method for determining user validity, comprising:
acquiring a real-time image corresponding to a current user and calculating a first similarity between the real-time image and a legal portrait;
determining a boundary for distinguishing a portrait region from a background region in the real-time image;
calculating a second similarity between the background area and a preset background;
and judging the validity of the current user according to the first similarity and the second similarity.
2. The method of claim 1, wherein computing a first similarity between the live image and a legal portrait comprises:
performing feature extraction on the real-time image through a first neural network to obtain a first feature vector;
and acquiring a legal feature vector corresponding to the legal portrait, and calculating a first similarity between the legal feature vector and the first feature vector.
3. The method of claim 2, wherein the first neural network comprises a convolutional layer, an excitation layer, a pooling layer, and a fully connected layer, and the extracting features of the real-time image through the first neural network to obtain a first feature vector comprises:
performing feature extraction on the real-time image through the convolutional layer to obtain a first reference feature vector;
activating the first reference characteristic vector through the excitation layer to obtain a second reference characteristic vector;
sampling the second reference characteristic vector through the pooling layer to obtain a third reference characteristic vector;
and performing dimensionality reduction processing on the third reference characteristic vector through the full connection layer to obtain the first characteristic vector.
4. The method of claim 1, wherein prior to determining a boundary in the real-time image for distinguishing between a portrait area and a background area, the method further comprises:
preprocessing the real-time image; wherein the preprocessing comprises grayscale processing or binarization processing.
5. The method of claim 1, wherein determining a boundary in the real-time image for distinguishing between a portrait area and a background area comprises:
performing feature extraction on the real-time image through a second neural network to obtain a second feature vector;
classifying each pixel point in the real-time image according to the second feature vector to obtain a class of pixels belonging to the portrait area and a class of pixels belonging to the background area;
and determining the boundary for distinguishing the portrait area from the background area according to the first-class pixel and the second-class pixel and marking the boundary.
6. The method of claim 5, wherein after determining the boundary for distinguishing the portrait area from the background area according to the first-class pixels and the second-class pixels and marking the boundary, the method further comprises:
updating a sample set used to train the second neural network according to boundary labels;
training the second neural network according to the updated sample set.
7. The method according to claim 1, wherein the number of the preset backgrounds is at least one, and the calculating of the second similarity between the background area and the preset background comprises:
extracting the features of the background area through a third neural network to obtain a third feature vector;
acquiring preset characteristic vectors corresponding to at least one preset background respectively;
and calculating a second similarity between each preset feature vector and the third feature vector.
8. The method of claim 7, wherein after feature extraction is performed on the background region through the third neural network to obtain the third feature vector, the method further comprises:
generating, by the third neural network, a sample image corresponding to the third feature vector;
and adjusting network parameters corresponding to the third neural network according to the sample image.
9. The method of claim 7, wherein determining the validity of the current user according to the first similarity and the second similarity comprises:
selecting the highest target similarity from the plurality of second similarities;
acquiring weight values corresponding to the first similarity and the target similarity;
calculating a weighted sum of the first similarity and the target similarity according to the weight value;
and judging the validity of the current user according to the weighted sum.
10. The method of claim 9, wherein determining the validity of the current user based on the weighted sum comprises:
if the weighted sum is more than or equal to a preset weighted sum, judging that the current user is a legal user;
and if the weighted sum is smaller than the preset weighted sum, judging that the current user is an illegal user.
11. The method of claim 10, wherein after determining that the current user is a valid user, the method further comprises:
and if the target similarity is smaller than a preset similarity, updating the preset feature vector according to the background area.
12. The method of claim 11, wherein updating the preset feature vector according to the background region comprises:
selecting a specific preset background with the highest similarity to the background region from the at least one preset background according to the vector similarity;
and performing mean value calculation on the preset feature vector of the specific preset background and the third feature vector, and determining a calculation result as the updated preset feature vector of the specific preset background.
13. A user validity determination device, comprising:
the image acquisition unit is used for acquiring a real-time image corresponding to a current user;
the similarity calculation unit is used for calculating a first similarity between the real-time image and a legal portrait;
a boundary determination unit for determining a boundary for distinguishing a portrait region from a background region in the real-time image;
the similarity calculation unit is further used for calculating a second similarity between the background area and a preset background;
and the validity judging unit is used for judging the validity of the current user according to the first similarity and the second similarity.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1-12.
15. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-12 via execution of the executable instructions.
CN202010783859.XA 2020-08-06 2020-08-06 User validity determination method, device, computer readable storage medium and equipment Active CN111914769B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010783859.XA CN111914769B (en) 2020-08-06 2020-08-06 User validity determination method, device, computer readable storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010783859.XA CN111914769B (en) 2020-08-06 2020-08-06 User validity determination method, device, computer readable storage medium and equipment

Publications (2)

Publication Number Publication Date
CN111914769A true CN111914769A (en) 2020-11-10
CN111914769B CN111914769B (en) 2024-01-26

Family

ID=73288328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010783859.XA Active CN111914769B (en) 2020-08-06 2020-08-06 User validity determination method, device, computer readable storage medium and equipment

Country Status (1)

Country Link
CN (1) CN111914769B (en)



Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427421A (en) * 2015-11-16 2016-03-23 苏州市公安局虎丘分局 Entrance guard control method based on face recognition
CN106295522A (en) * 2016-07-29 2017-01-04 武汉理工大学 A kind of two-stage anti-fraud detection method based on multi-orientation Face and environmental information
CN106960180A (en) * 2017-02-24 2017-07-18 深圳市普波科技有限公司 A kind of intelligent control method, apparatus and system
CN108734057A (en) * 2017-04-18 2018-11-02 北京旷视科技有限公司 The method, apparatus and computer storage media of In vivo detection
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 A kind of face identification method and device
KR101954763B1 (en) * 2017-09-04 2019-03-06 동국대학교 산학협력단 Face recognition access control apparatus and operation method thereof
CN108875484A (en) * 2017-09-22 2018-11-23 北京旷视科技有限公司 Face unlocking method, device and system and storage medium for mobile terminal
CN107862247A (en) * 2017-10-13 2018-03-30 平安科技(深圳)有限公司 A kind of human face in-vivo detection method and terminal device
CN108229362A (en) * 2017-12-27 2018-06-29 杭州悉尔科技有限公司 A kind of binocular recognition of face biopsy method based on access control system
CN108629305A (en) * 2018-04-27 2018-10-09 朱旭辉 A kind of face recognition method
CN108830062A (en) * 2018-05-29 2018-11-16 努比亚技术有限公司 Face identification method, mobile terminal and computer readable storage medium
CN109035299A (en) * 2018-06-11 2018-12-18 平安科技(深圳)有限公司 Method for tracking target, device, computer equipment and storage medium
CN108960145A (en) * 2018-07-04 2018-12-07 北京蜂盒科技有限公司 Facial image detection method, device, storage medium and electronic equipment
WO2020038136A1 (en) * 2018-08-24 2020-02-27 深圳前海达闼云端智能科技有限公司 Facial recognition method and apparatus, electronic device and computer-readable medium
CN110969067A (en) * 2018-09-30 2020-04-07 北京小米移动软件有限公司 User registration and authentication method and device
CN109635625A (en) * 2018-10-16 2019-04-16 平安科技(深圳)有限公司 Smart identity checking method, equipment, storage medium and device
CN110097570A (en) * 2019-04-30 2019-08-06 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN110163884A (en) * 2019-05-17 2019-08-23 温州大学 A kind of single image dividing method based on full connection deep learning neural network
CN110751041A (en) * 2019-09-19 2020-02-04 平安科技(深圳)有限公司 Certificate authenticity verification method, system, computer equipment and readable storage medium
CN110889377A (en) * 2019-11-28 2020-03-17 深圳市丰巢科技有限公司 Method and device for identifying abnormality of advertising object, server device and storage medium
CN111242097A (en) * 2020-02-27 2020-06-05 腾讯科技(深圳)有限公司 Face recognition method and device, computer readable medium and electronic equipment
CN111488943A (en) * 2020-04-16 2020-08-04 上海芯翌智能科技有限公司 Face recognition method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HUANGXUN CHEN等: "EchoFace: Acoustic Sensor-Based Media Attack Detection for Face Authentication", 《IEEE INTERNET OF THINGS JOURNAL》, vol. 7, no. 3, pages 2152 - 2159, XP011778499, DOI: 10.1109/JIOT.2019.2959203 *
LIU Cheng et al.: "A Deep-Learning-Based Face Verification System for Mobile Terminals", Computer and Modernization, no. 2, pages 107-111
NIU Hongchuang: "Intelligent Protection System Based on Liveness Detection", China Master's Theses Full-text Database, Information Science and Technology, no. 2020, pages 138-985
ZHENG Xinyang: "Research on Liveness Detection Methods for Face Authentication", China Master's Theses Full-text Database, Information Science and Technology, no. 2019, pages 138-152
CHEN Qi: "Research on Palmprint Preprocessing and Development of a Palmprint Authentication System for Android Terminals", China Master's Theses Full-text Database, Information Science and Technology, no. 2018, pages 138-72

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112906671A (en) * 2021-04-08 2021-06-04 平安科技(深圳)有限公司 Face examination false picture identification method and device, electronic equipment and storage medium
CN112906671B (en) * 2021-04-08 2024-03-15 平安科技(深圳)有限公司 Method and device for identifying false face-examination picture, electronic equipment and storage medium
CN115115843A (en) * 2022-06-02 2022-09-27 马上消费金融股份有限公司 Data processing method and device
CN115115843B (en) * 2022-06-02 2023-08-22 马上消费金融股份有限公司 Data processing method and device

Also Published As

Publication number Publication date
CN111914769B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN114913565B (en) Face image detection method, model training method, device and storage medium
CN111241989A (en) Image recognition method and device and electronic equipment
CN109284683A (en) Feature extraction and matching and template renewal for biological identification
CN111475797A (en) Method, device and equipment for generating confrontation image and readable storage medium
US11126827B2 (en) Method and system for image identification
WO2022105118A1 (en) Image-based health status identification method and apparatus, device and storage medium
CN106203387A (en) Face verification method and system
CN111914769B (en) User validity determination method, device, computer readable storage medium and equipment
CN115050064A (en) Face living body detection method, device, equipment and medium
CN113515988A (en) Palm print recognition method, feature extraction model training method, device and medium
KR20180006284A (en) An adaptive quantization method for iris image encoding
JP2018055231A (en) Biometric authentication device
CN111738199A (en) Image information verification method, image information verification device, image information verification computing device and medium
US11321553B2 (en) Method, device, apparatus and storage medium for facial matching
Chiu et al. A micro-control capture images technology for the finger vein recognition based on adaptive image segmentation
CN113792659B (en) Document identification method and device and electronic equipment
CN110674480A (en) Behavior data processing method, device and equipment and readable storage medium
Viedma et al. Relevant features for gender classification in NIR periocular images
CN111783677B (en) Face recognition method, device, server and computer readable medium
Youmaran et al. Measuring biometric sample quality in terms of biometric feature information in iris images
Diarra et al. Study of deep learning methods for fingerprint recognition
CN111753618A (en) Image recognition method and device, computer equipment and computer readable storage medium
Rajagopal et al. Performance evaluation of multimodal multifeature authentication system using KNN classification
Aizi et al. Remote multimodal biometric identification based on the fusion of the iris and the fingerprint
Prakash et al. Fusion of multimodal biometrics using feature and score level fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant