CN110490076B - Living body detection method, living body detection device, computer equipment and storage medium

Info

Publication number
CN110490076B
Authority
CN
China
Legal status
Active
Application number
CN201910650995.9A
Other languages
Chinese (zh)
Other versions
CN110490076A
Inventor
杨晟
程检萍
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910650995.9A
Publication of CN110490076A
Application granted
Publication of CN110490076B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive


Abstract

The application relates to the field of artificial intelligence and provides a living body detection method, a living body detection device, computer equipment and a storage medium. The living body detection method comprises the following steps: acquiring a face image to be detected; extracting features of the face image to be detected to obtain corresponding image features; inputting the obtained image features into a pre-trained generator in a first generative adversarial network, which is obtained by training based on a living body face image set, to obtain a first generated image corresponding to the face image to be detected; inputting the obtained image features into a pre-trained generator in a second generative adversarial network, which is obtained by training based on a non-living body face image set, to obtain a second generated image corresponding to the face image to be detected; respectively calculating the similarity of the first generated image and of the second generated image to the face image to be detected, to obtain a first similarity and a second similarity; and determining a living body detection result corresponding to the face image to be detected according to the first similarity and the second similarity.

Description

Living body detection method, living body detection device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a living body detection method, apparatus, computer device, and storage medium.
Background
With the development of artificial intelligence technology, face recognition technology has emerged. However, current face recognition technology can recognize the identity in a face image but cannot accurately distinguish whether the input face is genuine. In order to automatically and efficiently distinguish the authenticity of an image against spoofing attacks and ensure system security, living body detection technology has been proposed; living body detection generally means judging, during face recognition, whether a face corresponds to a living person.
In conventional living body detection technology, the user is required to make a specified action, and a video is recorded and analyzed to make the judgment. Because analyzing the video takes a relatively long time, the efficiency of living body detection is low.
Disclosure of Invention
In view of the above, it is necessary to provide a living body detection method, apparatus, computer device, and storage medium capable of improving living body detection efficiency.
A living body detection method, the method comprising:
acquiring a face image to be detected;
extracting features of the face image to be detected to obtain corresponding image features;
inputting the obtained image features into a generator in a pre-trained first generative adversarial network to obtain a first generated image corresponding to the face image to be detected, wherein the first generative adversarial network is obtained by training based on a living body face image set, and the living body face image set is an image set obtained by shooting living faces;
inputting the obtained image features into a generator in a pre-trained second generative adversarial network to obtain a second generated image corresponding to the face image to be detected, wherein the second generative adversarial network is obtained by training based on a non-living body face image set, and the non-living body face image set is an image set obtained by shooting non-living faces;
calculating the similarity between the first generated image and the face image to be detected to obtain a first similarity;
calculating the similarity between the second generated image and the face image to be detected to obtain a second similarity;
and determining a living body detection result corresponding to the face image to be detected according to the first similarity and the second similarity.
In one embodiment, the calculating the similarity between the first generated image and the face image to be detected to obtain a first similarity includes:
scaling the first generated image and the image to be detected to a preset size respectively;
respectively carrying out gray scale processing on the scaled first generated image and the image to be detected;
sequentially calculating the average value of each row of pixel points in the first generated image after gray processing, and calculating the variance of all the obtained average values to obtain a first characteristic value corresponding to the first generated image;
sequentially calculating the average value of each row of pixel points in the image to be detected after gray processing, and calculating the variance of all the obtained average values to obtain a second characteristic value corresponding to the image to be detected;
and calculating a difference value between the first characteristic value and the second characteristic value, and obtaining a first similarity based on the difference value.
In one embodiment, the generating of the first generative adversarial network includes:
acquiring a living body face image set;
extracting features of the living body face images in the living body face image set to obtain first training features corresponding to the living body face images, wherein the first training features comprise spatial information and frequency domain features;
inputting the obtained first training features into a first initial generator to obtain a first generated face image;
adjusting parameters of the first initial generator based on the similarity between the obtained first generated face image and the living body face image;
respectively inputting the obtained first generated face image and the living body face image into a first initial discriminator to obtain a first discrimination result and a second discrimination result, wherein the first discrimination result and the second discrimination result are respectively used for representing whether the obtained first generated face image and the living body face image are real face images or not;
adjusting parameters of the first initial generator and the first initial discriminator based on a first difference between the first discrimination result and a no discrimination result for characterizing that the image input to the first initial discriminator is not a real face image, and a second difference between the second discrimination result and a yes discrimination result for characterizing that the image input to the first initial discriminator is a real face image;
determining the adjusted first initial generator and the adjusted first initial discriminator as the generator and the discriminator in the first generative adversarial network, respectively.
In one embodiment, the generating of the second generative adversarial network includes:
acquiring a non-living face image set;
extracting features of the non-living face images in the non-living face image set to obtain second training features corresponding to the non-living face images, wherein the second training features do not contain spatial information and their frequency domain features contain fewer high-frequency components than those of the first training features;
inputting the obtained second training features into a second initial generator to obtain a second generated face image;
adjusting parameters of the second initial generator based on the obtained similarity between the second generated face image and the non-living face image;
respectively inputting the obtained second generated face image and the non-living face image into a second initial discriminator to obtain a third discrimination result and a fourth discrimination result, wherein the third discrimination result and the fourth discrimination result are respectively used for representing whether the obtained second generated face image and the non-living face image are real face images or not;
adjusting parameters of the second initial generator and the second initial discriminator based on a third difference between the third discrimination result and a no discrimination result for characterizing that the image input to the second initial discriminator is not a real face image, and a fourth difference between the fourth discrimination result and a yes discrimination result for characterizing that the image input to the second initial discriminator is a real face image;
determining the adjusted second initial generator and the adjusted second initial discriminator as the generator and the discriminator in the second generative adversarial network, respectively.
In one embodiment, feature extraction is performed on the face image to be detected to obtain corresponding image features, including:
respectively inputting the face image to be detected into a first feature extraction model and a second feature extraction model which are trained in advance to obtain a first image feature and a second image feature, wherein the first feature extraction model is obtained by training based on a living face image set, and the second feature extraction model is obtained by training based on a non-living face image set;
the inputting the obtained image features into a generator in a pre-trained first generative adversarial network to obtain a first generated image corresponding to the face image to be detected includes:
inputting the obtained first image features into the generator in the pre-trained first generative adversarial network to obtain the first generated image corresponding to the face image to be detected;
the inputting the obtained image features into a generator in a pre-trained second generative adversarial network to obtain a second generated image corresponding to the face image to be detected includes:
inputting the obtained second image features into the generator in the pre-trained second generative adversarial network to obtain the second generated image corresponding to the face image to be detected.
In one embodiment, the method further comprises:
when the living body detection result corresponding to the face image to be detected indicates that the face is not a living face, acquiring a user identifier corresponding to the face image to be detected, and calculating the number of living body detection failures of the user corresponding to the user identifier within a preset time period;
and when the number of living body detection failures is greater than a preset threshold, acquiring an associated account corresponding to the user identifier, and sending warning information to the associated account.
A living body detection apparatus, the apparatus comprising:
the to-be-detected face image acquisition module is used for acquiring a face image to be detected;
the feature extraction module is used for extracting features of the face image to be detected to obtain corresponding image features;
the first generated image acquisition module is used for inputting the obtained image features into a generator in a pre-trained first generative adversarial network to obtain a first generated image corresponding to the face image to be detected, wherein the first generative adversarial network is obtained by training based on a living body face image set, and the living body face image set is an image set obtained by shooting living faces;
the second generated image acquisition module is used for inputting the obtained image features into a generator in a pre-trained second generative adversarial network to obtain a second generated image corresponding to the face image to be detected, wherein the second generative adversarial network is obtained by training based on a non-living body face image set, and the non-living body face image set is an image set obtained by shooting non-living faces;
the first similarity calculation module is used for calculating the similarity between the first generated image and the face image to be detected to obtain a first similarity;
the second similarity calculation module is used for calculating the similarity between the second generated image and the face image to be detected to obtain a second similarity;
and the living body detection result determining module is used for determining a living body detection result corresponding to the face image to be detected according to the first similarity and the second similarity.
In one embodiment, the first similarity calculation module is further configured to:
scaling the first generated image and the image to be detected to a preset size respectively;
respectively carrying out gray scale processing on the scaled first generated image and the image to be detected;
sequentially calculating the average value of each row of pixel points in the first generated image after gray processing, and calculating the variance of all the obtained average values to obtain a first characteristic value corresponding to the first generated image;
sequentially calculating the average value of each row of pixel points in the image to be detected after gray processing, and calculating the variance of all the obtained average values to obtain a second characteristic value corresponding to the image to be detected;
and calculating a difference value between the first characteristic value and the second characteristic value, and obtaining a first similarity based on the difference value.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the living body detection method according to any of the embodiments described above.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the living body detection method described in any of the embodiments above.
According to the living body detection method, the living body detection device, the computer equipment and the storage medium, image features are extracted from the image to be detected, and the extracted image features are respectively input into the generator of the first generative adversarial network and the generator of the second generative adversarial network to obtain a first generated image and a second generated image. The similarities between the two generated images and the image to be detected are then calculated, and the living body detection result corresponding to the face image to be detected is finally determined by comparing the two similarities. Since there is no need to record a video for analysis, the living body detection time is greatly shortened and the living body detection efficiency is improved.
Drawings
FIG. 1 is an application scenario diagram of a living body detection method in one embodiment;
FIG. 2 is a flow diagram of a living body detection method in one embodiment;
FIG. 3 is a flow chart of a living body detection step in one embodiment;
FIG. 4 is a flow chart of a living body detection method in another embodiment;
FIG. 5 is a block diagram showing the structure of a living body detecting device in one embodiment;
fig. 6 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The living body detection method provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The terminal 102 first collects a face image of the user and sends the collected face image to the server 104. The server 104 determines the received face image as the face image to be detected and performs feature extraction on it to obtain image features. The obtained image features are then respectively input into a first generative adversarial network obtained by training based on living body face images and a second generative adversarial network obtained by training based on non-living body face images, so as to obtain two different generated images. The similarities between the two generated images and the image to be detected are calculated respectively, and the living body detection result corresponding to the image to be detected is determined by comparing these similarities: when the similarity between the image generated by the first generative adversarial network and the image to be detected is greater than the similarity between the image generated by the second generative adversarial network and the image to be detected, the living body detection result corresponding to the face image to be detected is determined to be a living face; otherwise, the living body detection result is determined not to be a living face.
The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices, and the server 104 may be implemented by a stand-alone server or a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, a living body detection method is provided, and the method is applied to the server in fig. 1 for illustration, and includes the following steps:
step S202, a face image to be detected is acquired.
Specifically, the server acquires the face image to be detected from the terminal. In one embodiment, the terminal may control a built-in camera to capture a face image of the user and send the captured face image to the server, and the server determines the received face image as the face image to be detected. In another embodiment, the face image to be detected may be a face image of the user that the terminal captured in advance and stored locally; when performing living body detection, the terminal obtains the face image from local storage and sends it to the server, and the server determines the received face image as the face image to be detected.
Step S204, extracting features of the face image to be detected to obtain corresponding image features.
The image features may be various features including, but not limited to, color features, texture features, two-dimensional shape features, two-dimensional spatial relationship features, three-dimensional shape features, three-dimensional spatial relationship features, facial features, shape features of the five sense organs, and position and scale features of the five sense organs. The method for extracting the image features can be any method capable of extracting image features in the prior art; for example, a histogram intersection method, a distance method, a center distance method, a reference color table method, or an accumulated color histogram method can be adopted for extracting color features, and a geometric method, a model method, a signal processing method, or the like can be adopted for extracting texture features.
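As a minimal illustration of one of the color-feature options listed above, the following Python sketch computes an accumulated color histogram with OpenCV. The function name, bin count, and normalization are illustrative choices, not details given in the text.

```python
import cv2
import numpy as np

def color_histogram_feature(image_bgr, bins=32):
    """Accumulated color histogram over the three BGR channels; a simple
    instance of the color-feature extraction options listed above."""
    features = []
    for channel in range(3):
        hist = cv2.calcHist([image_bgr], [channel], None, [bins], [0, 256])
        hist = hist.ravel() / hist.sum()   # normalize to a distribution
        features.append(np.cumsum(hist))   # accumulate the histogram
    return np.concatenate(features)
```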
Step S206, inputting the obtained image features into a generator in a pre-trained first generative adversarial network to obtain a first generated image corresponding to the face image to be detected.
Specifically, after the server extracts the image features, the extracted image features are input into the generator of a pre-trained first generative adversarial network. The first generative adversarial network comprises a generator and a discriminator: the generator is used for generating images according to image features, and the discriminator is used for determining whether an image input into it is a generated image or a real image. In this embodiment, the first generative adversarial network is obtained by performing feature extraction on a living body face image set and training according to the extracted features, where the living body face image set is an image set obtained by photographing living faces.
Step S208, inputting the obtained image features into a generator in a pre-trained second generative adversarial network to obtain a second generated image corresponding to the face image to be detected.
Specifically, the server inputs the extracted image features into the generator of a pre-trained second generative adversarial network, which also includes a generator and a discriminator. The second generative adversarial network is obtained by performing feature extraction on a non-living body face image set and training according to the extracted features, where a non-living face image is an image obtained by photographing a non-living face, for example, a face image obtained by photographing a printed face image, or a face image obtained by photographing a played face video.
It will be understood that, since the living body face image is obtained by photographing a living face and the non-living body face image is obtained by photographing a non-living face, the two kinds of images have different features because the photographed objects differ. For example, the texture features of a living body face image contain depth information and frequency domain features, whereas a non-living body face image does not contain depth information and its frequency domain features contain fewer high-frequency components than those of the living body face image. Therefore, the features learned by the first generative adversarial network trained on living body face images and the second generative adversarial network trained on non-living body face images are different, and the two generated images obtained by the two networks from the image features extracted from the face image to be detected are also different.
Step S210, calculating the similarity between the first generated image and the face image to be detected, and obtaining the first similarity.
Step S212, calculating the similarity between the second generated image and the face image to be detected, and obtaining the second similarity.
Specifically, the similarity is used to represent the degree of similarity between two images; the larger the similarity, the more similar the two images. In the present application, the server may calculate the similarity between two images by various methods. In some embodiments, histogram matching, mathematical matrix decomposition (e.g., singular value decomposition and non-negative matrix factorization), feature-point-based image similarity calculation methods, and the like may be employed.
In other embodiments, the server calculating the similarity between a generated image and the image to be detected may include the following steps, as sketched below: scaling the generated image and the image to be detected to a preset size respectively; performing gray scale processing on the scaled generated image and the scaled image to be detected respectively; sequentially calculating the average value of each row of pixel points in the generated image after gray processing, and calculating the variance of all the obtained average values to obtain a first characteristic value corresponding to the generated image; sequentially calculating the average value of each row of pixel points in the image to be detected after gray processing, and calculating the variance of all the obtained average values to obtain a second characteristic value corresponding to the image to be detected; and calculating the difference value between the first characteristic value and the second characteristic value, and obtaining the similarity based on the difference value. In this embodiment, the smaller the difference value, the more similar the generated image and the image to be detected are; that is, the difference value and the similarity are negatively correlated. To simplify the calculation, in a specific embodiment, the reciprocal of the difference value may be taken as the similarity value.
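A direct transcription of these steps into Python, as a sketch only: OpenCV is assumed for the scaling and gray processing, the 256x256 preset size is an illustrative choice, and a small epsilon is added before taking the reciprocal to guard against division by zero.

```python
import cv2
import numpy as np

def row_variance_value(gray):
    # mean of each row of pixel points, then the variance of those means
    return gray.mean(axis=1).var()

def similarity(generated, to_detect, size=(256, 256)):
    """Row-mean/variance similarity between a generated image and the image
    to be detected, following the steps described above."""
    g = cv2.cvtColor(cv2.resize(generated, size), cv2.COLOR_BGR2GRAY)
    d = cv2.cvtColor(cv2.resize(to_detect, size), cv2.COLOR_BGR2GRAY)
    first_value = row_variance_value(g.astype(np.float64))
    second_value = row_variance_value(d.astype(np.float64))
    diff = abs(first_value - second_value)
    return 1.0 / (diff + 1e-8)  # reciprocal of the difference as the similarity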
Step S214, determining a living body detection result corresponding to the face image to be detected according to the first similarity and the second similarity.
Specifically, the first generative adversarial network is obtained by training based on features extracted from the living body face image set, so it learns the features of living body face images rather than those of non-living body face images. Therefore, if the image features corresponding to an image to be detected obtained by photographing a living face are input into the generator of the first generative adversarial network, the generated image is relatively similar to the image to be detected. The second generative adversarial network, by contrast, is obtained by training based on features extracted from the non-living body face image set and learns the features of non-living body face images rather than those of living body face images, so when the image features corresponding to an image to be detected obtained by photographing a living face are input into its generator, the generated image is not very similar to the image to be detected. In this case, the similarity for the image generated by the first generative adversarial network is necessarily greater than the similarity for the image generated by the second generative adversarial network.
Conversely, if the image features corresponding to an image to be detected obtained by photographing a non-living face are input into the generator of the first generative adversarial network, the generated image is necessarily not very similar to the image to be detected, whereas when those image features are input into the generator of the second generative adversarial network, the generated image is relatively similar to the image to be detected. In this case, the similarity for the image generated by the first generative adversarial network is necessarily smaller than the similarity for the image generated by the second generative adversarial network.
When the first similarity is greater than the second similarity, it is determined that the face in the face image to be detected is a living face; when the first similarity is smaller than the second similarity, it is determined that the face in the face image to be detected is not a living face. In this embodiment, the living body detection result of the face image to be detected can be determined objectively and accurately by comparing the two similarities.
Further, the server may transmit the obtained living body detection result to the terminal.
In the living body detection method, image features are extracted from the image to be detected and respectively input into the generator of the first generative adversarial network and the generator of the second generative adversarial network to obtain a first generated image and a second generated image. The similarities between the two generated images and the image to be detected are then calculated, and the living body detection result corresponding to the face image to be detected is finally determined by comparing the two similarities. Since there is no need to record a video for analysis, the living body detection time is greatly shortened and the living body detection efficiency is improved.
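Putting steps S202 to S214 together, the following schematic sketch shows the server-side flow. The callables extract_features, live_generator, non_live_generator, and similarity are stand-ins for the trained components described above; their concrete implementations are assumptions.

```python
def detect_liveness(face_image, live_generator, non_live_generator,
                    extract_features, similarity):
    """Schematic pipeline for steps S202 to S214; every callable is a
    stand-in for the corresponding trained component described above."""
    features = extract_features(face_image)                        # step S204
    first_generated = live_generator(features)                     # step S206
    second_generated = non_live_generator(features)                # step S208
    first_similarity = similarity(first_generated, face_image)     # step S210
    second_similarity = similarity(second_generated, face_image)   # step S212
    # Step S214: the face is judged live when the GAN trained on living
    # body faces reconstructs the input better than the non-living one.
    return first_similarity > second_similarity
```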
In one embodiment, as shown in fig. 3, the generating of the first generative adversarial network includes:
step S302, acquiring a living body face image set.
Specifically, each of the living face images of the living face image set is an image obtained by photographing a living face.
Step S304, extracting the characteristics of the living body face image in the living body face image set to obtain a first training characteristic corresponding to the living body face image.
Specifically, the first training features are image features extracted from the living body face image, and include spatial information, frequency domain features, and the like.
Step S306, inputting the obtained first training features into a first initial generator to obtain a first generated face image.
Wherein the first initial generator is the generator in a first initial generative adversarial network. The first initial generative adversarial network may be a generative adversarial network (GAN, Generative Adversarial Networks) comprising a generator and a discriminator, namely the first initial generator and the first initial discriminator: the first initial generator is used for generating images from image features, and the first initial discriminator is used for determining whether an input image is a generated image or a real image.
In one embodiment, the following steps are further included before step S306:
first, network configuration information of a first initial generator is determined, and network configuration information of a first initial arbiter is determined. The first initial generator and the first initial arbiter can be various neural networks, which neural network the first initial generator and the first initial arbiter are can be respectively determined, and the neural network comprises a plurality of layers of neurons, a plurality of neurons in each layer, a connection sequence relation among the neurons in each layer, which parameters each layer of neurons comprise, an activation function type corresponding to each layer of neurons, and the like. It will be appreciated that the network structure information that needs to be determined is also different for different neural network types. The parameter values of the network parameters of the first initial generator and the first initial arbiter may then be initialized. In practice, the respective network parameters of the first initial generator and the first initial arbiter may be initialized with a number of different small random numbers. The small random number is used for ensuring that the network does not enter a saturated state due to overlarge weight, so that training fails, and the different random numbers are used for ensuring that the network can learn normally.
Step S308, adjusting parameters of the first initial generator based on the obtained similarity between the first generated face image and the living body face image.
Specifically, an objective function may be set with the goal of maximizing the similarity between the obtained first generated face image and the living body face image; a preset optimization algorithm is then adopted to adjust the parameters of the first initial generator so as to optimize the objective function, and the parameter adjustment step is ended when a first preset training ending condition is satisfied. The first preset training ending condition includes, but is not limited to: the training time exceeds a preset duration, the number of executions of the parameter adjustment step exceeds a preset number, or the similarity between the obtained generated face image and the living body face image is greater than a preset similarity threshold.
The preset optimization algorithm may include, but is not limited to, gradient descent (Gradient Descent), Newton's method (Newton's Method), quasi-Newton methods (Quasi-Newton Methods), the conjugate gradient method (Conjugate Gradient), heuristic optimization methods, and various other optimization algorithms now known or developed in the future. The similarity between two images may be calculated by various methods; for example, histogram matching, mathematical matrix decomposition (such as singular value decomposition and non-negative matrix factorization), or feature-point-based image similarity calculation methods may be employed.
Step S310, the obtained first generated face image and the obtained living body face image are respectively input into a first initial discriminator to obtain a first discrimination result and a second discrimination result.
Specifically, the first discrimination result is the discrimination result output by the first initial discriminator for the first generated face image input into it, and is used for characterizing whether the first generated face image is a real face image; the second discrimination result is the discrimination result output by the first initial discriminator for the living body face image input into it, and is used for characterizing whether the living body face image is a real face image. The discrimination result output by the first initial discriminator may take various forms. For example, the discrimination result may be a result characterizing that the face image is a real face image (e.g., the number 1 or the vector (1, 0)) or a result characterizing that the face image is not a real face image, i.e., is a generated face image (e.g., the number 0 or the vector (0, 1)). As another example, the discrimination result may include a probability characterizing that the face image is a real face image and/or a probability characterizing that it is not; e.g., the discrimination result may be a vector including a first probability characterizing that the face image is a real face image and a second probability characterizing that it is not.
Step S312, adjusting parameters of the first initial generator and the first initial discriminator based on the first difference and the second difference.
Specifically, the first difference and the second difference may first be calculated according to a preset loss function (e.g., the L1 norm or the L2 norm). The first difference is the difference between the first discrimination result and a no discrimination result used for characterizing that the image input to the first initial discriminator is not a real face image, and the second difference is the difference between the second discrimination result and a yes discrimination result used for characterizing that the image input to the first initial discriminator is a real face image. It will be appreciated that the specific loss function may differ when the form of the discrimination result output by the first initial discriminator differs.
Further, the parameters of the first initial generator and the first initial discriminator may be adjusted based on the calculated first difference and second difference, and the parameter adjustment step may be ended when a second preset training ending condition is satisfied. The second preset training ending condition may include, but is not limited to: the training time exceeds a preset duration, the number of executions of the parameter adjustment step exceeds a preset number, or the difference between the calculated first probability and second probability is smaller than a first preset difference threshold.
In this embodiment, various implementations may be used to adjust the parameters of the first initial generator and the first initial discriminator based on the calculated first difference and second difference; for example, the BP (Back Propagation) algorithm or the SGD (Stochastic Gradient Descent) algorithm may be used. By optimizing the first initial generator and the first initial discriminator multiple times, the generated image obtained after inputting the image features corresponding to a living body face image into the first initial generator becomes similar to the living body face image; that is, the first initial generator learns the features of living body face images.
Step S314, determining the adjusted first initial generator and the adjusted first initial discriminator as the generator and the discriminator in the first generative adversarial network, respectively.
It may be appreciated that in this embodiment, the step of generating the first generative adversarial network may be performed at the server or at another device. If it is performed at the server, the server stores the obtained model structure information of the first generative adversarial network and the parameter values of the model parameters locally after generating the network. If it is performed at another device, after the other device generates the first generative adversarial network, it sends the obtained model structure information and the parameter values of the model parameters to the server, and the server stores the received model structure information and parameter values locally.
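The following condensed PyTorch sketch illustrates the training procedure of steps S302 to S314 under common GAN conventions. Binary cross-entropy stands in for the differences to the yes/no discrimination results, an L1 reconstruction term stands in for the similarity objective, and the discriminator is assumed to end in a sigmoid; the network architectures, optimizers, and loss weights are all assumptions, not specifics from the text.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()  # difference to the "yes"/"no" discrimination results
l1 = nn.L1Loss()    # stands in for the image-similarity objective of step S308

def train_step(generator, discriminator, g_opt, d_opt, live_batch, features):
    # Steps S306/S308: generate face images and adjust the generator so that
    # they become similar to the living body face images.
    generated = generator(features)
    g_opt.zero_grad()
    l1(generated, live_batch).backward()
    g_opt.step()

    # Steps S310/S312: obtain the first/second discrimination results and
    # adjust the discriminator using their differences to the "no" (0) and
    # "yes" (1) labels; the discriminator output is assumed to be a sigmoid.
    d_opt.zero_grad()
    first_result = discriminator(generator(features).detach())
    second_result = discriminator(live_batch)
    d_loss = (bce(first_result, torch.zeros_like(first_result))
              + bce(second_result, torch.ones_like(second_result)))
    d_loss.backward()
    d_opt.step()

    # Adversarial adjustment of the generator against the updated discriminator.
    g_opt.zero_grad()
    fooled = discriminator(generator(features))
    bce(fooled, torch.ones_like(fooled)).backward()
    g_opt.step()
```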
In one embodiment, the generating of the second generative adversarial network includes: acquiring a non-living body face image set; extracting features of the non-living body face images in the non-living body face image set to obtain second training features corresponding to the non-living body face images, wherein the second training features do not contain spatial information and their frequency domain features contain fewer high-frequency components than those of the first training features; inputting the obtained second training features into a second initial generator to obtain a second generated face image; adjusting parameters of the second initial generator based on the obtained similarity between the second generated face image and the non-living body face image; respectively inputting the obtained second generated face image and the non-living body face image into a second initial discriminator to obtain a third discrimination result and a fourth discrimination result, wherein the third discrimination result and the fourth discrimination result are respectively used for characterizing whether the obtained second generated face image and the non-living body face image are real face images; adjusting parameters of the second initial generator and the second initial discriminator based on a third difference between the third discrimination result and a no discrimination result for characterizing that the image input to the second initial discriminator is not a real face image, and a fourth difference between the fourth discrimination result and a yes discrimination result for characterizing that the image input to the second initial discriminator is a real face image; and determining the adjusted second initial generator and the adjusted second initial discriminator as the generator and the discriminator in the second generative adversarial network, respectively.
It will be understood that, for the specific explanation and description of each step in this embodiment, reference may be made to the explanation and description of the corresponding steps in the embodiment of the generating step of the first generative adversarial network, which is not repeated herein.
In one embodiment, as shown in fig. 4, there is provided a living body detection method including the steps of:
step S402, a face image to be detected is acquired.
Step S404, respectively inputting the face image to be detected into a first feature extraction model and a second feature extraction model trained in advance to obtain a first image feature and a second image feature.
Specifically, after obtaining the face image to be detected, the server may input the face image to be detected into the first feature extraction model to obtain the corresponding first image feature, and input the face image to be detected into the second feature extraction model to obtain the corresponding second image feature. The first feature extraction model is obtained by training based on a living body face image set, where the living body face image set is an image set obtained by shooting living faces; the second feature extraction model is obtained by training based on a non-living body face image set, where a non-living body face image is an image obtained by shooting a non-living face, for example, a face image obtained by shooting a printed face image, or a face image obtained by shooting a played face video.
In one embodiment, the feature extraction model may be a convolutional neural network (CNN, Convolutional Neural Network), which may include at least one convolution layer for extracting image features, a pooling layer for downsampling the input information to compress the amount of data and parameters, and an excitation function layer for reducing overfitting. The calculation of the convolution layer can refer to the following formula:
y = f(X * W + b)
wherein f represents the activation function, X represents the gray image matrix, W represents the convolution kernel, * represents the convolution operation, and b represents the offset value.
In a specific embodiment, the Sobel-Gx convolution kernel may be used to convolve the face image to be detected, i.e., W in the above formula is taken to be the Sobel-Gx kernel, whose size can be determined according to need, for example 3x3. The offset value b is then added to each element of the convolution result, and each element of that result is input into the activation function to finally obtain the extracted image features, where the activation function may take the form of the sigmoid function:
f(x) = 1 / (1 + e^(-x))
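The specific embodiment above, written out as a sketch with NumPy and SciPy; the Sobel-Gx kernel values are the standard ones, while the offset value b = 0.1 is purely illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

SOBEL_GX = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=np.float64)  # 3x3 Sobel-Gx kernel

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_feature_map(gray_image, b=0.1):
    """y = f(X * W + b): convolve the gray image matrix with the Sobel-Gx
    kernel W, add the offset value b, and apply the sigmoid activation."""
    conv = convolve2d(gray_image, SOBEL_GX, mode="same", boundary="symm")
    return sigmoid(conv + b)
```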
In one embodiment, the training steps of the first feature extraction model are as follows: acquiring a living body face image set; determining the model structure information of an initial feature extraction model and the network structure information of an initial generative adversarial network, and initializing the model parameters of the initial feature extraction model and the network parameters of the initial generative adversarial network; and for a living body face image in the living body face image set, performing the following parameter adjustment steps: inputting the living body face image into the initial feature extraction model to obtain image features corresponding to the living body face image; inputting the obtained image features into the initial generator to obtain a generated face image; adjusting the parameters of the initial feature extraction model and the initial generator based on the obtained similarity between the generated face image and the living body face image; and determining the adjusted initial feature extraction model as the feature extraction model.
Wherein the initial generator is the generator in the initial generative adversarial network, which may be a generative adversarial network (GAN, Generative Adversarial Networks) predetermined for training the feature extraction model and including an initial generator for generating images and an initial discriminator for determining whether an input image is a generated image or a real image.
Further, in implementation, an objective function may be set with the goal of maximizing the similarity between the obtained generated face image and the living body face image; a preset optimization algorithm is then adopted to adjust the parameters of the initial feature extraction model and the initial generator so as to optimize the objective function, and the parameter adjustment step is ended when a preset training ending condition is satisfied. For example, the preset training ending condition may include, but is not limited to: the training time exceeds a preset duration, the number of executions of the parameter adjustment step exceeds a preset number, or the similarity between the obtained generated face image and the living body face image is greater than a preset similarity threshold.
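A sketch of the joint parameter adjustment just described: the initial feature extraction model and the initial generator are optimized together so that the similarity between the generated face image and the living body face image increases. Minimizing an L1 reconstruction loss as the surrogate for maximizing similarity, and using a single SGD optimizer over both models, are assumptions.

```python
import itertools
import torch
import torch.nn.functional as F

def make_joint_optimizer(extractor, generator, lr=1e-3):
    # a single optimizer over the parameters of both models (an assumption)
    return torch.optim.SGD(
        itertools.chain(extractor.parameters(), generator.parameters()), lr=lr)

def adjust_extractor_and_generator(extractor, generator, optimizer, live_batch):
    """One parameter adjustment step: maximize the similarity between the
    generated face image and the living body face image by minimizing an
    L1 reconstruction loss over both models' parameters."""
    optimizer.zero_grad()
    features = extractor(live_batch)          # image features of the live faces
    generated = generator(features)           # generated face images
    loss = F.l1_loss(generated, live_batch)   # smaller loss = higher similarity
    loss.backward()
    optimizer.step()
    return loss.item()
```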
Further, the training step of the second feature extraction model may refer to the training step of the first feature extraction model in the above embodiment, which is not described herein.
Step S406, inputting the obtained first image features into a generator in a pre-trained first generative adversarial network to obtain a first generated image corresponding to the face image to be detected;
Step S408, inputting the obtained second image features into a generator in a pre-trained second generative adversarial network to obtain a second generated image corresponding to the face image to be detected;
Step S410, calculating the similarity between the first generated image and the face image to be detected to obtain a first similarity;
step S412, calculating the similarity between the second generated image and the face image to be detected to obtain a second similarity;
step S414, determining a living body detection result corresponding to the face image to be detected according to the first similarity and the second similarity.
For the explanation and description of each step in this embodiment, reference may be made to the descriptions in other embodiments of this application, which are not repeated herein.
In this embodiment, feature extraction is performed on the face image to be detected by the first feature extraction model and the second feature extraction model which are trained in advance, so that the difference between the first similarity and the second similarity can be further increased, erroneous judgment results caused by calculation errors are avoided, and the finally obtained living body detection results are more accurate.
Since the first feature extraction model has learned the features of living body face images, when the face image to be detected is a living body face image, feature extraction through the first feature extraction model makes the extracted living face features more accurate, and the similarity between the image generated by the first generative adversarial network from these features and the image to be detected is larger. The second feature extraction model, in contrast, has learned the features of non-living body face images, so when the features it extracts are input into the second generative adversarial network, the similarity between the resulting generated image and the image to be detected is smaller. The difference between the first similarity and the second similarity is thereby enlarged.
In one embodiment, the living body detection method further includes: when the living body detection result corresponding to the face image to be detected indicates that the face is not a living face, acquiring a user identifier corresponding to the face image to be detected, and calculating the number of living body detection failures of the user corresponding to the user identifier within a preset time period; and when the number of living body detection failures is greater than a preset threshold, acquiring an associated account corresponding to the user identifier and sending warning information to the associated account.
Specifically, the user identifier is used to uniquely identify the identity of the user currently undergoing living body detection. When the detection result corresponding to the detected face image in the current detection is not a living face, the number of living body detection failures of the user corresponding to the user identifier within a preset time period can be calculated, where a living body detection failure refers to the case in which the detection result is not a living face. If the number of failures is greater than a preset threshold, warning information is sent to the associated account corresponding to the user identifier, where the associated account is an account bound at user registration, including but not limited to a mailbox account, a QQ account, a mobile phone number, and the like. By sending warning information about living body detection failures to the associated account, the user can be reminded of potential security risks in time, avoiding losses caused to the user by account security problems.
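A schematic Python sketch of this failure-counting logic; the in-memory store, the window and threshold values, and the get_associated_account and send_warning callables are all stand-ins for components the text does not specify.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 3600  # preset time period (illustrative value)
MAX_FAILURES = 3            # preset threshold (illustrative value)

_failures = defaultdict(deque)  # user identifier -> timestamps of failed detections

def record_failure_and_maybe_warn(user_id, get_associated_account, send_warning):
    """Count living body detection failures within the preset time period and
    send warning information to the associated account once the threshold
    is exceeded; both callables are stand-ins."""
    now = time.time()
    timestamps = _failures[user_id]
    timestamps.append(now)
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()  # drop failures outside the preset time period
    if len(timestamps) > MAX_FAILURES:
        account = get_associated_account(user_id)  # e.g. mailbox, QQ, phone number
        send_warning(account, "Repeated living body detection failures on your account")
```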
It should be understood that, although the steps in the flowcharts of fig. 2-4 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-4 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a living body detection apparatus 500 including:
the to-be-detected face image acquisition module 502 is used for acquiring a face image to be detected;
the feature extraction module 504 is configured to perform feature extraction on a face image to be detected, so as to obtain corresponding image features;
a first generated image acquisition module 506, configured to input the obtained image features into a generator in a pre-trained first generative adversarial network to obtain a first generated image corresponding to the face image to be detected, where the first generative adversarial network is obtained by training based on a living body face image set, and the living body face image set is an image set obtained by capturing living faces;
a second generated image acquisition module 508, configured to input the obtained image features into a generator in a pre-trained second generative adversarial network to obtain a second generated image corresponding to the face image to be detected, where the second generative adversarial network is obtained by training based on a non-living body face image set, and the non-living body face image set is an image set obtained by capturing non-living faces;
a first similarity calculation module 510, configured to calculate the similarity between the first generated image and the face image to be detected to obtain a first similarity;
a second similarity calculating module 512, configured to calculate a similarity between the second generated image and the face image to be detected, so as to obtain a second similarity;
the living body detection result determining module 514 is configured to determine a living body detection result corresponding to the face image to be detected according to the first similarity and the second similarity.
In one embodiment, the first similarity calculation module 510 is further configured to scale the first generated image and the image to be detected to a preset size respectively; respectively carrying out gray scale processing on the zoomed first generated image and the image to be detected; sequentially calculating the average value of each row of pixel points in the first generated image after gray processing, and calculating the variance of all the obtained average values to obtain a first characteristic value corresponding to the first generated image; sequentially calculating the average value of each row of pixel points in the image to be detected after gray processing, and calculating the variance of all the obtained average values to obtain a second characteristic value corresponding to the image to be detected; and calculating a difference value between the first characteristic value and the second characteristic value, and obtaining the first similarity based on the difference value.
In one embodiment, the apparatus further comprises: the first training module is used for acquiring a living body face image set; extracting features of the living body face images in the living body face image set to obtain first training features corresponding to the living body face images, wherein the first training features comprise spatial information and frequency domain features; inputting the obtained first training characteristics into a first initial generator to obtain a first generated face image; based on the obtained similarity between the first generated face image and the living body face image, adjusting parameters of the first initial generator; respectively inputting the obtained first generated face image and the obtained living body face image into a first initial discriminator to obtain a first discriminating result and a second discriminating result, wherein the first discriminating result and the second discriminating result are respectively used for representing whether the obtained first generated face image and the living body face image are real face images or not; adjusting parameters of the first initial generator and the first initial arbiter based on a first difference between a first discrimination result and a no discrimination result for characterizing that the image input to the first initial arbiter is not a real face image and a second difference between a second discrimination result and a yes discrimination result for characterizing that the image input to the first initial arbiter is a real face image; and respectively determining the adjusted first initial generator and the first initial arbiter as a generator and a arbiter in the first generation reactance network.
In one embodiment, the apparatus further comprises: the second training module is used for acquiring a non-living face image set; extracting features of non-living face images in the non-living face image set to obtain second training features corresponding to the non-living face images, wherein the second training features do not contain space information and contain frequency domain features with high frequency components less than those of the first training features; inputting the obtained second training characteristics into a second initial generator to obtain a second generated face image; adjusting parameters of a second initial generator based on the obtained similarity between the second generated face image and the non-living face image; respectively inputting the obtained second generated face image and the non-living face image into a second initial discriminator to obtain a third discrimination result and a fourth discrimination result, wherein the third discrimination result and the fourth discrimination result are respectively used for representing whether the obtained second generated face image and the non-living face image are real face images or not; adjusting parameters of the second initial generator and the second initial arbiter based on a third difference between a third discrimination result and a no discrimination result for characterizing that the image input to the second initial arbiter is not a real face image, and a fourth difference between a fourth discrimination result and a yes discrimination result for characterizing that the image input to the second initial arbiter is a real face image; and respectively determining the adjusted second initial generator and the second initial arbiter as a generator and a arbiter in a second generation countermeasure network.
In one embodiment, the feature extraction module 504 is further configured to input the face image to be detected into a first feature extraction model and a second feature extraction model that are trained in advance, to obtain a first image feature and a second image feature, where the first feature extraction model is trained based on a living face image set and the second feature extraction model is trained based on a non-living face image set; the first generated image obtaining module 506 is further configured to input the obtained first image feature into the generator in the pre-trained first generative adversarial network to obtain a first generated image corresponding to the face image to be detected; and the second generated image obtaining module 508 is further configured to input the obtained second image feature into the generator in the pre-trained second generative adversarial network to obtain a second generated image corresponding to the face image to be detected.
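Putting these modules together, the inference flow of this embodiment can be summarized in a few lines. All names below (the two extractors, the two generators, and `similarity`) are placeholders for the trained components described above, not identifiers defined by the patent.

```python
def detect_liveness(face_image, extract_live, extract_spoof,
                    G_live, G_spoof, similarity):
    gen_live = G_live(extract_live(face_image))     # first generated image
    gen_spoof = G_spoof(extract_spoof(face_image))  # second generated image
    sim_live = similarity(gen_live, face_image)     # first similarity
    sim_spoof = similarity(gen_spoof, face_image)   # second similarity
    return sim_live > sim_spoof   # True: living face; False: not a living face
```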
In one embodiment, the apparatus further includes a warning information sending module, configured to: when it is determined that the living body detection result corresponding to the face image to be detected is not a living face, acquire a user identifier corresponding to the face image to be detected, and count the number of living body detection failures of the user corresponding to the user identifier within a preset time period; and when the number of living body detection failures is greater than a preset threshold, acquire an associated account corresponding to the user identifier and send warning information to the associated account.
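A sketch of that warning logic follows; the in-memory failure log, the one-day window, the threshold of 3, and the `get_associated_account` / `send_warning` helpers are all assumptions used for illustration, as the embodiment only specifies a preset time period and a preset threshold.

```python
import time

FAIL_WINDOW_SECONDS = 24 * 3600   # preset time period (assumed: one day)
FAIL_THRESHOLD = 3                # preset threshold (assumed)
failure_log = {}                  # user identifier -> failure timestamps

def on_detection_failure(user_id, get_associated_account, send_warning):
    now = time.time()
    recent = [t for t in failure_log.get(user_id, [])
              if now - t < FAIL_WINDOW_SECONDS]
    recent.append(now)
    failure_log[user_id] = recent
    if len(recent) > FAIL_THRESHOLD:
        send_warning(get_associated_account(user_id))
```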
For the specific definition of the living body detection apparatus, reference may be made to the definition of the living body detection method above; details are not repeated here. The modules in the above living body detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a living body detection method.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the living body detection method of any of the embodiments described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the living body detection method described in any of the above embodiments.
Those skilled in the art will appreciate that all or part of the flows of the above method embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the flows of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments merely express several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A living body detection method, the method comprising:
acquiring a face image to be detected;
respectively inputting the face image to be detected into a first feature extraction model and a second feature extraction model which are trained in advance to obtain a first image feature and a second image feature, wherein the first feature extraction model is obtained by training based on a living face image set, and the second feature extraction model is obtained by training based on a non-living face image set;
inputting the obtained first image features into a generator in a pre-trained first generative adversarial network to obtain a first generated image corresponding to the face image to be detected, wherein the first generative adversarial network is obtained by training based on a living face image set, and the living face image set is a set of images obtained by photographing living faces;
inputting the obtained second image features into a generator in a pre-trained second generative adversarial network to obtain a second generated image corresponding to the face image to be detected, wherein the second generative adversarial network is obtained by training based on a non-living face image set, and the non-living face image set is a set of images obtained by photographing non-living faces;
calculating the similarity between the first generated image and the face image to be detected to obtain a first similarity;
calculating the similarity between the second generated image and the face image to be detected to obtain a second similarity;
when the first similarity is larger than the second similarity, determining that the face in the face image to be detected is a living face; when the first similarity is smaller than the second similarity, it is determined that the face in the face image to be detected is not a living face.
2. The method according to claim 1, wherein the calculating the similarity between the first generated image and the face image to be detected to obtain a first similarity includes:
respectively scaling the first generated image and the face image to be detected to a preset size;
respectively carrying out gray scale processing on the scaled first generated image and the face image to be detected;
sequentially calculating the average value of each row of pixels in the first generated image after gray-scale processing, and calculating the variance of all the obtained average values to obtain a first characteristic value corresponding to the first generated image;
sequentially calculating the average value of each row of pixels in the face image to be detected after gray-scale processing, and calculating the variance of all the obtained average values to obtain a second characteristic value corresponding to the face image to be detected;
and calculating a difference value between the first characteristic value and the second characteristic value, and obtaining a first similarity based on the difference value.
3. The method of claim 1, wherein the training of the first generative adversarial network comprises:
acquiring a living body face image set;
extracting features of the living body face images in the living body face image set to obtain first training features corresponding to the living body face images, wherein the first training features comprise spatial information and frequency domain features;
Inputting the obtained first training characteristics into a first initial generator to obtain a first generated face image;
adjusting parameters of the first initial generator based on the similarity between the obtained first generated face image and the living body face image;
respectively inputting the obtained first generated face image and the living body face image into a first initial discriminator to obtain a first discrimination result and a second discrimination result, wherein the first discrimination result and the second discrimination result are respectively used for representing whether the obtained first generated face image and the living body face image are real face images or not;
adjusting parameters of the first initial generator and the first initial discriminator based on a first difference between the first discrimination result and a "no" discrimination result indicating that the image input to the first initial discriminator is not a real face image, and a second difference between the second discrimination result and a "yes" discrimination result indicating that the image input to the first initial discriminator is a real face image;
and determining the adjusted first initial generator and the adjusted first initial discriminator as the generator and the discriminator in the first generative adversarial network, respectively.
4. The method according to claim 3, wherein the training of the second generative adversarial network comprises:
acquiring a non-living face image set;
extracting features of the non-living face images in the non-living face image set to obtain second training features corresponding to the non-living face images, wherein the second training features contain no spatial information, and their frequency domain features contain fewer high-frequency components than those of the first training features;
inputting the obtained second training characteristics into a second initial generator to obtain a second generated face image;
adjusting parameters of the second initial generator based on the obtained similarity between the second generated face image and the non-living face image;
respectively inputting the obtained second generated face image and the non-living face image into a second initial discriminator to obtain a third discrimination result and a fourth discrimination result, wherein the third discrimination result and the fourth discrimination result are respectively used for representing whether the obtained second generated face image and the non-living face image are real face images or not;
adjusting parameters of the second initial generator and the second initial discriminator based on a third difference between the third discrimination result and a "no" discrimination result indicating that the image input to the second initial discriminator is not a real face image, and a fourth difference between the fourth discrimination result and a "yes" discrimination result indicating that the image input to the second initial discriminator is a real face image;
and determining the adjusted second initial generator and the adjusted second initial discriminator as the generator and the discriminator in the second generative adversarial network, respectively.
5. The method according to any one of claims 1 to 4, further comprising:
when the living body detection result corresponding to the face image to be detected is not a living face, acquiring a user identifier corresponding to the face image to be detected, and counting the number of living body detection failures of the user corresponding to the user identifier within a preset time period;
and when the number of living body detection failures is greater than a preset threshold, acquiring an associated account corresponding to the user identifier, and sending warning information to the associated account.
6. A living body detection apparatus, characterized in that the apparatus comprises:
a to-be-detected face image acquisition module, configured to acquire a face image to be detected;
the feature extraction module is used for respectively inputting the face image to be detected into a first feature extraction model and a second feature extraction model which are trained in advance to obtain a first image feature and a second image feature, wherein the first feature extraction model is obtained by training based on a living face image set, and the second feature extraction model is obtained by training based on a non-living face image set;
a first generated image obtaining module, configured to input the obtained first image feature into a generator in a pre-trained first generative adversarial network to obtain a first generated image corresponding to the face image to be detected, wherein the first generative adversarial network is obtained by training based on a living face image set, and the living face image set is a set of images obtained by photographing living faces;
a second generated image obtaining module, configured to input the obtained second image feature into a generator in a pre-trained second generative adversarial network to obtain a second generated image corresponding to the face image to be detected, wherein the second generative adversarial network is obtained by training based on a non-living face image set, and the non-living face image set is a set of images obtained by photographing non-living faces;
a first similarity calculation module, configured to calculate the similarity between the first generated image and the face image to be detected to obtain a first similarity;
a second similarity calculation module, configured to calculate the similarity between the second generated image and the face image to be detected to obtain a second similarity;
the living body detection result determining module is used for determining that the face in the face image to be detected is a living body face when the first similarity is larger than the second similarity; when the first similarity is smaller than the second similarity, it is determined that the face in the face image to be detected is not a living face.
7. The apparatus of claim 6, wherein the first similarity calculation module is further configured to:
respectively scaling the first generated image and the face image to be detected to a preset size;
respectively carrying out gray scale processing on the scaled first generated image and the face image to be detected;
sequentially calculating the average value of each row of pixels in the first generated image after gray-scale processing, and calculating the variance of all the obtained average values to obtain a first characteristic value corresponding to the first generated image;
sequentially calculating the average value of each row of pixels in the face image to be detected after gray-scale processing, and calculating the variance of all the obtained average values to obtain a second characteristic value corresponding to the face image to be detected;
and calculating a difference value between the first characteristic value and the second characteristic value, and obtaining a first similarity based on the difference value.
8. The apparatus of claim 6, further comprising a first training module configured to: acquire a living face image set; extract features of the living face images in the living face image set to obtain first training features corresponding to the living face images, wherein the first training features contain spatial information and frequency domain features; input the obtained first training features into a first initial generator to obtain a first generated face image; adjust parameters of the first initial generator based on the similarity between the obtained first generated face image and the living face image; input the obtained first generated face image and the living face image into a first initial discriminator respectively to obtain a first discrimination result and a second discrimination result, wherein the first discrimination result and the second discrimination result respectively indicate whether the first generated face image and the living face image are real face images; adjust parameters of the first initial generator and the first initial discriminator based on a first difference between the first discrimination result and a "no" discrimination result indicating that the image input to the first initial discriminator is not a real face image, and a second difference between the second discrimination result and a "yes" discrimination result indicating that the image input to the first initial discriminator is a real face image; and determine the adjusted first initial generator and the adjusted first initial discriminator as the generator and the discriminator in the first generative adversarial network, respectively.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 5 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 5.
CN201910650995.9A 2019-07-18 2019-07-18 Living body detection method, living body detection device, computer equipment and storage medium Active CN110490076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910650995.9A CN110490076B (en) 2019-07-18 2019-07-18 Living body detection method, living body detection device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110490076A CN110490076A (en) 2019-11-22
CN110490076B true CN110490076B (en) 2024-03-01

Family

ID=68546128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910650995.9A Active CN110490076B (en) 2019-07-18 2019-07-18 Living body detection method, living body detection device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110490076B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126181A (en) * 2019-12-06 2020-05-08 深圳市中电数通智慧安全科技股份有限公司 Refueling safety supervision method and device and terminal equipment
CN111191521B (en) * 2019-12-11 2022-08-12 智慧眼科技股份有限公司 Face living body detection method and device, computer equipment and storage medium
CN111291730A (en) * 2020-03-27 2020-06-16 深圳阜时科技有限公司 Face anti-counterfeiting detection method, server and storage medium
CN111553202B (en) * 2020-04-08 2023-05-16 浙江大华技术股份有限公司 Training method, detection method and device for neural network for living body detection
CN112633113A (en) * 2020-12-17 2021-04-09 厦门大学 Cross-camera human face living body detection method and system
CN112766162B (en) * 2021-01-20 2023-12-22 北京市商汤科技开发有限公司 Living body detection method, living body detection device, electronic equipment and computer readable storage medium
CN112528969B (en) * 2021-02-07 2021-06-08 中国人民解放军国防科技大学 Face image authenticity detection method and system, computer equipment and storage medium
CN113033305B (en) * 2021-02-21 2023-05-12 云南联合视觉科技有限公司 Living body detection method, living body detection device, terminal equipment and storage medium
CN113378715B (en) * 2021-06-10 2024-01-05 北京华捷艾米科技有限公司 Living body detection method based on color face image and related equipment
CN113705341A (en) * 2021-07-16 2021-11-26 国家石油天然气管网集团有限公司 Small-scale face detection method based on generation countermeasure network
CN113516107B (en) * 2021-09-09 2022-02-15 浙江大华技术股份有限公司 Image detection method
CN114596615B (en) * 2022-03-04 2023-05-05 湖南中科助英智能科技研究院有限公司 Face living body detection method, device, equipment and medium based on countermeasure learning
CN115147705B (en) * 2022-09-06 2023-02-03 平安银行股份有限公司 Face copying detection method and device, electronic equipment and storage medium
CN117437675A (en) * 2023-10-23 2024-01-23 长讯通信服务有限公司 Face silence living body detection method, device, computer equipment and storage medium based on component decomposition and reconstruction

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107818313A (en) * 2017-11-20 2018-03-20 腾讯科技(深圳)有限公司 Vivo identification method, device, storage medium and computer equipment
CN108416324A (en) * 2018-03-27 2018-08-17 百度在线网络技术(北京)有限公司 Method and apparatus for detecting live body
CN108509888A (en) * 2018-03-27 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108537152A (en) * 2018-03-27 2018-09-14 百度在线网络技术(北京)有限公司 Method and apparatus for detecting live body
CN109255322A (en) * 2018-09-03 2019-01-22 北京诚志重科海图科技有限公司 A kind of human face in-vivo detection method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6754619B2 (en) * 2015-06-24 2020-09-16 三星電子株式会社Samsung Electronics Co.,Ltd. Face recognition method and device

Also Published As

Publication number Publication date
CN110490076A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN110490076B (en) Living body detection method, living body detection device, computer equipment and storage medium
CN108846355B (en) Image processing method, face recognition device and computer equipment
CN110569721B (en) Recognition model training method, image recognition method, device, equipment and medium
US11783639B2 (en) Liveness test method and apparatus
CN109271870B (en) Pedestrian re-identification method, device, computer equipment and storage medium
CN110490078B (en) Monitoring video processing method, device, computer equipment and storage medium
CN110399799B (en) Image recognition and neural network model training method, device and system
US10262190B2 (en) Method, system, and computer program product for recognizing face
EP2676224B1 (en) Image quality assessment
CN113239874B (en) Behavior gesture detection method, device, equipment and medium based on video image
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
WO2022121130A1 (en) Power target detection method and apparatus, computer device, and storage medium
CN111754396A (en) Face image processing method and device, computer equipment and storage medium
CN110532746B (en) Face checking method, device, server and readable storage medium
Chen et al. Mislgan: an anti-forensic camera model falsification framework using a generative adversarial network
CN110688950B (en) Face living body detection method and device based on depth information
CN111079587B (en) Face recognition method and device, computer equipment and readable storage medium
CN112580434A (en) Face false detection optimization method and system based on depth camera and face detection equipment
CN111985340A (en) Face recognition method and device based on neural network model and computer equipment
CN113128428B (en) Depth map prediction-based in vivo detection method and related equipment
CN112308035A (en) Image detection method, image detection device, computer equipment and storage medium
CN109871814B (en) Age estimation method and device, electronic equipment and computer storage medium
CN116229528A (en) Living body palm vein detection method, device, equipment and storage medium
Paul et al. Anti-Spoofing Face-Recognition Technique for eKYC Application
KR102488858B1 (en) Method, apparatus and program for digital restoration of damaged object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant