CN114821823B - Image processing, training of human face anti-counterfeiting model and living body detection method and device - Google Patents


Info

Publication number
CN114821823B
Authority
CN
China
Prior art keywords
face image
living body
body detection
disturbance
sample face
Prior art date
Legal status
Active
Application number
CN202210377809.0A
Other languages
Chinese (zh)
Other versions
CN114821823A (en)
Inventor
杨杰之
王洪斌
周迅溢
曾定衡
Current Assignee
Mashang Xiaofei Finance Co Ltd
Original Assignee
Mashang Xiaofei Finance Co Ltd
Priority date
Filing date
Publication date
Application filed by Mashang Xiaofei Finance Co Ltd
Priority to CN202210377809.0A
Publication of CN114821823A
Application granted
Publication of CN114821823B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The application discloses an image processing method, a training method for a face anti-counterfeiting model, and a living body detection method and device, which are used for solving the problem that existing defense models are effective against only some adversarial samples and therefore generalize poorly. The image processing method comprises the following steps: inputting an original sample face image into a pre-trained living body detection model to obtain a first living body detection result; determining disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model, wherein the disturbance gradient information is used for representing the degree of association between the disturbance added to the original sample face image and the first living body detection result; and performing disturbance processing on the original sample face image based on the disturbance gradient information to obtain an adversarial sample face image.

Description

Image processing, training of human face anti-counterfeiting model and living body detection method and device
Technical Field
The application relates to the technical field of computers, in particular to a method and a device for image processing, training of a human face anti-counterfeiting model and living body detection.
Background
In recent years, deep learning has become the main technical means of face living body detection, offering high accuracy and high speed. However, with the development of adversarial attack techniques, deep learning models face a serious threat: an input image can be carefully crafted to attack the model, so that the semantic features extracted by the model contradict the semantic content of the picture and the model is led to output erroneous predictions.
In the related art, adversarial samples are mostly obtained by reconstructing real face images to produce images similar to the real ones. Because the reconstruction rule is fixed and the number of real face images is limited, the obtainable adversarial sample face images are limited, so the trained defense model can defend against only some of the adversarial samples.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, a training method for a face anti-counterfeiting model, and a living body detection method and device, which are used for solving the problem that existing defense models are effective against only some adversarial samples and therefore generalize poorly.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
In a first aspect, embodiments of the present application provide an image processing method, including:
inputting an original sample face image into a pre-trained living body detection model to obtain a first living body detection result;
determining disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model, wherein the disturbance gradient information is used for representing the degree of association between the disturbance added to the original sample face image and the first living body detection result;
and performing disturbance processing on the original sample face image based on the disturbance gradient information to obtain an adversarial sample face image.
It can be seen that in the embodiment of the present application, after the first living body detection result corresponding to the original sample face image is obtained through the pre-trained living body detection model, the disturbance gradient information of the original sample face image is determined based on the first living body detection result and the model parameters of the living body detection model, which yields the degree of association between the disturbance added to the original sample face image and its first living body detection result. On this basis, disturbance processing of various degrees can be applied to the original sample face image, so that adversarial sample face images with various disturbance degrees are obtained. Training the face anti-counterfeiting model with the obtained adversarial sample face images improves the model's ability to defend against various adversarial samples, and thus improves the generality and robustness of the face anti-counterfeiting model.
In a second aspect, an embodiment of the present application provides a training method for a face anti-counterfeiting model, including:
inputting a sample face image in a sample set into a face anti-counterfeiting model to be trained to obtain a second living body detection result corresponding to the sample face image in the sample set, wherein the sample set at least comprises an adversarial sample face image, the adversarial sample face image is obtained by performing disturbance processing on the original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of a pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, and the disturbance gradient information is used for representing the degree of association between the disturbance added to the original sample face image and the first living body detection result;
and adjusting model parameters of the face anti-counterfeiting model to be trained based on the pseudo tag corresponding to the sample face image in the sample set and the second living body detection result to obtain a final face anti-counterfeiting model, wherein the pseudo tag corresponding to the sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image.
It can be seen that in the embodiment of the present application a knowledge distillation technique is adopted: an adversarial sample face image obtained from an original sample face image is used as a training sample of the face anti-counterfeiting model, the pseudo tag corresponding to the training sample is determined based on the first living body detection result obtained by the living body detection model for the original sample face image, and the training sample and its tag are used to train the face anti-counterfeiting model, so that the knowledge of the living body detection model is migrated to the face anti-counterfeiting model. The face anti-counterfeiting model can thus quickly learn the generalization capability of the living body detection model, namely the mapping from an input face image to the obtained living body detection result, and retains performance close to that of the living body detection model so that it can perform living body detection on an input face image. In addition, most existing adversarial samples are obtained by adding disturbance to a real face image using the detection result obtained by an existing living body detection model for that image. Therefore, after the first living body detection result corresponding to the original sample face image is obtained by the pre-trained living body detection model, the disturbance gradient information of the original sample face image is determined based on the first living body detection result and the model parameters of the living body detection model, which yields the degree of association between the disturbance added to the original sample face image and its first living body detection result. Disturbance processing is then performed on the original sample face image based on the disturbance gradient information, so that the obtained adversarial sample face images can cover various disturbance degrees. Using the obtained adversarial sample face images to train the face anti-counterfeiting model helps improve its ability to defend against various adversarial samples, and thus improves the generality and robustness of the face anti-counterfeiting model.
In a third aspect, an embodiment of the present application provides a living body detection method, including:
inputting a face image to be detected into a pre-trained face anti-counterfeiting model to obtain a third living body detection result, wherein the face anti-counterfeiting model is obtained by training based on the sample face images in a sample set and their corresponding pseudo tags, the sample set at least comprises an adversarial sample face image, the adversarial sample face image is obtained by performing disturbance processing on the original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of the pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, the disturbance gradient information is used for representing the degree of association between the disturbance added to the original sample face image and the first living body detection result, and the pseudo tag corresponding to the sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image and is used for representing whether the corresponding sample face image belongs to a living body image or not;
And determining whether the face image to be detected is a living face image or not based on the third living body detection result.
It can be seen that, in the embodiment of the present application, the face anti-counterfeiting model is trained using a knowledge distillation technique: an adversarial sample face image obtained from an original sample face image is used as a training sample of the face anti-counterfeiting model, the pseudo tag corresponding to the training sample is determined based on the first living body detection result obtained by the living body detection model for the original sample face image, and the training sample and its tag are used to train the face anti-counterfeiting model. The face anti-counterfeiting model can therefore learn the generalization capability of the living body detection model, namely the mapping from an input face image to the obtained living body detection result, and retains performance close to that of the living body detection model, so that it can perform living body detection on an input face image. Moreover, most existing adversarial samples are obtained by adding disturbance to a real face image using the detection result obtained by an existing living body detection model for that image. Therefore, after the first living body detection result corresponding to the original sample face image is obtained by the pre-trained living body detection model, the disturbance gradient information of the original sample face image is determined based on the first living body detection result and the model parameters of the living body detection model, which yields the degree of association between the disturbance added to the original sample face image and its first living body detection result. Disturbance processing is then performed on the original sample face image based on the disturbance gradient information, so that the obtained adversarial sample face images can cover the characteristics of a large number of existing adversarial samples. Using the obtained adversarial sample face images to train the face anti-counterfeiting model gives the trained model an excellent ability to defend against various adversarial samples, with strong generality and robustness.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including:
the first living body detection module is used for inputting the original sample face image into a pre-trained living body detection model to obtain a first living body detection result;
the gradient determining module is used for determining disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model, wherein the disturbance gradient information is used for representing the degree of association between the disturbance added to the original sample face image and the first living body detection result;
and the disturbance module is used for performing disturbance processing on the original sample face image based on the disturbance gradient information to obtain an adversarial sample face image.
In a fifth aspect, an embodiment of the present application provides a training device for a face anti-counterfeiting model, including:
the second living body detection module is used for inputting a sample face image in a sample set into the face anti-counterfeiting model to obtain a second living body detection result corresponding to the sample face image in the sample set, wherein the sample set at least comprises an adversarial sample face image, the adversarial sample face image is obtained by performing disturbance processing on the original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of a pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, and the disturbance gradient information is used for representing the degree of association between the disturbance added to the original sample face image and the first living body detection result;
The adjusting module is used for adjusting the model parameters of the face anti-counterfeiting model based on the pseudo tags corresponding to the sample face images in the sample set and the second living body detection results to obtain a final face anti-counterfeiting model, wherein the pseudo tags corresponding to the sample face images in the sample set are determined based on the first living body detection results corresponding to the original sample face images.
In a sixth aspect, embodiments of the present application provide a living body detection apparatus, including:
the third living body detection module is used for inputting a face image to be detected into a pre-trained face anti-counterfeiting model to obtain a third living body detection result, wherein the face anti-counterfeiting model is obtained by training based on the sample face images in a sample set and their corresponding pseudo tags, the sample set at least comprises an adversarial sample face image, the adversarial sample face image is obtained by performing disturbance processing on the original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of the pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, the disturbance gradient information is used for representing the degree of association between the disturbance added to the original sample face image and the first living body detection result, and the pseudo tag corresponding to the sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image and is used for representing whether the corresponding sample face image belongs to a living body image or not;
And the living body determining module is used for determining whether the face image to be detected is a living body face image or not based on the third living body detection result.
In a seventh aspect, embodiments of the present application provide an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of the first to third aspects.
In an eighth aspect, embodiments of the present application provide a computer-readable storage medium storing instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of the first to third aspects.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a living body detection model according to an embodiment of the present application;
FIG. 3 is a flow chart of a disturbance processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a sensitive area provided by an embodiment of the present application;
fig. 5 is a flow chart of a training method of a face anti-counterfeiting model according to an embodiment of the present application;
fig. 6 is a flow chart of a training method of a face anti-counterfeiting model according to another embodiment of the present application;
fig. 7 is a flow chart of a training method of a face anti-counterfeiting model according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of a face anti-counterfeiting model according to an embodiment of the present application;
FIG. 9 is a schematic flow chart of a living body detection method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a training device for a face anti-counterfeiting model according to an embodiment of the present application;
FIG. 12 is a schematic view of a living body detecting device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments of the present application and the corresponding drawings. It is apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments that can be made by one of ordinary skill in the art from the present disclosure without creative effort fall within the scope of the present disclosure.
The terms "first," "second," and the like in the description and in the claims, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. Furthermore, in the present specification and claims, "and/or" means at least one of the connected objects, and the character "/" generally means a relationship in which the associated object is an "or" before and after.
Explanation of some concepts:
Adversarial sample: given an existing model, an image generated by adding a perturbation to an original image is called an adversarial sample. An adversarial sample may cause the model to make erroneous predictions.
To avoid erroneous model predictions caused by attacks, in some embodiments an initial defense model is built, image reconstruction is performed on the basis of a real face image to obtain an adversarial sample similar to the real face image, and the real face image and the adversarial sample are then used as training samples to train the initial defense model, thereby obtaining the defense model. However, because the number of real face images is limited, the available adversarial samples are limited, that is, each real face image yields only one adversarial sample image (only one reconstruction rule is used for image reconstruction), so the trained defense model can defend against only some adversarial samples and generalizes poorly.
Therefore, the embodiment of the application provides an image processing method. Based on the first living body detection result obtained by an existing living body detection model for the original sample face image and the model parameters of the living body detection model, the degree to which the disturbance added to the original sample face image is associated with the living body detection result of the face image is analyzed, and disturbance information is added to the original sample face image based on the analysis result. Different disturbance degrees added to the original sample face image are associated with the living body detection result to different degrees, that is, by setting different disturbance degrees, various kinds of addable disturbance information can be obtained. Thus, even if the original sample face images are limited, adversarial sample face images with various disturbance degrees can be obtained, and using the obtained adversarial sample face images to train the face anti-counterfeiting model helps improve its ability to defend against various adversarial samples, and thus improves the generality and robustness of the face anti-counterfeiting model.
Based on the adversarial sample face images obtained by the above image processing method, the embodiment of the application further provides a training method for the face anti-counterfeiting model. Using a knowledge distillation technique, the adversarial sample face images are used as training samples, and the pseudo tag corresponding to each training sample is determined based on the first living body detection result obtained by the existing living body detection model for the original sample face image; the training samples and their pseudo tags are then used to train the face anti-counterfeiting model, so that the knowledge of the living body detection model is migrated to the face anti-counterfeiting model. In this way, the face anti-counterfeiting model can quickly learn the generalization capability of the living body detection model, namely the mapping from an input face image to the output living body detection result, and retains performance close to that of the living body detection model so that it can perform living body detection on an input face image. In addition, the obtained adversarial sample face images can cover various disturbance degrees, so using them to train the face anti-counterfeiting model helps improve its ability to defend against various adversarial samples, and thus improves the generality and robustness of the face anti-counterfeiting model.
The embodiment of the application also provides a living body detection method, and the human face anti-counterfeiting model obtained through training can be used for rapidly and accurately detecting whether the input human face image is a living body human face image.
It should be understood that, the image processing method, the training method of the face anti-counterfeiting model and the living body detection method provided in the embodiments of the present application may be executed by an electronic device or software installed in the electronic device, and in particular may be executed by a terminal device or a server device.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the present application is provided, and the method may include the following steps:
S102, inputting the original sample face image into a pre-trained living body detection model to obtain a first living body detection result.
In the embodiment of the application, the living body detection model may be a pre-trained model capable of performing living body detection on an input face image. Specifically, the living body detection model is obtained by training based on original sample face images and their labels, where the label of an original sample face image is used for indicating whether the original sample face image belongs to a living face image. In practical applications, to ensure the detection accuracy of the living body detection model, the living body detection model may be obtained by training based on a plurality of different original sample face images and their labels.
More specifically, the training process of the living body detection model can be implemented as follows: inputting a plurality of original sample face images into the living body detection model to be trained to obtain the first living body detection result of each original sample face image; then, determining the detection loss of the living body detection model based on a first preset loss function, the first living body detection result of each original sample face image and its label, wherein the detection loss of the living body detection model is used for representing the difference between the living body detection result obtained by the living body detection model for the input face image and the real category of the input face image; and further, adjusting the model parameters of the living body detection model based on the detection loss of the living body detection model. The model parameters of the living body detection model may include, for example, but are not limited to, the number of nodes in each network layer of the living body detection model, the connection relationships and connection edge weights between nodes in different network layers, the offsets corresponding to the nodes in each network layer, and the like.
In practical application, the first preset loss function may be set according to actual needs, which is not limited in this embodiment of the present application. For example, the first preset loss function may be as shown in the following formula (1):
where Lt denotes the detection loss of the living body detection model, X denotes the number of original sample face images, x_i denotes the i-th original sample face image, Y_i(x_i) denotes the label of the i-th original sample face image, Ft denotes the living body detection model, and Ft(x_i) denotes the first living body detection result of the i-th original sample face image.
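Formula (1) itself appears as an image in the original text; as an illustration only, the sketch below assumes a cross-entropy form of the detection loss and a PyTorch implementation, and shows how the loss could drive one adjustment of the model parameters. The names liveness_model, images and labels are not from the patent.

```python
import torch.nn.functional as F

def detection_loss(liveness_model, images, labels):
    # images: (X, 3, H, W) batch of original sample face images
    # labels: (X,) class indices, e.g. 1 = living face image, 0 = non-living face image
    logits = liveness_model(images)           # first living body detection results, shape (X, 2)
    return F.cross_entropy(logits, labels)    # assumed cross-entropy form of formula (1)

def train_step(liveness_model, optimizer, images, labels):
    optimizer.zero_grad()
    loss = detection_loss(liveness_model, images, labels)
    loss.backward()                           # back-propagate the detection loss
    optimizer.step()                          # adjust the model parameters
    return loss.item()
```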
In addition, when the model parameters of the living body detection model are adjusted based on the detection loss of the living body detection model, the model parameters may be adjusted by using a forward propagation algorithm or a backward propagation algorithm, which is not limited in the embodiment of the present application.
It should be noted that the above process is only one adjustment process of the living body detection model. In practical applications, multiple adjustments may be required and thus the above process may be repeated multiple times until preset training stop conditions are met, thereby obtaining a final living detection model. The preset training stop condition may be that the detection loss of the living body detection model is smaller than a preset loss threshold, or may also be that the iteration number reaches a preset number, which is not limited in the embodiment of the present application.
The living body detection model may have any suitable structure, which may be set according to actual needs and is not limited in the embodiment of the present application. In an alternative implementation, to reduce the amount of computation and increase living body detection efficiency, the living body detection model may employ a lightweight network with a small number of parameters, such as MobileNet. More specifically, as shown in fig. 2, the living body detection model may include a first classifier and at least one level of feature extraction layer. Each level of feature extraction layer is used for performing feature extraction on its input to obtain corresponding image features: the first-level feature extraction layer performs feature extraction on the input original sample face image, and each remaining level performs feature extraction on the image features produced by the previous level to obtain the image features of the current level. The first classifier performs living body detection on the image features obtained by the last-level feature extraction layer to obtain the first living body detection result of the original sample face image. In practical applications, the feature extraction layer may have any suitable structure, for example a Bottleneck block, which is not limited in the embodiment of the present application.
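As a rough illustration of the structure just described (cascaded feature extraction layers followed by a first classifier), the sketch below stacks depthwise-separable blocks in the spirit of MobileNet; the block count, channel widths and layer choices are assumptions rather than values taken from the patent or from fig. 2.

```python
import torch.nn as nn

class LivenessDetectionModel(nn.Module):
    """Lightweight liveness detection model: feature extraction layers + first classifier."""

    def __init__(self, num_classes: int = 2):
        super().__init__()

        def block(c_in, c_out, stride):
            # depthwise-separable convolution, roughly in the spirit of MobileNet
            return nn.Sequential(
                nn.Conv2d(c_in, c_in, 3, stride, 1, groups=c_in, bias=False),
                nn.BatchNorm2d(c_in), nn.ReLU6(inplace=True),
                nn.Conv2d(c_in, c_out, 1, bias=False),
                nn.BatchNorm2d(c_out), nn.ReLU6(inplace=True),
            )

        # cascaded feature extraction layers (counts and widths are illustrative)
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1, bias=False), nn.BatchNorm2d(32), nn.ReLU6(inplace=True),
            block(32, 64, 1), block(64, 128, 2), block(128, 128, 1), block(128, 256, 2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(256, num_classes)   # first classifier

    def forward(self, x):
        x = self.features(x)               # image features from the last-stage layer
        x = self.pool(x).flatten(1)
        return self.classifier(x)          # first living body detection result (logits)
```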
In the embodiment of the application, after the original sample face image is input into the living body detection model, a first living body detection result of the original sample face image can be obtained. The first living body detection result is used for indicating whether the original sample face image is a living body face image or not. In practical applications, the first living body detection result may be a code for uniquely indicating that the original sample face image is a living body face image or a non-living body face image, for example a one-hot code, where the code (1, 0) indicates that the original sample face image is a living body face image, and the code (0, 1) indicates that the original sample face image is a non-living body face image; of course, the first living body detection result may further include a first living body probability that the original sample face image belongs to a living body face image and a first non-living body probability that the original sample face image belongs to a non-living body face image.
S104, determining disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model.
The disturbance gradient information of the original sample face image is used for representing the degree of association between the disturbance added to the original sample face image and the first living body detection result corresponding to the original sample face image. It should be noted that adding different disturbance degrees to the original sample face image yields different degrees of association with the first living body detection result, that is, different disturbance gradient information can be obtained by setting different disturbance degrees; therefore, when disturbance processing is performed on the original sample face image, disturbance processing of different degrees can be applied, so as to obtain multiple adversarial sample face images with different disturbance degrees.
Existing adversarial samples are mostly approximations of real face images obtained by image reconstruction on the basis of real face images, so they are relatively consistent (the reconstruction rule is the same); because the number of real face images is limited, the obtainable adversarial sample face images are limited, and a defense model trained on them can defend against only some adversarial samples. Therefore, the degree to which the added disturbance is associated with the living body detection result of the face image is analyzed based on the first living body detection result obtained by the existing living body detection model for the original sample face image and the model parameters of the living body detection model. This facilitates generating adversarial sample face images covering various disturbance degrees, improves the diversity of the adversarial sample face images, and helps to train a face anti-counterfeiting model with good generality.
For any face image, a disturbance affects both the determination that the image is a living face image and the determination that it is a non-living face image, and the living body detection result of the face image is determined jointly by these two determinations, especially when the detection result is output in the form of classification probabilities. Therefore, the disturbance gradient information of the original sample face image can be determined from the influence of the added disturbance on the determination that the original sample face image is a living face image and its influence on the determination that the original sample face image is a non-living face image.
Optionally, the step S104 may include: determining a first gradient based on the first living probability that the original sample face image is a living face image and the model parameters of the living body detection model, wherein the first gradient is used for representing the degree of association between the disturbance added to the original sample face image and the first living probability; determining a second gradient based on the first non-living probability that the original sample face image is a non-living face image and the model parameters of the living body detection model, wherein the second gradient is used for representing the degree of association between the disturbance added to the original sample face image and the first non-living probability; and determining the disturbance gradient information of the original sample face image based on the first gradient and the second gradient.
The first gradient is obtained by taking partial derivatives of the first living probability based on the model parameters of the living body detection model; similarly, the second gradient is obtained by taking partial derivatives of the first non-living probability based on the model parameters of the living body detection model. Since the living body detection result of the original sample face image is determined by the relative magnitude of the first living probability and the first non-living probability, in order for the disturbance gradient information to accurately reflect the degree of association between the added disturbance and the first living body detection result, the maximum absolute value of the difference between the first gradient and the second gradient may be determined as the disturbance gradient information after the two gradients are obtained, i.e. max |g1 - g2|, where g1 denotes the first gradient and g2 denotes the second gradient.
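A minimal sketch of this computation with automatic differentiation is given below; it assumes the model outputs two logits that a softmax turns into the first living probability and the first non-living probability, and it returns both the per-element absolute difference of the two input gradients and its maximum value. The function name and the reduction over the difference map are illustrative assumptions.

```python
import torch

def perturbation_gradient_info(liveness_model, image):
    """image: (1, 3, H, W) original sample face image."""
    x = image.clone().requires_grad_(True)
    probs = torch.softmax(liveness_model(x), dim=1)     # [p_living, p_non_living]

    # first gradient: how the added disturbance relates to the first living probability
    grad_living = torch.autograd.grad(probs[0, 0], x, retain_graph=True)[0]
    # second gradient: how the added disturbance relates to the first non-living probability
    grad_non_living = torch.autograd.grad(probs[0, 1], x)[0]

    diff = (grad_living - grad_non_living).abs()        # per-pixel |first - second| gradient
    return diff, diff.max()                             # difference map and its maximum absolute value
```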
Only one specific implementation of determining perturbation gradient information is shown here. Of course, it should be understood that the perturbation gradient information may be determined in other ways, and the embodiments of the present application are not limited in this regard.
S106, based on disturbance gradient information of the original sample face image, carrying out disturbance processing on the original sample face image to obtain an anti-sample face image.
Based on the association degree of the disturbance degree added on the original sample face image to the living body detection result of the face image, disturbance information is added on the original sample face image, so that even if the original sample face image is limited, the anti-sample face image with various disturbance degrees can be obtained, the diversity of the anti-sample face image is improved, the obtained anti-sample face image is used for training the face anti-counterfeiting model, the defending capability of the face anti-counterfeiting model to various anti-counterfeiting samples is improved, and the universality and the robustness of the face anti-counterfeiting model are improved.
In order to enable the face image of the challenge sample to well simulate the fake face image, so as to further improve the defending effect of the face anti-counterfeiting model against the challenge sample, in an alternative implementation, as shown in fig. 3, S106 may include:
S161, determining a sensitive area of the original sample face image and the disturbance quantity corresponding to the sensitive area based on disturbance gradient information of the original sample face image and image data of the original sample face image.
The sensitive area of the original sample face image refers to an area sensitive to disturbance on the original sample face image, that is, the increased disturbance degree on the sensitive area of the original sample face image has a larger influence on the living body detection result of the original sample face image. The corresponding disturbance quantity of the sensitive area is used for indicating the increased disturbance degree of the sensitive area.
In order to ensure the accuracy of the obtained sensitive area and its corresponding disturbance quantity, thereby better simulating fake face images and further improving the robustness and generality of the face anti-counterfeiting model, in an alternative implementation the determination of the sensitive area of the original sample face image and the disturbance quantity corresponding to the sensitive area can be implemented as follows:
Step A1, determining the disturbance quantity corresponding to each pixel in the original sample face image based on the disturbance gradient information of the original sample face image and the pixel information of each pixel in the original sample face image.
Wherein, for each pixel in the original sample face image, the perturbation amount corresponding to the pixel is used to represent the increased perturbation degree on the pixel.
Specifically, for each pixel in the original sample face image, a partial derivative of the disturbance gradient information of the original sample face image can be taken with respect to the pixel information of that pixel, and the result is determined as the disturbance quantity corresponding to the pixel. It should be noted that different disturbance gradient information can be obtained by setting different disturbance degrees, and different disturbance gradient information leads to different disturbance quantities for the pixels in the original sample face image.
And step A2, determining a target pixel in the original sample face image based on the disturbance quantity corresponding to each pixel in the original sample face image.
Specifically, each pixel in the original sample face image may be sorted according to the corresponding disturbance amount, and the N pixels with the highest corresponding disturbance amounts may be determined as the target pixels. Wherein N is an integer greater than or equal to 1, and the value thereof can be set according to actual needs, which is not limited in the embodiment of the present application.
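Steps A1 and A2 could be read as in the following sketch, which takes the per-pixel magnitude of the disturbance gradient information (summed over colour channels) as the disturbance quantity of each pixel and selects the N pixels with the largest quantities; this is one illustrative reading, not the patent's exact computation.

```python
import torch

def select_target_pixels(grad_info_map, n=1):
    """grad_info_map: (1, C, H, W) disturbance gradient information of the image.

    Step A1: the per-pixel magnitude (summed over channels) is taken as the
    disturbance quantity of that pixel.  Step A2: the N pixels with the largest
    quantities become the target pixels.
    """
    amount = grad_info_map.abs().sum(dim=1).squeeze(0)        # (H, W) per-pixel quantities
    width = amount.shape[1]
    top_vals, top_idx = torch.topk(amount.flatten(), n)       # N largest disturbance quantities
    ys = torch.div(top_idx, width, rounding_mode="floor")
    xs = top_idx % width
    return list(zip(xs.tolist(), ys.tolist())), top_vals.tolist()
```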
And step A3, determining a sensitive area of the original sample face image based on the position information of the target pixel.
Specifically, considering that a disturbance-sensitive pixel and its surrounding area are likely to be turned into an adversarial sample, after the target pixel is determined, the region around the target pixel may be expanded based on its position to obtain the sensitive area of the original sample face image, where the size of the sensitive area is smaller than that of the original sample face image.
As shown in fig. 4, assuming that point P in the original sample face image is the determined target pixel, a width intermediate variable sw and a width threshold maskw of the sensitive area can be set according to the width W of the original sample face image, and a height intermediate variable sh and a height threshold maskh of the sensitive area can be set according to the height H of the original sample face image, where 1 < sw < maskw < W and 1 < sh < maskh < H, so that an over-large sensitive area does not weaken the disturbance effect on the original sample face image, which helps the detection accuracy of the face anti-counterfeiting model; then, the vertex coordinates of the sensitive area, namely vertex P1 (x+sw-maskw, y-sh), vertex P2 (x+sw-maskw, y+sh-maskh), vertex P3 (x-sw, y-sh) and vertex P4 (x-sw, y+sh-maskh), are determined based on the coordinates (x, y) of the target pixel P, the width intermediate variable sw, the width threshold maskw, the height intermediate variable sh and the height threshold maskh, and the rectangular area formed by vertices P1 to P4 is determined as the sensitive area.
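A small helper in the spirit of this construction might look as follows, with the width and height thresholds written as maskw and maskh to keep them distinct; clipping the rectangle to the image bounds is an added safeguard rather than a step stated above.

```python
def sensitive_region(x, y, sw, maskw, sh, maskh, W, H):
    """Rectangle spanned by vertices P1-P4 around the target pixel (x, y).

    maskw / maskh are the width and height thresholds; coordinates are clipped
    to the image as an added safeguard.
    """
    xs = [x + sw - maskw, x - sw]
    ys = [y - sh, y + sh - maskh]
    left, right = max(0, min(xs)), min(W - 1, max(xs))
    top, bottom = max(0, min(ys)), min(H - 1, max(ys))
    return left, top, right, bottom
```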
And step A4, determining the disturbance quantity corresponding to the sensitive area based on the disturbance quantity corresponding to the target pixel.
Alternatively, the disturbance quantity corresponding to the target pixel may be determined as the disturbance quantity corresponding to the sensitive area. Of course, in other alternative implementations, the disturbance quantity corresponding to the sensitive area may also be determined based on the disturbance quantity corresponding to the pixel covered by the sensitive area, for example, any one of the maximum value, the minimum value, the average value, and the like of the disturbance quantity corresponding to the pixel covered by the sensitive area is determined as the disturbance quantity corresponding to the sensitive area, which is not limited in the embodiment of the present application.
It can be understood that, since adding disturbance to any pixel in the original sample face image affects its living body detection result, the disturbance-sensitive target pixels in the original sample face image are first determined based on the disturbance quantity corresponding to each pixel; the sensitive area determined from the target pixel and the disturbance quantity determined from the disturbance quantity of the target pixel are therefore more accurate. Performing disturbance processing on the original sample face image with the determined sensitive area and its corresponding disturbance quantity allows the obtained adversarial sample face image to better simulate various fake face images, which improves the generality of the face anti-counterfeiting model.
Only one specific implementation of determining the sensitive area and the disturbance quantity corresponding to the sensitive area on the face image is shown here. Of course, it should be understood that the sensitive area on the face image and the disturbance amount corresponding to the sensitive area may also be determined in other manners, which is not limited in the embodiment of the present application.
S162, based on the disturbance quantity corresponding to the sensitive area, performing disturbance processing on the sensitive area to obtain an adversarial sample face image.
Specifically, the sum of the original pixel values of the sensitive area and the disturbance quantity corresponding to the sensitive area can be determined as the target pixel values of the sensitive area, and the sensitive area is then filled with the target pixel values to obtain the adversarial sample face image.
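The filling step could be sketched as follows, where the disturbance quantity of the region is added to the original pixel values inside the sensitive area and the result is clamped to a valid pixel range; the clamping and the [0, 1] value range are assumptions.

```python
import torch

def perturb_sensitive_region(image, region, amount):
    """image: (1, C, H, W) original sample face image with values in [0, 1].
    region: (left, top, right, bottom) of the sensitive area; amount: its disturbance quantity."""
    left, top, right, bottom = region
    adversarial = image.clone()
    # target pixel value = original pixel value + disturbance quantity of the region
    adversarial[..., top:bottom + 1, left:right + 1] += amount
    return adversarial.clamp(0.0, 1.0)      # adversarial sample face image
```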
Only one specific implementation of the perturbation process on the sensitive area is shown here. Of course, it should be understood that the disturbance processing on the sensitive area may be implemented in other manners, which are not limited by the embodiments of the present application.
The embodiment of the present application herein shows a specific implementation of S106 described above. Of course, it should be understood that S106 may be implemented in other manners, which are not limited in this embodiment of the present application.
In another embodiment of the present application, before the step S106, the image processing method of the embodiment of the present application may further include: and determining that the first living body detection result of the original sample face image meets a preset disturbance condition.
In practical application, the preset disturbance condition may be set according to actual needs, which is not limited in the embodiment of the present application. Optionally, considering that the confidence of the face image with higher living probability or non-living probability is higher, and thus the face image is easier to be classified and identified, in order to further increase the robustness of the face anti-counterfeiting model, the preset disturbance conditions may include: any one of the first living probability and the first non-living probability exceeds a preset probability threshold. The preset probability threshold may be set according to actual needs, for example, the preset probability threshold may be set to 0.5, which is not limited in the embodiment of the present application.
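Written out, the optional condition amounts to a one-line check such as the following, with 0.5 as the example threshold mentioned above:

```python
def meets_perturbation_condition(p_living, p_non_living, threshold=0.5):
    # perturb only confidently classified samples: either probability above the threshold
    return p_living > threshold or p_non_living > threshold
```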
Further, if the first living body detection result of the original sample face image does not meet the preset disturbance condition, the original sample face image can be directly used for training the face anti-counterfeiting model.
It can be understood that after the first living body detection result of the original sample face image is obtained, the confidence coefficient of the original sample face image of which the first living body detection result meets the preset disturbance condition is higher and is easier to be classified and identified, so that the original sample face image meeting the preset disturbance condition is subjected to disturbance treatment and then is used for training the face anti-counterfeiting model, and the robustness of the face anti-counterfeiting model is improved.
According to the image processing method provided by the embodiment of the application, after the first living body detection result corresponding to the original sample face image is obtained through the pre-trained living body detection model, the disturbance gradient information of the original sample face image is determined based on the first living body detection result and the model parameters of the living body detection model, which yields the degree of association between the disturbance added to the original sample face image and its first living body detection result; disturbance processing is then performed on the original sample face image based on the disturbance gradient information, so that adversarial sample face images with various disturbance degrees are obtained. Training the face anti-counterfeiting model with the obtained adversarial sample face images helps improve its ability to defend against various adversarial samples, and thus improves the generality and robustness of the face anti-counterfeiting model.
Referring to fig. 5, a flow chart of a training method of a face anti-counterfeiting model according to an embodiment of the present application is provided, and the method may include the following steps:
S502, inputting the sample face image in the sample set into a face anti-counterfeiting model to be trained, and obtaining a second living body detection result corresponding to the sample face image in the sample set.
Wherein the sample set comprises at least an adversarial sample face image. Of course, as shown in fig. 6 and fig. 7, in order to give the face anti-counterfeiting model a better living body detection effect, the sample set may further include the original sample face image.
The adversarial sample face image is obtained by performing disturbance processing on the original sample face image based on the disturbance gradient information of the original sample face image. The disturbance gradient information of the original sample face image is used for representing the degree of association between the disturbance added to the original sample face image and the first living body detection result.
The disturbance gradient information of the original sample face image is determined based on the model parameters of a pre-trained living body detection model and the first living body detection result obtained by the living body detection model for the original sample face image.
S504, based on the pseudo tag corresponding to the sample face image in the sample set and the second living body detection result, adjusting model parameters of the face anti-counterfeiting model to be trained to obtain a final face anti-counterfeiting model.
The pseudo tag corresponding to a sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image. Specifically, as shown in fig. 6 and 7, for an adversarial sample face image in the sample set, the first living body detection result corresponding to the original sample face image used to generate the adversarial sample face image may be determined as the pseudo tag of the adversarial sample face image; for an original sample face image in the sample set, the first living body detection result corresponding to the original sample face image may be determined as its pseudo tag.
It can be understood that, since the adversarial sample face image itself has no category label, determining its pseudo tag based on the first living body detection result corresponding to the original sample face image used to generate it is equivalent to manually labelling the adversarial sample face image to indicate whether it is a living face image. In addition, because the pseudo tag corresponding to each sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image, training the face anti-counterfeiting model with the sample face images in the sample set and their corresponding pseudo tags migrates the knowledge of the existing living body detection model to the face anti-counterfeiting model by way of knowledge distillation. The face anti-counterfeiting model can thus quickly learn the generalization capability of the living body detection model, namely the mapping from an input face image to the obtained living body detection result, and retains performance close to that of the living body detection model, so that it can perform living body detection on an input face image.
Specifically, the category indicated by the first living body detection result (living face image or non-living face image) may be taken as the pseudo tag corresponding to the adversarial sample face image. For example, if the first living body detection result includes a first living probability that the original sample face image belongs to a living face image and a first non-living probability that the original sample face image belongs to a non-living face image, the larger of the first living probability and the first non-living probability may be determined as the pseudo tag corresponding to the adversarial sample face image.
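In code, the pseudo tag for an adversarial sample face image could simply reuse the teacher's output on the original sample face image it was generated from; the sketch below shows the soft-probability reading, and taking the argmax gives the hard category instead.

```python
import torch

def pseudo_label(liveness_model, original_image):
    """Pseudo tag for an adversarial sample face image: the teacher's first living body
    detection result on the original sample face image it was generated from."""
    with torch.no_grad():
        probs = torch.softmax(liveness_model(original_image), dim=1)   # [p_living, p_non_living]
    return probs        # soft pseudo tag; probs.argmax(dim=1) gives the hard category instead
```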
In the process of distilling knowledge from one model (referred to as the Ft model) into another model (referred to as the Fs model), the finally trained Fs model eventually converges to the Ft model, which can be expressed equivalently as:
H(Ft(X), Fs(X)) = H(Ft(X)) + KL(Ft(X) || Fs(X)), where H(Ft(X), Fs(X)) is the cross entropy of Ft(X) and Fs(X), H(Ft(X)) is the information entropy of Ft(X), and KL is the KL divergence between Ft(X) and Fs(X). The distillation objective is minimized only when KL = 0, and therefore Fs(X) = Ft(X) must hold. However, the distilled Fs model performs better against adversarial samples, since only a limited number of samples are accessible during distillation and the training algorithm solves an approximate optimization problem that is generally non-linear and non-convex.
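The relation above is the standard decomposition of cross entropy into entropy plus KL divergence; a short numerical check of the identity (a sketch assuming two-class probability vectors for Ft(X) and Fs(X)) is:

```python
import numpy as np

def cross_entropy(p, q):
    return -np.sum(p * np.log(q))

def entropy(p):
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    return np.sum(p * np.log(p / q))

# Assumed example distributions over {living body, non-living body}:
ft = np.array([0.9, 0.1])   # teacher output Ft(x)
fs = np.array([0.7, 0.3])   # student output Fs(x)

lhs = cross_entropy(ft, fs)
rhs = entropy(ft) + kl_divergence(ft, fs)
assert np.isclose(lhs, rhs)   # H(Ft, Fs) = H(Ft) + KL(Ft || Fs)
# KL(Ft || Fs) = 0, i.e. Fs = Ft, is the only way to drive the distillation
# term down to the teacher's own entropy H(Ft).
```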
Because the pseudo label corresponding to each sample face image in the sample set is determined based on the first living body detection result output by the pre-trained living body detection model for the original sample face image, the difference between that pseudo label and the second living body detection result output by the face anti-counterfeiting model for the sample face image can reflect how well the face anti-counterfeiting model has learned knowledge from the living body detection model. The model parameters of the face anti-counterfeiting model can then be adjusted based on this learning effect, so that the detection accuracy of the face anti-counterfeiting model is improved and its defence against adversarial samples is strengthened.
Specifically, in an alternative implementation, S504 may include: inputting the sample face image in the sample set into a face anti-counterfeiting model to be trained, and outputting a second living body detection result of the sample face image in the sample set; determining the classification loss of the face anti-counterfeiting model based on the pseudo tag corresponding to the sample face image in the sample set and the second living body detection result; based on the classification loss of the face anti-counterfeiting model, the model parameters of the face anti-counterfeiting model to be trained are adjusted, and the final face anti-counterfeiting model is obtained.
The classification loss of the face anti-counterfeiting model is used for representing the difference between a pseudo tag corresponding to a sample face image input into the face anti-counterfeiting model and a second living body detection result obtained by the face anti-counterfeiting model aiming at the input sample face image, and the difference can reflect the learning knowledge effect of the face anti-counterfeiting model from the living body detection model. In practical application, the classification loss of the face anti-counterfeiting model can be determined by a second preset loss function shown in the following formula (2):
wherein argmin θ_Fs represents the classification loss of the face anti-counterfeiting model, X represents the number of sample face images in the sample set, x_i represents the i-th sample face image in the sample set, Ft(x_i) represents the pseudo label corresponding to the i-th sample face image, Fs represents the face anti-counterfeiting model, and Fs(x_i) represents the second living body detection result of the i-th sample face image.
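A minimal sketch of such a distillation-style classification loss (assuming a PyTorch setting; the soft cross entropy below is one plausible reading of formula (2), not its exact form, and `student` and `teacher_outputs` are illustrative names):

```python
import torch
import torch.nn.functional as F

def classification_loss(student, images, teacher_outputs):
    """Distillation-style loss between teacher pseudo labels and student outputs.

    images:          batch of sample face images from the sample set
    teacher_outputs: Ft(x_i), the first living body detection results (assumed logits)
                     used as soft pseudo labels for those images
    """
    student_logits = student(images)                  # Fs(x_i)
    log_probs = F.log_softmax(student_logits, dim=1)
    teacher_probs = F.softmax(teacher_outputs, dim=1)
    # Cross entropy against the teacher's soft pseudo labels, averaged over the batch.
    return -(teacher_probs * log_probs).sum(dim=1).mean()
```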
In practical applications, the second living body detection result may be a code for uniquely indicating that the corresponding sample face image is a living body face image or a non-living body face image, for example, a one-hot (one-hot) code, where the code (1, 0) indicates that the corresponding sample face image is a living body face image, and the code (0, 1) indicates that the corresponding sample face image is a non-living body face image; of course, the second living body detection result may further include a second living body probability that the corresponding sample face image belongs to the living body face image and a second non-living body probability that the corresponding sample face image belongs to the non-living body face image.
Model parameters of the face anti-counterfeiting model may include, for example, but not limited to, the number of nodes in each network layer in the face anti-counterfeiting model, connection relationships and connection edge weights between nodes in different network layers, offsets corresponding to the nodes in each network layer, and the like.
More specifically, when the network parameters of the face anti-counterfeiting model are adjusted by a back propagation algorithm, the classification loss contributed by each network layer in the face anti-counterfeiting model can first be determined by the back propagation algorithm based on the classification loss of the face anti-counterfeiting model and the current network parameters of each network layer; then, with the goal of reducing the classification loss of the face anti-counterfeiting model, the relevant parameters of each network layer in the face anti-counterfeiting model are adjusted layer by layer.
It should be noted that, the above-mentioned process is only one adjustment process, and in practical application, multiple adjustments may be needed, so that the above-mentioned adjustment process may be repeatedly performed multiple times until a preset training stop condition is met, thereby obtaining the final face anti-counterfeiting model. The preset training stop condition may be that the classification loss of the face anti-counterfeiting model is smaller than a preset loss threshold, or may also be that the adjustment times reach the preset times, or the like, which is not limited in the embodiment of the present application.
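The repeated adjustment described above corresponds to an ordinary mini-batch training loop; a minimal sketch (assuming a PyTorch setting, with an illustrative stop threshold, round limit, and learning rate; `loss_fn` is the classification loss from the earlier sketch) is:

```python
import torch

def train_face_antispoof_model(student, sample_loader, loss_fn,
                               max_rounds=100, loss_threshold=1e-3, lr=1e-3):
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for round_idx in range(max_rounds):          # "adjustment times reach the preset times"
        total_loss = 0.0
        for images, teacher_outputs in sample_loader:
            loss = loss_fn(student, images, teacher_outputs)
            optimizer.zero_grad()
            loss.backward()                      # back propagation of the classification loss
            optimizer.step()                     # layer-by-layer parameter adjustment
            total_loss += loss.item()
        if total_loss / len(sample_loader) < loss_threshold:
            break                                # loss below the preset loss threshold
    return student
```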
Further, the relative probabilities in a detection result carry information capable of representing data similarity, yet in the first living body detection result the first living probability or the first non-living probability may be so close to 0 that its influence on the classification loss of the face anti-counterfeiting model becomes negligibly small. In order to retain this useful information, when the pseudo tag corresponding to a sample face image in the sample set is determined, conversion processing can be performed on the first living body detection result of the original sample face image, so that the converted first living body detection result provides more information for the face anti-counterfeiting model to learn than the original first living body detection result; likewise, when the classification loss of the face anti-counterfeiting model is determined, conversion processing can be performed on the second living body detection result of the sample face image in the sample set, so that the converted second living body detection result provides more information for the face anti-counterfeiting model to learn than the original second living body detection result.
Specifically, before S504 described above, the first living body detection result is converted into a first target living body detection result based on a preset conversion coefficient, and the first target living body detection result is determined as the pseudo tag corresponding to the adversarial sample face image and the pseudo tag corresponding to the original sample face image. Illustratively, the first living body detection result may be converted into the first target living body detection result by the following formula (3):
wherein softmax(z_i1) represents the first target living body detection result corresponding to the i-th original sample face image, z_i1 represents the first living body detection result corresponding to the i-th original sample face image, t represents the preset conversion coefficient, and C represents the number of the original sample face images.
Accordingly, the above S504 may be specifically implemented as: converting the second living body detection result of the sample face image in the sample set into a second target living body detection result based on the same preset conversion coefficient, and determining the classification loss of the face anti-counterfeiting model based on the pseudo tag corresponding to the sample face image in the sample set and the second target living body detection result. Illustratively, the second living body detection result may be converted into the second target living body detection result by the following formula (4):
wherein softmax(z_i2) represents the second target living body detection result corresponding to the i-th sample face image in the sample set, z_i2 represents the second living body detection result corresponding to the i-th sample face image in the sample set, t represents the preset conversion coefficient, and C represents the number of the sample face images in the sample set.
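Under a natural reading, formulas (3) and (4) are the temperature-scaled softmax commonly used in knowledge distillation, with the preset conversion coefficient t acting as the temperature; the following sketch rests on that assumption and uses illustrative values:

```python
import numpy as np

def temperature_softmax(z, t):
    """Convert a living body detection result z (per-class scores) into a softened
    target using the preset conversion coefficient t (the temperature)."""
    z = np.asarray(z, dtype=np.float64) / t
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Assumed example: teacher result z_i1 and student result z_i2 for one image.
z_i1 = [2.0, -1.0]                       # first living body detection result
z_i2 = [1.2, -0.4]                       # second living body detection result
t = 4.0                                  # preset conversion coefficient
first_target = temperature_softmax(z_i1, t)    # pseudo tag after conversion
second_target = temperature_softmax(z_i2, t)   # used when computing the classification loss
```

A larger t flattens the distribution, so probabilities that were close to 0 contribute more to the loss and the similarity information is retained.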
The embodiment of the present application herein shows a specific implementation of S504 described above. Of course, it should be understood that S504 may be implemented in other manners, which are not limited in this embodiment of the present application.
In this embodiment of the present application, the face anti-counterfeiting model may have any appropriate structure, and may specifically be set according to actual needs, which is not limited in this embodiment of the present application. In an alternative implementation, to reduce the amount of computation and improve the living body detection efficiency, the face anti-counterfeiting model may employ a lightweight network with a smaller amount of parameters, such as MobileNet. More specifically, as shown in fig. 8, the face anti-counterfeiting model may include a classifier and at least one level of feature extraction layer, where the first level of feature extraction layer is used to perform feature extraction on a sample face image in a sample set to obtain corresponding image features, and the rest levels of feature extraction layers except the first level of feature extraction layer are used to perform feature extraction on the image features obtained by the previous level of feature extraction layer to obtain image features corresponding to the current level of feature extraction layer; the classifier is used for performing living body detection on the image features obtained by the last-stage feature extraction layer to obtain a second living body detection result. In practical applications, the feature extraction layer may have any suitable structure, for example, bottleneck, etc., which is not limited in this embodiment of the present application. In order to distinguish the classifier from the classifier in the living body detection model, in the embodiment of the application, the classifier in the living body detection model is called a first classifier, and the classifier in the face anti-counterfeiting model is called a second classifier.
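As a purely illustrative sketch of such a cascaded structure (a PyTorch assumption; the layer count, channel sizes, and block design below are placeholders, not the MobileNet or bottleneck configuration of the application itself):

```python
import torch
import torch.nn as nn

class FaceAntiSpoofModel(nn.Module):
    """Cascaded feature extraction layers followed by a second classifier."""

    def __init__(self, num_classes=2):
        super().__init__()
        # Multi-stage feature extraction layers (illustrative channel sizes).
        self.features = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1),
                          nn.BatchNorm2d(16), nn.ReLU(inplace=True)),
            nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1),
                          nn.BatchNorm2d(32), nn.ReLU(inplace=True)),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1),
                          nn.BatchNorm2d(64), nn.ReLU(inplace=True)),
        ])
        # Second classifier acting on the features of the last extraction layer.
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(64, num_classes))

    def forward(self, x):
        for stage in self.features:
            x = stage(x)              # each stage refines the previous stage's features
        return self.classifier(x)     # second living body detection result (logits)
```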
In another alternative implementation, it is considered that the disturbance information in an adversarial sample face image in the sample set is deepened and superimposed as it passes through each level of feature extraction layer in the face anti-counterfeiting model, so that the disturbance becomes increasingly obvious in the image features output by each level of feature extraction layer and the image features finally input into the second classifier may be distorted, which affects the detection accuracy of the second classifier. For this reason, the face anti-counterfeiting model may further include at least one noise reducer, where the number of noise reducers may be determined according to the number of feature extraction layers.
The noise reducer is used for performing noise reduction processing on the image features obtained by the upper-level feature extraction layer of any two adjacent feature extraction layers and then inputting the result into the lower-level feature extraction layer. The first-level feature extraction layer is used for performing feature extraction on the sample face images in the sample set to obtain corresponding image features; each feature extraction layer other than the first-level feature extraction layer is used for performing feature extraction on the image features obtained by the upper-level feature extraction layer after noise reduction by the noise reducer, so as to obtain the image features corresponding to the current-level feature extraction layer.
Because the disturbance is, in effect, noise with respect to the face image, the noise reducer can in practice be implemented as a filter. Optionally, the noise reducer may employ a Gaussian filter, whose weights are adjusted according to spatial distance; the boundary is blurred to some extent, but the filter performs the noise reduction function without excessively distorting the face image. Of course, it should be understood that the noise reducer may also use other filters, such as a median filter, a bilateral filter, and the like.
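A minimal sketch of such a Gaussian-filter noise reducer placed between two adjacent feature extraction layers (the kernel size and sigma are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianNoiseReducer(nn.Module):
    """Depthwise Gaussian filtering of the feature map produced by the previous layer."""

    def __init__(self, channels, kernel_size=3, sigma=1.0):
        super().__init__()
        ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
        kernel_2d = torch.outer(g, g)
        kernel_2d = kernel_2d / kernel_2d.sum()
        # One identical Gaussian kernel per channel (depthwise convolution).
        self.register_buffer("kernel", kernel_2d.repeat(channels, 1, 1, 1))
        self.channels = channels
        self.padding = kernel_size // 2

    def forward(self, x):
        return F.conv2d(x, self.kernel, padding=self.padding, groups=self.channels)

# Usage between two adjacent feature extraction layers (names are placeholders):
# x = stage_k(x); x = GaussianNoiseReducer(channels=x.shape[1])(x); x = stage_k_plus_1(x)
```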
According to the training method of the face anti-counterfeiting model provided by the embodiment of the application, a knowledge distillation technique is adopted: the adversarial sample face image obtained from an original sample face image is used as a training sample of the face anti-counterfeiting model, the pseudo tag corresponding to the training sample is determined based on the first living body detection result obtained by the living body detection model for the original sample face image, and the face anti-counterfeiting model is trained with the training sample and its label. In this way, the knowledge of the living body detection model is transferred to the face anti-counterfeiting model, so that the face anti-counterfeiting model can quickly learn the generalization capability of the living body detection model, namely the mapping knowledge from an input face image to the resulting living body detection result, and maintain performance close to that of the living body detection model for living body detection of an input face image. Moreover, because most adversarial samples are obtained by adding disturbance to a real face image using the detection result that an existing living body detection model produces for that real face image, the disturbance gradient information of the original sample face image is determined, after the pre-trained living body detection model obtains the first living body detection result corresponding to the original sample face image, based on this first living body detection result and the model parameters of the living body detection model; this yields the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result of the original sample face image. Then, disturbance processing is performed on the original sample face image based on its disturbance gradient information, so that the obtained adversarial sample face images can cover various degrees of disturbance. Training the face anti-counterfeiting model with these adversarial sample face images helps improve its defence against various adversarial samples, and further improves the universality and robustness of the face anti-counterfeiting model.
Based on the training method of the face anti-counterfeiting model shown in the embodiment of the application, the trained face anti-counterfeiting model can be applied to any scene needing living body detection. The following describes the application process based on the face anti-counterfeiting model in detail.
The embodiment of the application also provides a living body detection method which can carry out living body detection on the input face image based on the face anti-counterfeiting model trained by the method shown in fig. 1.
Referring to fig. 9, a flowchart of a living body detection method according to an embodiment of the present application is provided, and the method may include the following steps:
s902, inputting the face image to be detected into a pre-trained face anti-counterfeiting model to obtain a third living body detection result.
The face anti-counterfeiting model is obtained by training based on the sample face images in a sample set and their corresponding pseudo tags. The sample set at least comprises an adversarial sample face image, which is obtained by performing disturbance processing on an original sample face image based on disturbance gradient information of the original sample face image. The disturbance gradient information is determined based on model parameters of a pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, and is used for representing the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result. The pseudo tag corresponding to a sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image and is used for representing whether the corresponding sample face image belongs to a living body face image.
In practical applications, the third living body detection result may be a code for uniquely indicating that the face image to be detected is a living body face image or a non-living body face image, such as a one-hot (one-hot) code, where the code (1, 0) indicates that the face image to be detected is a living body face image, and the code (0, 1) indicates that the face image to be detected is a non-living body face image; of course, the third living body detection result may further include a third living body probability that the face image to be detected belongs to a living body face image and a third non-living body probability that the face image to be detected belongs to a non-living body face image.
S904, based on the third living body detection result, it is determined whether the face image to be detected is a living body face image.
Specifically, if the third living body detection result is a code that uniquely indicates that the face image to be detected is a living body face image or a non-living body face image, it may be determined whether the face image to be detected is a living body face image based on the code.
If the third living body detection result includes a third living body probability that the face image to be detected belongs to a living body face image and a third non-living body probability that the face image to be detected belongs to a non-living body face image, it may be determined whether the face image to be detected is a living body face image based on a relative size between the third living body probability and the third non-living body probability. For example, if the third living probability is greater than the third non-living probability, it may be determined that the face image to be detected is a living face image; if the third living body probability is smaller than the third non-living body probability, the face image to be detected can be determined to be the non-living body face image.
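A minimal sketch of this decision step at inference time (the helper name, the softmax on logits, and the two-class output order are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def is_living_face(face_antispoof_model, face_image):
    """Return True if the face image to be detected is judged a living body face image.

    face_image: a C x H x W tensor; the model is assumed to output two logits
    in the order (living body, non-living body).
    """
    face_antispoof_model.eval()
    with torch.no_grad():
        logits = face_antispoof_model(face_image.unsqueeze(0))   # third living body detection result
        p_live, p_spoof = F.softmax(logits, dim=1)[0].tolist()
    # Compare the third living probability with the third non-living probability.
    return p_live > p_spoof
```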
According to the living body detection method provided by the embodiment of the application, the face anti-counterfeiting model is obtained by training with a knowledge distillation technique: the adversarial sample face image obtained from an original sample face image is used as a training sample of the face anti-counterfeiting model, the pseudo label corresponding to the training sample is determined based on the first living body detection result obtained by the living body detection model for the original sample face image, and the face anti-counterfeiting model is trained with the training sample and its label, so that the face anti-counterfeiting model can learn the generalization capability of the living body detection model, namely the mapping knowledge from an input face image to the resulting living body detection result, and maintain performance close to that of the living body detection model for living body detection of an input face image. Moreover, because most adversarial samples are obtained by adding disturbance to a real face image using the detection result that an existing living body detection model produces for that real face image, the disturbance gradient information of the original sample face image is determined, after the pre-trained living body detection model obtains the first living body detection result corresponding to the original sample face image, based on this first living body detection result and the model parameters of the living body detection model; this yields the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result of the original sample face image. Then, disturbance processing is performed on the original sample face image based on its disturbance gradient information, so that the obtained adversarial sample face images can cover the characteristics of a large number of existing adversarial samples. Training the face anti-counterfeiting model with these adversarial sample face images gives the trained model excellent defence against various adversarial samples, as well as strong universality and robustness.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In addition, corresponding to the image processing method shown in fig. 1, the embodiment of the application also provides an image processing device. Referring to fig. 10, a schematic structural diagram of an image processing apparatus 1000 according to an embodiment of the present application is provided, where the apparatus 1000 includes:
the first living body detection module 1010 is configured to input an original sample face image into a pre-trained living body detection model, so as to obtain a first living body detection result;
a gradient determining module 1020, configured to determine perturbation gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model, where the perturbation gradient information is used to represent a degree of association of an increased degree of perturbation on the original sample face image to the first living body detection result;
And the disturbance module 1030 is configured to perform disturbance processing on the original sample face image based on the disturbance gradient information, so as to obtain an adversarial sample face image.
According to the image processing device provided by the embodiment of the application, after the first living body detection result corresponding to the original sample face image is obtained through the pre-trained living body detection model, the disturbance gradient information of the original sample face image is determined based on the first living body detection result and the model parameters of the living body detection model, which yields the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result of the original sample face image; then, disturbance processing is performed on the original sample face image based on its disturbance gradient information, thereby obtaining adversarial sample face images with various degrees of disturbance. Training the face anti-counterfeiting model with the obtained adversarial sample face images helps improve its defence against various adversarial samples, and further improves the universality and robustness of the face anti-counterfeiting model.
Optionally, the perturbation module includes:
the disturbance determination submodule is used for determining a sensitive area of the original sample face image and disturbance quantity corresponding to the sensitive area based on the disturbance gradient information and the image data of the original sample face image;
And the disturbance processing sub-module is used for performing disturbance processing on the sensitive area based on the disturbance quantity corresponding to the sensitive area, to obtain the adversarial sample face image.
Optionally, the disturbance determination submodule is configured to:
determining the disturbance quantity corresponding to each pixel in the original sample face image based on the disturbance gradient information and the pixel information of each pixel in the original sample face image;
determining a target pixel in the original sample face image based on the disturbance quantity corresponding to each pixel in the original sample face image;
determining a sensitive area of the original sample face image based on the position information of the target pixel;
and determining the disturbance quantity corresponding to the sensitive area based on the disturbance quantity corresponding to the target pixel.
Optionally, the disturbance determining submodule determines a sensitive area of the original sample face image based on the position information of the target pixel, including:
and expanding the target pixel based on the position information of the target pixel to obtain the sensitive area, wherein the size of the sensitive area is smaller than that of the original sample face image.
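A possible sketch of this sensitive-area selection and perturbation (NumPy; the top-k selection rule, neighbourhood radius, and perturbation strength are illustrative assumptions rather than the specific rules of the application):

```python
import numpy as np

def perturb_sensitive_region(image, grad, top_k=200, radius=2, epsilon=8.0):
    """image, grad: H x W x C arrays; grad is the perturbation gradient information.
    Pixel values are assumed to lie in [0, 255]."""
    # Per-pixel perturbation quantity, e.g. gradient magnitude summed over channels.
    per_pixel = np.abs(grad).sum(axis=2)
    # Target pixels: the top_k pixels with the largest perturbation quantity.
    flat_idx = np.argsort(per_pixel.ravel())[-top_k:]
    ys, xs = np.unravel_index(flat_idx, per_pixel.shape)
    # Sensitive area: expand each target pixel into a small neighbourhood,
    # so the area stays smaller than the whole image.
    mask = np.zeros(per_pixel.shape, dtype=bool)
    for y, x in zip(ys, xs):
        mask[max(0, y - radius):y + radius + 1, max(0, x - radius):x + radius + 1] = True
    # Apply the perturbation quantity only inside the sensitive area.
    adversarial = image + epsilon * np.sign(grad) * mask[..., None]
    return np.clip(adversarial, 0, 255)
```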
Optionally, the first living body detection result includes a first living body probability that the original sample face image belongs to a living body face image and a first non-living body probability that the original sample face image belongs to a non-living body face image;
The gradient determination module includes:
a first gradient determination submodule for determining a first gradient based on the first living body probability and model parameters of the living body detection model, wherein the first gradient is used for representing the association degree of the increased disturbance degree on the original sample face image to the first living body probability;
a second gradient determination sub-module for determining a second gradient for representing a degree of association of an increased degree of disturbance on the original sample face image to the first non-living probability based on the first non-living probability and model parameters of the living detection model;
a perturbation gradient determination sub-module for determining the perturbation gradient information based on the first gradient and the second gradient.
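A possible sketch of how the two gradients could be obtained and combined into perturbation gradient information (PyTorch autograd; the class ordering and the simple difference used to combine the gradients are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def perturbation_gradient(liveness_model, image):
    """image: 1 x C x H x W tensor of the original sample face image.
    The model is assumed to output two logits in the order (living, non-living)."""
    image = image.detach().clone().requires_grad_(True)
    probs = F.softmax(liveness_model(image), dim=1)   # first living body detection result
    p_live, p_spoof = probs[0, 0], probs[0, 1]
    # First gradient: association of added disturbance with the first living probability.
    g_live = torch.autograd.grad(p_live, image, retain_graph=True)[0]
    # Second gradient: association of added disturbance with the first non-living probability.
    g_spoof = torch.autograd.grad(p_spoof, image)[0]
    # Perturbation gradient information determined from the two gradients
    # (here: the direction that raises the non-living probability and lowers the living one).
    return g_spoof - g_live
```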
Optionally, the apparatus 1000 further includes:
the detection result judging module is used for determining that the first living body detection result meets a preset disturbance condition before the gradient determining module determines disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model.
Obviously, the image processing apparatus provided in the embodiment of the present application may be used as an execution subject of the image processing method shown in fig. 1, so that the functions of the image processing apparatus implemented in fig. 1 can be implemented. Since the principle is the same, the description is not repeated here.
In addition, corresponding to the training method of the face anti-counterfeiting model shown in fig. 5, the embodiment of the application also provides a training device of the face anti-counterfeiting model. Referring to fig. 11, a schematic structural diagram of a training device 1100 for a face anti-counterfeiting model according to an embodiment of the present application is provided, where the device includes:
the second living body detection module 1110 is configured to input a sample face image in a sample set into a face anti-counterfeiting model to be trained, and obtain a second living body detection result corresponding to the sample face image in the sample set, where the sample set at least includes a resist sample face image, the resist sample face image is obtained by performing disturbance processing on an original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of a pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, and the disturbance gradient information is used to represent a degree of association of a disturbance degree increased on the original sample face image on the first living body detection result;
the adjusting module 1120 is configured to adjust model parameters of the face anti-counterfeiting model to be trained based on the pseudo tags corresponding to the sample face images in the sample set and the second living body detection results to obtain a final face anti-counterfeiting model, where the pseudo tag corresponding to a sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image.
According to the training device for the face anti-counterfeiting model provided by the embodiment of the application, a knowledge distillation technique is adopted: the adversarial sample face image obtained from an original sample face image is used as a training sample of the face anti-counterfeiting model, the pseudo tag corresponding to the training sample is determined based on the first living body detection result obtained by the living body detection model for the original sample face image, and the face anti-counterfeiting model is trained with the training sample and its label, so that the knowledge of the living body detection model is migrated to the face anti-counterfeiting model, the face anti-counterfeiting model can quickly learn the generalization capability of the living body detection model, namely the mapping knowledge from an input face image to the resulting living body detection result, and the face anti-counterfeiting model maintains performance close to that of the living body detection model for living body detection of an input face image. Moreover, because most adversarial samples are obtained by adding disturbance to a real face image using the detection result that an existing living body detection model produces for that real face image, the disturbance gradient information of the original sample face image is determined, after the pre-trained living body detection model obtains the first living body detection result corresponding to the original sample face image, based on this first living body detection result and the model parameters of the living body detection model; this yields the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result of the original sample face image. Then, disturbance processing is performed on the original sample face image based on its disturbance gradient information, so that the obtained adversarial sample face images can cover various degrees of disturbance. Training the face anti-counterfeiting model with these adversarial sample face images helps improve its defence against various adversarial samples, and further improves the universality and robustness of the face anti-counterfeiting model.
Optionally, the face anti-counterfeiting model to be trained comprises a classifier, a noise reducer and a multi-stage feature extraction layer;
the noise reducer is used for carrying out noise reduction treatment on the image features obtained by the upper-stage feature extraction layer in any two adjacent feature extraction layers and inputting the image features to the lower-stage feature extraction layer;
the first-stage feature extraction layer is used for extracting features of the sample face images in the sample set to obtain corresponding image features;
each feature extraction layer other than the first-level feature extraction layer is used for performing feature extraction on the image features obtained by the upper-level feature extraction layer after noise reduction by the noise reducer, to obtain the image features corresponding to the current-level feature extraction layer;
the classifier is used for performing living body detection on the image features obtained by the last-stage feature extraction layer to obtain the second living body detection result.
Optionally, the apparatus 1100 further includes:
a conversion module, configured to convert, before the adjustment module adjusts the model parameters of the face anti-counterfeiting model to be trained based on the pseudo tag and the second living body detection result corresponding to the sample face image in the sample set, the first living body detection result corresponding to the original sample face image into a first target living body detection result based on a preset conversion coefficient, and
a pseudo tag determining module, configured to determine the first target living body detection result as the pseudo tag corresponding to the adversarial sample face image;
the adjustment module includes:
a conversion sub-module for converting the second living body detection result corresponding to the adversarial sample face image into a second target living body detection result corresponding to the adversarial sample face image based on the preset conversion coefficient, and
an adjusting sub-module for adjusting the model parameters of the face anti-counterfeiting model to be trained based on the pseudo tag corresponding to the adversarial sample face image in the sample set and the second target living body detection result, to obtain a final face anti-counterfeiting model.
Obviously, the training device for the face anti-counterfeiting model provided by the embodiment of the application can be used as an execution main body of the training method for the face anti-counterfeiting model shown in fig. 5, so that the function of the training device for the face anti-counterfeiting model in fig. 5 can be realized. Since the principle is the same, the description is not repeated here.
In addition, the embodiment of the present application also provides a living body detection apparatus corresponding to the living body detection method shown in fig. 9 described above. Referring to fig. 12, a schematic structural diagram of a living body detection apparatus 1200 according to an embodiment of the present application is provided, where the apparatus includes:
The third living body detection module 1210 is configured to input a face image to be detected into a pre-trained face anti-counterfeiting model to obtain a third living body detection result, where the face anti-counterfeiting model is obtained by training based on the sample face images in a sample set and their corresponding pseudo tags, the sample set at least includes an adversarial sample face image, the adversarial sample face image is obtained by performing disturbance processing on an original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of a pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, the disturbance gradient information is used to represent the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result, the pseudo tag corresponding to a sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image, and the pseudo tag is used to indicate whether the corresponding sample face image belongs to a living body face image;
the living body determining module 1220 is configured to determine whether the face image to be detected is a living body face image based on the third living body detection result.
According to the living body detection device provided by the embodiment of the application, the face anti-counterfeiting model is obtained by training with a knowledge distillation technique: the adversarial sample face image obtained from an original sample face image is used as a training sample of the face anti-counterfeiting model, the pseudo label corresponding to the training sample is determined based on the first living body detection result obtained by the living body detection model for the original sample face image, and the face anti-counterfeiting model is trained with the training sample and its label, so that the face anti-counterfeiting model can learn the generalization capability of the living body detection model, namely the mapping knowledge from an input face image to the resulting living body detection result, and maintain performance close to that of the living body detection model for living body detection of an input face image. Moreover, because most adversarial samples are obtained by adding disturbance to a real face image using the detection result that an existing living body detection model produces for that real face image, the disturbance gradient information of the original sample face image is determined, after the pre-trained living body detection model obtains the first living body detection result corresponding to the original sample face image, based on this first living body detection result and the model parameters of the living body detection model; this yields the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result of the original sample face image. Then, disturbance processing is performed on the original sample face image based on its disturbance gradient information, so that the obtained adversarial sample face images can cover the characteristics of a large number of existing adversarial samples. Training the face anti-counterfeiting model with these adversarial sample face images gives the trained model excellent defence against various adversarial samples, as well as strong universality and robustness.
Obviously, the living body detection device provided by the embodiment of the application can be used as an execution body of the living body detection method shown in fig. 9, so that the function of the living body detection device in fig. 9 can be realized. Since the principle is the same, the description is not repeated here.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 12, at the hardware level, the electronic device includes a processor, and optionally an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, network interface, and memory may be interconnected by an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus, among others. The buses may be classified as address buses, data buses, control buses, and so on. For ease of illustration, only one bi-directional arrow is shown in FIG. 12, but this does not mean that there is only one bus or only one type of bus.
And the memory is used for storing programs. In particular, the program may include program code including computer-operating instructions. The memory may include memory and non-volatile storage and provide instructions and data to the processor.
The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs, forming the image processing device on a logic level. The processor is used for executing the programs stored in the memory and is specifically used for executing the following operations:
inputting an original sample face image into a pre-trained living body detection model to obtain a first living body detection result;
determining disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model, wherein the disturbance gradient information is used for representing the association degree of the disturbance degree increased on the original sample face image to the first living body detection result;
and carrying out disturbance processing on the original sample face image based on the disturbance gradient information to obtain an adversarial sample face image.
Or the processor reads the corresponding computer program from the nonvolatile memory to the memory and then runs the computer program to form the training device of the human face anti-counterfeiting model on the logic level. The processor is used for executing the programs stored in the memory and is specifically used for executing the following operations:
Inputting a sample face image in a sample set into a face anti-counterfeiting model to obtain a second living body detection result corresponding to the sample face image in the sample set, wherein the sample set at least comprises an adversarial sample face image, the adversarial sample face image is obtained by performing disturbance processing on an original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of a pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, and the disturbance gradient information is used for representing the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result;
and adjusting model parameters of the face anti-counterfeiting model based on a pseudo tag corresponding to the sample face image in the sample set and a second living body detection result, wherein the pseudo tag corresponding to the sample face image in the sample set is determined based on a first living body detection result corresponding to the original sample face image.
Alternatively, the processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program, and the living body detection device is formed on a logic level. The processor is used for executing the programs stored in the memory and is specifically used for executing the following operations:
Inputting a face image to be detected into a pre-trained face anti-counterfeiting model to obtain a third living body detection result, wherein the face anti-counterfeiting model is obtained by training based on the sample face images in a sample set and their corresponding pseudo tags, the sample set at least comprises an adversarial sample face image, the adversarial sample face image is obtained by performing disturbance processing on an original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of the pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, the disturbance gradient information is used for representing the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result, and the pseudo tag corresponding to a sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image and is used for representing whether the corresponding sample face image belongs to a living body face image;
and determining whether the face image to be detected is a living face image or not based on the third living body detection result.
The method executed by the image processing device disclosed in the embodiment shown in fig. 1 of the present application or the method executed by the training device of the face anti-counterfeiting model disclosed in the embodiment shown in fig. 5 or the method executed by the living body detecting device disclosed in the embodiment shown in fig. 9 may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logical blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
The electronic device may further perform the method of fig. 1 and implement the function of the embodiment shown in fig. 1 by the image processing apparatus, or the electronic device may further perform the method of fig. 5 and implement the function of the training apparatus of the anti-counterfeiting model of the face in the embodiment shown in fig. 5, or the electronic device may further perform the method of fig. 9 and implement the function of the living body detection apparatus in the embodiment shown in fig. 9, which is not described herein again.
Of course, other implementations, such as a logic device or a combination of hardware and software, are not excluded from the electronic device of the present application, that is, the execution subject of the following processing flow is not limited to each logic unit, but may be hardware or a logic device.
The present embodiments also provide a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to perform the method of the embodiment of fig. 1, and in particular to:
inputting an original sample face image into a pre-trained living body detection model to obtain a first living body detection result;
Determining disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model, wherein the disturbance gradient information is used for representing the association degree of the disturbance degree increased on the original sample face image to the first living body detection result;
and carrying out disturbance processing on the original sample face image based on the disturbance gradient information to obtain an adversarial sample face image.
Alternatively, the instructions, when executed by a portable electronic device comprising a plurality of applications, enable the portable electronic device to perform the method of the embodiment shown in fig. 5, and in particular to:
inputting a sample face image in a sample set into a face anti-counterfeiting model to obtain a second living body detection result corresponding to the sample face image in the sample set, wherein the sample set at least comprises an adversarial sample face image, the adversarial sample face image is obtained by performing disturbance processing on an original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of a pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, and the disturbance gradient information is used for representing the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result;
And adjusting model parameters of the face anti-counterfeiting model based on a pseudo tag corresponding to the sample face image in the sample set and a second living body detection result, wherein the pseudo tag corresponding to the sample face image in the sample set is determined based on a first living body detection result corresponding to the original sample face image.
Alternatively, the instructions, when executed by a portable electronic device comprising a plurality of applications, enable the portable electronic device to perform the method of the embodiment shown in fig. 9, and in particular to:
inputting a face image to be detected into a pre-trained face anti-counterfeiting model to obtain a third living body detection result, wherein the face anti-counterfeiting model is obtained by training based on the sample face images in a sample set and their corresponding pseudo tags, the sample set at least comprises an adversarial sample face image, the adversarial sample face image is obtained by performing disturbance processing on an original sample face image based on disturbance gradient information of the original sample face image, the disturbance gradient information is determined based on model parameters of the pre-trained living body detection model and a first living body detection result obtained by the living body detection model for the original sample face image, the disturbance gradient information is used for representing the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result, and the pseudo tag corresponding to a sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image and is used for representing whether the corresponding sample face image belongs to a living body face image;
And determining whether the face image to be detected is a living face image or not based on the third living body detection result.
In summary, the foregoing description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.

Claims (13)

1. An image processing method, comprising:
inputting an original sample face image into a pre-trained living body detection model to obtain a first living body detection result, wherein the first living body detection result comprises a first living body probability that the original sample face image belongs to a living body face image and a first non-living body probability that the original sample face image belongs to a non-living body face image;
Determining disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model, wherein the disturbance gradient information is used for representing the association degree of the disturbance degree added on the original sample face image to the first living body detection result, and the disturbance gradient information is determined by the association degree of the disturbance degree added on the original sample face image to the first living body probability and the association degree of the disturbance degree added on the original sample face image to the first non-living body probability;
and carrying out disturbance processing on the original sample face image based on the disturbance gradient information to obtain an anti-sample face image.
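
For illustration only, and not as part of the claims: a minimal sketch of the processing recited in claim 1, assuming a PyTorch model that outputs [live, non-live] logits and an FGSM-style sign update. The function name, the step size epsilon, and the way the two probabilities are combined are assumptions made for this example, not limitations of the claimed method.

    import torch
    import torch.nn.functional as F

    def generate_anti_sample(liveness_model, image, epsilon=0.03):
        # Sketch of claim 1: perturb an original sample face image using gradient
        # information from a pre-trained living body detection model whose output
        # is assumed to be [live, non-live] logits.
        liveness_model.eval()
        image = image.clone().detach().requires_grad_(True)

        logits = liveness_model(image)            # first living body detection result
        probs = F.softmax(logits, dim=1)          # [first living prob, first non-living prob]

        # Combine the association of the perturbation with both probabilities
        # into one scalar whose gradient serves as the disturbance gradient information.
        score = (probs[:, 0] - probs[:, 1]).sum()
        score.backward()
        disturbance_gradient = image.grad

        # Sign update pushing the prediction toward the opposite class
        # (the direction and magnitude are illustrative choices).
        anti_sample = image - epsilon * disturbance_gradient.sign()
        return anti_sample.clamp(0.0, 1.0).detach()

In practice the update direction would depend on whether the original sample is a live or a spoof image; the subtraction above simply illustrates moving the model's decision toward the opposite class.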
2. The method according to claim 1, wherein the performing disturbance processing on the original sample face image based on the disturbance gradient information to obtain an anti-sample face image comprises:
determining a sensitive area of the original sample face image and a disturbance quantity corresponding to the sensitive area based on the disturbance gradient information and the image data of the original sample face image;
and carrying out disturbance processing on the sensitive area based on the disturbance quantity corresponding to the sensitive area to obtain the anti-sample face image.
3. The method according to claim 2, wherein the determining a sensitive area of the original sample face image and a disturbance quantity corresponding to the sensitive area based on the disturbance gradient information and the image data of the original sample face image comprises:
determining the disturbance quantity corresponding to each pixel in the original sample face image based on the disturbance gradient information and the pixel information of each pixel in the original sample face image;
determining a target pixel in the original sample face image based on the disturbance quantity corresponding to each pixel in the original sample face image;
determining a sensitive area of the original sample face image based on the position information of the target pixel;
and determining the disturbance quantity corresponding to the sensitive area based on the disturbance quantity corresponding to the target pixel.
4. The method according to claim 3, wherein the determining a sensitive area of the original sample face image based on the position information of the target pixel comprises:
and expanding the target pixel based on the position information of the target pixel to obtain the sensitive area, wherein the size of the sensitive area is smaller than that of the original sample face image.
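
For illustration only, and not as part of the claims: a sketch of the localized perturbation of claims 2 to 4, assuming the disturbance gradient computed as above. The number of target pixels (top_k), the expansion margin (pad), and epsilon are illustrative values chosen for the example.

    import torch

    def perturb_sensitive_region(image, disturbance_gradient, epsilon=0.03,
                                 top_k=200, pad=4):
        # Per-pixel disturbance quantity from the gradient and the pixel values (claim 3).
        per_pixel = (disturbance_gradient * image).abs().sum(dim=1)   # (N, H, W)

        n, h, w = per_pixel.shape
        _, idx = per_pixel.view(n, -1).topk(top_k, dim=1)             # target pixels
        ys = torch.div(idx, w, rounding_mode='floor')
        xs = idx % w

        anti_sample = image.detach().clone()
        for i in range(n):
            # Expand the target pixels into a rectangular sensitive area (claim 4),
            # kept strictly smaller than the original sample face image.
            y0, y1 = max(int(ys[i].min()) - pad, 0), min(int(ys[i].max()) + pad, h - 1)
            x0, x1 = max(int(xs[i].min()) - pad, 0), min(int(xs[i].max()) + pad, w - 1)
            region_grad = disturbance_gradient[i, :, y0:y1 + 1, x0:x1 + 1]
            anti_sample[i, :, y0:y1 + 1, x0:x1 + 1] += epsilon * region_grad.sign()
        return anti_sample.clamp(0.0, 1.0)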
5. The method according to any one of claims 1 to 4, wherein the determining disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model comprises:
determining a first gradient based on the first living body probability and model parameters of the living body detection model, wherein the first gradient is used for representing the degree of association between the degree of disturbance added to the original sample face image and the first living body probability;
determining a second gradient based on the first non-living body probability and model parameters of the living body detection model, wherein the second gradient is used for representing the degree of association between the degree of disturbance added to the original sample face image and the first non-living body probability;
and determining the disturbance gradient information based on the first gradient and the second gradient.
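
For illustration only, and not as part of the claims: a sketch of the two gradients of claim 5, one taken with respect to the first living body probability and one with respect to the first non-living body probability. Combining them by a simple difference is an assumption made for the example; the patent does not fix a particular combination rule here.

    import torch
    import torch.nn.functional as F

    def disturbance_gradient_information(liveness_model, image):
        image = image.clone().detach().requires_grad_(True)
        probs = F.softmax(liveness_model(image), dim=1)

        # First gradient: association with the first living body probability.
        first_gradient = torch.autograd.grad(probs[:, 0].sum(), image,
                                             retain_graph=True)[0]
        # Second gradient: association with the first non-living body probability.
        second_gradient = torch.autograd.grad(probs[:, 1].sum(), image)[0]
        return first_gradient - second_gradient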
6. A training method for a human face anti-counterfeiting model, characterized by comprising the following steps:
inputting a sample face image in a sample set into a face anti-counterfeiting model to be trained to obtain a second living body detection result corresponding to the sample face image in the sample set;
wherein the sample set comprises at least an anti-sample face image, the anti-sample face image being obtained by the image processing method according to any one of claims 1 to 5;
and adjusting model parameters of the face anti-counterfeiting model to be trained based on the pseudo tag corresponding to the sample face image in the sample set and the second living body detection result to obtain a final face anti-counterfeiting model, wherein the pseudo tag corresponding to the sample face image in the sample set is determined based on the first living body detection result corresponding to the original sample face image.
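
For illustration only, and not as part of the claims: one training step in the spirit of claim 6, assuming the sample batch contains anti-sample face images and the pseudo tags are soft [live, non-live] distributions derived from the first living body detection result. The KL-divergence loss and all names are assumptions for this example.

    import torch
    import torch.nn.functional as F

    def training_step(face_anti_spoof_model, optimizer, sample_images, pseudo_tags):
        face_anti_spoof_model.train()
        optimizer.zero_grad()

        logits = face_anti_spoof_model(sample_images)   # second living body detection result
        loss = F.kl_div(F.log_softmax(logits, dim=1), pseudo_tags,
                        reduction='batchmean')

        loss.backward()
        optimizer.step()                                # adjust model parameters
        return loss.item()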
7. The method of claim 6, wherein the face anti-counterfeiting model to be trained comprises a classifier, a noise reducer and multiple levels of feature extraction layers;
the first-level feature extraction layer is used for extracting features of the sample face images in the sample set to obtain corresponding image features;
the noise reducer is used for performing noise reduction processing, in any two adjacent feature extraction layers, on the image features obtained by the upper-level feature extraction layer;
each feature extraction layer other than the first-level feature extraction layer is used for performing feature extraction on the image features that are obtained by its upper-level feature extraction layer and have undergone the noise reduction processing of the noise reducer, to obtain the image features corresponding to the current-level feature extraction layer;
the classifier is used for performing living body detection on the image features obtained by the last-level feature extraction layer to obtain the second living body detection result.
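
For illustration only, and not as part of the claims: a structural sketch of the model of claim 7, with a noise reducer inserted between adjacent feature extraction levels and a classifier on the last-level features. The channel sizes, the use of convolutions as noise reducers, and the pooling choice are placeholders, not values from the patent.

    import torch.nn as nn

    class FaceAntiSpoofNet(nn.Module):
        def __init__(self, channels=(3, 32, 64, 128), num_classes=2):
            super().__init__()
            # Multiple levels of feature extraction layers.
            self.extractors = nn.ModuleList(
                nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                              nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
                for c_in, c_out in zip(channels[:-1], channels[1:])
            )
            # One noise reducer for each pair of adjacent feature extraction layers.
            self.denoisers = nn.ModuleList(
                nn.Conv2d(c, c, 3, padding=1) for c in channels[1:-1]
            )
            self.classifier = nn.Linear(channels[-1], num_classes)

        def forward(self, x):
            feat = self.extractors[0](x)              # first-level feature extraction
            for denoiser, extractor in zip(self.denoisers, self.extractors[1:]):
                feat = extractor(denoiser(feat))      # denoise, then extract next level
            feat = feat.mean(dim=(2, 3))              # global average pooling
            return self.classifier(feat)              # second living body detection result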
8. The method according to claim 6 or 7, wherein before adjusting model parameters of the face anti-counterfeiting model to be trained based on the pseudo tag and the second living body detection result corresponding to the sample face image in the sample set, the method further comprises:
converting a first living body detection result corresponding to the original sample face image into a first target living body detection result based on a preset conversion coefficient, and,
determining the first target living body detection result as a pseudo tag corresponding to the anti-sample face image;
the step of adjusting model parameters of the face anti-counterfeiting model to be trained based on the pseudo tag corresponding to the anti-sample face image in the sample set and the second living body detection result to obtain a final face anti-counterfeiting model comprises the following steps:
converting a second living body detection result corresponding to the anti-sample face image into a second target living body detection result corresponding to the anti-sample face image based on the preset conversion coefficient;
and adjusting model parameters of the face anti-counterfeiting model to be trained based on the pseudo tag corresponding to the anti-sample face image in the sample set and the second target living body detection result to obtain a final face anti-counterfeiting model.
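
For illustration only, and not as part of the claims: a sketch of the conversion step of claim 8. Interpreting the preset conversion coefficient as a softmax temperature, and using the same coefficient for both the first and the second detection results, are assumptions made for this example.

    import torch.nn.functional as F

    def convert_detection_result(logits, conversion_coefficient=2.0):
        # Convert a living body detection result with the preset conversion coefficient.
        return F.softmax(logits / conversion_coefficient, dim=1)

    # The pseudo tag for an anti-sample face image and the converted second result
    # share the same coefficient, so the two distributions remain comparable:
    #   pseudo_tag = convert_detection_result(first_result_logits)
    #   second_target = convert_detection_result(second_result_logits)
    #   loss = F.kl_div(second_target.log(), pseudo_tag, reduction='batchmean')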
9. A living body detection method, characterized by comprising:
inputting a face image to be detected into a pre-trained face anti-counterfeiting model to obtain a third living body detection result, wherein the face anti-counterfeiting model is obtained by training based on a sample face image in a sample set and a corresponding pseudo tag thereof, the sample set at least comprises an anti-sample face image, and the anti-sample face image is obtained by the image processing method according to any one of claims 1 to 5;
and determining whether the face image to be detected is a living face image or not based on the third living body detection result.
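
For illustration only, and not as part of the claims: a sketch of the detection step of claim 9, assuming the trained face anti-counterfeiting model outputs [live, non-live] logits. The 0.5 decision threshold is an illustrative choice.

    import torch
    import torch.nn.functional as F

    def detect_living_face(face_anti_spoof_model, face_image, threshold=0.5):
        face_anti_spoof_model.eval()
        with torch.no_grad():
            probs = F.softmax(face_anti_spoof_model(face_image), dim=1)  # third living body detection result
        return probs[:, 0] > threshold   # True where the image is judged a living face image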
10. An image processing apparatus, comprising:
the first living body detection module is used for inputting an original sample face image into a pre-trained living body detection model to obtain a first living body detection result, wherein the first living body detection result comprises a first living body probability that the original sample face image belongs to a living body face image and a first non-living body probability that the original sample face image belongs to a non-living body face image;
a gradient determining module, configured to determine disturbance gradient information of the original sample face image based on the first living body detection result and model parameters of the living body detection model, wherein the disturbance gradient information is used for representing the degree of association between the degree of disturbance added to the original sample face image and the first living body detection result, and the disturbance gradient information is determined by the degree of association between the degree of disturbance added to the original sample face image and the first living body probability, and the degree of association between the degree of disturbance added to the original sample face image and the first non-living body probability;
and the disturbance module is used for carrying out disturbance processing on the original sample face image based on the disturbance gradient information to obtain an anti-sample face image.
11. A living body detection device, characterized by comprising:
the third living body detection module is used for inputting a face image to be detected into a pre-trained face anti-counterfeiting model to obtain a third living body detection result, wherein the face anti-counterfeiting model is obtained by training based on a sample face image in a sample set and a corresponding pseudo tag thereof, the sample set at least comprises an anti-sample face image, and the anti-sample face image is obtained by the image processing method according to any one of claims 1 to 5;
and the living body determining module is used for determining whether the face image to be detected is a living body face image or not based on the third living body detection result.
12. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 9.
13. A computer readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of claims 1 to 9.
CN202210377809.0A 2022-04-12 2022-04-12 Image processing, training of human face anti-counterfeiting model and living body detection method and device Active CN114821823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210377809.0A CN114821823B (en) 2022-04-12 2022-04-12 Image processing, training of human face anti-counterfeiting model and living body detection method and device

Publications (2)

Publication Number Publication Date
CN114821823A (en) 2022-07-29
CN114821823B (en) 2023-07-25

Family

ID=82534424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210377809.0A Active CN114821823B (en) 2022-04-12 2022-04-12 Image processing, training of human face anti-counterfeiting model and living body detection method and device

Country Status (1)

Country Link
CN (1) CN114821823B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115878783B (en) * 2023-01-03 2023-11-03 北京百度网讯科技有限公司 Text processing method, deep learning model training method and sample generation method
CN117079336B (en) * 2023-10-16 2023-12-22 腾讯科技(深圳)有限公司 Training method, device, equipment and storage medium for sample classification model

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000186982A (en) * 1998-12-24 2000-07-04 Sumitomo Rubber Ind Ltd Indoor tire tread analyzer
CN103765448A (en) * 2011-06-10 2014-04-30 菲利普莫里斯生产公司 Systems and methods for quantifying the impact of biological perturbations
CN111148966A (en) * 2017-07-28 2020-05-12 西斯纳维 Heading determination from magnetic field measured by magnetic sensor
CN110647645A (en) * 2019-08-06 2020-01-03 厦门大学 Attack image retrieval method based on general disturbance
CN111178527A (en) * 2019-12-31 2020-05-19 北京航空航天大学 Progressive confrontation training method and device
CN113159093A (en) * 2020-01-23 2021-07-23 罗伯特·博世有限公司 Methods, systems, and media for determining interpretable masks from neural networks
CN112488172A (en) * 2020-11-25 2021-03-12 北京有竹居网络技术有限公司 Method, device, readable medium and electronic equipment for resisting attack
CN112597885A (en) * 2020-12-22 2021-04-02 北京华捷艾米科技有限公司 Face living body detection method and device, electronic equipment and computer storage medium
CN112580732A (en) * 2020-12-25 2021-03-30 北京百度网讯科技有限公司 Model training method, device, equipment, storage medium and program product
CN112784984A (en) * 2021-01-29 2021-05-11 联想(北京)有限公司 Model training method and device
CN113076557A (en) * 2021-04-02 2021-07-06 北京大学 Multimedia privacy protection method, device and equipment based on anti-attack
CN113076901A (en) * 2021-04-12 2021-07-06 深圳前海微众银行股份有限公司 Model stability interpretation method, device, equipment and storage medium
CN113158900A (en) * 2021-04-22 2021-07-23 中国平安人寿保险股份有限公司 Training method, device and equipment for human face living body detection model and storage medium
CN114220097A (en) * 2021-12-17 2022-03-22 中国人民解放军国防科技大学 Anti-attack-based image semantic information sensitive pixel domain screening method and application method and system
CN114299313A (en) * 2021-12-24 2022-04-08 北京瑞莱智慧科技有限公司 Method and device for generating anti-disturbance and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
一种面向人脸活体检测的对抗样本生成算法 (An adversarial example generation algorithm for face liveness detection); 马玉琨 (Ma Yukun) et al.; 《软件学报》 (Journal of Software); Vol. 30, No. 2; pp. 469-480 *

Also Published As

Publication number Publication date
CN114821823A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
US10776671B2 (en) Joint blur map estimation and blur desirability classification from an image
CN114821823B (en) Image processing, training of human face anti-counterfeiting model and living body detection method and device
CN111340180B (en) Countermeasure sample generation method and device for designated label, electronic equipment and medium
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
CN110610143B (en) Crowd counting network method, system, medium and terminal for multi-task combined training
CN108876847B (en) Image positioning method, device, system and storage medium
CN114140683A (en) Aerial image target detection method, equipment and medium
CN114764868A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114078201B (en) Multi-target class confrontation sample generation method and related equipment
CN111507406A (en) Method and equipment for optimizing neural network text recognition model
CN111144425B (en) Method and device for detecting shot screen picture, electronic equipment and storage medium
CN115995042A (en) Video SAR moving target detection method and device
CN114299358A (en) Image quality evaluation method and device, electronic equipment and machine-readable storage medium
Chen et al. Image steganalysis with multi-scale residual network
CN113723352A (en) Text detection method, system, storage medium and electronic equipment
CN111753729B (en) False face detection method and device, electronic equipment and storage medium
CN113435531A (en) Zero sample image classification method and system, electronic equipment and storage medium
CN116310850B (en) Remote sensing image target detection method based on improved RetinaNet
CN114998755A (en) Method and device for matching landmarks in remote sensing image
CN110399868B (en) Coastal wetland bird detection method
CN117036869B (en) Model training method and device based on diversity and random strategy
CN117392374B (en) Target detection method, device, equipment and storage medium
CN114065868B (en) Training method of text detection model, text detection method and device
CN117496555A (en) Pedestrian re-recognition model training method and device based on scale transformation scene learning
Na et al. Research on Water Surface Environment Perception Method Based on Visual and Positional Information Fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant