CN114463859A - Anti-attack method and device for living body detection, electronic equipment and storage medium

Anti-attack method and device for living body detection, electronic equipment and storage medium

Info

Publication number
CN114463859A
Authority
CN
China
Prior art keywords
image
face
initial reference
probability
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111295940.4A
Other languages
Chinese (zh)
Other versions
CN114463859B (en)
Inventor
杨杰之
蒋宁
王洪斌
吴至友
周迅溢
曾定衡
皮家甜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mashang Xiaofei Finance Co Ltd
Original Assignee
Mashang Xiaofei Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mashang Xiaofei Finance Co Ltd filed Critical Mashang Xiaofei Finance Co Ltd
Priority to CN202111295940.4A priority Critical patent/CN114463859B/en
Publication of CN114463859A publication Critical patent/CN114463859A/en
Application granted granted Critical
Publication of CN114463859B publication Critical patent/CN114463859B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an anti-attack method and apparatus for living body detection, an electronic device and a storage medium, and relates to the field of internet technology. The method comprises the following steps: first, a face image containing a forged face is acquired, and face living body detection is performed on the face image to obtain a detection result. When the detection result is a prosthesis, a reference face image is generated based on the face image, wherein the first image difference between the reference face image and the face image is smaller than a specified difference and the probability that the reference face image belongs to a living body is greater than a specified probability; finally, the reference face image is output as a countermeasure sample. Because the two conditions, namely the image difference and the difference between the living-body probability and the specified probability, are considered simultaneously during adjustment, they can be balanced over repeated adjustments, so that a countermeasure sample with a high attack success rate can be obtained with only a slight change to the face image containing the forged face.

Description

Anti-attack method and device for living body detection, electronic equipment and storage medium
Technical Field
The present application relates to the field of face living body detection technology, and in particular to an anti-attack method and apparatus for living body detection, an electronic device and a storage medium.
Background
With the development of face recognition technology, face living body detection has become a key step in face recognition. Face living body detection is not absolutely safe: a face countermeasure sample generated by deliberately exploiting weaknesses of the face living body detection model can deceive the model and cause it to output an incorrect detection result. At present, however, the success rate of attacks using such face countermeasure samples is low, and they are not sufficiently deceptive.
Disclosure of Invention
In view of the above problems, the present application provides an anti-attack method and apparatus for living body detection, an electronic device and a storage medium, which can solve the above problems.
In a first aspect, an embodiment of the present application provides an anti-attack method for living body detection, the method comprising: acquiring a face image containing a forged face; performing face living body detection on the face image to obtain a detection result; if the detection result is a prosthesis, generating a reference face image based on the face image, wherein a first image difference between the reference face image and the face image is smaller than a specified difference and the probability that the reference face image belongs to a living body is greater than a specified probability; and outputting the reference face image as a countermeasure sample.
In a second aspect, an embodiment of the present application provides an anti-attack apparatus for living body detection, the apparatus comprising an acquisition module, a detection module, a determination module and an output module. The acquisition module is used for acquiring a face image containing a forged face; the detection module is used for performing face living body detection on the face image to obtain a detection result; the determination module is used for generating a reference face image based on the face image if the detection result is a prosthesis, wherein a first image difference between the reference face image and the face image is smaller than a specified difference and the probability that the reference face image belongs to a living body is greater than a specified probability; and the output module is used for outputting the reference face image as a countermeasure sample.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory; one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the above-described method.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
In a fifth aspect, the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to implement the above method.
According to the embodiments of the present application, face living body detection is performed on a face image containing a forged face; when the detection result is a prosthesis, a reference face image can be generated based on the face image, wherein the first image difference between the reference face image and the face image is smaller than the specified difference and the probability that the reference face image belongs to a living body is greater than the specified probability; finally, the reference face image is output as a countermeasure sample. The countermeasure sample is generated from the face image containing the forged face, and because the two conditions, namely the image difference and the difference between the living-body probability and the specified probability, are considered simultaneously during adjustment, they can be balanced over repeated adjustments, so that a countermeasure sample with a high attack success rate can be obtained with only a slight change to the face image containing the forged face.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of an application environment of the anti-attack method for living body detection according to an embodiment of the present application;
Fig. 2 is a flow chart of an anti-attack method for living body detection according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a face image containing a forged face according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a newly generated face image according to an embodiment of the present application;
Fig. 5 is a flow chart of an anti-attack method for living body detection according to another embodiment of the present application;
Fig. 6 is a flow chart of an anti-attack method for living body detection according to another embodiment of the present application;
Fig. 7 is a block flow diagram of an anti-attack method for living body detection according to an embodiment of the present application;
Fig. 8 is a flow chart of an anti-attack method for living body detection according to still another embodiment of the present application;
Fig. 9 is a flow chart of optimizing a face image by a gradient descent method according to an embodiment of the present application;
Fig. 10 is a block flow diagram of an anti-attack method for living body detection according to yet another embodiment of the present application;
Fig. 11 is a block diagram of an anti-attack apparatus for living body detection according to an embodiment of the present application;
Fig. 12 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 13 is a block diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
With the rapid development of the internet industry, artificial intelligence technology centered on machine learning and deep learning has in recent years been widely applied in fields such as video and image processing, speech recognition and natural language processing, and its application in face recognition is becoming ever more extensive. Driven by artificial intelligence and big data, face recognition shows huge development potential, its application scenarios keep expanding, and it has gradually moved from public fields such as security into commercial fields such as payment and identity verification. However, face recognition is a double-edged sword: as the technology evolves and its applications spread, it also brings information security problems such as data leakage and invasion of personal privacy. In particular, in anti-attack techniques aimed at face living body detection, a face countermeasure sample can be generated by deliberately exploiting weaknesses of the face living body detection model, and such a sample can deceive the model so that it outputs an incorrect detection result. In face living body detection, actively generating countermeasure samples and using them to verify or defensively train the face living body detection model can improve the recognition accuracy and attack resistance of the model. However, the attack success rate of currently generated countermeasure samples is low and they are not sufficiently deceptive to the face living body detection model, so the effect is poor when such samples are used to verify the model or to perform further processing such as defense training.
To solve the above problems, the inventors of the present application found, after careful research, that if a countermeasure sample is generated based on a face image containing a forged face, and it is determined whether the image difference between the newly generated face image and the original face image and the probability that the newly generated face image belongs to a living body satisfy certain conditions, a countermeasure sample with a high attack success rate can be obtained with only a slight change to the face image containing the forged face.
In order to better understand the method, the apparatus, the electronic device, and the storage medium for resisting attacks in living body detection provided by the embodiments of the present application, an application environment suitable for the embodiments of the present application is described below.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment of a method for anti-attack of liveness detection according to an embodiment of the present application. Illustratively, the attack resisting method, the attack resisting device, the electronic equipment and the storage medium for living body detection provided by the embodiment of the application can be applied to the electronic equipment. Alternatively, the electronic device may be, for example, a server 110 as shown in fig. 1, and the server 110 may be connected to the image capturing device 120 through a network. Wherein the network is used to provide a medium for a communication link between server 110 and image capture device 120. The network may include various connection types, such as wired communication links, wireless communication links, and so on, which are not limited by the embodiments of the present application.
Optionally, in other embodiments, the electronic device may also be a smartphone, tablet, laptop, or the like. At this time, the image capturing function of the image capturing device 120 may be integrated into an electronic device, for example, a camera of a smart phone, a tablet, a notebook, etc. may be used to capture an image.
It should be understood that the server 110, network, and image capture device 120 in fig. 1 are merely illustrative. There may be any number of servers, networks, and image capture devices, as desired for implementation. Illustratively, the server 110 may be a physical server, a server cluster composed of a plurality of servers, or the like, and the image capturing device 120 may be a mobile phone, a tablet, a camera, a notebook, or the like. It is understood that embodiments of the present application may also allow multiple image capture devices 120 to access server 110 simultaneously.
In some embodiments, the image capturing device 120 may send the captured images to the server 110 through the network, and after the electronic device receives the images, the images may be processed by the method for resisting attack of living body detection according to the embodiment of the present application. Illustratively, the images may include face images of forged faces, and the face images are used for adjusting the forged faces in the face images to generate countermeasure samples.
The above application environments are only examples for facilitating understanding, and it should be understood that the embodiments of the present application are not limited to the above application environments.
The method, the apparatus, the electronic device and the storage medium for resisting attack of living body detection provided by the embodiments of the present application will be described in detail by specific embodiments.
Referring to fig. 2, a flow chart of an attack-fighting method for live detection according to an embodiment of the present application is shown. As will be described in detail with respect to the flow shown in fig. 2, the method for countering an attack of living body detection may specifically include the following steps:
step S210: and acquiring a face image containing a forged face.
In application scenarios such as security protection, face payment and the like, a face image of a user is usually acquired in real time, then the face image is identified, and the identity of the user is verified according to the face features of the user. In general, before the identity of the face in the face image is verified, whether the currently detected face is a real person or not is determined through face living body detection, so that the identity of a user is prevented from being falsely used through a photo, a face mask and the like, and the safety of user information can be ensured.
In face live body detection, a face image is detected, and it is possible to identify whether the face image is a face image acquired from a real person (corresponding to a detection result of a live body) or a face image generated from a forged face (corresponding to a detection result of a false body). In the anti-attack technology of human face living body detection, a human face image containing a forged human face can be processed to obtain an anti-sample. The countermeasure sample can simulate a face image of a real person, after the countermeasure sample is generated, the countermeasure sample can be used for replacing the original face image containing a forged face to carry out face living body detection, at the moment, if the countermeasure sample is detected as a living body, the countermeasure attack is successful, and the purpose of deceiving the face living body detection is achieved.
Alternatively, in the face living body detection process, a face living body detection model may be used to perform living body detection on the input face image, and the face living body detection model may be a machine learning model trained in advance on a large amount of training data. In real application scenarios of face living body detection, countermeasure samples reduce the accuracy of the face living body detection model. However, in the process of training the face living body detection model, if countermeasure samples are actively generated and then used for accuracy verification or defense training of the model, the recognition accuracy of the face living body detection model and its ability to resist attacks can be improved.
In some embodiments, the forged face may be a printed paper face, a face photograph, a face mask, or the like; for example, the forged face may be photographed by an image acquisition device such as a camera to obtain a face image containing the forged face. Alternatively, the forged face may be a virtual face, such as an avatar generated from a real face.
Step S220: and carrying out face living body detection on the face image to obtain a detection result.
In the embodiment of the application, a face image containing a forged face can be input into a face living body detection system for face living body detection, and the face living body detection system then outputs a detection result for the face image. Alternatively, in the face living body detection system, a face living body detection model whose parameters have been trained in advance may be used to detect whether the input face image is a living body or a prosthesis. It can be understood that the detection result of face living body detection is either living body or prosthesis. A living-body result indicates that the input face image is the face image of a real user; a prosthesis result indicates that the input face image is not the face image of a real user and may be a prosthetic face disguised as a real user.
In some embodiments, the face living body detection model may extract and analyze image feature information of the face image, such as surface texture, micro-motion and facial features, through designed operators or a neural network to complete face living body detection. For example, the face image shown in fig. 3 is a face mask. A face mask usually presents a smooth surface texture, while the surface texture of a real face is usually fine because of skin texture. By extracting the surface texture of the face image in fig. 3, the face image can therefore be determined to be a prosthesis based on this large difference in surface texture between a face mask and a real face.
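As a minimal, illustrative sketch of such a check, assuming a PyTorch-style binary classifier (the function name, preprocessing and threshold are assumptions for illustration, not taken from the application):

```python
import torch
import torch.nn.functional as F

def detect_liveness(model: torch.nn.Module, image: torch.Tensor, threshold: float = 0.5):
    """Run face living body detection on a single image tensor (C, H, W) in [0, 1].

    Returns the probability that the image belongs to a living body and the
    corresponding detection result ("living body" or "prosthesis").
    """
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))   # assumed shape (1, 2): [prosthesis, living body]
        probs = F.softmax(logits, dim=1)[0]
        p_live = probs[1].item()             # detection probability of the living-body label
    result = "living body" if p_live > threshold else "prosthesis"
    return p_live, result
```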
Step S230: and if the detection result is a false body, generating a reference face image based on the face image, wherein the first image difference between the reference face image and the face image is smaller than a specified difference, and the probability that the reference face image belongs to a living body is larger than a specified probability.
In some embodiments, if the detection result obtained after performing face living body detection on the face image containing the forged face is a living body, it means that the forged face differs little from a real face and can already pass itself off as real, so when the detection result is a living body the face image containing the forged face can be output directly as a countermeasure sample.
In the embodiment of the application, if the detection result is a false body, it indicates that there is a large difference between the forged face and the real face at present, and the face image cannot deceive the face live body detection. At this time, the face image including the forged face may be adjusted to obtain a reference face image, and in the adjustment process, it may be determined whether a first image difference between the newly generated face image and the original face image is smaller than a specified difference, and whether a probability that the newly generated face image belongs to a living body is larger than a specified probability. If the condition is satisfied, the newly generated face image may be used as the reference face image.
In some embodiments, in the process of generating the reference face image based on the face image, disturbance information may be added to the face image to generate the reference face image. For example, image features affecting the face living body detection, which determine the face image as a prosthesis, may be extracted, and then feature values corresponding to the image features may be adjusted on the face image. For example, if a face image including a face mask is detected as a false body due to the surface texture, the feature value of the surface texture may be adjusted, for example, from 0.1 to 0.3 (assuming that the smaller the feature value of the surface texture, the smoother the surface texture appears), and a newly generated face image may be obtained after the feature value adjustment. Then, the newly generated face image is judged, and whether the newly generated face image meets the conditions or not is determined.
It is understood that there may be a plurality of image features simultaneously affecting the detection result of the face live body detection. For example, in the case of a face mask, in addition to the surface texture, facial features of the face mask may also cause the detection result to be a false body (e.g., loss of facial features at the eye positions of the face mask, etc.). Therefore, when the face image is adjusted, a plurality of image features can be adjusted synchronously. For example, the surface texture of the face mask, the facial features of the eye positions, etc. are simultaneously adjusted to obtain a newly generated face image, as shown in fig. 4.
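Purely to illustrate the idea of nudging several influential image features at once, the following hypothetical sketch (the feature names, values and the render_from_features helper are assumptions for illustration, not part of the application) adjusts the feature values that led to a prosthesis result and then re-runs living body detection on the regenerated image:

```python
def adjust_features(features: dict, step: float = 0.2) -> dict:
    """Increase the feature values that made the image look like a prosthesis.

    `features` is a hypothetical mapping such as
    {"surface_texture": 0.1, "eye_detail": 0.2}; larger values are assumed
    to look more like a real face.
    """
    return {name: min(value + step, 1.0) for name, value in features.items()}

# Hypothetical usage:
# new_features = adjust_features({"surface_texture": 0.1, "eye_detail": 0.2})
# new_image = render_from_features(face_image, new_features)   # assumed helper
# p_live, result = detect_liveness(model, new_image)
```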
Optionally, in the process of adjusting the face image to obtain the reference face image, the condition of the newly generated face image may be determined after each adjustment, or the condition of the newly generated face image may be determined after multiple adjustments, which is not limited in the embodiment of the present application.
When the newly generated face image is determined to meet the conditions, its difference from the original face image is small, meaning that only a slight disturbance has been applied to the original face image; at the same time, because the probability that it belongs to a living body is greater than the specified probability, the success rate of an attack using it can be determined to be high. Therefore, in the embodiment of the application the newly generated face image can be used as the reference face image, and the finally obtained reference face image satisfies both conditions.
It should be noted that if the difference between the reference face image generated from the original face image and the original face image is large, the image quality may be affected; for example, the definition of the reference face image may be degraded, which may in turn affect the face identity recognition result after the attack against living body detection succeeds. For example, in a face payment scenario, a paper photo of user A may be used as the face image of a forged face to attack user A's payment account. If the difference between the reference face image generated from the paper photo and the original photo is large, the face presented by the reference face image may no longer be recognized as user A even though it passes for a real face (for example, because the excessive difference effectively changes the face compared with the original face image). In that case, even if the reference face image passes face living body detection, identity verification using it may still fail.
In addition, by limiting the first image difference between the two face images to be smaller than a specified difference, the amount of calculation in the image adjustment process can also be reduced.
Illustratively, the specified difference and the specified probability may be set in advance. Alternatively, the value of the specified difference, the specified probability, may be set to a fixed size according to the actual situation of the face live detection. Alternatively, the designated probability may be set as the probability that the reference face image belongs to the prosthesis, that is, whether the probability that the reference face image belongs to the living body is greater than the probability that the reference face image belongs to the prosthesis needs to be judged, and when it is judged that the first image difference between the reference face image and the original face image is smaller than the designated difference and the probability that the reference face image belongs to the living body is greater than the probability that the reference face image belongs to the prosthesis, the reference face image may be output as the countermeasure sample.
Step S240: outputting the reference face image as a countermeasure sample.
In the embodiment of the application, because the reference face image is generated based on the original face image containing the forged face, the first image difference between the reference face image and the original face image is smaller than the specified difference, and the probability that the reference face image belongs to a living body is greater than the specified probability, a reference face image with a high probability of successfully attacking face living body detection can be obtained with only a slight change to the original face image. The reference face image can then be output as a countermeasure sample, so that a countermeasure sample with a high attack success rate is finally obtained.
In some embodiments, after the countermeasure sample is obtained, it may be applied to attack scenarios for face identity recognition, for example security testing of security and face payment scenarios, or used as training data to verify the accuracy of a face living body detection model and to perform defense training, and so on. Because the attack success rate of the countermeasure sample generated by the embodiment of the application is high, it poses a strong challenge to both the face recognition model and the face living body detection model in any of the above scenarios and can act as a positive incentive for technical improvement of both models.
In summary, the anti-attack method for living body detection provided in this embodiment first obtains a face image containing a forged face and then performs face living body detection on it to obtain a detection result. When the detection result is a prosthesis, a reference face image is generated based on the face image, wherein the first image difference between the reference face image and the face image is smaller than the specified difference and the probability that the reference face image belongs to a living body is greater than the specified probability. Finally, the reference face image is output as a countermeasure sample. Because the two conditions, namely the image difference and the difference between the living-body probability and the specified probability, are considered simultaneously during adjustment, they can be balanced over repeated adjustments, so that a countermeasure sample with a high attack success rate can be obtained with only a slight change to the face image containing the forged face, as sketched below.
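The overall flow of steps S210 to S240 can be summarized in the following sketch; the helper functions (sketched before and after this point) and the threshold values are assumptions used only for illustration:

```python
def generate_countermeasure_sample(face_image, model,
                                   specified_difference: float = 0.2,
                                   specified_probability: float = 0.5):
    """Illustrative sketch of steps S210-S240 (helpers and thresholds are assumed).

    Returns a countermeasure sample, or None if no suitable reference image is found.
    """
    p_live, result = detect_liveness(model, face_image)           # step S220
    if result == "living body":
        return face_image                                          # already deceptive
    reference = generate_reference_image(face_image, model)        # step S230, see later sketch
    if (image_difference(face_image, reference) < specified_difference
            and detect_liveness(model, reference)[0] > specified_probability):
        return reference                                           # step S240
    return None
```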
In some embodiments, on the basis of the foregoing embodiments, this embodiment provides an anti-attack method for living body detection in which face living body detection is performed on an initial reference image predicted from the face image containing the forged face, the detection probability that the initial reference image belongs to a living body is determined, the initial reference image is then adjusted based on the second image difference between the face image and the initial reference image and the difference between the detection probability and the specified probability to obtain a new initial reference image, and the process returns to performing face living body detection and the subsequent operations on the initial reference image until the initial reference image has been adjusted into the reference face image. In this way a reference face image with a high attack success rate can be obtained while the generated disturbance information remains slight. Please refer to fig. 5, which shows a flowchart of an anti-attack method for living body detection according to another embodiment of the present application. Generating the reference face image based on the face image may specifically include the following steps:
step S510: and predicting an initial reference image based on the face image.
In the embodiment of the application, if a reference face image is to be generated based on the face image containing the forged face, an initial reference image can be obtained by predicting the face image, and then the initial reference image is adjusted.
Optionally, the face image containing the forged face may be used directly as the initial reference image, or the face image containing the forged face may be subjected to preset processing and the result used as the initial reference image, or a face image related to the face image containing the forged face may be preset as the initial reference image.
Illustratively, the preset processing for the face image containing the forged face may be to remove unnecessary or redundant interference information in the face image, such as image background, noise affecting image detection, and other image noise that may exist. Taking the user picture as an example, the preset processing may be to recognize a face image of a specific user from the user picture (for example, to recognize a face of the specific user from a plurality of persons on the user picture), and the like.
Step S520: and carrying out face living body detection on the initial reference image, and determining the detection probability of the initial reference image belonging to a living body.
In the embodiment of the application, in the process of adjusting the initial reference image, the probability that the initial reference image is detected as a living body may be predicted first, and then the detection probability that the initial reference image belongs to the living body is obtained.
In some embodiments, the face image may be labeled with a classification label; for example, the classification label of a face image detected as a living body may be denoted as label t (and the classification label of a face image detected as a prosthesis as label i). Further, when the probability P that the initial reference image is detected as a living body is predicted, the initial reference image may be input into a classification model for classification to obtain the probability Pt that the classification label of the initial reference image is label t (and correspondingly the probability Pi that its classification label is label i), so that the detection probability that the initial reference image belongs to a living body may be obtained as P = Pt.
Step S530: and adjusting the initial reference image to obtain a new initial reference image based on the second image difference between the face image and the initial reference image and the difference between the detection probability and the designated probability, and returning to execute face living body detection and subsequent operation on the initial reference image until the initial reference image is adjusted to be the reference face image.
In some embodiments, the second image difference may be obtained by directly comparing the pixel values of the face image and the initial reference image, or by calculating an image distance between the face image and the initial reference image. Illustratively, the face image and the initial reference image may each be converted into an image vector, and the distance between the two image vectors may be calculated by a metric function D(·), where the metric function D(·) may be defined by the L2 norm.
In some embodiments, the detection probability P that the initial reference image belongs to a living body may be determined by the foregoing steps, and then compared with the specified probability Px to obtain the difference ΔP between the detection probability P and the specified probability Px; for example, ΔP may be equal to P - Px. The specified probability Px may be preset, and it may also be set equal to the probability that the reference face image belongs to a prosthesis, for example Px = Pi. Alternatively, in some exemplary embodiments, face living body detection of the reference face image may only produce one of the two results, living body or prosthesis, in which case Pi may be equal to 1 - Pt.
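A minimal sketch of these two quantities, assuming images are given as tensors normalized to [0, 1] (the function names are illustrative, not from the application):

```python
import torch

def image_difference(image_a: torch.Tensor, image_b: torch.Tensor) -> float:
    """Second image difference D(a, b): squared L2 norm of the pixel-wise difference."""
    return float(torch.sum((image_a - image_b) ** 2))

def probability_gap(p_live: float, specified_probability: float) -> float:
    """Difference ΔP = P - Px; positive once the image is judged more likely to be
    a living body than the specified probability (e.g. Px = Pi = 1 - Pt)."""
    return p_live - specified_probability
```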
Then, the initial reference image may be adjusted based on the second image difference and the difference ΔP between the detection probability and the specified probability to obtain a new initial reference image. It can be understood that after the first adjustment the two conditions, namely that the second image difference is smaller than the specified difference and that the difference ΔP is greater than 0 (i.e. that the probability P of belonging to a living body is greater than the specified probability Px), may not yet be satisfied simultaneously. Therefore, when the two conditions cannot be satisfied simultaneously, the initial reference image needs to be adjusted repeatedly, that is, the process returns to performing face living body detection and the subsequent operations on the initial reference image, until both conditions are satisfied at the same time; the initial reference image obtained at that point is the reference face image.
The following description will take the face image containing the forged face as the initial reference image as an example.
Assume that the specified difference is 0.2 and the specified probability Px is 0.5. Before the initial reference image is adjusted for the first time, the second image difference is 0, which is smaller than the specified difference; however, the initial reference image at this point is the face image containing the forged face, which has already been detected as a prosthesis in the foregoing embodiment, so its detection probability P may be determined to be 0 and smaller than the specified probability, and the face image containing the forged face therefore needs to be adjusted. The image parameters for each adjustment of the initial reference image may be regarded as disturbance information δ, which may be determined, for example, from the second image difference obtained after the previous adjustment and from the difference ΔP between the detection probability and the specified probability. After the initial reference image has been adjusted repeatedly, once the second image difference between the new initial reference image and the face image containing the forged face is smaller than the specified difference and the detection probability that the new initial reference image belongs to a living body is greater than the specified probability, the new initial reference image obtained at that point can be used as the reference face image; the second image difference at this point coincides with the first image difference.
It can be understood that the finally obtained reference face image balances the first image difference with respect to the original face image against the probability difference, so a countermeasure sample with a high attack success rate can be obtained while the generated disturbance information remains slight. In addition, by replacing the forged face in the face image, for example replacing the face mask with a printed paper face, and launching the attack again, the finally obtained countermeasure sample can be made more targeted.
In some embodiments of the present application, on the basis of the foregoing embodiments, the initial reference image may be adjusted repeatedly to obtain a new initial reference image as follows. Please refer to fig. 6, which shows a flowchart of an anti-attack method for living body detection according to another embodiment of the present application. The method may specifically comprise the following steps:
step S610: and judging whether the adjusting times are smaller than a designated value, wherein the adjusting times are times for executing adjusting operation, and the adjusting operation is adjusting the initial reference image to obtain a new initial reference image.
In the embodiment of the present application, the number of adjustments is the number of times the operation of adjusting the initial reference image to obtain a new initial reference image has been performed, i.e. the number of adjustments is accumulated as the adjustment operation is performed. Optionally, the number of adjustments may be initialized before the initial reference image is adjusted for the first time, for example initialized to 0, so that when the initial reference image is adjusted for the first time the number of adjustments is smaller than the specified value and the first adjustment can be performed directly.
In some embodiments, the number of times the initial reference image is adjusted may be limited. For example, the number of adjustments can be bounded by a specified value, so that a locally optimal reference face image is obtained after the initial reference image has been adjusted a limited number of times. Therefore, each time the initial reference image is adjusted to obtain a new initial reference image, the number of adjustments can be accumulated, and at each adjustment the accumulated number of adjustments is compared with the specified value to determine which is larger.
Step S620: and if the adjusting times are less than the specified numerical value, adjusting the initial reference image to obtain a new initial reference image based on the second image difference and the difference between the detection probability and the specified probability.
It should be noted that, when the adjustment times obtained by comparison are smaller than the specified value, the current adjustment times are very small. If the initial reference image is adjusted only a few times, because the initial reference image is predicted from a face image containing a forged face, the face contained in the new initial reference image obtained through a few adjustments is still close to the forged face but not a real face, that is, the probability that the new initial reference image obtained at this time is detected as a false body is high, and therefore, when the adjustment times are less than a specified value, the initial reference image should be continuously adjusted based on the second image difference between the face image and the new initial reference image obtained after each adjustment, and based on the difference between the detection probability that the initial reference image belongs to a living body and the specified probability.
Alternatively, when the designated value is set to a suitable value, it may be indicated that the initial reference image obtained after adjustment for a sufficient number of times can achieve a high attack success rate requirement, and in this case, the values of the designated difference and the designated probability may be set according to the size of the designated value. In some embodiments, when the number of times of adjustment reaches the specified value, the new initial reference image obtained after adjustment may be used as the reference face image. For example, the designated value may be set to 100, and after the initial reference image is adjusted 100 times, the new initial reference image may be used as the local optimal solution of the reference face image.
Step S630: increasing the number of adjustments.
Optionally, the initial reference image may be adjusted to obtain a new initial reference image, and the adjustment times may be accumulated to increase the adjustment times. For example, 1 may be added to the adjustment count acquired in the foregoing step to obtain a new adjustment count.
Step S640: and if the adjusting times are not less than the designated numerical value, judging whether the detection probability is greater than the designated probability.
In other embodiments, with the number of adjustments of the initial reference image limited, the newly generated initial reference image may be checked once every specified number of adjustments; that is, after every specified number of adjustments it is determined whether the detection probability that the initial reference image belongs to a living body is greater than the specified probability, so that the finally obtained new initial reference image has a higher probability of attacking successfully, and the number of adjustments is then accumulated again, thereby achieving cyclic judgment.
Step S650: and if the initial reference image is larger than the designated probability, taking the current initial reference image as a reference face image.
Step S660: and if the number of times of adjustment is not larger than the specified probability, adjusting the initial reference image to obtain a new initial reference image based on the second image difference and the difference between the detection probability and the specified probability, and clearing the number of times of adjustment.
In this embodiment, when the adjustment times reach a specified value, if it is determined that the detection probability is greater than the specified probability, which indicates that the currently obtained initial reference image can satisfy the condition of high attack success rate, the current initial reference image may be used as the reference face image.
If the detection probability is judged to be not greater than the designated probability, the probability that the attack of the currently obtained initial reference image is successful is low, and the initial reference image needs to be continuously adjusted, namely the initial reference image is continuously adjusted based on the difference of the second image between the face image and the initial reference image and the difference between the detection probability and the designated probability. In addition, in order to achieve the purpose of determining whether the detection probability that the initial reference image belongs to the living body is greater than the specified probability after each adjustment of the specified value, the adjustment times need to be cleared, so that the adjustment times can be accumulated again in the next round of condition determination.
In some embodiments, as shown in fig. 7, after the initial reference image is predicted based on the face image, the initial reference image may be adjusted to obtain the reference face image. For example, before each adjustment, the detection probability that the initial reference image belongs to the living body may be first determined, and then the number of times of adjustment to perform an adjustment operation, which is an operation of adjusting the initial reference image to obtain a new initial reference image, is acquired. Optionally, before the first adjustment, the adjustment number may be initialized, for example, the adjustment number may be initialized to 0.
Next, it is determined whether the number of adjustments is smaller than the specified value. If the specified value has not been reached (i.e. the number of adjustments is smaller than the specified value), the initial reference image is adjusted based on the second image difference and the probability difference to obtain a new initial reference image, the number of adjustments is increased, and the process returns to determining the detection probability that the initial reference image belongs to a living body and the subsequent processing. Once the number of adjustments reaches the specified value (i.e. it is no longer smaller than the specified value), it is determined whether the detection probability that the new initial reference image belongs to a living body is greater than the specified probability. If it is greater than the specified probability, the new initial reference image is taken as the reference face image; if it is not greater than the specified probability, the new initial reference image does not yet meet the requirement, so the number of adjustments is reset to zero, the initial reference image continues to be adjusted, and the number of adjustments is counted again, until the detection probability is greater than the specified probability.
Through this cyclic process, every time the specified number of adjustments has been made it is determined whether the detection probability that the current initial reference image belongs to a living body is greater than the specified probability; if not, the current initial reference image continues to be adjusted and the number of adjustments is counted again, so that the detection probability is checked again when the number of adjustments once more reaches the specified value, until the detection probability is greater than the specified probability and the reference face image is output.
In this way the number of adjustments of the initial reference image and the detection probability that it belongs to a living body after each round of adjustment are constrained simultaneously, so a reference face image with a high attack success rate can be obtained after a limited number of image adjustments, and limiting the number of adjustments also keeps the change to the initial reference image within a small range. Finally, after the reference face image is output as a countermeasure sample, a countermeasure sample with a high attack success rate is obtained with only a slight change to the face image containing the forged face.
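A compact sketch of this counting loop (the fig. 7 flow), reusing the illustrative helpers above; the adjustment step adjust_image, the fallback after max_rounds and the concrete values are assumptions, not taken from the application:

```python
def generate_reference_image(face_image, model,
                             specified_value: int = 100,
                             specified_probability: float = 0.5,
                             max_rounds: int = 50):
    """Illustrative sketch: adjust the initial reference image, and once every
    `specified_value` adjustments check whether its living-body probability
    exceeds the specified probability."""
    reference = face_image.clone()                       # initial reference image
    for _ in range(max_rounds):                          # each round = specified_value adjustments
        adjustments = 0
        while adjustments < specified_value:
            p_live, _ = detect_liveness(model, reference)
            diff = image_difference(face_image, reference)
            gap = probability_gap(p_live, specified_probability)
            reference = adjust_image(reference, diff, gap)   # assumed adjustment step
            adjustments += 1
        p_live, _ = detect_liveness(model, reference)
        if p_live > specified_probability:
            return reference                             # use as the reference face image
    return reference                                     # fallback after max_rounds (assumption)
```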
In other embodiments of the present application, in the process of repeatedly adjusting the initial reference image to obtain a new initial reference image, condition judgment may be performed on a newly generated face image after each adjustment. Specifically, please refer to fig. 8, which shows a flowchart of a method for countering an attack of a liveness detection according to still another embodiment of the present application. The method specifically comprises the following steps:
step S810: determining that the initial reference image has been adjusted.
It is to be understood that the detection probability that the new initial reference image obtained at the time belongs to the living body may be conditionally judged at each adjustment, and therefore, it may be judged whether or not the detection probability is greater than the specified probability after it is determined that the initial reference image has been adjusted.
Step S820: and judging whether the detection probability is greater than the specified probability.
Step S830: and if the initial reference image is larger than the designated probability, taking the current initial reference image as a reference face image.
After determining that the initial reference image has been adjusted, a determination may be made as to a probability of detection that the obtained new initial reference image belongs to a living body. In some embodiments, once it is determined that the detection probability that the new initial reference image belongs to the living body is greater than the specified probability, it may be indicated that the probability that the attack of the new initial reference image obtained at this time is successful has reached the requirement, the adjustment of the initial reference image may be finished, and the new initial reference image obtained at this time is used as the reference face image, so that the countermeasure sample with a high attack success rate may be obtained.
Step S840: and if the initial reference image is not larger than the specified probability, adjusting the initial reference image to obtain a new initial reference image based on the difference between the face image and the second image and the difference between the detection probability and the specified probability.
In other embodiments, if it is determined that the detection probability that the new initial reference image belongs to the living body is smaller than the specified probability, at this time, the attack success rate of the new initial reference image does not meet the requirement yet, the initial reference image needs to be continuously adjusted, the difference between the second image and the specified probability needs to be continuously adjusted, and until the detection probability is greater than the specified probability, the current initial reference image is used as the reference face image.
It can be understood that, in each adjustment, the initial reference image obtained in the previous adjustment is adjusted again, so that it can be ensured that only a small change needs to be made to the initial reference image every time, and finally, after a limited number of adjustments, the obtained reference face image can not only meet the requirement of high attack success rate, but also ensure that the difference between the reference face image and the original face image is small.
Optionally, in an embodiment of the present application, in the process of adjusting the initial reference image to obtain a new initial reference image, the local optimal solution of the reference face image may also be obtained by establishing an objective function and performing iterative solution by using a gradient descent algorithm. Specifically, please refer to fig. 9, which illustrates a flowchart of optimizing a face image by using a gradient descent method according to an embodiment of the present application. The method may specifically comprise the steps of:
step S910: an objective function is established based on the second image difference and based on a difference between the detection probability and a specified probability.
In some embodiments, the objective function may combine the second image difference between the face image and the initial reference image with the difference between the detection probability that the initial reference image belongs to a living body and the specified probability, i.e., the objective function may be

\( \min_{\tilde{x}} \; D(x, \tilde{x}) + c \cdot f(\tilde{x}) \)

where \( D(x, \tilde{x}) \) may be a metric function representing the second image difference between the face image x containing the forged face and the initial reference image \( \tilde{x} \); \( \min_{\tilde{x}} D(x, \tilde{x}) \) indicates that the second image difference between the face image x and the initial reference image \( \tilde{x} \) is to be made as small as possible; \( f(\tilde{x}) \) represents the difference between the probability that the initial reference image \( \tilde{x} \) belongs to a living body and the specified probability; and c is a hyperparameter greater than 0. In some embodiments, the parameter c may be searched for by using a grid search, and the range of the grid may be 1e-1 to 1e10, for example.
Alternatively, the metric function may be defined by an L2 norm, in which case

\( D(x, \tilde{x}) = \lVert x - \tilde{x} \rVert_2^2 \)
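As a hedged sketch only, the combination of the two terms could be written as follows; the callable probability_gap stands in for the term f described in the following paragraphs and is an assumption of this sketch, not an element defined by the embodiment.

    import numpy as np

    def objective(x, x_ref, c, probability_gap):
        """Objective: second image difference plus a weighted probability-gap term.

        x, x_ref        : face image and initial reference image, float arrays in [0, 1].
        c               : hyperparameter > 0 (e.g. grid-searched over 1e-1 ... 1e10).
        probability_gap : callable returning f(x_ref), the difference between the
                          living-body probability of x_ref and the specified probability.
        """
        second_image_difference = np.sum((x - x_ref) ** 2)  # squared L2 metric D(x, x_ref)
        return second_image_difference + c * probability_gap(x_ref)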
Alternatively, the specified probability may be set to the probability that the initial reference image belongs to a prosthesis. Further, the classification label of a face image detected as a living body may be denoted as label t, and the classification label of a face image detected as a prosthesis may be denoted as label i. Further, when the probability that the initial reference image is detected as a living body is predicted, the initial reference image may be input into a classification model for classification, yielding the probability \( Z(\tilde{x})_t \) that the classification label of the initial reference image is label t and the probability \( Z(\tilde{x})_i \) that the classification label of the initial reference image is label i. Then \( f(\tilde{x}) \) may be expressed as

\( f(\tilde{x}) = \max\bigl( Z(\tilde{x})_i - Z(\tilde{x})_t,\; -k \bigr) \)

where k is a hyperparameter greater than 0.

Thus, the objective function may be

\( \min_{\tilde{x}} \; \lVert x - \tilde{x} \rVert_2^2 + c \cdot \max\bigl( Z(\tilde{x})_i - Z(\tilde{x})_t,\; -k \bigr) \)
In some embodiments, the probabilities for the initial reference image may be predicted by applying a Sigmoid activation function in the classification model described above; in that case, the term may be obtained as

\( f(\tilde{x}) = \max\bigl( \sigma(Z(\tilde{x})_i) - \sigma(Z(\tilde{x})_t),\; -k \bigr) \)

where \( \sigma(\cdot) \) denotes the Sigmoid function.
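Purely as an illustrative sketch, the probability-gap term with the Sigmoid choice described above might look as follows; the logit arguments and the exact placement of the Sigmoid are assumptions of this sketch rather than a definitive reading of the embodiment.

    import numpy as np

    def probability_gap(logit_live, logit_fake, k):
        """f-term max(p_i - p_t, -k): stays positive while the prosthesis label wins.

        logit_live, logit_fake : classifier outputs for label t (living body) and
                                 label i (prosthesis); a Sigmoid turns each output
                                 into a probability, as one possible choice.
        k : hyperparameter > 0 controlling how confidently label t must win.
        """
        p_t = 1.0 / (1.0 + np.exp(-logit_live))   # probability of label t (living body)
        p_i = 1.0 / (1.0 + np.exp(-logit_fake))   # probability of label i (prosthesis)
        return max(p_i - p_t, -k)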
Alternatively, to facilitate the calculation, the initial reference image \( \tilde{x} \) participating in the calculation may be normalized before the objective function is solved. Illustratively, since the value of a pixel point in an image usually lies in the range [0, 255], the normalized value should lie in the range [0, 1]. In order to prevent the pixel values of the adjusted initial reference image from exceeding this range, a hyperbolic tangent function tanh() and a variable w may be introduced, and the initial reference image may therefore be described as:

\( \tilde{x} = \tfrac{1}{2}\bigl( \tanh(w) + 1 \bigr) \)

After normalization, the objective function can be described as

\( \min_{w} \; \bigl\lVert \tfrac{1}{2}(\tanh(w) + 1) - x \bigr\rVert_2^2 + c \cdot f\bigl( \tfrac{1}{2}(\tanh(w) + 1) \bigr) \)

Optionally, the face image x of the forged face may also be normalized in the same way, in which case the face image of the forged face may be described as:

\( x = \tfrac{1}{2}\bigl( \tanh(w_x) + 1 \bigr) \)

where \( w_x \) denotes the corresponding variable for the face image.
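A minimal sketch of this change of variables, assuming pixel values have already been divided by 255 so that images lie in [0, 1]; the function names are illustrative only.

    import numpy as np

    def to_image(w):
        """Map the unconstrained variable w to a normalized image in [0, 1]."""
        return 0.5 * (np.tanh(w) + 1.0)

    def to_variable(image01, eps=1e-6):
        """Inverse map: a normalized image in (0, 1) back to the variable w."""
        clipped = np.clip(image01, eps, 1.0 - eps)   # avoid infinities at 0 and 1
        return np.arctanh(2.0 * clipped - 1.0)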
step S920: and solving the objective function by adopting a gradient descent method to obtain an iterative formula of the initial reference image.
In some embodiments, the objective function may be solved by a gradient descent method. Denoting the objective function by \( L(w) \), its derivative with respect to the variable w during gradient descent may be obtained as \( \nabla_w L(w) \). Further, the iterative formula for the initial reference image may be described as

\( w_{k+1} = w_k - \eta \cdot \nabla_w L(w_k) \)

where \( \eta \) is the learning rate, \( w_k \) may represent the initial reference image obtained after k iterations, and \( w_{k+1} \) may represent the initial reference image obtained after k + 1 iterations.

The face image containing the forged face is taken as the initial reference image for explanation. It will be appreciated that, at the first iteration (the number of iterations may correspond to the number of adjustments of the initial reference image in the previous embodiments), the new initial reference image obtained is

\( w_1 = w_0 - \eta \cdot \nabla_w L(w_0) \)

where \( \eta \) may be preset, for example set to 1e-3, and \( w_0 \) is the initial value of the variable corresponding to the face image containing the forged face.
The objective function is differentiated at this initial condition. Since the objective function is composed of the second image difference between the face image and the initial reference image and the difference between the detection probability that the initial reference image belongs to a living body and the specified probability, the purpose of adjusting the initial reference image based on the second image difference and on the difference between the detection probability and the specified probability can thus be achieved.
Step S930: and adjusting the initial reference image based on the iterative formula to obtain a new initial reference image.
It should be noted that, after a preset number of iterations, the new initial reference image obtained may be taken as the optimal solution of the reference face image; for example, after 100 iterations, \( w_{100} \) is obtained based on the above iterative formula, and \( w_{100} \) will then be used as the reference face image. Optionally, after every preset number of iterations, it may be determined whether the detection probability that the obtained new initial reference image belongs to a living body is greater than the specified probability; if not, the iterative computation continues and the probability judgment is performed again after the next preset number of iterations, until the detection probability that the new initial reference image belongs to a living body is determined to be greater than the specified probability, and the reference face image is finally obtained. Optionally, after each iteration, it may be determined whether the detection probability that the obtained new initial reference image belongs to a living body is greater than the specified probability; if not, the iterative computation continues, and if so, the obtained new initial reference image is used as the reference face image.
Optionally, the probability judgment may be replaced by directly performing face living body detection on the new initial reference image obtained at that time; that is, after every preset number of iterations, face living body detection is performed on the obtained new initial reference image, the new initial reference image detected as a living body is taken as the reference face image, and when the new initial reference image is detected as a prosthesis, the iterative solution of the new initial reference image continues based on the iteration process above. For example, whether the initial reference image belongs to a living body may be detected every 20 iterations or at every iteration, and the initial reference image detected as a living body is finally used as the reference face image. By directly performing face living body detection on the initial reference image during the iteration process, the attack success rate of the finally obtained reference face image is higher; moreover, compared with probability prediction, directly performing face living body detection has higher reliability.
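The iterative solution with a periodic liveness check, as described above, could be organized roughly as follows; gradient_of_objective and passes_liveness are hypothetical callables standing in for the derivative of the objective function and the face liveness detection model, and the default values of eta and check_every simply echo the examples given above (1e-3 and 20). This is a sketch under those assumptions, not the implementation of the embodiment.

    def optimize_reference_image(w0, gradient_of_objective, passes_liveness,
                                 eta=1e-3, max_iterations=1000, check_every=20):
        """Iterate w_{k+1} = w_k - eta * grad L(w_k), checking liveness periodically.

        w0                    : initial variable (derived from the forged-face image).
        gradient_of_objective : callable returning the gradient of the objective at w.
        passes_liveness       : callable returning True when the image reconstructed
                                from w is judged to be a living body.
        """
        w = w0
        for k in range(1, max_iterations + 1):
            w = w - eta * gradient_of_objective(w)
            if k % check_every == 0 and passes_liveness(w):
                break   # current image already passes the liveness detector
        return w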
The above anti-attack method for living body detection will be exemplarily described below by taking a paper face as an example.
Referring to fig. 10, a block flow diagram of a method for countering an attack of liveness detection according to another embodiment of the present application is shown. Alternatively, the paper face may be obtained by printing a face photograph. After the paper face is printed, the paper face can be used as a forged face, and a face image of the paper face is collected through an image collecting device such as a camera, so that the face image can be used as a face image containing the forged face to resist attack on a face living body detection system, and a resisting sample is obtained.
Specifically, the face image may be input to a face liveness detection system. For example, in the face live body detection system, a face live body detection model with parameters trained in advance may be used to determine whether an input face image is a live body. If the facial image is judged to be a prosthesis, the facial image can be adjusted, for example, the facial image can be transmitted into a disturbance generation algorithm, disturbance information is obtained through the disturbance generation algorithm, and then the disturbance information is added to the original facial image to obtain a new facial image. For example, the process of generating a new face image by using the perturbation generation algorithm may refer to the corresponding process of adjusting the initial reference image to obtain a new initial reference image in the foregoing embodiment, wherein before the adjustment, the initial reference image may be obtained based on the face image prediction of the paper face. It should be noted that the perturbation information may be a pixel difference between the face images before and after each adjustment.
In some embodiments, the new face image generated after each adjustment may be input into the face live body detection system to determine whether the face image is a live body, or the new face image may be determined whether the face image is a live body after each adjustment of a specified number of times. If the new face image is judged to be a living body, the new face image can be used as a countermeasure sample to be output; if the new face image is still judged as a false body, the process of adjusting the face image is repeated until the new face image is judged as a living body.
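A minimal sketch of the loop shown in fig. 10, under stated assumptions: is_live wraps the face liveness detection system, generate_perturbation stands in for the perturbation generation algorithm, and max_rounds is an arbitrary budget added only for the sketch.

    def paper_face_attack(face_image, is_live, generate_perturbation, max_rounds=100):
        """Keep adding perturbation information until the image is judged a living body.

        face_image            : image of the printed (paper) face captured by a camera.
        is_live               : callable wrapping the face liveness detection system.
        generate_perturbation : callable producing perturbation information for the
                                current image (the perturbation generation algorithm).
        """
        current = face_image
        for _ in range(max_rounds):
            if is_live(current):
                return current                                  # output as the countermeasure sample
            current = current + generate_perturbation(current)  # add the perturbation and retry
        return None                                             # no sample found within the budget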
Referring to fig. 11, a block diagram of an anti-attack apparatus for in-vivo detection according to an embodiment of the present application is shown. Specifically, the apparatus may include: an acquisition module 1110, a detection module 1120, a determination module 1130, and an output module 1140.
The acquiring module 1110 is configured to acquire a face image including a forged face; the detection module 1120 is configured to perform living human face detection on the human face image to obtain a detection result; a determining module 1130, configured to generate a reference face image based on the face image if the detection result is a false body, where a first image difference between the reference face image and the face image is smaller than a specified difference, and a probability that the reference face image belongs to a living body is larger than a specified probability; an output module 1140 for outputting the reference face image as a countermeasure sample.
In some embodiments, the determining module 1130 may include: a prediction module for predicting an initial reference image based on the face image; the determining submodule is used for carrying out face living body detection on the initial reference image and determining the detection probability that the initial reference image belongs to a living body; and the adjusting module is used for adjusting the initial reference image to obtain a new initial reference image based on the second image difference between the face image and the initial reference image and the difference between the detection probability and the designated probability, and returning to execute face living body detection and subsequent operation on the initial reference image until the initial reference image is adjusted to be the reference face image.
Optionally, the adjusting module may include: an adjusting submodule, configured to adjust the initial reference image to obtain a new initial reference image based on the second image difference and based on a difference between the detection probability and a specified probability if an adjustment frequency is smaller than the specified value, where the adjustment frequency is a frequency for performing an adjustment operation, and the adjustment operation is an operation for adjusting the initial reference image to obtain the new initial reference image; and the number calculating module is used for increasing the adjusting number.
Optionally, the apparatus may further include: a second judgment module, configured to judge whether the detection probability is greater than the specified probability if the number of adjustments is not less than the specified value; a first processing module, configured to take the current initial reference image as the reference face image if the detection probability is greater than the specified probability; and a second processing module, configured to, if the detection probability is not greater than the specified probability, adjust the initial reference image to obtain a new initial reference image based on the second image difference and on the difference between the detection probability and the specified probability, and clear the number of adjustments.
In other embodiments, the adjusting module may include: a third judging module, configured to judge whether the detection probability is greater than the specified probability if the initial reference image has been adjusted; a third processing module, configured to take the current initial reference image as the reference face image if the detection probability is greater than the specified probability; and a fourth processing module, configured to, if the detection probability is not greater than the specified probability, adjust the initial reference image to obtain a new initial reference image based on the second image difference and on the difference between the detection probability and the specified probability.
Optionally, in some embodiments, the adjusting module may include: a fifth processing module for establishing an objective function based on the second image difference and based on a difference between the detection probability and a specified probability; the solving module is used for solving the objective function by adopting a gradient descent method to obtain an iterative formula of the initial reference image; and the iteration module is used for adjusting the initial reference image based on the iteration formula to obtain a new initial reference image.
In some embodiments, the objective function in the fifth processing module may be

\( \min_{\tilde{x}} \; \lVert x - \tilde{x} \rVert_2^2 + c \cdot \max\bigl( Z(\tilde{x})_i - Z(\tilde{x})_t,\; -k \bigr) \)

where x is the face image, \( \tilde{x} \) is the initial reference image, \( Z(\tilde{x})_i \) represents the probability that the initial reference image is classified as the label i corresponding to a prosthesis, \( Z(\tilde{x})_t \) represents the probability that the initial reference image is classified as the label t corresponding to a living body, and c and k are both hyperparameters greater than 0.
Optionally, the anti-attack apparatus for living body detection may further include: a defense training module, configured to perform defense training on the face living body detection model based on the countermeasure sample.
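One simple way such defense training could be organized is sketched below, under the assumption that the countermeasure samples are added to the training data with the prosthesis label; the label convention and function name are illustrative, not prescribed by the embodiment.

    def build_defense_training_set(original_samples, countermeasure_samples,
                                   prosthesis_label=0):
        """Mix countermeasure samples into the training data for defense training.

        original_samples       : list of (image, label) pairs already used for training.
        countermeasure_samples : reference face images produced by the attack; since
                                 they depict forged faces, they are labeled as prostheses.
        prosthesis_label       : label value assumed here for the prosthesis class.
        """
        defended = list(original_samples)
        defended.extend((image, prosthesis_label) for image in countermeasure_samples)
        return defended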
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the modules/units/sub-units/components in the above-described apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 12, a block diagram of an electronic device according to an embodiment of the disclosure is shown. The electronic device in this embodiment may include one or more of the following components: a processor 1210, a memory 1220, and one or more applications, wherein the one or more applications may be stored in the memory 1220 and configured to be executed by the one or more processors 1210, the one or more applications configured to perform a method as described in the aforementioned method embodiments.
The electronic device may be any of various types of computer system devices that are mobile, portable, and perform wireless communications, among others. In particular, the electronic device may be a mobile phone or smart phone (e.g., iPhone (TM) based, Android (TM) based phone), a Portable gaming device (e.g., Nintendo DS (TM), PlayStation Portable (TM), Gameboy Advance (TM), iPhone (TM)), a laptop, a PDA, a Portable internet device, a music player and data storage device, other handheld devices and devices such as a smart watch, smart band, headset, pendant, etc., and other wearable devices (e.g., such as electronic glasses, electronic clothing, electronic bracelets, electronic necklaces, electronic tattoos, electronic devices, or Head Mounted Devices (HMDs)).
The electronic device may also be any of a number of electronic devices including, but not limited to, cellular phones, smart watches, smart bracelets, other wireless communication devices, personal digital assistants, audio players, other media players, music recorders, video recorders, cameras, other media recorders, radios, medical devices, vehicle transportation equipment, calculators, programmable remote controls, pagers, laptop computers, desktop computers, printers, netbooks, Personal Digital Assistants (PDAs), Portable Multimedia Players (PMPs), moving picture experts group (MPEG-1 or MPEG-2) audio layer 3(MP3) players, portable medical devices, and digital cameras and combinations thereof.
In some cases, the electronic device may perform a variety of functions (e.g., playing music, displaying videos, storing pictures, and receiving and sending telephone calls). The electronic device may be, for example, a cellular telephone, media player, other handheld device, wristwatch device, pendant device, earpiece device, or other compact portable device, if desired.
Optionally, the electronic device may also be a server, for example, an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), a big data and artificial intelligence platform, and a dedicated or platform server providing face recognition, automatic driving, an industrial internet service, and data communication (such as 4G, 5G, and the like).
Processor 1210 may include one or more processing cores. The processor 1210, using various interfaces and lines to connect various parts within the overall electronic device, performs various functions of the electronic device and processes data by executing or executing instructions, applications, code sets, or instruction sets stored in the memory 1220, and calling data stored in the memory 1220. Alternatively, the processor 1210 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1210 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 1210, but may be implemented by a communication chip.
The Memory 1220 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 1220 may be used to store instructions, applications, code sets, or instruction sets. The memory 1220 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described herein, and the like. The data storage area can also store data (such as a phone book, audio and video data, chatting record data) and the like created by the electronic equipment in use.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the processor 1210 and the memory 1220 of the electronic device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Referring to fig. 13, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 1300 has stored therein program code that can be called by a processor to execute the methods described in the above-described method embodiments.
The computer-readable storage medium 1300 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 1300 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 1300 has storage space for program code 1310 for performing any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 1310 may be compressed, for example, in a suitable form. The computer-readable storage medium 1300 may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), an SSD, an Electrically Erasable Programmable Read-Only Memory (EEPROM), or a Flash Memory (Flash).
In some embodiments, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of the computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps of the above-described method embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, SSD, Flash), and includes several instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
According to the anti-attack method and device for living body detection, the electronic equipment and the storage medium described above, a face image containing a forged face can first be obtained, and face living body detection is then performed on the face image to obtain a detection result. When the detection result is a prosthesis, a reference face image is generated based on the face image, wherein the first image difference between the reference face image and the face image is less than the specified difference, and the probability that the reference face image belongs to a living body is greater than the specified probability. Finally, the reference face image is output as a countermeasure sample. Because the two conditions of the image difference and the difference between the living-body probability and the specified probability can be considered simultaneously in the adjusting process, the image difference and the probability difference can be balanced over repeated adjustments, and countermeasure samples with a high attack success rate can be obtained with only slight changes to the face image containing the forged face.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (12)

1. A method of countering an attack for in vivo testing, the method comprising:
acquiring a face image containing a forged face;
carrying out human face living body detection on the human face image to obtain a detection result;
if the detection result is a false body, generating a reference face image based on the face image, wherein the first image difference between the reference face image and the face image is smaller than a specified difference, and the probability that the reference face image belongs to a living body is larger than a specified probability;
outputting the reference face image as a countermeasure sample.
2. The method of claim 1, wherein generating a reference face image based on the face image comprises:
predicting an initial reference image based on the face image;
carrying out face living body detection on the initial reference image, and determining the detection probability of the initial reference image belonging to a living body;
and adjusting the initial reference image to obtain a new initial reference image based on the second image difference between the face image and the initial reference image and the difference between the detection probability and the designated probability, and returning to execute face living body detection and subsequent operation on the initial reference image until the initial reference image is adjusted to be the reference face image.
3. The method of claim 2, wherein adjusting the initial reference image to obtain a new initial reference image based on the second image difference between the face image and the initial reference image and based on the difference between the detection probability and the specified probability comprises:
if the adjustment times are smaller than the designated numerical value, adjusting the initial reference image to obtain a new initial reference image based on the second image difference and the difference between the detection probability and the designated probability, wherein the adjustment times are the times of executing adjustment operation, and the adjustment operation is adjusting the initial reference image to obtain the new initial reference image;
increasing the number of adjustments.
4. The method of claim 3, further comprising:
if the adjusting times are not less than the designated value, judging whether the detection probability is greater than the designated probability;
if the detection probability is greater than the specified probability, taking the current initial reference image as the reference face image;
and if the detection probability is not greater than the specified probability, adjusting the initial reference image to obtain a new initial reference image based on the second image difference and the difference between the detection probability and the specified probability, and clearing the number of adjustments.
5. The method of claim 2, wherein adjusting the initial reference image to obtain a new initial reference image based on the second image difference between the face image and the initial reference image and based on the difference between the detection probability and the specified probability comprises:
if the initial reference image is adjusted, judging whether the detection probability is greater than the designated probability;
if the detection probability is greater than the specified probability, taking the current initial reference image as the reference face image;
and if the detection probability is not greater than the specified probability, adjusting the initial reference image to obtain a new initial reference image based on the second image difference and the difference between the detection probability and the specified probability.
6. The method of claim 2, wherein adjusting the initial reference image to obtain a new initial reference image based on the second image difference and based on the difference between the detection probability and the specified probability comprises:
establishing an objective function based on the second image difference and based on a difference between the detection probability and a specified probability;
solving the objective function by adopting a gradient descent method to obtain an iterative formula of the initial reference image;
and adjusting the initial reference image based on the iterative formula to obtain a new initial reference image.
7. The method of claim 6, wherein the objective function is
\( \min_{\tilde{x}} \; \lVert x - \tilde{x} \rVert_2^2 + c \cdot \max\bigl( Z(\tilde{x})_i - Z(\tilde{x})_t,\; -k \bigr) \)

wherein x is the face image, \( \tilde{x} \) is the initial reference image, \( Z(\tilde{x})_i \) represents the probability of said initial reference image being classified as the label i corresponding to a prosthesis, \( Z(\tilde{x})_t \) represents the probability that the initial reference image is classified as the label t corresponding to a living body, and c and k are both hyperparameters greater than 0.
8. The method of claim 1, further comprising:
and carrying out defense training on a human face living body detection model based on the countermeasure sample.
9. An apparatus for countering an attack for in vivo testing, the apparatus comprising:
the acquisition module is used for acquiring a face image containing a forged face;
the detection module is used for carrying out human face living body detection on the human face image to obtain a detection result;
a determining module, configured to generate a reference face image based on the face image if the detection result is a prosthesis, where a first image difference between the reference face image and the face image is smaller than a specified difference, and a probability that the reference face image belongs to a living body is larger than a specified probability;
and the output module is used for outputting the reference face image as a countermeasure sample.
10. An electronic device, comprising:
one or more processors;
a memory;
one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-8.
11. A computer-readable storage medium, characterized in that a program code is stored in the computer-readable storage medium, which program code can be called by a processor to perform the method according to any one of claims 1 to 8.
12. A computer program product comprising instructions stored thereon, which, when run on a computer, cause the computer to carry out the method according to any one of claims 1 to 8.
CN202111295940.4A 2021-11-03 2021-11-03 Method and device for generating challenge sample for living body detection, electronic device and storage medium Active CN114463859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111295940.4A CN114463859B (en) 2021-11-03 2021-11-03 Method and device for generating challenge sample for living body detection, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN114463859A true CN114463859A (en) 2022-05-10
CN114463859B CN114463859B (en) 2023-08-11

Family

ID=81405090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111295940.4A Active CN114463859B (en) 2021-11-03 2021-11-03 Method and device for generating challenge sample for living body detection, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114463859B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583342A (en) * 2018-11-21 2019-04-05 重庆邮电大学 Human face in-vivo detection method based on transfer learning
CN109543640A (en) * 2018-11-29 2019-03-29 中国科学院重庆绿色智能技术研究院 A kind of biopsy method based on image conversion
CN111753595A (en) * 2019-03-29 2020-10-09 北京市商汤科技开发有限公司 Living body detection method and apparatus, device, and storage medium
US20210049505A1 (en) * 2019-08-14 2021-02-18 Dongguan University Of Technology Adversarial example detection method and apparatus, computing device, and non-volatile computer-readable storage medium
CN110866287A (en) * 2019-10-31 2020-03-06 大连理工大学 Point attack method for generating countercheck sample based on weight spectrum
CN113591526A (en) * 2020-04-30 2021-11-02 华为技术有限公司 Face living body detection method, device, equipment and computer readable storage medium
CN111783629A (en) * 2020-06-29 2020-10-16 浙大城市学院 Human face in-vivo detection method and device for resisting sample attack
CN112329837A (en) * 2020-11-02 2021-02-05 北京邮电大学 Countermeasure sample detection method and device, electronic equipment and medium
CN112766189A (en) * 2021-01-25 2021-05-07 北京有竹居网络技术有限公司 Depth forgery detection method, device, storage medium, and electronic apparatus
CN113111776A (en) * 2021-04-12 2021-07-13 京东数字科技控股股份有限公司 Method, device and equipment for generating countermeasure sample and storage medium
CN113554089A (en) * 2021-07-22 2021-10-26 西安电子科技大学 Image classification countermeasure sample defense method and system and data processing terminal

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
GREGORY DITZLER et al.: "Fine tuning lasso in an adversarial environment against gradient attacks", 2017 IEEE Symposium Series on Computational Intelligence (SSCI), pages 1-7 *
LIU HENG et al.: "Universal adversarial perturbation generation method based on generative adversarial networks", Netinfo Security, vol. 20, no. 05, pages 57-64 *
ZHANG KE et al.: "Face photo mesh-removal technique based on generative adversarial networks", Journal of Chongqing Normal University (Natural Science Edition), vol. 36, no. 6, pages 110-18 *
ZENG DINGHENG et al.: "Research on an incremental learning face detection algorithm for wireless video browsing", Journal of Chinese Computer Systems, vol. 35, no. 6, pages 1353-1357 *
YANG JIEZHI: "Research on lightweight and robust face liveness detection methods", China Masters' Theses Full-text Database, pages 138-135 *
DENG XIONG et al.: "A survey of liveness detection research methods for face recognition", Application Research of Computers, vol. 37, no. 9, pages 2579-2585 *

Also Published As

Publication number Publication date
CN114463859B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN111461089B (en) Face detection method, and training method and device of face detection model
CN107330408B (en) Video processing method and device, electronic equipment and storage medium
CN107330904B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111368685B (en) Method and device for identifying key points, readable medium and electronic equipment
US8463025B2 (en) Distributed artificial intelligence services on a cell phone
CN108920639B (en) Context obtaining method and device based on voice interaction
CN111476306A (en) Object detection method, device, equipment and storage medium based on artificial intelligence
CN111738735B (en) Image data processing method and device and related equipment
CN114333078A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111611873A (en) Face replacement detection method and device, electronic equipment and computer storage medium
CN111368127B (en) Image processing method, image processing device, computer equipment and storage medium
CN112149615A (en) Face living body detection method, device, medium and electronic equipment
CN114092678A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN111340213B (en) Neural network training method, electronic device, and storage medium
CN114783070A (en) Training method and device for in-vivo detection model, electronic equipment and storage medium
CN114841340B (en) Identification method and device for depth counterfeiting algorithm, electronic equipment and storage medium
CN111597944B (en) Living body detection method, living body detection device, computer equipment and storage medium
CN110135329B (en) Method, device, equipment and storage medium for extracting gestures from video
CN114463859B (en) Method and device for generating challenge sample for living body detection, electronic device and storage medium
CN115731620A (en) Method for detecting counter attack and method for training counter attack detection model
CN115984977A (en) Living body detection method and system
CN111898529B (en) Face detection method and device, electronic equipment and computer readable medium
CN108446660A (en) The method and apparatus of facial image for identification
CN114882557A (en) Face recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant