CN118196913A - Attack image generation method, training method of generated network and related device


Info

Publication number
CN118196913A
Authority
CN
China
Prior art keywords
image
sample
attack
attacker
style
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410190531.5A
Other languages
Chinese (zh)
Inventor
高康康
郝敬松
朱树磊
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202410190531.5A
Publication of CN118196913A
Legal status: Pending


Landscapes

  • Image Processing (AREA)

Abstract

The application discloses an attack image generation method, a training method for a generation network, and a related device. The attack image generation method comprises: obtaining an attacker image; generating, based on the attacker image, an original attack image whose style matches that of an attacked image, the original attack image containing the image content of the corresponding attack area in the attacker image; and mapping the original attack image to a printer color space to obtain a target attack image, which, once printed, forms attack material for attacking a biometric authentication system. This scheme resolves the mismatch between the digital color space and the printer color space during adversarial-attack testing, thereby improving the effectiveness of physical attacks.

Description

Attack image generation method, training method of generated network and related device
Technical Field
The application relates to the field of digital image recognition, and in particular to an attack image generation method, a training method for a generation network, and a related device.
Background
Biometric authentication systems, such as face recognition systems, are widely used for identity verification and for protecting important network resources and private data. To improve the security of biometric authentication systems and the privacy of their users, their security is commonly evaluated with adversarial-attack techniques. In existing adversarial-attack schemes, a target attack image generated from an attacker image is printed directly by a printer and attached to the printed attacker image to form attack material, which is then used to run an adversarial-attack test against the biometric authentication system. However, such schemes do not fully account for the loss of attack information caused by the mismatch between the digital color space and the printer color space; the physical attack then fails, and the reliability of the adversarial-attack test cannot be guaranteed.
Disclosure of Invention
The application at least provides an attack image generation method, a training method for a generation network, and a related device, which can improve the effectiveness of physical attacks.
The first aspect of the present application provides a method for generating an attack image, including: obtaining an attacker image; generating an original attack image with a style matched with that of an attacked image based on the attacker image, wherein the original attack image contains image contents of a corresponding attack area in the attacker image; and mapping the original attack image to a printer color space to obtain a target attack image, wherein the target attack image is printed and then used for forming attack materials for attacking the biological authentication system.
Mapping the original attack image to a printer color space to obtain a target attack image, wherein the method comprises the following steps: and adjusting the pixel value of the original attack image from the first pixel value corresponding to the digital color space to the second pixel value corresponding to the printer color space by utilizing the mapping relation between the digital color space and the printer color space so as to obtain the target attack image.
Wherein generating an original attack image whose style matches the attacked image, based on the attacker image, comprises: performing style adjustment on the attacker image to obtain a target style image whose style matches the attacked image; and acquiring an image portion of a local area in the target style image to obtain the original attack image, wherein the local area comprises the attack area. Alternatively, generating an original attack image whose style matches the attacked image, based on the attacker image, comprises: acquiring an image portion of a local area in the attacker image to obtain an image to be adjusted, wherein the local area comprises the attack area; and performing style adjustment on the image to be adjusted to obtain an original attack image whose style matches the attacked image.
The method for performing style adjustment on the attacker image to obtain a target style image with a style matched with the attacker image, or performing style adjustment on an image to be adjusted to obtain an original attack image with a style matched with the attacker image comprises the following steps: carrying out forward convolution processing on the first image to obtain a feature vector; performing reverse convolution processing on the feature vector to obtain a second image; the first image is an attacker image, the second image is a target style image, or the first image is an image to be adjusted, and the second image is an original attack image.
Wherein the target attack image corresponds to a local area of the attacker image; acquiring an image part of a local area in a target style image to obtain an original attack image, or acquiring an image part of a local area in an attacker image to obtain an image to be adjusted, wherein the local area comprises the attack area and comprises: acquiring a mask image, wherein the size of the mask image corresponds to that of the third image, the pixel value of a corresponding local area in the mask image is an effective pixel value, and the pixel value of a corresponding non-attack area in the mask image is an ineffective pixel value; using the mask image to screen out pixels corresponding to effective pixel values of the mask image from the third image to form a fourth image; the third image is a target style image, the fourth image is an original attack image, or the third image is an attacker image, and the fourth image is an image to be adjusted.
The target attack image corresponds to a local area of the attacker image, the target attack image is printed to obtain a first printed image, the attacker image is printed to obtain a second printed image, and the first printed image is superimposed on a corresponding position on the second printed image to form attack materials.
The target attack image is obtained by using a generating network, wherein the generating network is used for adjusting the style of the image, and the style of the target attack image is determined based on the adjustment of the style of the image; the method further comprises the steps of: generating a sample original attack image based on the first attacker sample image, wherein the content of the sample original attack image comprises the image content of a sample attack area in the first attacker sample image, the style of the sample original attack image is matched with that of the attacked sample image, and the sample original attack image is determined by adjusting the image style of a generating network; mapping the original sample attack image to a printer color space to obtain a sample target attack image; based on the sample target attack image, network parameters of the generated network are adjusted.
Wherein, based on the sample target attack image, adjusting the network parameters of the generated network, including: superposing the sample target attack image in at least one second attacker sample image to obtain at least one third attacker sample image, wherein the second attacker sample image and the first attacker sample image are selected from the attacker sample image set; utilizing the first feature similarity between each third attacker sample image and the attacked sample image to adjust the network parameters of the generated network; and/or adjusting network parameters of the generated network based on the sample target attack image, including: superposing the sample target attack image in at least one second attacker sample image to obtain at least one fourth attacker sample image, wherein the second attacker sample image and the first attacker sample image are selected from the attacker sample image set; based on the living body detection results of the fourth attacker sample images, the network parameters of the generated network are adjusted.
Wherein, adjusting the network parameters of the generated network by utilizing the first feature similarity between each third attacker sample image and the attacked sample image comprises: selecting a current use model from a plurality of meta-learning models, wherein the current use model comprises at least one current training model, and the number of current training models is consistent with the number of third attacker sample images; respectively forming an image pair from each third attacker sample image and the attacked sample image, and respectively extracting first features of a corresponding group of image pairs with each current training model; adjusting network parameters of the generated network based on the similarity between the first features of each group of image pairs; and repeatedly executing the above steps in response to the generation network not currently meeting the iteration ending condition.
Wherein the current use model further comprises a current test model; and repeatedly executing the steps in response to the generation network not currently meeting the iteration ending condition comprises: regenerating a sample original attack image by using the adjusted generation network, and obtaining a new third attacker sample image based on the regenerated sample original attack image; extracting second features of the new third attacker sample image and the attacked sample image by using the current test model; and in response to the similarity between the second features of the new third attacker sample image and of the attacked sample image not meeting the similarity requirement, determining that the generation network does not currently meet the iteration ending condition, and repeatedly executing the steps.
Wherein adjusting the network parameters of the generated network based on the living body detection results of the fourth attacker sample images comprises: performing living body detection on each fourth attacker sample image by using a living body detection network to obtain a first living body detection result of each fourth attacker sample image; performing living body detection on the reference images corresponding to the fourth attacker sample images by using a living body detection network to obtain second living body detection results of the reference images corresponding to the fourth attacker sample images; and adjusting network parameters of the generated network based on the difference between the first living body detection result and the second living body detection result corresponding to each fourth attacker sample image.
Wherein generating a sample original attack image based on the first attacker sample image comprises: performing style adjustment on the first attacker sample image to obtain a sample style image whose style matches the attacked sample image; and obtaining an image portion of a local area in the sample style image to obtain the sample original attack image, wherein the local area comprises the sample attack area. The method further comprises: extracting third features of the sample style image and the first attacker sample image by using a feature extraction network; and adjusting network parameters of the generated network based on the similarity between the third features of the sample style image and of the first attacker sample image.
Wherein the third features comprise style features and content features; the network parameters of the generated network are adjusted based on a first similarity between the style features of the sample style image and of the attacked sample image, and a second similarity between the content features of the sample style image and of the first attacker sample image.
The second aspect of the present application provides a training method for generating a network, including: generating a sample original attack image based on the first attacker sample image, wherein the content of the sample original attack image comprises the image content of a sample attack area in the first attacker sample image, the style of the sample original attack image is matched with that of the attacked sample image, and the sample original attack image is determined by adjusting the image style by a generating network of an attack image generating model; mapping the original sample attack image to a printer color space by using a color mapping module of the attack image generation model to obtain a sample target attack image; based on the sample target attack image, network parameters of the generated network are adjusted.
A third aspect of the present application provides an electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the attack image generation method in the first aspect or the training method for the generation network in the second aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions which, when executed by a processor, implement the attack image generation method in the first aspect described above, or the training method for the generation network in the second aspect described above.
According to the above scheme, an original attack image whose style matches the attacked image is generated from the attacker image, the original attack image containing the image content of the corresponding attack area in the attacker image; the original attack image is then mapped from the digital color space to the printer color space to obtain the target attack image. This solves the problem of physical-attack failure caused by the mismatch between the digital color space and the printer color space during adversarial-attack testing, thereby improving the effectiveness of the physical attack.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of an embodiment of an attack image generation method of the present application;
FIG. 2 is a schematic diagram of a color space mapping module according to an embodiment of the present application;
FIG. 3 is a flow chart of another embodiment of an attack image generation method of the present application;
FIG. 4 is a flow chart of an embodiment of the generation network training method of the present application;
FIG. 5 is a schematic diagram of a framework of one embodiment of an attack image generation model of the present application;
FIG. 6 is a schematic diagram of a framework of one embodiment of an attack material generation model of the present application;
FIG. 7 is a schematic diagram of a frame of an embodiment of an electronic device of the present application;
FIG. 8 is a schematic diagram of a frame of one embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects. Further, "a plurality" herein means two or more. The term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B and C" may mean including any one or more elements selected from the set consisting of A, B and C.
Referring to fig. 1, fig. 1 is a flowchart of an embodiment of an attack image generation method according to the present application. Specifically, the method may include the steps of:
Step S110: an attacker image is acquired.
The application is mainly applied to the field of adversarial attacks: after the target attack image is printed, it serves as attack material for attacking the biometric authentication system. To ensure the effectiveness of the adversarial-attack test, the pixel values of the target attack image are mapped from the digital color space to the printer color space before the image is printed by the printer; this avoids the color deviation or other loss of attack information that would occur if the image were printed directly and that would invalidate the physical attack. The biometric authentication system may be a face recognition system or the like, which is not specifically limited herein.
In some embodiments, an attacker image may be acquired with a capture device. Alternatively, an attacker image may be obtained from an attacker image set. The attacker image set contains at least a series of images of an attacker; further, the series covers different illumination conditions, different scenes, different expressions, and the like. Images under different illumination conditions may be captured with the same light source illuminating the attacker's face from different angles, or with different light sources illuminating it from the same angle; images in different scenes may be taken in a movie theater, outdoors, under trees, and so on; and images with different expressions may show the attacker happy, crying, angry, and so on, none of which is specifically limited herein.
Step S120: based on the attacker image, generating an original attack image with the style matched with that of the attacked image, wherein the original attack image contains the image content of the corresponding attack area in the attacker image.
In some embodiments, image synthesis software may take the attacker image and the attacked image as references and generate the original attack image such that the style of the original attack image is consistent with the attacked image while its content is consistent with the attacker image. Here, style may be understood as wearing style, makeup style, and the like. For example, suppose the attack area is the eye area: if the eye area in the attacker image wears no makeup while the eye area in the attacked image wears eye shadow, the eye shadow is regarded as part of the style of the attacked image; likewise, if the attacked person wears glasses, the glasses belong to the style of the attacked image. This is not specifically limited herein.
As for image content: if the corresponding attack area in the attacker image is the eye area, the eye area of the attacker image is regarded as image content, and that eye area must also be contained in the original attack image.
In some embodiments, to obtain an original attack image whose style matches the attacked image, the attacker image may first be style-converted and the attack-area portion of the converted image then retained to obtain the original attack image. For details, refer to steps S1211 to S1212.
Step S1211: and carrying out style adjustment on the attacker image to obtain a target style image with the style matched with the attacked image.
In some embodiments, a generation network may be used to perform forward convolution processing on the attacker image to obtain a feature vector, and then perform reverse convolution processing on the feature vector to obtain a target style image, so that the image content of the target style image is consistent with the attacker image while its style is consistent with the attacked image. Because the target style image is produced by the generation network, it is not a real image; only a local area of it is therefore selected for the subsequent adversarial test, since if the whole target style image were used to attack the biometric authentication system, the system could easily recognize it as forged. The generation network is obtained through training and is mainly used to generate, from the attacker image, a target style image whose style matches the attacked image.
Alternatively, the image area in which the style of the attacked image is mainly embodied may be superimposed onto the corresponding position area in the attacker image to generate the target style image.
Step S1212: and acquiring an image part of a local area in the target style image to obtain an original attack image, wherein the local area comprises the attack area.
In some embodiments, after the target style image is obtained, a masking method may be used to select the image portion of a local area (i.e., the attack area) from the target style image, thereby obtaining the original attack image. The local area may be a designated area, an area learned by a convolutional neural network, or a randomly selected area.
Specifically, a mask image is first acquired, wherein the size of the mask image corresponds to the target style image, the pixel value of the corresponding local area in the mask image is an effective pixel value, and the pixel value of the corresponding non-attack area in the mask image is an ineffective pixel value. And then overlapping the mask image with the target style image, so as to screen out pixels corresponding to the effective pixel values of the mask image from the target style image to form an original attack image.
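For illustration, the mask-based screening described above can be sketched in a few lines; the function name, array layout, and 0/1 mask convention are assumptions made for the sketch, not details taken from this application:

```python
import numpy as np

def extract_attack_region(style_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep the pixels of the target style image at positions where the mask
    holds valid (non-zero) pixel values; non-attack-area pixels are zeroed.

    style_image: H x W x 3 target style image.
    mask:        H x W array, 1 inside the local (attack) area, 0 elsewhere.
    """
    assert style_image.shape[:2] == mask.shape, "mask size must match the image"
    return style_image * mask[..., None]  # broadcast the mask over the channels
```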
It can be understood that the region of interest in the target style image may also be extracted by other methods, such as pooling-based approaches, to serve as the original attack image, which is not limited herein.
In other embodiments, the attack area in the attacker image may be determined first, and the original attack image then obtained from it. Specifically, refer to steps S1221 to S1222.
Step S1221: and acquiring an image part of a local area in the attacker image to obtain an image to be adjusted, wherein the local area comprises the attack area.
In some embodiments, masking methods may also be employed to obtain the image to be adjusted. Specifically, a mask image may be acquired first, where the size of the mask image corresponds to the attacker image, the pixel values of the corresponding local areas in the mask image are valid pixel values, and the pixel values of the corresponding non-attack areas in the mask image are invalid pixel values. With the mask image, pixels corresponding to valid pixel values of the mask image are screened from the attacker image to constitute the image to be adjusted.
Step S1222: and carrying out style adjustment on the image to be adjusted to obtain an original attack image with the style matched with the image of the attacked person.
In some embodiments, the generation network may be used to perform forward convolution on the image to be adjusted to obtain a feature vector, and then perform reverse convolution processing on the feature vector to obtain the original attack image.
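For illustration, the forward-convolution/reverse-convolution pair described in steps S1211 to S1222 corresponds to an encoder-decoder generator. The following PyTorch sketch is a minimal hypothetical realization; the application does not disclose the actual layer configuration of the generation network:

```python
import torch
import torch.nn as nn

class StyleGenerator(nn.Module):
    """Minimal encoder-decoder: forward convolutions compress the input into
    a feature representation, and transposed (reverse) convolutions expand it
    back into a styled image of the same size."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # forward convolution processing
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # reverse convolution processing
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))
```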
Step S130: and mapping the original attack image to a printer color space to obtain a target attack image, wherein the target attack image is printed and then used for forming attack materials for attacking the biological authentication system.
In some embodiments, to avoid the mismatch between the digital color space of the original attack image and the printer color space, the pixel values of the original attack image are adjusted from the first pixel values corresponding to the digital color space to the second pixel values corresponding to the printer color space by using the mapping relation between the two color spaces, so as to obtain the target attack image. The target attack image corresponds to a local area of the attacker image; the target attack image is printed to obtain a first printed image, the attacker image is printed to obtain a second printed image, and the first printed image is superimposed at the corresponding position on the second printed image to form the attack material. The printer color space may be LCh ("L" for lightness, "C" for chroma, i.e., color saturation, and "h" for hue, i.e., the overall tendency of the color) or RGB ("R" for red, "G" for green, "B" for blue), which is not specifically limited herein.
In particular, referring to FIG. 2, a color space mapping module may be utilized to map pixels of a digital color space of an original attack image to corresponding pixels of a printer color space.
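The application does not disclose the concrete form of the mapping relation. One plausible realization, assuming the printer has been characterized beforehand by printing and measuring a set of color patches, is a per-pixel lookup against the measured (digital color, printed color) pairs; all names and shapes below are illustrative:

```python
import numpy as np

def map_to_printer_space(image: np.ndarray,
                         digital_samples: np.ndarray,
                         printed_samples: np.ndarray) -> np.ndarray:
    """Map each pixel from the digital color space to the printer color
    space via a measured correspondence table.

    image:           H x W x 3, digital-space pixel values in [0, 255].
    digital_samples: K x 3 digital colors that were sent to the printer.
    printed_samples: K x 3 colors actually measured on the printout.
    """
    flat = image.reshape(-1, 3).astype(np.float32)
    # nearest measured digital color for every pixel (brute force for clarity)
    d2 = ((flat[:, None, :] - digital_samples[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    return printed_samples[nearest].reshape(image.shape).astype(image.dtype)
```

A denser measurement table, or interpolation between measured colors instead of nearest-neighbor lookup, would reduce banding; the brute-force distance computation is kept only for clarity.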
In one application scenario, the attack image generation method of the application can be used to run an adversarial-attack test against a face recognition system. Specifically, referring to fig. 3, fig. 3 is a flowchart of another embodiment of the attack image generation method of the application. First, K attacker images and one attacked image are prepared, and one attacker image is randomly selected from the K and input into the generation network. The generation network performs forward convolution processing on the attacker image to obtain a feature vector, and then performs reverse convolution processing on the feature vector to obtain the target style image. The mask image is aligned with and superimposed on the target style image, and the pixels of the target style image at positions where the mask holds valid pixel values are screened out to obtain the original attack image, whose style matches the attacked image and which contains the image content of the corresponding attack area in the attacker image. The original attack image is then input into the color space mapping module, which maps its pixel values onto pixel values of the printer color space to obtain the target attack image. The target attack image is printed by a printer, and the printed target attack image is attached to the K printed attacker images to form attack materials, which are used to attack the face recognition system to verify whether they can pass its verification.
In some embodiments, the target attack image is obtained using a generation network, where the generation network performs image style adjustment and the style of the target attack image is determined by that adjustment. The generation network is obtained through training; for the training method, refer to fig. 4, which is a flowchart of an embodiment of the generation network training method of the present application.
Step S410: based on the first attacker sample image, a sample raw attack image is generated.
The content of the sample original attack image comprises image content of a sample attack area in the first attacker sample image, the style of the sample original attack image is matched with that of the attacked sample image, and the sample original attack image is determined by adjusting the image style of the generation network.
Referring to fig. 5 in combination, the generating network 510 in the attack image generating model 500 may be used to perform style adjustment on the first attacker sample image, so as to obtain a sample style image with a style matching with that of the attacked sample image. And then, acquiring an image part of a local area in the sample style image to obtain a sample original attack image, wherein the local area comprises a sample attack area.
In addition, the mask image may be used to screen out pixels corresponding to valid pixel values of the mask image from the first attacker sample image to form the sample image to be adjusted. And then, carrying out style adjustment on the sample image to be adjusted by using the generating network to obtain a sample style image with the style matched with that of the sample image of the attacked person.
Step S420: and mapping the original sample attack image to a printer color space to obtain a sample target attack image.
After the sample original attack image is obtained, the color space mapping module 520 in the attack image generation model 500 may be used to map its pixel values to the printer color space, thereby obtaining the sample target attack image. The style of the sample target attack image is consistent with the attacked sample image, and its image content is consistent with that of the local area of the first attacker sample image.
Step S430: based on the sample target attack image, network parameters of the generated network are adjusted.
In some embodiments, the sample target attack image may be attached to the corresponding region of an attacker sample image to obtain a sample image that imitates a physical attack, and the similarity between this imitation sample image and the attacked sample image is used to adjust the network parameters of the generated network. Specifically, refer to steps S4311 to S4312.
Step S4311: the sample target attack image is superimposed in at least one second attacker sample image to obtain at least one third attacker sample image, both the second attacker sample image and the first attacker sample image being selected from the set of attacker sample images.
In some embodiments, N second attacker sample images are randomly selected from the attacker sample image set, and the sample target attack image is attached to each of the N second attacker sample images to obtain N third attacker sample images; each third attacker sample image is a sample image that imitates a physical attack.
Step S4312: utilizing the first feature similarity between each third attacker sample image and the attacked sample image to adjust the network parameters of the generated network;
In some embodiments, the first feature similarity between the third attacker sample image and the attacked sample image may be calculated using the meta-learning model 530. Specifically, a current use model may be selected from a plurality of meta-learning models, where the current use model includes at least one current training model, the number of the current training models is identical to the number of the third attacker sample images, and then each third attacker sample image and each attacked sample image are formed into an image pair, each current training model is used to extract a first feature corresponding to a group of image pairs, and based on similarity between the first features of each group of image pairs, network parameters of a generating network are adjusted, and the steps are repeatedly executed in response to the generating network currently not meeting an iteration ending condition.
In addition, the current use model further includes a current test model; after the network parameters of the generation network 510 are adjusted, the current test model can be used to check whether the adjusted generation network 510 meets expectations. Specifically, the adjusted generation network 510 regenerates a sample original attack image, a new third attacker sample image is obtained from the regenerated sample original attack image, and the current test model extracts the second features of the new third attacker sample image and of the attacked sample image. In response to the similarity between these second features not meeting the similarity requirement, it is determined that the generation network 510 does not currently meet the iteration end condition, and the above steps are repeated.
In one embodiment, the meta-learning models 530 are used to simulate the adversarial-attack test against the face recognition system. One model is arbitrarily selected from the plurality of meta-learning models 530 as the current test model, and N models are used as current training models. The N third attacker sample images are each paired with the attacked sample image, and each image pair is input into a current training model. The training model outputs the identity feature of the third attacker sample image and the identity feature of the attacked sample image in the pair; the similarity between the two identity features is computed, and if it is not 1, the network parameters of the generation network 510 are adjusted. The adjusted generation network 510 then regenerates a sample original attack image, from which a new third attacker sample image is obtained; the current test model extracts the identity features of the new third attacker sample image and of the attacked sample image, and if the similarity between them is not 1, one model is again randomly selected from the meta-learning models 530 as the current test model, N models are used as current training models, and the above steps are repeated.
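For concreteness, one round of this similarity-driven update might look like the following sketch. The interfaces are assumptions, not the application's actual implementation: each current training model is a frozen recognition model returning identity features, and the third attacker sample images are produced differentiably from the generation network so that gradients can reach its parameters:

```python
import torch.nn.functional as F

def similarity_update(generator_opt, third_samples, attacked_image, train_models):
    """One schematic update: push the identity features of each
    (third attacker sample, attacked sample) pair toward similarity 1.

    third_samples:  list of N image tensors with the generated patch
                    superimposed (differentiably) on second attacker samples.
    attacked_image: image tensor of the attacked person's sample.
    train_models:   N frozen recognition models (current training models).
    """
    loss = 0.0
    for img, model in zip(third_samples, train_models):
        feat_attack = model(img.unsqueeze(0))             # attack identity feature
        feat_victim = model(attacked_image.unsqueeze(0))  # victim identity feature
        # cosine similarity of 1 means the system confuses the two identities
        loss = loss + (1.0 - F.cosine_similarity(feat_attack, feat_victim)).mean()
    generator_opt.zero_grad()
    loss.backward()
    generator_opt.step()
    return float(loss)
```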
The network structures of the meta-learning models 530 may be different, so that the third attacker sample image obtained by the generating network 510 may be adapted to the biometric authentication systems with different network structures, so as to improve the adaptability of the generating network 510.
In other embodiments, to prevent the attacker sample images generated by the generation network 510 from being easily recognized by the living body detection module of the biometric authentication system, living body detection training is further performed on the attacker sample images generated by the generation network 510. Specifically, refer to steps S4321 to S4322.
Step S4321: the sample target attack image is superimposed in at least one second attacker sample image to obtain at least one fourth attacker sample image, both the second attacker sample image and the first attacker sample image being selected from the set of attacker sample images.
This step is the same as step S4311, and thus will not be repeated here.
Step S4322: based on the living body detection results of the fourth attacker sample images, the network parameters of the generated network are adjusted.
In some embodiments, the living body detection network 540 may be used to perform living body detection on each fourth attacker sample image to obtain a first living body detection result of each fourth attacker sample image, and the living body detection network 540 may be used to perform living body detection on each reference image corresponding to each fourth attacker sample image to obtain a second living body detection result of each reference image corresponding to each fourth attacker sample image, where the reference image is the second attacker sample image, and the reference image is used to test whether the living body detection network 540 has the living body detection capability. And then, utilizing the difference between the first living body detection result and the second living body detection result corresponding to each fourth attacker sample image, adjusting the network parameters of the generated network 510.
Specifically, the fourth attacker sample image and the reference image are input into the living body detection network 540 for living body detection. If the detection result for the reference image is 1, the living body detection network 540 has living body detection capability. If the output for the fourth attacker sample image is 0, the network parameters of the generation network 510 need to be adjusted; if it is 1, they do not. The output of the living body detection network 540 is 0 or 1, with 0 indicating a forgery and 1 indicating a real (live) image.
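As an illustrative sketch of the corresponding liveness term (the scalar 0/1 output convention follows the description above, but the mean-squared-error form of the loss is an assumption):

```python
import torch.nn.functional as F

def liveness_loss(liveness_net, fourth_samples, reference_samples):
    """Penalize the generator when the liveness network scores the patched
    fourth attacker samples differently from their genuine reference images.

    liveness_net outputs a score per image in [0, 1]: 1 = real, 0 = forged.
    """
    scores_fake = liveness_net(fourth_samples)     # patched images
    scores_real = liveness_net(reference_samples)  # genuine second attacker samples
    # shrinking the gap drives the patched images to also pass as "real"
    return F.mse_loss(scores_fake, scores_real)
```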
In other embodiments, after the original attack image is obtained through the generation network 510, the original attack image and the corresponding attacker image may be input into the feature extraction network 550 in the attack image generation model 500 to supervise and optimize the images generated by the generation network 510. For example, the feature extraction network 550 extracts the third features of the sample style image and of the first attacker sample image, and the network parameters of the generation network 510 are adjusted based on the similarity between these third features.
Specifically, the third features include style features and content features; the network parameters of the generation network 510 are adjusted based on a first similarity between the style features of the sample style image and of the attacked sample image, and a second similarity between the content features of the sample style image and of the first attacker sample image. For example, the feature extraction network 550 may be used to judge whether the first similarity and the second similarity are 1, and if not, the network parameters of the generation network 510 are adjusted. It can be understood that the similarity may be computed as, for example, cosine similarity, which is not limited herein.
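For illustration, the combined objective could be written as below; the `extractor` interface returning a (style feature, content feature) pair per image is an assumption, since the application does not detail the outputs of the feature extraction network:

```python
import torch.nn.functional as F

def style_content_loss(extractor, sample_style_img, attacked_img, attacker_img):
    """First term: the style of the sample style image should follow the
    attacked sample; second term: its content should follow the first
    attacker sample. Minimized as both similarities approach 1."""
    style_gen, content_gen = extractor(sample_style_img)
    style_victim, _ = extractor(attacked_img)
    _, content_attacker = extractor(attacker_img)
    first = (1.0 - F.cosine_similarity(style_gen, style_victim)).mean()
    second = (1.0 - F.cosine_similarity(content_gen, content_attacker)).mean()
    return first + second
```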
The application designs a color space mapping module into the attack-material generation pipeline, considering and solving the information loss that occurs when mapping from the digital color space to the physical (printer) color space. The attack material is drawn from an attacker image set containing a series of images of the attacker under different illumination conditions, different scenes, different expressions, and the like; together with a multi-scene data meta-learning strategy, this addresses the problem of the target attack image failing under changes in the attacker's expression and the ambient light. Furthermore, the application designs a living body detection network and meta-learning models based on imitated physical attacks, so that the target attack image can simultaneously defeat living body detection and the recognition system, making it more robust against real biometric authentication systems.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Referring to fig. 6, fig. 6 is a schematic diagram of a framework of an attack material generation model according to an embodiment of the present application. The attack material generation model 600 includes an acquisition module 610, a style generation module 620, and a patch generation module 630. The acquisition module 610 performs acquisition of an attacker image. The style generation module 620 performs generating an original attack image whose style matches that of the attacked image based on the attacker image, the original attack image containing image contents of a corresponding attack region in the attacker image. The patch generation module 630 performs mapping of the original attack image to the printer color space to obtain a target attack image, where the target attack image is printed to form attack materials for attacking the biometric authentication system.
Referring to fig. 7, fig. 7 is a schematic diagram of the framework of an electronic device 70 according to an embodiment of the application. The electronic device 70 comprises a memory 71 and a processor 72 coupled to each other, the processor 72 being configured to execute program instructions stored in the memory 71 to implement the steps of any of the attack image generation method embodiments described above and/or the steps of any of the generation network training method embodiments described above. In one specific implementation scenario, the electronic device 70 may include, but is not limited to, a microcomputer or a server, and may also include mobile devices such as notebook computers and tablet computers, which is not limited herein.
In particular, the processor 72 is configured to control itself and the memory 71 to implement the steps of any of the attack image generation method embodiments described above and/or the steps of any of the generation network training method embodiments described above. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip with signal processing capability. The processor 72 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or any conventional processor. In addition, the processor 72 may be implemented jointly by integrated circuit chips.
Referring to FIG. 8, FIG. 8 is a schematic diagram of a computer readable storage medium 80 according to an embodiment of the application. The computer readable storage medium 80 stores program instructions 801 that can be executed by a processor, the program instructions 801 being configured to implement steps of any of the attack image generation method embodiments described above and/or to implement steps of any of the generation network training method embodiments described above.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing description of various embodiments is intended to highlight differences between the various embodiments, which may be the same or similar to each other by reference, and is not repeated herein for the sake of brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (16)

1. An attack image generation method, characterized by comprising:
Obtaining an attacker image;
Generating an original attack image with a style matched with that of an attacked image based on the attacker image, wherein the original attack image contains image contents of a corresponding attack area in the attacker image;
And mapping the original attack image to a printer color space to obtain a target attack image, wherein the target attack image is printed and then used for forming attack materials for attacking the biological authentication system.
2. The method of claim 1, wherein mapping the original attack image to a printer color space results in a target attack image, comprising:
And adjusting the pixel value of the original attack image from a first pixel value corresponding to the digital color space to a second pixel value corresponding to the printer color space by utilizing the mapping relation between the digital color space and the printer color space so as to obtain the target attack image.
3. The method of claim 1, wherein generating an original attack image having a style matching an attacked image based on the attacker image comprises:
Performing style adjustment on the attacker image to obtain a target style image with a style matched with that of the attacked image;
Acquiring an image part of a local area in the target style image to obtain the original attack image, wherein the local area comprises the attack area;
or, generating an original attack image with the style matched with that of the attacked image based on the attacker image comprises:
Acquiring an image part of a local area in the attacker image to obtain an image to be adjusted, wherein the local area comprises the attack area;
and carrying out style adjustment on the image to be adjusted to obtain an original attack image with the style matched with the image of the attacked person.
4. The method of claim 3, wherein the performing style adjustment on the attacker image to obtain a target style image with a style matching the attacked image, or performing style adjustment on the image to be adjusted to obtain an original attack image with a style matching the attacked image, comprises:
carrying out forward convolution processing on the first image to obtain a feature vector;
Performing reverse convolution processing on the feature vector to obtain a second image;
The first image is an attacker image, the second image is a target style image, or the first image is an image to be adjusted, and the second image is an original attack image.
5. A method according to claim 3, wherein the target attack image corresponds to a local region of the attacker image;
The step of obtaining the image part of the local area in the target style image to obtain the original attack image, or the step of obtaining the image part of the local area in the attacker image to obtain the image to be adjusted, wherein the local area comprises the attack area and comprises the following steps:
obtaining a mask image, wherein the size of the mask image corresponds to that of a third image, the pixel value of the local area corresponding to the mask image is an effective pixel value, and the pixel value of the non-attack area corresponding to the mask image is an ineffective pixel value;
using the mask image to screen pixels from the third image corresponding to valid pixel values of the mask image to form a fourth image;
the third image is the target style image, the fourth image is the original attack image, or the third image is an attacker image, and the fourth image is the image to be adjusted.
6. The method of claim 1, wherein the target attack image corresponds to a localized area of the attacker image, the target attack image being printed to obtain a first printed image, the attacker image being printed to obtain a second printed image, the first printed image being superimposed on a corresponding location on the second printed image to form the attack material.
7. The method of claim 1, wherein the target attack image is obtained using a generation network, wherein the generation network is configured to make an image style adjustment, and wherein the style of the target attack image is determined based on the image style adjustment;
the method further comprises the steps of:
Generating a sample original attack image based on a first attacker sample image, wherein the content of the sample original attack image comprises the image content of a sample attack area in the first attacker sample image, the style of the sample original attack image is matched with that of an attacked sample image, and the sample original attack image is determined by adjusting the image style of the generation network;
Mapping the sample original attack image to a printer color space to obtain a sample target attack image;
And adjusting network parameters of the generated network based on the sample target attack image.
8. The method of claim 7, wherein adjusting network parameters of the generated network based on the sample target attack image comprises:
Superposing the sample target attack image in at least one second attacker sample image to obtain at least one third attacker sample image, wherein the second attacker sample image and the first attacker sample image are selected from an attacker sample image set;
adjusting network parameters of the generated network by utilizing first feature similarity between each third attacker sample image and the attacked sample image;
and/or, the adjusting the network parameters of the generated network based on the sample target attack image includes:
Superposing the sample target attack image in at least one second attacker sample image to obtain at least one fourth attacker sample image, wherein the second attacker sample image and the first attacker sample image are selected from an attacker sample image set;
and adjusting network parameters of the generated network based on the living body detection result of each fourth attacker sample image.
9. The method of claim 8, wherein said adjusting network parameters of the generated network using the first feature similarities between each of the third attacker sample images and the attacked sample image comprises:
Selecting a current use model from a plurality of meta-learning models, wherein the current use model comprises at least one current training model, and the number of the current training models is consistent with the number of the third attacker sample images;
respectively forming an image pair by each third attacker sample image and each attacked sample image, and respectively extracting first features of a group of corresponding image pairs by using each current training model;
adjusting network parameters of the generated network based on the similarity between the first features of each set of the image pairs;
and repeating the steps in response to the generation network not meeting the iteration ending condition currently.
10. The method of claim 9, wherein the current usage model further comprises a current test model;
and in response to the generation network not currently meeting the iteration ending condition, repeatedly executing the steps, including:
Regenerating a sample original attack image by using the adjusted generation network, and obtaining a new third attacker sample image based on the regenerated sample original attack image;
Extracting second features of the new third attacker sample image and the attacked sample image by using the current test model;
and in response to the similarity between the new third attacker sample image and the second feature of the attacked sample image not meeting the similarity requirement, determining that the generating network does not currently meet the iteration ending condition, and repeatedly executing the steps.
11. The method of claim 8, wherein adjusting network parameters of the generated network based on the living detection results of each of the fourth aggressor sample images comprises:
Performing living body detection on each fourth attacker sample image by using a living body detection network to obtain a first living body detection result of each fourth attacker sample image; and
Performing living body detection on the reference images corresponding to the fourth attacker sample images by using the living body detection network to obtain second living body detection results of the reference images corresponding to the fourth attacker sample images;
And adjusting network parameters of the generated network based on the difference between the first living body detection result and the second living body detection result corresponding to each fourth attacker sample image.
12. The method of claim 7, wherein generating a sample raw attack image based on the first attacker sample image comprises:
Performing style adjustment on the first attacker sample image to obtain a sample style image with a style matched with that of the attacked sample image;
acquiring an image part of a local area in the sample style image to obtain the sample original attack image, wherein the local area comprises the sample attack area;
the method further comprises the steps of:
extracting third features of the sample style image and the first attacker sample image by using a feature extraction network;
Based on a similarity between the sample style image and a third feature of the first aggressor sample image, network parameters of the generated network are adjusted.
13. The method of claim 12, wherein the third feature comprises a style feature and a content feature;
Network parameters of the generation network are adjusted based on a first similarity between style characteristics of the sample style image and the attacked sample image, and a second similarity between content characteristics of the sample style image and the first attacked sample image.
14. A training method for a generation network, comprising:
Generating a sample original attack image based on a first attacker sample image, wherein the content of the sample original attack image comprises the image content of a sample attack area in the first attacker sample image, the style of the sample original attack image is matched with that of an attacked sample image, and the sample original attack image is determined by adjusting the image style by a generation network of an attack image generation model;
Mapping the sample original attack image to a printer color space by using a color mapping module of the attack image generation model to obtain a sample target attack image;
And adjusting network parameters of the generated network based on the sample target attack image.
15. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the attack image generation method of any of claims 1 to 13 and/or to implement the training method of the generation network of claim 14.
16. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the attack image generation method of any of claims 1 to 13 and/or implement the training method of generating a network of claim 14.
CN202410190531.5A 2024-02-20 2024-02-20 Attack image generation method, training method of generated network and related device Pending CN118196913A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410190531.5A CN118196913A (en) 2024-02-20 2024-02-20 Attack image generation method, training method of generated network and related device


Publications (1)

Publication Number Publication Date
CN118196913A true CN118196913A (en) 2024-06-14

Family

ID=91397276



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination