CN116342982A - Sample image generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116342982A
Authority
CN
China
Prior art keywords: image, model, image processing, obtaining, result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310403771.4A
Other languages
Chinese (zh)
Inventor
孙海栋 (Sun Haidong)
郭维 (Guo Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310403771.4A
Publication of CN116342982A
Legal status: Pending

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06V — Image or Video Recognition or Understanding
    • G06V 10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/764 — Recognition using classification, e.g. of video objects
    • G06V 10/82 — Recognition using neural networks
    • G06V 2201/07 — Target detection
    • G06T 5/90 — Dynamic range modification of images or parts thereof


Abstract

The disclosure provides a sample image generation method and apparatus, an electronic device, and a storage medium, relating to the field of artificial intelligence and in particular to image processing. The scheme is as follows: obtain a first result of processing a first image with a known image processing model, where the known image processing model is a trained neural network model whose model code is known and which has the same image processing function as a model to be tested; obtain disturbance information based on the first result and the annotation information of the first image, where the processing result represented by the annotation information differs from the correct processing result of the first image for that image processing function; add the disturbance information to the first image to obtain a second image; perform an attack test on the known image processing model with the second image; and, if the attack succeeds, obtain, based on the second image, a sample image for attack-testing the model to be tested. With the scheme provided by the embodiments of the disclosure, sample images can be generated for the model to be tested.

Description

Sample image generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to the field of image processing techniques.
Background
In fields such as autonomous driving and security, many tasks are completed on the basis of images, so images must be processed according to task requirements. With the development of artificial intelligence, neural network models are increasingly used to process images; however, a model's processing result is easily affected by image quality, and incorrect results may occur. To improve the robustness of a neural network model, before it is put into use it must be attack-tested with a variety of sample images, and therefore a large number of sample images need to be generated for the model.
Disclosure of Invention
The present disclosure provides a sample image generation method, apparatus, electronic device, and storage medium.
According to an aspect of the present disclosure, there is provided a sample image generating method including:
obtaining a first result of processing a first image based on a known image processing model, wherein the known image processing model is a trained neural network model whose model code is known, and the known image processing model has the same image processing function as a model to be tested;
obtaining disturbance information based on the first result and the annotation information of the first image, wherein the processing result represented by the annotation information differs from the correct processing result of the first image for the image processing function;
adding the disturbance information to the first image to obtain a second image;
performing an attack test on the known image processing model using the second image; and
if the attack succeeds, obtaining, based on the second image, a sample image for performing an attack test on the model to be tested.
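The five steps above can be sketched end to end as a single function. This is a toy illustration, not the patent's implementation: the helper callables `get_result`, `get_perturbation`, and `attack_succeeded`, and the brightness-threshold "model", are all illustrative stand-ins for components the patent leaves unspecified.

```python
def sample_image_generation(first_image, annotation, known_model,
                            get_result, get_perturbation, attack_succeeded):
    """End-to-end sketch of steps S101-S105 over a toy model."""
    first_result = get_result(known_model, first_image)                # S101
    disturbance = get_perturbation(first_result, annotation)           # S102
    second_image = [p + d for p, d in zip(first_image, disturbance)]   # S103
    if attack_succeeded(known_model, second_image):                    # S104
        return second_image        # S105: use the second image as the sample image
    return None

# Toy instantiation: the "model" is just a mean-brightness threshold.
get_result = lambda m, img: "bright" if sum(img) / len(img) > m else "dark"
# Push every pixel toward the (deliberately wrong) annotation "dark":
get_perturbation = lambda res, ann: [-0.4] * 3 if ann == "dark" else [0.4] * 3
attack_succeeded = lambda m, img: get_result(m, img) == "dark"

sample = sample_image_generation([0.9, 0.8, 0.7], "dark", 0.5,
                                 get_result, get_perturbation, attack_succeeded)
print(sample is not None)   # True: the second image fooled the known model
```

If the attack on the known model fails, no sample image is produced for this first image, matching the conditional in step S105.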
According to another aspect of the present disclosure, there is provided a sample image generating apparatus including:
a first result obtaining module, configured to obtain a first result of processing a first image based on a known image processing model, wherein the known image processing model is a trained neural network model whose model code is known, and the known image processing model has the same image processing function as a model to be tested;
a disturbance information obtaining module, configured to obtain disturbance information based on the first result and the annotation information of the first image, wherein the processing result represented by the annotation information differs from the correct processing result of the first image for the image processing function;
a disturbance information adding module, configured to add the disturbance information to the first image to obtain a second image;
an attack test module, configured to perform an attack test on the known image processing model using the second image; and
a sample image obtaining module, configured to obtain, based on the second image, a sample image for performing an attack test on the model to be tested if the attack succeeds.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the sample image generation method described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described sample image generation method.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the sample image generation method described above.
As described above, in the scheme provided by the embodiments of the present disclosure, after disturbance information is added to the first image to obtain the second image, the second image is used to attack-test the known image processing model. Because the known image processing model has the same image processing function as the model to be tested, a second image that successfully attacks the known image processing model is also strongly aggressive toward the model to be tested. Such a second image can therefore be used as a sample image for attack-testing the model to be tested, and a large number of sample images can be obtained in this way.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of a sample image generating method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of another sample image generation method provided by an embodiment of the present disclosure;
fig. 3 is a flowchart of yet another sample image generation method provided by an embodiment of the present disclosure;
fig. 4 is a flowchart of yet another sample image generation method provided by an embodiment of the present disclosure;
fig. 5 is a flowchart of a disturbance information obtaining method provided by an embodiment of the present disclosure;
fig. 6 is a schematic structural view of a sample image generating device provided in an embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device used to implement a sample image generation method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Before a neural network model for image processing is put into use, attack tests with sample images are needed to ensure the model has a certain robustness. When a test platform attack-tests a model to be tested, it often has access only to an executable program of the model, not to its code. The platform therefore typically generates sample images at random, or derives them from the function of the model to be tested, and tests the model with those images.
For example, a sample image may be obtained by adding random noise to an image, or sensitive pixels may be identified according to the function of the model to be tested and random disturbance information added to them. Although such methods can generate sample images, the images carry a certain randomness, so the test effect on the model to be tested is poor.
An attack test is a testing method in which a sample image carrying interference information is input into a neural network model to test whether the model processes it correctly: if the model cannot process the sample image correctly, the attack is considered successful; otherwise it is considered failed. Attack tests can thus be used to evaluate the robustness of neural network models, expose their vulnerabilities and weaknesses, and help developers improve a model's reliability and safety.
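The success criterion just described reduces to a simple predicate: run the model on the perturbed image and compare with the correct result. The `toy_model` below is an illustrative stand-in for a trained neural network, not anything from the patent.

```python
def attack_test(model, sample_image, correct_result):
    """An attack succeeds if the model fails to process the sample
    image (which carries interference information) correctly."""
    return model(sample_image) != correct_result

# Toy stand-in for a neural network: a mean-brightness classifier.
def toy_model(image):
    return "bright" if sum(image) / len(image) > 0.5 else "dark"

clean = [0.9, 0.8, 0.7]       # still classified correctly
perturbed = [0.4, 0.3, 0.2]   # interference pushed it across the boundary
print(attack_test(toy_model, clean, "bright"))      # False: attack failed
print(attack_test(toy_model, perturbed, "bright"))  # True: attack succeeded
```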
The execution subject of the scheme provided by the embodiments of the present disclosure is described below.
The execution subject may be an electronic device used to generate sample images, such as a desktop computer, a notebook computer, or a server, for example an electronic device of a test platform. For convenience of description, this electronic device is referred to below as the sample generation device.
The sample image generating method provided by the embodiment of the present disclosure is described in detail below by way of specific embodiments.
In one embodiment of the present disclosure, referring to fig. 1, fig. 1 provides a flow chart of a sample image generation method. The method includes the following steps S101-S105.
Step S101: a first result of processing the first image based on the known image processing model is obtained.
The known image processing model is a trained neural network model whose model code is known, and it has the same image processing function as the model to be tested. For example, the known image processing model may be obtained by training a known open-source neural network model. To ensure the two models have the same image processing function, the open-source neural network model can be trained into a known image processing model whose function matches that of the model to be tested. For example, if the image processing function of the model to be tested is image classification, a known image processing model for image classification can be trained; in that case the first result obtained by processing the first image is a classification result for the first image. Similarly, if the function of the model to be tested is target detection, a known image processing model for target detection can be trained, and the first result is a target detection result for the first image. The image processing function may also be subdivided further: for example, if the classification function of the model to be tested classifies faces, the open-source neural network model can be trained into a known image processing model for face classification, and the first result is then a face classification result for the first image.
The first image may be an original image used to generate the sample image, and in addition, the first image may be another image, which will be described in the following embodiments.
The implementation of step S101 is described below.
In one implementation, the sample generating device may take the feature of the first image as an input of a known image processing model, and obtain a processing result of the known image processing model on the feature of the first image as the first result.
For example, if the known image processing model classifies humans, cats, dogs, and the like, and the first image is an image of a human, the features of the first image may be input to the known image processing model; if the resulting classification is "human", then "human" is taken as the first result.
Other implementations of step S101 will be described in the following embodiments, which are not described in detail here.
Step S102: and obtaining disturbance information based on the first result and the labeling information of the first image.
The processing result represented by the annotation information differs from the correct processing result of the first image for the image processing function. Here the image processing function means the shared function of the known image processing model and the model to be tested. For example, if the image processing function classifies humans, cats, dogs, and the like, and the first image is an image of a human, the annotation information of the first image may be set to "cat". If the image processing function is target detection and the first image shows a car traveling on a road, the annotation information may be set so that no car is detected, or so that a car is detected but at a position different from its position in the first image.
Disturbance information can be understood as information that interferes with the image content, for example information that changes the pixel values of pixels in the image. It is added to the first image so that the pixel values of the resulting image change and the processing result of that image, for the image processing function, moves closer to the annotation information of the first image.
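The patent does not fix a particular algorithm for deriving the disturbance. One widely used choice that matches this description is an FGSM-style step: move each pixel by a small amount epsilon in the direction that drives the model's output toward the (wrong) annotation. The gradient values and epsilon below are purely illustrative assumptions.

```python
def sign(x):
    return (x > 0) - (x < 0)

def derive_disturbance(grad_loss, epsilon=0.05):
    # One FGSM-style choice: epsilon times the sign of the loss gradient,
    # where the loss measures distance from the (wrong) annotation.
    return [epsilon * sign(g) for g in grad_loss]

def add_disturbance(image, disturbance):
    # Clamp each perturbed pixel back into the valid range [0, 1].
    return [min(1.0, max(0.0, p + d)) for p, d in zip(image, disturbance)]

first_image = [0.2, 0.8, 0.5]
grad = [0.3, -1.2, 0.0]              # illustrative gradient w.r.t. pixels
second_image = add_disturbance(first_image, derive_disturbance(grad))
print([round(p, 4) for p in second_image])   # [0.25, 0.75, 0.5]
```

A zero gradient component leaves the corresponding pixel unchanged, and the clamp keeps the second image a valid image after the addition in step S103.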
The implementation of step S102 is described in the following embodiments, which are not described in detail here.
Step S103: and adding disturbance information to the first image to obtain a second image.
The implementation of step S103 is described in the following embodiments, which are not described in detail here.
Step S104: and performing attack testing on the known image processing model by using the second image.
This step is described in different implementations below.
In one implementation, the sample generation device directly inputs the obtained second image into the known image processing model, which processes it, thereby carrying out the attack test on the known image processing model.
In another implementation, when there are multiple known image processing models, the sample generation device may input the second image into each known image processing model in turn, each model processing the second image, thereby attack-testing all of the known image processing models. Other implementations for this case are described in the embodiment of fig. 3 below.
In yet another implementation, when there are multiple first images, steps S101-S103 may be repeated until a second image corresponding to each first image is obtained, and the known image processing model is then attack-tested with each second image.
Step S105: and under the condition that the attack is successful, obtaining a sample image for carrying out attack test on the model to be tested based on the second image.
Under the condition that the attack is successful, the second image can be directly used as a sample image for carrying out attack test on the model to be tested.
The implementation of step S105 is described in the following embodiments, which are not described in detail here.
As described above, in the scheme provided by the embodiments of the present disclosure, after disturbance information is added to the first image to obtain the second image, the second image is used to attack-test the known image processing model. Because the known image processing model has the same image processing function as the model to be tested, a second image that successfully attacks the known image processing model is also strongly aggressive toward the model to be tested, and can therefore be used as a sample image for attack-testing the model to be tested; a large number of sample images can be obtained in this way. Moreover, the disturbance information added to the first image is obtained from the first result produced by the known image processing model and the annotation information of the first image, rather than generated randomly. The disturbance information is therefore not random, nor is the second image, nor is the sample image obtained from it; compared with the prior art, attack tests on the model to be tested using such sample images are more effective.
An implementation of obtaining the sample image in step S105 is explained below.
In one embodiment of the present disclosure, referring to fig. 2, fig. 2 provides a flowchart of another sample image generation method. The method includes the following steps S201-S206, where step S105 may be implemented by steps S205-S206 below.
Step S201: a first result of processing the first image based on the known image processing model is obtained.
Step S202: and obtaining disturbance information based on the first result and the labeling information of the first image.
Step S203: and adding disturbance information to the first image to obtain a second image.
Step S204: and performing attack testing on the known image processing model by using the second image.
If the first cycle end condition has been satisfied, step S206 is directly executed, and if the first cycle end condition has not been satisfied, step S205 described below is executed.
Steps S201 to S204 are the same as steps S101 to S104 described above, and will not be described in detail here.
Step S205: and under the condition that the attack is successful, retraining the known image processing model by using the second image, and updating the first image into the second image.
After step S205 is executed, the process returns to step S201 until the first cycle end condition is satisfied.
The concept of attack success is described below.
Condition one: the attack result of the second image on the known image processing model is the same as the annotation information of the first image used to obtain that second image. The attack result is the output of the known image processing model when the second image is used to attack-test it.
Condition two: the attack result of the second image on the known image processing model differs from the correct processing result, for the image processing function, of the first image used to obtain that second image.
If the result of the attack test of the second image on the known image processing model satisfies condition one or condition two, the second image can be considered to have attacked the known image processing model successfully.
Condition three: when there are multiple known image processing models, the second image attacks every known image processing model successfully.
If there is one known image processing model, the attack is considered successful when condition one or condition two is satisfied; if there are multiple known image processing models, the attack is considered successful when condition three is satisfied.
The implementation of step S205 is explained below.
In one implementation, the known image processing model may be obtained by training an open-source neural network model, and each time a new second image is obtained the sample generation device may use it as a training sample to further train the known image processing model, producing a new known image processing model.
Furthermore, the second image used to retrain the known image processing model may be a single successfully attacking second image obtained through steps S201-S204, or multiple successfully attacking second images obtained by performing steps S201-S204 on multiple different first images. The embodiments of the present disclosure are not limited in this regard.
When the first cycle end condition is not satisfied, after step S205 is performed and the flow returns to step S201 to obtain the first result, the current first image is the second image used for retraining, and the current known image processing model is the retrained model.
The first cycle end condition may be that the known image processing model has been retrained a preset number of times. When there are multiple known image processing models, it may be that condition three is satisfied after step S204. It may also be that the currently generated second image has converged; for example, convergence can be judged by computing the mean or maximum per-pixel error between the second image and the original image.
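The convergence variant of the end condition can be sketched as follows. The function name, the tolerance `tol`, and the choice between mean and maximum error are illustrative assumptions; the patent only says a mean or maximum per-pixel error is evaluated.

```python
def has_converged(second_image, reference_image, tol=1e-3, use_max=True):
    """One reading of the first cycle end condition: the generated second
    image has converged if its maximum (or mean) per-pixel error against a
    reference image falls below an illustrative tolerance."""
    errors = [abs(a - b) for a, b in zip(second_image, reference_image)]
    metric = max(errors) if use_max else sum(errors) / len(errors)
    return metric < tol

print(has_converged([0.500, 0.700], [0.5004, 0.7001]))  # True: max error < 1e-3
print(has_converged([0.500, 0.700], [0.5, 0.8]))        # False: one pixel moved 0.1
```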
Step S206: and determining the latest second image as a sample image for carrying out attack test on the model to be tested.
In this embodiment of the present disclosure, the second image obtained by the latest step is: in the case where the first cycle end condition is satisfied in step S205, the second image obtained in step S204.
Retraining the known image processing model on second images that attack it successfully improves its robustness. A more robust known image processing model is less likely to produce a first result identical to the annotation information of the first image, so the gap between the first result and the annotation information grows, and disturbance information with stronger interference capability can be derived from them, yielding a second image with stronger interference capability. Repeating this process until the first cycle end condition is met produces an ever stronger second image at each pass; taking the latest second image as the sample image therefore yields a sample image with greater interference capability, so that the generated sample image tests the model to be tested more effectively.
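The perturb-attack-retrain loop of fig. 2 can be caricatured with toy stand-ins: a brightness-threshold classifier plays the known image processing model, a fixed pixel decrement plays the perturbation step, and tightening the threshold plays retraining. Every component and constant here is an assumption for illustration only.

```python
def make_known_model(threshold):
    # Threshold classifier standing in for the known image processing model.
    return lambda image: "bright" if sum(image) / len(image) > threshold else "dark"

def generate_sample(first_image, wrong_label, rounds=3, step=0.25):
    """Fig. 2 loop sketch: perturb toward the wrong label, attack-test,
    'retrain' on success, update the first image to the second image, and
    stop after a preset number of rounds (the first cycle end condition)."""
    threshold = 0.5
    image = list(first_image)
    for _ in range(rounds):
        model = make_known_model(threshold)
        image = [max(0.0, p - step) for p in image]   # disturbance toward "dark"
        if model(image) == wrong_label:               # attack on the known model succeeded
            threshold -= 0.05                         # stand-in for retraining the model
    return image                                      # latest second image = sample image

sample = generate_sample([1.0, 0.8], "dark", rounds=2)
print(make_known_model(0.5)(sample))   # the sample fools the original model
```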
In one embodiment of the present disclosure, referring to fig. 3, fig. 3 provides a flowchart of yet another sample image generation method. The method includes the following steps S301-S306. Step S101 may be implemented as step S301 below, and another implementation of step S104 is given by step S305 below.
Step S301: a first result of processing the first image based on an image processing model that is not traversed among a plurality of known image processing models is obtained.
In one implementation, during the traversal the sample generation device may, upon determining that a known image processing model has not yet been used, use that model to process the first image and obtain the first result.
In this case the first image may be the original image used to generate the sample image, or it may be the second image obtained by processing the previous first image with the previous known image processing model, that is, the second image obtained the last time step S305 was performed.
Step S302: and obtaining disturbance information based on the first result and the labeling information of the first image.
Step S303: and adding disturbance information to the first image to obtain a second image.
Step S302 is the same as step S102, and step S303 is the same as step S103, and will not be described in detail here.
In the case where the traversal of the plurality of known image processing models is completed, step S305 is directly performed, and otherwise, step S304 described below is performed.
Step S304: in the case where the plurality of known image processing models do not traverse, one image processing model is updated to an image processing model that is not traversed among the plurality of known image processing models, and the first image is updated to the second image.
After step S304 is executed, the flow returns to step S301.
In one implementation, if during the traversal the sample generation device determines that some image processing model has not been traversed, it may update the image processing model used to process the first image to that untraversed model.
Specifically, after an image processing model is used, its identifier may be recorded in a table. If the identifier of some known image processing model is not recorded in the table, the plurality of known image processing models have not completed the traversal; if after traversing each model every identifier is found recorded in the table, the traversal is complete.
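The bookkeeping just described can be sketched with a set playing the role of the identifier table. The `perturb` callable stands in for steps S301-S303 and, like the model identifiers, is an illustrative assumption.

```python
def traverse_models(models, first_image, perturb):
    """Fig. 3 sketch: record each model's identifier in a 'table' once it
    has been used; the traversal is complete when every identifier is
    recorded. `perturb` stands in for steps S301-S303."""
    table = set()                          # recorded identifiers
    image = first_image
    for model_id, model in models.items():
        if model_id in table:              # already traversed, skip
            continue
        image = perturb(image, model)      # obtain result, derive and add disturbance
        table.add(model_id)
    complete = (table == set(models))      # every identifier recorded
    return image, complete

models = {"m1": None, "m2": None, "m3": None}
stacked, done = traverse_models(models, [0.5], lambda img, m: [p + 0.1 for p in img])
print(done, round(stacked[0], 4))   # True 0.8: three perturbations superimposed
```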
Step S305: and carrying out attack testing on each known image processing model by using the latest obtained second image.
The second image used in step S305 is the second image most recently obtained in step S303, in the case where it is determined that the plurality of known image processing models have completed the traversal.
The implementation of step S305 is the same as the other implementation in step S104.
Step S306: and under the condition that the attack is successful, obtaining a sample image for carrying out attack test on the model to be tested based on the second image.
Step S306 is the same as step S105 and will not be described in detail here.
As can be seen from the above, by processing the image with a plurality of known image processing models, a result can be obtained for each known image processing model in turn, and pieces of disturbance information are added to the image in turn based on those results and the labeling information. The superimposed disturbance information can therefore interfere with all of the known image processing models, so a sample image carrying the multiple pieces of disturbance information transfers better to other neural network models with the same function as the known image processing models, which improves the effect of testing the model to be tested with the generated sample image.
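The fig. 3 flow (steps S301–S304) can be sketched as a loop over the known models; `get_perturbation` is a hypothetical helper standing in for steps S301–S302 (process the image, derive disturbance information), and the list-based image is a toy stand-in for real pixel data:

```python
def generate_multi_model_sample(first_image, known_models, get_perturbation):
    """Traverse the known models; for each one, derive disturbance
    information from its result on the current image and superimpose it,
    so perturbations against all models accumulate in one image."""
    image = first_image
    for model in known_models:                 # S304: next untraversed model
        delta = get_perturbation(model, image)  # S301 + S302
        image = [p + d for p, d in zip(image, delta)]  # S303
    return image  # the latest second image, used for the attack test in S305
```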
In an embodiment of the present disclosure, the embodiment provided in fig. 1 above may further include the following step A after step S103.
Step A: if the number of times of obtaining the second image for the known image processing model does not reach the preset number of times, updating the first image to the second image, and returning to step S101.
That is, when the number of times the second image has been obtained for the known image processing model does not reach the preset number of times, the first image is updated to the second image and the flow returns to step S101. In this case, the second image used in step S104 is the second image obtained in step S103 when the number of times the second image has been obtained for the known image processing model reaches the preset number of times.
In another embodiment of the present disclosure, step A may also be included after step S203 in the embodiment provided in fig. 2 above. In this case, when step A is performed, if the number of times the second image has been obtained for the known image processing model does not reach the preset number of times, the first image is updated to the second image and the flow returns to step S201. The second image used in step S204 is the second image obtained in step S203 once the number of times the second image has been obtained for the current known image processing model reaches the preset number of times.
In addition, in the case where step S205 has been performed and the flow has returned to step S201, the known image processing model referred to in step A has become the retrained known image processing model. In this case, that is, when the known image processing model in step A has changed, step A needs to be performed anew: the number of times the second image has been obtained for the known image processing model is counted again from zero.
In still another embodiment of the present disclosure, in the embodiment provided in fig. 3, step A may also be included after step S303 and before step S304. In this case, when step A is performed, if the number of times the second image has been obtained for the known image processing model does not reach the preset number of times, the first image is updated to the second image and the flow returns to step S301. The latest second image used in step S305 is the second image obtained in step S303 when it is determined in step S304 that the plurality of known image processing models have completed the traversal and the number of times the second image has been obtained for the current known image processing model has reached the preset number of times.
In addition, similarly to the other embodiment of the present disclosure described above, when the known image processing model in step A changes, the number of times the second image has been obtained for the known image processing model needs to be recalculated when step A is performed. For example, if the number of times the second image has been obtained for the current known image processing model has reached the preset number of times but it is determined in step S304 that the plurality of known image processing models have not all been traversed, the known image processing model is updated to an image processing model that has not been traversed, so the count must be restarted when step A is performed again.
In yet another embodiment of the present disclosure, referring to fig. 4, fig. 4 provides a flow diagram of yet another sample image generation method. The above method includes the following steps S401 to S408.
Step S401: a first result of processing the first image based on an image processing model that is not traversed among a plurality of known image processing models is obtained.
Step S402: and obtaining disturbance information based on the first result and the labeling information of the first image.
Step S403: and adding disturbance information to the first image to obtain a second image.
If the number of times the second image has been obtained for the known image processing model has not reached the preset number of times, step S404 described below is performed. If the preset number of times has been reached but the plurality of known image processing models have not all been traversed, step S405 is performed. If the preset number of times has been reached and the plurality of known image processing models have completed the traversal, step S406 is performed.
Step S404: in the case where the number of times the second image has been obtained for the known image processing model does not reach the preset number of times, update the first image to the second image.
After step S404 is executed, the flow returns to step S401.
In this step, if the known image processing model has changed because step S405 or S407 was performed, the number of times the second image has been obtained for the known image processing model needs to be recalculated when step S404 is performed again.
Step S405: in the case where the plurality of known image processing models have not all been traversed, update the one image processing model to an image processing model that has not been traversed among the plurality of known image processing models, and update the first image to the second image.
After step S405 is executed, the flow returns to step S401.
In this step, if the plurality of known image processing models have become the latest retrained plurality of known image processing models because step S407 was performed, each of the retrained models needs to be traversed anew when step S405 is performed again.
Step S406: and carrying out attack testing on each known image processing model by using the latest obtained second image.
In this step, the latest obtained second image is the second image obtained in step S403, in the case where the number of times the second image has been obtained for the current known image processing model has reached the preset number of times (checked in step S404) and it has been determined that the plurality of known image processing models have completed the traversal (checked in step S405).
If the first cycle end condition is satisfied, step S408 is executed, and if the first cycle end condition is not satisfied, step S407 described below is executed.
Step S407: and under the condition that the attack is successful, retraining the known image processing model by using the second image, and updating the first image into the second image.
After step S407 is performed, the process returns to step S401 until the first cycle end condition is satisfied.
Step S408: and determining the latest second image as a sample image for carrying out attack test on the model to be tested.
Steps S401 to S403 are identical to steps S101 to S103, step S404 is identical to step a, steps S405 to S406 are identical to steps S304 to S305, and steps S407 to S408 are identical to steps S205 to S206, which will not be described in detail.
As can be seen from the above, by repeatedly adding disturbance information to the image a preset number of times for a given known image processing model, the image obtained for that model interferes with it more effectively, which improves the effect of testing the model to be tested with the generated sample image.
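The per-model repetition summarized above (steps S401–S405) can be sketched as a nested loop; `get_perturbation` is a hypothetical stand-in for steps S401–S402, and the list-based image representation is illustrative:

```python
def generate_sample_iterative(first_image, known_models, get_perturbation,
                              preset_times):
    """For each known model, disturbance information is derived and added
    `preset_times` times before moving on to the next model, so each model
    is attacked iteratively with accumulated perturbations."""
    image = first_image
    for model in known_models:              # S405: next untraversed model
        for _ in range(preset_times):       # S404: repeat until preset count
            delta = get_perturbation(model, image)  # S401 + S402
            image = [p + d for p, d in zip(image, delta)]  # S403
    return image  # latest second image, used for the attack test in S406
```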
In one embodiment of the present disclosure, referring to fig. 5, fig. 5 provides a flow chart of a disturbance information obtaining method. The method includes the following steps S501-S503.
Step S501: and obtaining the loss value of the known image processing model for carrying out image processing on the first image based on the loss value calculation mode recorded in the model code of the known image processing model, the first result and the labeling information of the first image.
In one implementation, the loss value may be calculated by substituting the first result and the labeling information of the first image into the loss value calculation mode.
For example, if the above-described loss value calculation method is a binary cross entropy loss function, the loss value of the known image processing model for performing image processing on the first image may be obtained by the following expression:
L = -[y·log(ŷ) + (1 - y)·log(1 - ŷ)]

wherein L represents the loss value, y represents the labeling information, and ŷ represents the first result.
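As a sketch, the binary cross entropy expression above can be computed as follows; the clamping constant `eps` is an implementation assumption to avoid log(0), not part of the patent text:

```python
import math

def bce_loss(y, y_hat, eps=1e-12):
    """Binary cross entropy between the labeling information y and the
    first result y_hat: L = -[y*log(y_hat) + (1-y)*log(1-y_hat)]."""
    y_hat = min(max(y_hat, eps), 1.0 - eps)  # keep y_hat away from 0 and 1
    return -(y * math.log(y_hat) + (1.0 - y) * math.log(1.0 - y_hat))
```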
Step S502: and carrying out back propagation on the loss value according to a gradient rising mode to obtain a gradient value.
Since the labeling information of the first image differs from the correct processing result of the first image for the image processing function, the loss value here characterizes the loss between the processing result of the current known image processing model and the labeling information. The gradient values therefore characterize the extent to which each pixel contributes to the loss value. When the pixel values are adjusted based on the gradient values, the loss can be reduced, so that processing the adjusted image with the known image processing model again yields a result closer to the labeling information.
Step S503: disturbance information is obtained based on the gradient values.
In one implementation, perturbation information is obtained based on the gradient values and learning rates corresponding to known image processing models.
Specifically, the gradient value may be multiplied by the learning rate corresponding to the known image processing model to calculate the disturbance information.
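A minimal sketch of steps S502–S503, with the framework backpropagation replaced by a finite-difference approximation of the gradient of the loss with respect to the model output; the single scalar output and helper names are illustrative assumptions, not the disclosure's implementation:

```python
import math

def bce(y, y_hat):
    # Binary cross entropy between labeling information y and result y_hat.
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

def perturbation(y, y_hat, learning_rate, h=1e-6):
    # Central finite difference as a stand-in for backpropagating the loss
    # (step S502); a real implementation would use an autodiff framework.
    grad = (bce(y, y_hat + h) - bce(y, y_hat - h)) / (2 * h)
    # Step S503 (one implementation): disturbance = gradient * learning rate.
    return learning_rate * grad
```

For y = 1 and ŷ = 0.5, the analytic gradient is -1/ŷ = -2, so with a learning rate of 0.1 the disturbance is approximately -0.2.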
Determining the disturbance information through the learning rate incorporates information about the known image processing model, which avoids over- or under-adjusting pixel values in the image. This prevents image distortion on the one hand, and slow or even non-converging image iteration on the other, improving the efficiency of generating the sample image while ensuring the quality of the generated sample image.
By obtaining the loss value, back-propagating it in a gradient-ascent manner to obtain the gradient value, and deriving the disturbance information from the gradient value, the disturbance information can be aimed at designated labeling information. A sample image carrying such targeted disturbance information can interfere with the model to be tested in a directed way, making the adversarial test more targeted and improving the effect of the attack test performed on the model to be tested with the sample image generated by this scheme.
The implementation of the above step S103 is explained below.
In one implementation, step S103 may be implemented by the following steps B and C.
Step B: according to the positions of the pixel points in the first image, superimpose the information at the same positions in the disturbance information onto the pixel values of those pixel points in the first image.
In one manner, the sample generation device may perform normalization processing on the first image to obtain a first image feature. In this case, the obtained disturbance information may be a disturbance information feature with the same format as the first image feature; the disturbance information feature is directly added to the first image feature, and the summed first image feature is subjected to inverse normalization processing to obtain the superimposed first image. The inverse normalization process and the normalization process are mutually inverse.
For example, the normalization processing performed by the sample generating device on the first image may include: first extracting the tensor of the first image and normalizing it to obtain an initial tensor. In this case, the obtained disturbance information is also in tensor form, and the sample generating device may directly add the disturbance tensor to the initial tensor to obtain the perturbed tensor. The sample generating device then performs inverse normalization on the perturbed tensor and converts it back into an image, obtaining the superimposed first image. In addition, normalizing the first image may further include operations such as truncation and transposition of its tensor, for example truncating and transposing the tensor into a 224×224×3 tensor so that it conforms to the input format of the known image processing model.
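A sketch of the normalization and inverse normalization described above, using NumPy; the crop size, the HWC-to-CHW transposition, and the rounding on the way back are assumptions about the concrete preprocessing, not specified by this disclosure:

```python
import numpy as np

def normalize(image_u8):
    # Scale 0-255 pixels to [0, 1], crop to 224x224, and transpose
    # HWC -> CHW so the tensor matches a typical model input format.
    t = image_u8.astype(np.float32) / 255.0
    t = t[:224, :224, :]
    return np.transpose(t, (2, 0, 1))

def denormalize(tensor):
    # Inverse of `normalize` (ignoring the crop): CHW -> HWC, back to 0-255.
    hwc = np.transpose(tensor, (1, 2, 0)) * 255.0
    return np.clip(hwc, 0, 255).round().astype(np.uint8)
```

A disturbance tensor of shape 3×224×224 could then be added directly to the output of `normalize` before calling `denormalize`.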
In addition, the normalization process may be performed before the first result is obtained, and the inverse normalization process may be performed during the process of obtaining the second image in the following step C, and the embodiment of the present disclosure does not limit the execution order of the normalization process and the inverse normalization process.
In another way, the disturbance information carries the position of each disturbance in the image, and the sample generating device may directly superimpose the disturbance information on the first image according to the position.
Step C: cutting off the overlapped pixel values to a preset range; based on the truncated pixel values, a second image is obtained.
Specifically, the sample generating device may traverse the pixel value of each pixel in the superimposed first image, and if it is determined that the pixel value exceeds a preset range, truncate the pixel value of the pixel to be within the preset range. For example, a pixel value closest to the pixel value in the preset range may be selected as the truncated pixel value.
For example, if each pixel value in the first image is represented by 0 to 255 and the preset range is [15, 240], then when a pixel value of 254 is traversed in the first image, that pixel value can be truncated to 240. In the one manner of step B above, if normalization processing is performed on the first image, the preset range may be, for example, [0.1, 0.8]; if a pixel value of 0.9 is traversed, it may be truncated to 0.8.
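The truncation in step C can be sketched as follows; the list-based pixel representation is illustrative:

```python
def truncate_pixels(pixels, low, high):
    """Clamp every superimposed pixel value into the preset range
    [low, high] by picking the closest in-range value, as in the
    example above (254 -> 240 for the range [15, 240])."""
    return [min(max(p, low), high) for p in pixels]
```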
Since a pixel value may be too high or too low after the disturbance information is added, the first image may be distorted. By contrast, in an attack test or a real attack scenario, the sample image used is usually recognizable to the human eye, that is, it is not severely distorted. Therefore, truncating the superimposed pixel values to a preset range limits the pixel values in the superimposed first image to a reasonable range, reducing severe distortion of the sample image, making the generated adversarial sample fit the actual application scenario, and improving the effect of testing the model to be tested with the generated sample image.
Other implementations of obtaining the second image in step S103 are described below.
In another implementation manner, the sample generating device may limit the variation value of each pixel in the first image to a preset variation range based on the step B, and obtain the second image based on the limited pixel value.
Specifically, the pixel value of the corresponding pixel in the unperturbed first image may be subtracted from the pixel value of each pixel in the superimposed first image to obtain the change value of each pixel. The change values are then traversed, and if a superimposed pixel value's change exceeds the preset change range, that pixel value is adjusted to limit the change. For example, if each pixel value in the first image is represented by 0 to 255 and the preset change range is [-200, 200], then a pixel whose change value is -210 can be increased by 10, and a pixel whose change value is 210 can be decreased by 10. Similarly, in the one manner of step B above, if normalization processing is performed on the first image, the preset change range may be [-0.8, 0.8]; if the change value of a pixel's tensor is -0.9, that tensor can be increased by 0.1, and if it is 0.9, it can be decreased by 0.1.
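The variation-limiting implementation above can be sketched as follows; the list-based pixel representation is illustrative:

```python
def limit_change(original, perturbed, max_change):
    """Clamp each pixel's change relative to the unperturbed first image
    into [-max_change, +max_change], e.g. a change of -210 with the range
    [-200, 200] is raised by 10."""
    out = []
    for o, p in zip(original, perturbed):
        change = max(-max_change, min(max_change, p - o))
        out.append(o + change)
    return out
```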
In still another implementation manner, the sample generating device may further directly use the superimposed first image as the second image based on the step B.
It should be noted that, in the embodiments provided in fig. 2, 3 and 4, step S103 may be implemented through step B and step C in each process of obtaining the second image, and step S103 may also be implemented through step B and step C in the last process of obtaining the second image, which is not limited in the embodiments of the present disclosure.
In one embodiment of the present disclosure, the model to be tested is an image contrast model.
The image comparison model can compare the similarity among a plurality of images, and can be applied to the fields of image searching, face recognition, object detection and the like. For example, in the field of face recognition, an image comparison model may be used as a face recognition model, and facial features in an image are extracted to determine whether the image can be matched with facial features in a comparison interface, so as to recognize a face in the image. For another example, in the field of object detection, image contrast models may be used to compare whether objects in different scenes are the same or similar.
Among image comparison models, the face recognition model in particular has high security requirements and therefore high robustness requirements. Generating the sample image with a known image processing model that has the same function as the image comparison model makes the generated sample image fit the actual application scenario better, so the generated sample image can be applied to attack testing of image comparison models such as face recognition models, improving the test effect of the generated sample image on the image comparison model to be tested.
In the technical scheme of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other handling of the user's personal information all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
It should be noted that, the two-dimensional face image in this embodiment is derived from the public data set.
In one embodiment of the present disclosure, referring to fig. 6, fig. 6 provides a schematic structural diagram of a sample image generating device, the device includes:
the first result obtaining module 601 is configured to obtain a first result of processing a first image based on a known image processing model, where the known image processing model is a known model code and a trained neural network model, and the known image processing model has the same image processing function as a model to be tested.
The disturbance information obtaining module 602 is configured to obtain disturbance information based on the first result and labeling information of the first image, where a processing result represented by the labeling information is different from a correct processing result of the first image for an image processing function;
the disturbance information adding module 603 is configured to add disturbance information to the first image to obtain a second image.
An attack testing module 604 for performing attack testing on the known image processing model using the second image.
The sample image obtaining module 605 is configured to obtain, based on the second image, a sample image for performing an attack test on the model to be tested, in a case that the attack is successful.
As can be seen from the above, in the solution provided by the embodiments of the present disclosure, after disturbance information is added to the first image to obtain the second image, the second image is used to perform an attack test on the known image processing model. Since the known image processing model has the same image processing function as the model to be tested, a second image that successfully attacks the known image processing model is also more aggressive toward the model to be tested, and can therefore serve as a sample image for attack-testing the model to be tested; in this way a large number of sample images can be obtained. In addition, the disturbance information added to the first image is derived from the first result of the known image processing model processing the first image and from the labeling information of the first image, rather than being randomly generated. The disturbance information is therefore not random, the second image is not random, and consequently the sample image obtained from the second image is not random either, so that compared with the prior art, the test effect is better when this sample image is used to perform an attack test on the model to be tested.
In one embodiment of the present disclosure, the sample image obtaining module 605 is specifically configured to: retrain the known image processing model using the second image; update the first image to the second image and trigger the first result obtaining module 601 until the first cycle end condition is met; and determine the latest second image as a sample image for performing an attack test on the model to be tested.
Retraining the known image processing model with a second image that can attack it successfully improves the robustness of the known image processing model. A more robust known image processing model finds it harder to produce a first result identical to the labeling information of the first image, so the difference between the first result and the labeling information becomes larger, and disturbance information with stronger interference capability can be derived from them, yielding a second image with stronger interference capability. Repeating this process until the first cycle end condition is met produces a stronger second image at each iteration; taking the latest second image as the sample image therefore further improves the interference capability of the sample image, so that the generated sample image tests the model to be tested more effectively.
In one embodiment of the present disclosure, in the case where there are a plurality of known image processing models, the first result obtaining module 601 is specifically configured to obtain a first result of processing the first image based on one image processing model that is not traversed among the plurality of known image processing models;
the sample image generating device further includes:
the model and image updating module is configured to, in the case where the plurality of known image processing models have not all been traversed, update the one image processing model to an image processing model that has not been traversed among the plurality of known image processing models, update the first image to the second image, and trigger the first result obtaining module 601;
the attack testing module 604 is specifically configured to perform attack testing on each known image processing model by using the latest obtained second image.
As can be seen from the above, by processing the image with a plurality of known image processing models, a result can be obtained for each known image processing model in turn, and pieces of disturbance information are added to the image in turn based on those results and the labeling information. The superimposed disturbance information can therefore interfere with all of the known image processing models, so a sample image carrying the multiple pieces of disturbance information transfers better to other neural network models with the same function as the known image processing models, which improves the effect of testing the model to be tested with the generated sample image.
In one embodiment of the present disclosure, the sample image generating device further includes:
an image updating module, configured to update the first image to the second image and trigger the first result obtaining module 601 when the number of times of obtaining the second image for the known image processing model does not reach the preset number of times.
As can be seen from the above, by repeatedly adding disturbance information to the image a preset number of times for a given known image processing model, the image obtained for that model interferes with it more effectively, which improves the effect of testing the model to be tested with the generated sample image.
In one embodiment of the present disclosure, the disturbance information obtaining module 602 includes:
the loss value obtaining unit is used for obtaining a loss value of the known image processing model for performing image processing on the first image based on a loss value calculation mode recorded in a model code of the known image processing model, a first result and labeling information of the first image;
the gradient obtaining unit is used for carrying out counter-propagation on the loss value according to a gradient rising mode to obtain a gradient value;
and the disturbance information obtaining unit is used for obtaining disturbance information based on the gradient value.
By obtaining the loss value, back-propagating it in a gradient-ascent manner to obtain the gradient value, and deriving the disturbance information from the gradient value, the disturbance information can be aimed at designated labeling information. A sample image carrying such targeted disturbance information can interfere with the model to be tested in a directed way, making the adversarial test more targeted and improving the effect of the attack test performed on the model to be tested with the sample image generated by this scheme.
In one embodiment of the disclosure, the disturbance information obtaining unit is specifically configured to obtain the disturbance information based on the gradient value and a learning rate corresponding to the known image processing model.
Determining the disturbance information through the learning rate incorporates information about the known image processing model, which avoids over- or under-adjusting pixel values in the image. This prevents image distortion on the one hand, and slow or even non-converging image iteration on the other, improving the efficiency of generating the sample image while ensuring the quality of the generated sample image.
In one embodiment of the present disclosure, the disturbance information adding module 603 is specifically configured to superimpose, according to a position of a pixel in the first image, information of a same position in the disturbance information onto a pixel value of the pixel in the first image; cutting off the overlapped pixel values to a preset range; based on the truncated pixel values, a second image is obtained.
Since a pixel value may be too high or too low after the disturbance information is added, the first image may be distorted. By contrast, in an attack test or a real attack scenario, the sample image used is usually recognizable to the human eye, that is, it is not severely distorted. Therefore, truncating the superimposed pixel values to a preset range limits the pixel values in the superimposed first image to a reasonable range, reducing severe distortion of the sample image, making the generated adversarial sample fit the actual application scenario, and improving the effect of testing the model to be tested with the generated sample image.
In one embodiment of the present disclosure, the model to be tested is an image contrast model.
Among image comparison models, the face recognition model in particular has high security requirements and therefore high robustness requirements. Generating the sample image with a known image processing model that has the same function as the image comparison model makes the generated sample image fit the actual application scenario better, so the generated sample image can be applied to attack testing of image comparison models such as face recognition models, improving the test effect of the generated sample image on the image comparison model to be tested.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the sample image generation method described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described sample image generation method.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the sample image generation method described above.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, such as the sample image generation method. For example, in some embodiments, the sample image generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When a computer program is loaded into RAM 703 and executed by computing unit 701, one or more steps of the sample image generation method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the sample image generation method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed solutions are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (19)

1. A sample image generation method, comprising:
obtaining a first result of processing a first image based on a known image processing model, wherein the known image processing model is a trained neural network model whose model code is known, and the known image processing model has the same image processing function as a model to be tested;
obtaining disturbance information based on the first result and the labeling information of the first image, wherein the processing result represented by the labeling information is different from the correct processing result of the first image aiming at the image processing function;
adding the disturbance information to the first image to obtain a second image;
performing an attack test on the known image processing model using the second image;
and if the attack is successful, acquiring a sample image for carrying out attack test on the model to be tested based on the second image.
2. The method of claim 1, wherein the obtaining a sample image for performing an attack test on the model to be tested based on the second image comprises:
retraining the known image processing model using the second image;
updating the first image into the second image, and returning to the step of obtaining a first result of processing the first image based on the known image processing model until a first cycle end condition is met;
and determining the latest second image as a sample image for carrying out attack test on the model to be tested.
3. The method of claim 1, wherein, in the presence of a plurality of known image processing models, the obtaining a first result of processing the first image based on the known image processing models comprises:
obtaining a first result of processing the first image based on one image processing model that has not been traversed among the plurality of known image processing models;
after the adding the disturbance information to the first image to obtain a second image, the method further comprises:
if the plurality of known image processing models have not all been traversed, updating the one image processing model to an image processing model that has not been traversed among the plurality of known image processing models, updating the first image to the second image, and returning to the step of obtaining a first result of processing the first image based on the known image processing model;
the using the second image to perform attack testing on the known image processing model includes:
and performing an attack test on each known image processing model using the most recently obtained second image.
4. The method according to any of claims 1-3, further comprising, after the adding the disturbance information to the first image to obtain a second image:
if the number of times of obtaining the second image for the known image processing model does not reach the preset number of times, updating the first image to the second image, and returning to the step of obtaining the first result of processing the first image based on the known image processing model.
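The iteration of claim 4 (regenerate the second image, feed it back in as the first image, stop once the preset number of times is reached) can be sketched as a simple loop. The per-round step below is a random placeholder standing in for the gradient-based step; all names and values are assumptions for illustration only.

```python
import numpy as np

def perturb_once(image, rng, epsilon=0.01):
    # Placeholder for one round of "obtain first result -> obtain disturbance
    # -> add disturbance"; a real step would be gradient-based, not random.
    step = epsilon * np.sign(rng.normal(size=image.shape))
    return np.clip(image + step, 0.0, 1.0)

def iterate_attack(first_image, preset_times, seed=0):
    rng = np.random.default_rng(seed)
    image = first_image
    for _ in range(preset_times):   # repeat until the preset number of times
        image = perturb_once(image, rng)   # each second image becomes the
    return image                           # next round's first image

start = np.full((4, 4), 0.5)               # a uniform toy "first image"
second_image = iterate_attack(start, preset_times=10)
```

Because each round moves every pixel by at most `epsilon`, the final image stays within `preset_times * epsilon` of the starting image, a bound commonly exploited by iterative attacks to keep the disturbance imperceptible.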
5. The method according to any of claims 1-3, wherein the obtaining disturbance information based on the first result and the labeling information of the first image comprises:
obtaining a loss value of the known image processing model for performing image processing on the first image based on a loss value calculation mode recorded in a model code of the known image processing model, the first result and labeling information of the first image;
back-propagating the loss value in a gradient ascent manner to obtain a gradient value;
and obtaining disturbance information based on the gradient value.
6. The method of claim 5, wherein the obtaining disturbance information based on the gradient value comprises:
and obtaining disturbance information based on the gradient value and the learning rate corresponding to the known image processing model.
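Claims 5 and 6 can be illustrated as follows, under loudly stated assumptions: squared error stands in for the loss-value calculation recorded in the model code, finite differences emulate back-propagating the loss to the input, and the learning rate value is arbitrary.

```python
import numpy as np

def loss(x, y, W):
    # Assumed loss: squared error between the first result (x @ W) and the
    # labeling information y (which differs from the correct result).
    return float(np.sum((x @ W - y) ** 2))

def input_gradient(x, y, W, h=1e-6):
    # Central finite differences as a stand-in for backpropagation to the
    # input; a real implementation would use the framework's autograd.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = h
        g.flat[i] = (loss(x + d, y, W) - loss(x - d, y, W)) / (2 * h)
    return g

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 2))             # known surrogate model weights
x = rng.normal(size=(3,))               # the "first image", flattened
y_annotation = np.array([1.0, -1.0])    # labeling info != correct result
lr = 0.05                               # learning rate of claim 6 (assumed)
disturbance = lr * np.sign(input_gradient(x, y_annotation, W))
```

Each element of the disturbance then has magnitude exactly `lr` (or zero where the gradient vanishes), which is how the learning rate of claim 6 bounds the per-pixel change.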
7. The method according to any of claims 1-3, wherein the adding the disturbance information to the first image to obtain a second image comprises:
superimposing, according to the position of each pixel point in the first image, the information at the same position in the disturbance information onto the pixel value of the pixel point in the first image;
truncating the superimposed pixel values to a preset range;
and obtaining a second image based on the truncated pixel values.
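The three steps of claim 7 can be sketched directly; the [0, 255] range below is an assumption for an 8-bit image, as the claim only specifies a "preset range".

```python
import numpy as np

# A tiny 2x2 "first image" and a disturbance of the same shape (assumed values).
first_image = np.array([[250,   3],
                        [128,  64]], dtype=np.int32)
disturbance = np.array([[ 10, -10],
                        [  5,  -5]], dtype=np.int32)

superimposed = first_image + disturbance       # same-position superimposition
second_image = np.clip(superimposed, 0, 255)   # truncate to the preset range
print(second_image.tolist())                   # [[255, 0], [133, 59]]
```

Note that truncation is what keeps the second image a valid image: without it, 250 + 10 would overflow the 8-bit range and 3 - 10 would go negative.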
8. The method according to any one of claims 1-3, wherein,
the model to be tested is an image comparison model.
9. A sample image generation apparatus comprising:
the first result obtaining module is used for obtaining a first result of processing a first image based on a known image processing model, wherein the known image processing model is a trained neural network model whose model code is known, and the known image processing model has the same image processing function as a model to be tested;
the disturbance information obtaining module is used for obtaining disturbance information based on the first result and the labeling information of the first image, wherein the processing result represented by the labeling information is different from the correct processing result of the first image aiming at the image processing function;
the disturbance information adding module is used for adding the disturbance information to the first image to obtain a second image;
an attack test module for performing attack test on the known image processing model by using the second image;
and the sample image acquisition module is used for acquiring a sample image for carrying out attack test on the model to be tested based on the second image under the condition that the attack is successful.
10. The apparatus of claim 9, wherein,
the sample image obtaining module is specifically configured to retrain the known image processing model using the second image; updating the first image into the second image, and triggering the first result obtaining module until a first cycle ending condition is met; and determining the latest second image as a sample image for carrying out attack test on the model to be tested.
11. The apparatus of claim 9, wherein,
in the case where there are a plurality of known image processing models, the first result obtaining module is specifically configured to obtain a first result of processing the first image based on one image processing model that is not traversed among the plurality of known image processing models;
the sample image generating device further includes:
a model and image updating module, configured to update the one image processing model to an image processing model that is not traversed in the plurality of known image processing models, update the first image to the second image, and trigger the first result obtaining module if the plurality of known image processing models is not traversed;
The attack test module is specifically configured to perform attack test on each known image processing model by using the latest obtained second image.
12. The apparatus according to any one of claims 9-11, the sample image generating apparatus further comprising:
and the image updating module is used for updating the first image into the second image and triggering the first result obtaining module when the number of times of obtaining the second image aiming at the known image processing model does not reach the preset number of times.
13. The apparatus of any of claims 9-11, wherein the disturbance information obtaining module comprises:
a loss value obtaining unit, configured to obtain a loss value of the known image processing model for performing image processing on the first image based on a loss value calculation manner recorded in a model code of the known image processing model, the first result, and labeling information of the first image;
the gradient obtaining unit is used for back-propagating the loss value in a gradient ascent manner to obtain a gradient value;
and the disturbance information obtaining unit is used for obtaining disturbance information based on the gradient value.
14. The apparatus of claim 13, wherein,
The disturbance information obtaining unit is specifically configured to obtain disturbance information based on the gradient value and a learning rate corresponding to the known image processing model.
15. The apparatus according to any one of claims 9-11, wherein,
the disturbance information adding module is specifically configured to superimpose, according to the position of each pixel point in the first image, the information at the same position in the disturbance information onto the pixel value of the pixel point in the first image; truncate the superimposed pixel values to a preset range; and obtain a second image based on the truncated pixel values.
16. The apparatus according to any one of claims 9-11, wherein,
the model to be tested is an image comparison model.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-8.
CN202310403771.4A 2023-04-14 2023-04-14 Sample image generation method and device, electronic equipment and storage medium Pending CN116342982A (en)

Publications (1)

Publication Number Publication Date
CN116342982A true CN116342982A (en) 2023-06-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination