CN114648673A - Method and device for generating adversarial samples

Method and device for generating adversarial samples

Info

Publication number
CN114648673A
Authority
CN
China
Prior art keywords
target object
confidence
loss function
category
initial
Prior art date
Legal status
Pending
Application number
CN202210196939.4A
Other languages
Chinese (zh)
Inventor
田伟娟
王洋
吕中厚
黄英仁
张华正
干逸显
高梦晗
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210196939.4A
Publication of CN114648673A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning


Abstract

The disclosure provides a method and an apparatus for generating an adversarial sample, relating to image processing, target detection, and deep learning in artificial intelligence. The specific implementation scheme is as follows: an original image and an initial adversarial sample generated from the original image are acquired; target detection is performed on a target object in the original image and in the initial adversarial sample to obtain detection information; a loss function of the target object is constructed according to the detection information, the original image, and the initial adversarial sample; and the initial adversarial sample is adjusted based on the loss function of the target object to obtain a final adversarial sample. This avoids the low accuracy of final adversarial samples generated manually in the related art, improves the accuracy and reliability of the generated final adversarial sample, and improves the generation efficiency.

Description

Method and device for generating adversarial samples
Technical Field
The present disclosure relates to image processing, target detection, and deep learning in artificial intelligence, and more particularly, to a method and an apparatus for generating adversarial samples.
Background
The effectiveness of target detection algorithms in the field of Artificial Intelligence (AI) security can be tested by generating adversarial samples, i.e., by perturbing sample images.
In the related art, interference information may be added to a sample image manually to generate an adversarial sample.
However, in the above manner, the accuracy of the generated adversarial samples is low.
Disclosure of Invention
The present disclosure provides a method and an apparatus for generating an adversarial sample, for improving the accuracy of the generated adversarial samples.
According to a first aspect of the present disclosure, there is provided a method of generating an adversarial sample, comprising:
acquiring an original image and an initial adversarial sample generated from the original image;
performing target detection on a target object in the original image and in the initial adversarial sample to obtain detection information;
and constructing a loss function of the target object according to the detection information, the original image, and the initial adversarial sample, and adjusting the initial adversarial sample based on the loss function of the target object to obtain a final adversarial sample.
According to a second aspect of the present disclosure, there is provided an apparatus for generating an adversarial sample, comprising:
a first acquisition unit, configured to acquire an original image and an initial adversarial sample generated from the original image;
a detection unit, configured to perform target detection on a target object in the original image and in the initial adversarial sample to obtain detection information;
a construction unit, configured to construct a loss function of the target object according to the detection information, the original image, and the initial adversarial sample;
and an adjustment unit, configured to adjust the initial adversarial sample based on the loss function of the target object to obtain a final adversarial sample.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method according to the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product, comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; execution of the computer program by the at least one processor causes the electronic device to perform the method of the first aspect.
According to the technical solution of the present disclosure, the loss function of the target object is constructed according to the detection information, the original image, and the initial adversarial sample, and the final adversarial sample is obtained by adjustment based on the constructed loss function. This avoids the low accuracy of final adversarial samples generated manually in the related art, improves the accuracy and reliability of the generated final adversarial sample, and improves the generation efficiency.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a seventh embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device for implementing the method of generating adversarial samples of an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Adversarial samples (adversarial examples) are widely used in the field of artificial intelligence security. An adversarial sample is a sample formed by intentionally adding subtle interference information to an input data set, so that a network model gives an erroneous output with high confidence.
For example, given a sample image in the data set, a sample formed by adding subtle perturbations to the sample image may be referred to as an adversarial sample. That is, interference information is added to the sample image, and the resulting sample image containing the interference information is an adversarial sample.
In some embodiments, the adversarial sample may be obtained by manually setting interference information and manually adding the interference information to the sample image.
However, since generating an adversarial sample in this way requires human intervention, the set interference information is easily affected by human factors, and the accuracy of the generated adversarial samples is therefore low.
In other embodiments, adversarial samples may be generated using an affine-transformation expectation.
For example, an affine-transformation expectation of the sample image, i.e., a biased estimate of the pose of the sample image, may be computed, and the adversarial sample may be generated according to the affine-transformation expectation.
However, the required input size of the network model may not match the sample image size, so additional learnable parameters need to be introduced to meet the required input size of the network model. This slows the convergence of the network model and increases its convergence difficulty, making the generation of adversarial samples inefficient.
In other embodiments, adversarial samples may be generated using Expectation over Transformation (EoT).
Expectation over Transformation means that the adversarial noise can successfully attack multiple different transformations of the same image, so that the network model mispredicts on the transformed images.
For example, when generating the adversarial sample, data enhancement is performed by applying affine transformations that also describe the variations in the actual scene, thereby enhancing the learning performance of the adversarial sample.
However, the affine transformations are mainly applied to the adversarial region rather than to the complete adversarial sample finally constructed. This ignores sample transformations that remove the adversarial patch region and fails to achieve the purpose of data enhancement.
To avoid one or more of the above problems, the inventors of the present disclosure made creative efforts to arrive at the inventive concept of the present disclosure: perform target detection on the original image and the initial adversarial sample to obtain detection information, and construct a loss function of the target object by combining the detection information, the original image, and the initial adversarial sample, so as to learn continuously and obtain a final adversarial sample.
Based on this inventive concept, the present disclosure provides a method and an apparatus for generating adversarial samples, applied to image processing, target detection, and deep learning in artificial intelligence, to improve the reliability of the generated adversarial samples.
Fig. 1 is a schematic diagram of a first embodiment of the present disclosure. As shown in fig. 1, a method for generating an adversarial sample of an embodiment of the present disclosure includes:
S101: an original image and an initial adversarial sample generated from the original image are acquired.
For example, the execution subject of this embodiment may be an apparatus for generating adversarial samples (hereinafter simply referred to as a generation apparatus). The generation apparatus may be a server (such as a local server, a cloud server, or a server cluster), a computer, a terminal device, a processor, a chip, or the like, which is not limited in this embodiment.
The original image may be understood as the image used to generate the final adversarial sample. In combination with the above example, the original image may be a sample image, so that an adversarial sample is obtained after the sample image undergoes a series of processing.
The initial adversarial sample is an adversarial sample obtained by preprocessing the original image; the specific implementation method is not limited in this embodiment.
For example, the original image may be preprocessed based on the application requirements of the generated adversarial sample (e.g., the network model to which the final adversarial sample is applied) to obtain the initial adversarial sample.
Accordingly, the initial adversarial sample can be understood as an adversarial sample that satisfies the input parameter requirements of the network model.
For example, the original image is preprocessed based on the input parameter requirements of the network model, resulting in an adversarial sample that satisfies those requirements.
Both the original image and the initial adversarial sample contain the target object.
The target object is the object being interfered with; that is, the adversarial sample is a sample image intended to interfere with detection of the target object. The target object may differ depending on the network model.
For example, if the network model is a face recognition model for face recognition, the target object is a face; if the network model is a recognition model for traffic lights, the target object is a traffic light; and so on, not listed here one by one.
S102: target detection is performed on the target object in the original image and in the initial adversarial sample to obtain detection information.
The method of target detection is not limited in this embodiment. For example, a target detection model may be used to detect the target object and obtain the detection information.
For example, a target detection model based on the end-to-end YOLO (You Only Look Once) detection algorithm may be used, such as a YOLO target detection model from the PaddlePaddle framework, and the output of the target detection model (the YOLO head) can be passed through a sigmoid function to obtain the detection information.
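As a minimal illustration of this step (the head layout, shapes, and names below are assumptions for the sketch; the disclosure does not fix a concrete API), the sigmoid decoding of a YOLO-style head can look as follows in Python:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_yolo_head(head, num_classes):
    """Decode raw YOLO-head outputs into detection information.

    `head` is assumed to be (num_boxes, 5 + num_classes): 4 box
    coordinates, one objectness logit, then per-class logits.
    """
    boxes = head[:, :4]                            # coordinate position of each detection box
    objectness = sigmoid(head[:, 4])               # probability that the box contains an object
    class_probs = sigmoid(head[:, 5:5 + num_classes])
    confidences = objectness[:, None] * class_probs  # per-box, per-category confidence
    categories = confidences.argmax(axis=1)         # category per box
    return boxes, categories, confidences

# Usage with 3 hypothetical boxes and 80 classes:
boxes, categories, confidences = decode_yolo_head(np.random.randn(3, 85), num_classes=80)
```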
Correspondingly, the detection information is the output of the target detection model and comprises: category (determined based on industry standards for object detection), confidence (characterizing the likelihood of a category), and coordinate position (which may be understood as the coordinate position of the detection box).
S103: a loss function of the target object is constructed according to the detection information, the original image, and the initial adversarial sample, and the initial adversarial sample is adjusted based on the loss function of the target object to obtain a final adversarial sample.
The loss function of the target object can be understood as the loss function for attacking the target object.
In this embodiment, the loss function of the target object is constructed by combining content of three dimensions: the detection information, the original image, and the initial adversarial sample. The constructed loss function is therefore more reliable, and the final adversarial sample obtained by adjusting based on this loss function has a stronger adversarial capability.
Correspondingly, when the network model is trained with the final adversarial sample, the trained network model has a stronger anti-interference capability, which further improves its recognition capability.
Based on the above analysis, an embodiment of the present disclosure provides a method for generating an adversarial sample: by constructing the loss function of the target object according to the detection information, the original image, and the initial adversarial sample, and adjusting based on the constructed loss function to obtain the final adversarial sample, the low accuracy of final adversarial samples generated manually in the related art is avoided, the accuracy and reliability of the generated final adversarial sample are improved, and the generation efficiency is improved.
Fig. 2 is a schematic diagram of a second embodiment of the present disclosure. As shown in fig. 2, the method for generating an adversarial sample of the embodiment of the present disclosure includes:
S201: an initial perturbation region of the original image is acquired.
It should be understood that technical features of this embodiment that are the same as those of the above embodiment are not described again.
The perturbation region is the interfered region in the original image, such as the region to which interference information is added.
For example, the perturbation region may be a predefined region, such as a manually defined region containing interference information. Alternatively, it may be a region containing interference information arranged in advance by the generation apparatus based on demand, history, experiment, or the like.
The perturbation region has size information and position information. The size information can be understood as the proportion of the original image occupied by the perturbation region; the position information can be understood as the pixel position of the perturbation region in the original image.
S202: the perturbation region is preprocessed according to a preset preprocessing function to obtain a preprocessed perturbation region.
Correspondingly, the preprocessed perturbation region has its own size information and position information.
The preprocessing function may be, for example, a size-normalization function or an image-scaling operation (resize); further examples are not listed here.
The preprocessed perturbation region is used to determine the initial adversarial sample. In this embodiment, the preprocessed perturbation region is obtained with the preprocessing function, and the initial adversarial sample is determined from the preprocessed perturbation region, which avoids the inefficiency caused by size processing and improves the efficiency of generating adversarial samples.
S203: parameter initialization is performed on the preprocessed perturbation region according to the target model to obtain the target perturbation region.
By way of example, this step may be understood as follows: the target model to which the original image is applied is acquired, where the target model may be, for example, a face recognition model. When the original image is used as a sample image (i.e., input data) of the target model, the size of the original image may differ from the required input size of the target model, so parameter initialization needs to be performed on the original image to obtain initialization parameters that make it suitable for the target model, that is, parameters satisfying the required input size of the target model. Correspondingly, parameter initialization can be performed on the preprocessed perturbation region based on the required input size of the target model, so as to obtain initialization parameters of the preprocessed perturbation region that satisfy the required input size of the target model, complete the subsequent replacement processing, and obtain the initial adversarial sample.
That is, different network models are used in different application scenarios, and different network models may have different requirements on parameters such as image size; the target model can therefore perform initialization according to its own parameters to obtain the target perturbation region (a hyper-parameter carrying size information and a hyper-parameter carrying position information).
The target perturbation region is used to determine the initial adversarial sample.
It should be noted that in this embodiment the initial adversarial sample is obtained in combination with the target model, which avoids the drawback of introducing extra learnable parameters, accelerates the convergence of the target model, and improves the efficiency of generating the final adversarial sample. A sketch of S202 to S204 follows the next step.
S204: the initial perturbation region in the original image is replaced by the target perturbation region to obtain the initial adversarial sample.
S205: and inputting the original image into a target detection model to obtain the real category (groudtuth) of the target object in the original image.
S206: and inputting the initial confrontation sample into a target detection model to obtain the detection information of the target object.
The detection information comprises detection frames of the initial confrontation sample and confidence degrees of each detection frame under each category.
In connection with the above embodiments, the target object has a real category, and accordingly, the non-real category may be referred to as other category. That is, the detection information may include the confidence of the target object in the real category, and the confidence of the target object in the other categories.
In this embodiment, the detection information includes contents of two dimensions (i.e., the confidence level of the target object in the real category and the confidence level of the target object in other categories), so when the loss function of the target object is constructed in combination with the detection information of the contents of two dimensions, the constructed loss function can have higher reliability and attack resistance, and the generated final countermeasure sample has higher reliability.
S207: the difference information between the original image and the initial challenge sample is determined.
Wherein the difference information is used for constructing a loss function of the target object.
In some embodiments, the difference information Diff (x') may be determined based on equation 1, equation 1:
Diff(x′)=λ1(x′-x)2
wherein λ is1The preset coefficient can be set based on the needs, history, tests and the like, wherein x' is an initial confrontation sample, and x is an original image.
In the embodiment, the loss function of the target object is constructed by combining the difference information so as to fully consider the difference between the original graph and the initial confrontation sample, thereby constructing the loss function which has relatively strong attack resistance and is used for the attack of the target object.
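As a sketch of Equation 1 in NumPy (reducing to a scalar by summing over pixels is an assumption the disclosure leaves implicit, and the value of λ_1 is a free hyper-parameter):

```python
import numpy as np

def diff_loss(x_adv, x, lam1=0.01):
    """Equation 1: Diff(x') = lam1 * (x' - x)^2, summed over all pixels."""
    return lam1 * np.sum((x_adv.astype(float) - x.astype(float)) ** 2)
```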
S208: the smoothing information for the initial challenge sample is determined.
The smoothing information characterizes the difference of the confidence degrees of the initial confrontation samples in the same pixel position and different categories and the difference of the confidence degrees of the initial confrontation samples in the same category and different pixel positions. The smoothing information is used to construct a loss function for the target object.
In some embodiments, the difference information TV (x') may be determined based on equation 2, equation 2:
TV(x′)=λ2i,j((xi,j′-xi,j+1′)2+(xi,j′-xi+1,j′)2)1/2
where x' is the initial challenge sample, λ2The coefficient may be set based on demand, history, and experiments, where i is a category and j is a pixel position.
In this embodiment, by constructing the loss function of the target object in combination with the smooth information, the content of the generated final challenge sample can be prevented from being abrupt, so that the final challenge sample has strong reality.
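A sketch of Equation 2, treating x′ as a 2-D array indexed by i and j as in a standard total-variation term (the small eps that keeps the square root differentiable is an implementation assumption):

```python
import numpy as np

def tv_loss(x_adv, lam2=0.01, eps=1e-8):
    """Equation 2: total-variation style smoothing term TV(x')."""
    d1 = x_adv[:, :-1] - x_adv[:, 1:]    # x'_{i,j} - x'_{i,j+1}
    d2 = x_adv[:-1, :] - x_adv[1:, :]    # x'_{i,j} - x'_{i+1,j}
    return lam2 * np.sum(np.sqrt(d1[:-1, :] ** 2 + d2[:, :-1] ** 2 + eps))
```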
S209: and constructing an inter-class difference loss function according to the confidence degrees of the target object in other classes.
Wherein, the inter-class difference loss function is the sum of the confidences of the target object under other classes. And the inter-class difference loss function is used for constructing a loss function of the target object.
In this embodiment, the inter-class difference loss function is used to construct the loss function of the target object, so that the constructed loss function has a high correlation with the confidence of the target object in other classes, that is, the constructed loss function has the capability of distinguishing the real class from other classes, thereby improving the reliability of the final countermeasure sample.
In some embodiments, S209 may include the steps of:
the first step is as follows: and sequentially acquiring N maximum confidences from the confidences of the target object in other categories.
Wherein N is a positive integer greater than 1.
The second step is as follows: and constructing an inter-class difference loss function according to the obtained N maximum confidence coefficients.
For example, N is equal to 3, that is, 3 maximum confidences are selected from the confidences of the target object in the other classes, and an inter-class difference loss function is constructed based on the 3 maximum confidences.
In this embodiment, by constructing the inter-class difference loss function based on the N maximum confidence coefficients, the tedious calculation can be avoided, and the efficiency of generating the final countermeasure sample is improved.
In some embodiments, an inter-class difference loss function L_2(x′) may be constructed based on Equation 3:
L_2(x′) = Σ_{i ≠ t} topN_j(c_{i,j}(x′))
where x′ is the initial adversarial sample, t is the real category, i is a category, and j is a pixel position. For each category i ≠ t, topN_j(c_{i,j}(x′)) denotes the sum of the first N values when the confidences of category i at different pixel positions are arranged from large to small, and the outer sum accumulates these values over the other categories, i.e., the confidences of the target object under the other categories. The categories i considered can also be determined based on demand, history, experiment, and the like.
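Under that reading (an assumption, since the published equation images are not reproduced here), a NumPy sketch of Equation 3, where conf is the per-position, per-category confidence array from the detection step:

```python
import numpy as np

def interclass_loss(conf, true_class, n=3):
    """Equation 3: sum, over all categories i != t, of the N largest
    confidences of category i across detection positions."""
    loss = 0.0
    for i in range(conf.shape[1]):
        if i == true_class:
            continue
        top_n = np.sort(conf[:, i])[::-1][:n]   # N largest confidences of category i
        loss += top_n.sum()
    return loss
```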
S210: and constructing a loss function of the target object according to the difference information, the confidence coefficient of the target object in the real category, the difference loss function between the categories and the smooth information.
In some embodiments, the loss function L (x') for the target object may be constructed based on equation 4, equation 4:
L(x′)=L2(x′)-L1(x′)+Diff(x′)+TV(x′)
where x' is the initial challenge sample, x is the original image, L2(x') is a function of the loss of inter-class variability, L2(x') is a function of the loss of inter-class variability, L1(x ') is the confidence of the target object in the true category, Diff (x ') is the disparity information, and TV (x ') is the smooth information.
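Combining the terms of Equation 4 with the sketches above (taking the maximum true-class confidence as L_1 is an assumption; the third embodiment's Equation 5 gives a weighted-sum alternative, and x, x′ are treated as 2-D arrays as in the TV sketch):

```python
def total_loss(x_adv, x, conf, true_class, n=3):
    """Equation 4: L(x') = L2(x') - L1(x') + Diff(x') + TV(x')."""
    l2 = interclass_loss(conf, true_class, n)  # inter-class difference loss
    l1 = conf[:, true_class].max()             # confidence in the real category (assumed form)
    return l2 - l1 + diff_loss(x_adv, x) + tv_loss(x_adv)
```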
S211: and minimizing the loss function of the target object to obtain a final confrontation sample.
For example, the initial confrontation sample, and specifically the size information and the position information of the perturbation region of the initial confrontation sample, are adjusted, so that when the adjusted initial confrontation sample satisfies the minimization of the loss function of the target object, the initial confrontation sample is determined as the final confrontation sample.
In this embodiment, by combining the loss function of the minimized target object to obtain the final confrontation sample, the final confrontation sample may not be easily recognized, so as to improve the reliability of the final confrontation sample, and when the network model is trained by combining the final confrontation sample, the performance of the network model may be improved, for example, if the network model is a face recognition model, the recognition performance of the face recognition model may be improved, that is, the accuracy and reliability of the recognition may be improved.
In some embodiments, the confidence of the final countermeasure sample in the real category may be determined (which may be implemented in the manner described above, and is not described herein again), and if the determined confidence is smaller than a preset threshold (e.g., 0.5), it indicates that the target object attack is successful, and the target object message is determined as another category by mistake.
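The disclosure does not fix an optimizer, so as an assumption the minimization of S211 can be sketched as gradient descent with PyTorch autograd, including the success check against the 0.5 threshold; loss_fn and true_conf_fn are user-supplied stand-ins for a differentiable Equation 4 and the detector's true-class confidence:

```python
import torch

def paste_patch(x, patch, y0, x0):
    """Replacement processing, as in S204: overwrite the perturbation region."""
    x_adv = x.clone()
    _, h, w = patch.shape
    x_adv[:, y0:y0 + h, x0:x0 + w] = patch
    return x_adv

def generate_adversarial_sample(x, patch, loss_fn, true_conf_fn,
                                y0=0, x0=0, steps=200, lr=0.01, threshold=0.5):
    """Minimize the target-object loss over the perturbation patch (S211).

    loss_fn(x_adv, x) -> scalar tensor implementing Equation 4;
    true_conf_fn(x_adv) -> the model's confidence in the real category.
    """
    patch = patch.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x_adv = paste_patch(x, patch, y0, x0)
        loss = loss_fn(x_adv, x)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)   # keep pixel values in a valid image range
    x_adv = paste_patch(x, patch.detach(), y0, x0)
    # the attack is considered successful when the true-class confidence drops below 0.5
    return x_adv, bool(true_conf_fn(x_adv) < threshold)
```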
Fig. 3 is a schematic diagram of a third embodiment of the present disclosure. As shown in fig. 3, the method for generating an adversarial sample of the embodiment of the present disclosure includes:
S301: an original image and an initial adversarial sample generated from the original image are acquired.
Similarly, technical features of this embodiment that are the same as those of the above embodiments are not described again.
For example, for the implementation principle of S301, reference may be made to the description of S101, or to the descriptions of S201 to S204.
S302: target detection is performed on the target object in the original image to obtain the real category of the target object in the original image and the corresponding confidence of the target object under each category.
In combination with the above analysis, in the above embodiment the real category of the target object is determined based on the original image and the confidence of the target object under each category is determined based on the initial adversarial sample, whereas in the present embodiment the confidence under each category may also be determined based on the original image, improving the flexibility and diversity of confidence determination.
S303: the confidence of the target object in the real category is determined according to the corresponding confidence of the target object under each category.
Similarly, in combination with the above analysis, in the above embodiment the confidence of the target object in the real category is determined based on the initial adversarial sample, whereas in the present embodiment it is determined based on the original image, which improves the diversity and flexibility of determining the confidence of the target object in the real category.
In some embodiments, S303 may include the following steps:
First step: the confidence of the target object in a preset misjudgment category of the target object is determined according to the corresponding confidence of the target object under each category.
The misjudgment category may be determined based on demand, history, experiment, or the like. In combination with the above analysis, the misjudgment category may be understood as one of the other categories (i.e., the non-real categories), namely the category into which the real category of the target object is expected to be erroneously identified.
For example, if the target object is a traffic light, the real category is traffic light, and the misjudgment category may be vehicle, and so on.
Second step: target confidences are acquired from the confidences of the target object under each category, where a target confidence is a confidence greater than the confidence in the misjudgment category.
Third step: the confidence of the target object under the real category is determined according to the target confidences.
For example, the target confidences may be weighted and summed to obtain the confidence of the target object under the real category (a sketch follows Equation 5 below).
In this embodiment, a final adversarial sample that causes the target object to be misjudged into a specific category (i.e., the misjudgment category) can be obtained according to the generation requirement, which diversifies the generation of final adversarial samples and gives the final adversarial sample strong pertinence.
In some embodiments, the confidence L_1(x′) of the target object under the real category may be determined based on Equation 5:
L_1(x′) = Σ_i ε_i F(x′)_i, where F(x′)_i > F(x′)_c
where x′ is the initial adversarial sample, i is a category, ε_i is the weight of category i, F(x′)_i is the confidence of category i, F(x′)_c is the confidence of category c, and category c is the misjudgment category.
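A sketch of S303 and Equation 5 under that reading (conf is a per-category confidence vector for the target object; the uniform weights are an assumption, since the disclosure only says the target confidences are weighted and summed):

```python
import numpy as np

def true_class_confidence(conf, misjudged_class, weights=None):
    """Equation 5: weighted sum of the confidences that exceed the
    confidence of the preset misjudgment category c."""
    if weights is None:
        weights = np.ones_like(conf)           # assumed uniform weights eps_i
    mask = conf > conf[misjudged_class]        # target confidences: F(x')_i > F(x')_c
    return float(np.sum(weights[mask] * conf[mask]))
```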
S304: and carrying out target detection on the initial confrontation sample to obtain the confidence of the target object in other categories except the real category.
The detection information comprises the confidence of the target object in the preset misjudgment category and the confidences of the target object in the other categories besides the real category.
S305: the difference information between the original image and the initial adversarial sample is determined.
S306: the smoothing information of the initial adversarial sample is determined.
The smoothing information characterizes the difference between the confidences of the initial adversarial sample at the same pixel position across different categories, and the difference between its confidences for the same category across different pixel positions.
S307: the loss function of the target object is constructed according to the difference information, the confidence of the target object in the real category, the inter-class difference loss function, and the smoothing information.
S308: the initial adversarial sample is adjusted based on the loss function of the target object to obtain the final adversarial sample.
Fig. 4 is a schematic diagram of an apparatus 400 for generating adversarial samples according to a fourth embodiment of the disclosure. As shown in fig. 4, the apparatus includes:
a first acquisition unit 401, configured to acquire an original image and an initial adversarial sample generated from the original image;
a detection unit 402, configured to perform target detection on the target object in the original image and in the initial adversarial sample to obtain detection information;
a construction unit 403, configured to construct a loss function of the target object according to the detection information, the original image, and the initial adversarial sample;
and an adjustment unit 404, configured to adjust the initial adversarial sample based on the loss function of the target object to obtain a final adversarial sample.
Fig. 5 is a schematic diagram of a fifth embodiment of the present disclosure. As shown in fig. 5, an apparatus 500 for generating adversarial samples according to an embodiment of the present disclosure includes:
a first acquisition unit 501, configured to acquire an original image and an initial adversarial sample generated from the original image;
a detection unit 502, configured to perform target detection on the target object in the original image and in the initial adversarial sample to obtain detection information.
As can be seen from fig. 5, in some embodiments, the detection unit 502 includes:
a first detection subunit 5021, configured to perform target detection on the target object in the original image to obtain the real category of the target object;
a second detection subunit 5022, configured to perform target detection on the target object in the initial adversarial sample to obtain the confidence of the target object in the real category and the confidences of the target object in the other categories, where the other categories are categories besides the real category, and the detection information includes the confidence of the target object in the real category and the confidences of the target object in the other categories.
As can be seen from fig. 5, in some embodiments, the detection unit 502 includes:
a third detection subunit 5023, configured to perform target detection on the target object in the original image to obtain the real category of the target object and the corresponding confidences of the target object under each category;
a fourth detection subunit 5024, configured to perform target detection on the target object in the initial adversarial sample to obtain the confidences of the target object in the other categories besides the real category.
The detection information comprises the confidence of the target object in the preset misjudgment category and the confidences of the target object in the other categories besides the real category.
A determination unit 503, configured to determine smoothing information of the initial adversarial sample, where the smoothing information characterizes the difference between the confidences of the initial adversarial sample at the same pixel position across different categories, and the difference between its confidences for the same category across different pixel positions.
A construction unit 504, configured to construct a loss function of the target object according to the detection information, the original image, and the initial adversarial sample.
In some embodiments, the construction unit 504 is configured to construct the loss function of the target object according to the difference information, the confidence of the target object under the real category, the inter-class difference loss function, and the smoothing information.
As can be seen from fig. 5, in some embodiments, the construction unit 504 includes:
a first construction subunit 5041, configured to construct an inter-class difference loss function according to the confidences of the target object in the other categories, where the inter-class difference loss function is the sum of the confidences of the target object in the other categories.
In some embodiments, the first construction subunit 5041 includes:
a first acquisition module, configured to acquire, in order, the N largest confidences from the confidences of the target object in the other categories, where N is a positive integer greater than 1;
and a first construction module, configured to construct the inter-class difference loss function according to the acquired N largest confidences.
A second construction subunit 5042, configured to construct the loss function of the target object according to the confidence in the real category, the inter-class difference loss function, the original image, and the initial adversarial sample.
In some embodiments, the second construction subunit 5042 includes:
a first determination module, configured to determine the difference information between the original image and the initial adversarial sample;
and a second construction module, configured to construct the loss function of the target object according to the difference information and the detection information.
As can be seen from fig. 5, in some embodiments, the construction unit 504 includes:
a first calculation subunit 5043, configured to calculate the difference between the inter-class difference loss function and the confidence of the target object in the real category;
a second calculation subunit 5044, configured to calculate the sum of the difference information and the smoothing information;
and a third construction subunit 5045, configured to construct the loss function of the target object according to the difference between the inter-class difference loss function and the confidence of the target object in the real category, and the sum of the difference information and the smoothing information.
As can be seen from fig. 5, in some embodiments, the construction unit 504 includes:
a first determination subunit 5046, configured to determine the confidence of the target object in the real category according to the corresponding confidences of the target object under each category.
In some embodiments, the first determination subunit 5046 includes:
a second determination module, configured to determine the confidence of the target object in the preset misjudgment category of the target object according to the corresponding confidences of the target object under each category;
a second acquisition module, configured to acquire target confidences from the confidences of the target object under each category, where a target confidence is greater than the confidence in the misjudgment category;
and a third determination module, configured to determine the confidence of the target object under the real category according to the target confidences.
A fourth construction subunit 5047, configured to construct an inter-class difference loss function according to the confidences of the target object in the other categories besides the real category, where the inter-class difference loss function is the sum of the confidences of the target object in the other categories.
A fifth construction subunit 5048, configured to construct the loss function of the target object according to the confidence in the real category, the inter-class difference loss function, the original image, and the initial adversarial sample.
An adjustment unit 505, configured to adjust the initial adversarial sample based on the loss function of the target object to obtain a final adversarial sample.
As can be seen from fig. 5, in some embodiments, the adjustment unit 505 includes:
an adjustment subunit 5051, configured to adjust the initial adversarial sample so that the loss function of the target object is smaller than a preset loss threshold;
and a second determination subunit 5052, configured to determine the adjusted initial adversarial sample, for which the loss function of the target object is smaller than the loss threshold, as the final adversarial sample.
Fig. 6 is a schematic diagram of a sixth embodiment of the present disclosure. As shown in fig. 6, an apparatus 600 for generating adversarial samples according to an embodiment of the present disclosure includes:
a second acquisition unit 601, configured to acquire an initial perturbation region of the original image;
a replacement unit 602, configured to perform replacement processing on the initial perturbation region in the original image according to the target model to which the final adversarial sample is applied, so as to obtain an initial adversarial sample.
As can be seen from fig. 6, in some embodiments, the replacement unit 602 includes:
a processing subunit 6021, configured to perform parameter initialization on the initial perturbation region according to the target model to obtain the target perturbation region;
and a replacement subunit 6022, configured to replace the initial perturbation region in the original image with the target perturbation region to obtain the initial adversarial sample.
A first acquisition unit 603, configured to acquire the original image and the initial adversarial sample generated from the original image.
A detection unit 604, configured to perform target detection on the target object in the original image and in the initial adversarial sample to obtain detection information.
A construction unit 605, configured to construct a loss function of the target object according to the detection information, the original image, and the initial adversarial sample.
An adjustment unit 606, configured to adjust the initial adversarial sample based on the loss function of the target object to obtain a final adversarial sample.
Fig. 7 is a schematic diagram of a seventh embodiment of the present disclosure. As shown in fig. 7, an electronic device 700 of the present disclosure may include: a processor 701 and a memory 702.
The memory 702 is used for storing programs. The memory 702 may include a volatile memory, such as a random access memory (RAM), for example a static random access memory (SRAM) or a double data rate synchronous dynamic random access memory (DDR SDRAM); the memory may also include a non-volatile memory, such as a flash memory. The memory 702 is used to store computer programs (e.g., applications or functional modules implementing the above methods), computer instructions, and the like, which may be stored in one or more of the memories 702 in a partitioned manner and can be called by the processor 701.
The processor 701 is configured to execute the computer program stored in the memory 702 to implement the steps of the method in the above embodiments.
Reference may be made in particular to the description of the preceding method embodiments.
The processor 701 and the memory 702 may be separate structures or an integrated structure. When the processor 701 and the memory 702 are separate structures, the memory 702 and the processor 701 may be coupled via a bus 703.
The electronic device of this embodiment may execute the technical solution of the above method; the specific implementation process and the technical principle are the same and are not repeated here.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the user personal information involved (such as human faces) all comply with relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any of the embodiments described above.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 executes the respective methods and processes described above, such as the method of generating adversarial samples. For example, in some embodiments, the method of generating adversarial samples may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the method of generating adversarial samples described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the method of generating adversarial samples in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed here.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (29)

1. A method of generating a confrontation sample, comprising:
acquiring an original image and an initial confrontation sample generated according to the original image;
performing target detection on a target object in the original image and in the initial confrontation sample to obtain detection information;
and constructing a loss function of the target object according to the detection information, the original image and the initial confrontation sample, and adjusting the initial confrontation sample based on the loss function of the target object to obtain a final confrontation sample.
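Illustrative note (not part of the claims): a minimal Python sketch of the flow recited in claim 1. The helper names detect, build_loss, and adjust are assumptions introduced here for readability; the claim does not prescribe them, and the stopping rule is borrowed from claim 11.

def generate_confrontation_sample(original_image, initial_sample,
                                  detect, build_loss, adjust,
                                  loss_threshold=0.1, max_steps=1000):
    sample = initial_sample
    for _ in range(max_steps):
        # Target detection on the original image and the current sample
        # yields the detection information of claim 1.
        detection_info = detect(original_image, sample)
        # Construct the loss function of the target object.
        loss = build_loss(detection_info, original_image, sample)
        if loss < loss_threshold:  # stopping condition borrowed from claim 11
            break
        # Adjust the initial confrontation sample based on the loss.
        sample = adjust(sample, loss)
    return sample  # the final confrontation sample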
2. The method of claim 1, wherein performing target detection on the target object in the original image and the initial confrontation sample to obtain detection information comprises:
performing target detection on a target object in the original image to obtain the real category of the target object;
and performing target detection on a target object in the initial confrontation sample to obtain the confidence of the target object in the real category and the confidences of the target object in other categories, wherein the other categories are categories other than the real category, and the detection information comprises the confidence of the target object in the real category and the confidences of the target object in the other categories.
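Illustrative note: assuming the detector exposes a per-category confidence vector for the target object (a hypothetical representation, not specified by the claim), the detection information of claim 2 can be split as follows.

import numpy as np

def split_detection_info(conf_vector, real_category):
    # Confidence of the target object in the real category.
    real_conf = float(conf_vector[real_category])
    # Confidences in all categories other than the real category.
    other_confs = np.delete(conf_vector, real_category)
    return real_conf, other_confs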
3. The method of claim 2, wherein constructing a loss function of the target object according to the detection information, the original image, and the initial confrontation sample comprises:
constructing an inter-class difference loss function according to the confidences of the target object in the other categories, wherein the inter-class difference loss function is the sum of the confidences of the target object in the other categories;
and constructing a loss function of the target object according to the confidence in the real category, the inter-class difference loss function, the original image, and the initial confrontation sample.
4. The method of claim 3, wherein constructing the inter-class difference loss function according to the confidences of the target object in the other categories comprises:
acquiring the N largest confidences from the confidences of the target object in the other categories, wherein N is a positive integer greater than 1;
and constructing the inter-class difference loss function according to the acquired N largest confidences.
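Illustrative note: a sketch of the inter-class difference loss of claim 4, the sum of the N largest other-category confidences. The value N = 3 is an arbitrary example; the claim requires only a positive integer greater than 1.

import numpy as np

def inter_class_difference_loss(other_confs, n=3):
    # Take the N largest confidences among the non-real categories
    # and sum them, per the definition in claim 3.
    top_n = np.sort(other_confs)[-n:]
    return float(np.sum(top_n))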
5. The method of claim 3 or 4, wherein constructing a loss function of the target object according to the detection information, the original image, and the initial confrontation sample comprises:
determining difference information between the original image and the initial confrontation sample;
and constructing a loss function of the target object according to the difference information and the detection information.
6. The method of claim 5, further comprising:
determining smoothing information of the initial confrontation sample, wherein the smoothing information characterizes the difference in confidence of the initial confrontation sample at the same pixel position across different categories and the difference in confidence within the same category across different pixel positions;
wherein constructing a loss function of the target object according to the difference information and the detection information comprises: constructing a loss function of the target object according to the difference information, the confidence of the target object in the real category, the inter-class difference loss function, and the smoothing information.
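Illustrative note: claim 6's smoothing information admits more than one reading. The sketch below assumes a hypothetical per-pixel, per-category score map of shape (H, W, C) and measures both the cross-category differences at each pixel and the cross-pixel differences within each category.

import numpy as np

def smoothing_info(score_map):
    # Difference of confidence at the same pixel position across categories.
    cross_category = np.abs(np.diff(score_map, axis=2)).mean()
    # Difference of confidence within the same category across pixel positions.
    cross_pixel = (np.abs(np.diff(score_map, axis=0)).mean()
                   + np.abs(np.diff(score_map, axis=1)).mean())
    return float(cross_category + cross_pixel)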
7. The method of claim 6, wherein constructing the loss function of the target object according to the difference information, the confidence of the target object in the real category, the inter-class difference loss function, and the smoothing information comprises:
calculating the difference between the inter-class difference loss function and the confidence of the target object in the real category;
calculating the sum of the difference information and the smoothing information;
and constructing a loss function of the target object according to the difference between the inter-class difference loss function and the confidence of the target object in the real category and the sum of the difference information and the smoothing information.
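Illustrative note: combining claims 5-7 into one formula. The sign of the "difference" term and the unit weights are assumptions, chosen so that the loss shrinks as the attack succeeds, which is consistent with claim 11 minimizing the loss below a threshold.

def target_object_loss(real_conf, inter_class_loss, diff_info, smooth_info):
    # Difference between the inter-class difference loss and the
    # real-category confidence, signed so a successful attack lowers it.
    attack_term = real_conf - inter_class_loss
    # Sum of the image-difference information and the smoothing information.
    regular_term = diff_info + smooth_info
    return attack_term + regular_term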
8. The method of any one of claims 1-7, wherein performing target detection on the target object in the original image and the initial confrontation sample to obtain detection information comprises:
performing target detection on a target object in the original image to obtain the real category of the target object and the confidence of the target object in each category;
and performing target detection on a target object in the initial confrontation sample to obtain the confidences of the target object in categories other than the real category;
wherein the detection information comprises the confidence of a misjudged object corresponding to the target object in a preset category and the confidences of the target object in categories other than the real category.
9. The method of claim 8, wherein constructing a loss function of the target object according to the detection information, the original image, and the initial confrontation sample comprises:
determining the confidence of the target object in the real category according to the confidence of the target object in each category;
constructing an inter-class difference loss function according to the confidences of the target object in categories other than the real category, wherein the inter-class difference loss function is the sum of the confidences of the target object in the other categories;
and constructing a loss function of the target object according to the confidence in the real category, the inter-class difference loss function, the original image, and the initial confrontation sample.
10. The method of claim 9, wherein determining the confidence of the target object in the real category according to the confidence of the target object in each category comprises:
determining the confidence of the target object in a preset misjudgment category according to the confidence of the target object in each category;
acquiring target confidences from the confidences of the target object in each category, wherein the target confidences are greater than the confidence in the misjudgment category;
and determining the confidence of the target object in the real category according to the target confidences.
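Illustrative note: claim 10 is terse; one plausible reading, shown below, keeps every confidence above the preset misjudgment-category confidence as a "target confidence" and then takes the largest of them as the real-category confidence. Both reductions are assumptions.

import numpy as np

def real_category_confidence(conf_vector, misjudgment_category):
    misjudged_conf = conf_vector[misjudgment_category]
    # Target confidences: those greater than the misjudgment-category confidence.
    targets = conf_vector[conf_vector > misjudged_conf]
    # Assumption: the real-category confidence is the largest target confidence.
    return float(targets.max()) if targets.size else float(misjudged_conf)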
11. The method of any one of claims 1-10, wherein adjusting the initial confrontation sample based on the loss function of the target object to obtain a final confrontation sample comprises:
adjusting the initial confrontation sample until the loss function of the target object is smaller than a preset loss threshold, and determining the adjusted initial confrontation sample whose loss function is smaller than the loss threshold as the final confrontation sample.
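Illustrative note: a sketch of the adjustment of claim 11 using plain gradient descent. grad_fn is a hypothetical gradient oracle (for example, automatic differentiation through the target detector); the learning rate, the clipping to a valid image range, and the step budget are all assumptions.

import numpy as np

def adjust_initial_sample(sample, loss_fn, grad_fn,
                          lr=0.01, loss_threshold=0.05, max_steps=500):
    for _ in range(max_steps):
        if loss_fn(sample) < loss_threshold:
            break  # the condition of claim 11 is satisfied
        sample = sample - lr * grad_fn(sample)  # one descent step on the loss
        sample = np.clip(sample, 0.0, 1.0)      # keep a valid image
    return sample  # the final confrontation sample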
12. The method of any one of claims 1-11, wherein before acquiring the initial confrontation sample generated according to the original image, the method further comprises:
acquiring an initial disturbance area of the original image;
and replacing the initial disturbance area in the original image according to a target model applied by the final confrontation sample to obtain the initial confrontation sample.
13. The method of claim 12, wherein replacing the initial disturbance area in the original image according to the target model applied by the final confrontation sample to obtain the initial confrontation sample comprises:
performing parameter initialization processing on the initial disturbance area according to the target model to obtain a target disturbance area;
and replacing the initial disturbance area in the original image with the target disturbance area to obtain the initial confrontation sample.
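Illustrative note: a sketch of claims 12-13, initializing a disturbance area and splicing it into the original image. Uniform random initialization is an assumption; the claims say only that the initialization depends on the target model to which the final sample will be applied.

import numpy as np

def build_initial_confrontation_sample(original_image, region_mask, seed=0):
    rng = np.random.default_rng(seed)
    # Parameter initialization of the disturbance area (assumed uniform noise).
    target_region = rng.uniform(0.0, 1.0, size=original_image.shape)
    sample = original_image.copy()
    mask = region_mask.astype(bool)
    # Replace the initial disturbance area with the target disturbance area.
    sample[mask] = target_region[mask]
    return sample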
14. A confrontation sample generation device, comprising:
a first acquisition unit, configured to acquire an original image and an initial confrontation sample generated according to the original image;
a detection unit, configured to perform target detection on a target object in the original image and in the initial confrontation sample to obtain detection information;
a construction unit, configured to construct a loss function of the target object according to the detection information, the original image, and the initial confrontation sample;
and an adjustment unit, configured to adjust the initial confrontation sample based on the loss function of the target object to obtain a final confrontation sample.
15. The apparatus of claim 14, wherein the detection unit comprises:
a first detection subunit, configured to perform target detection on a target object in the original image to obtain the real category of the target object;
and a second detection subunit, configured to perform target detection on a target object in the initial confrontation sample to obtain the confidence of the target object in the real category and the confidences of the target object in other categories, wherein the other categories are categories other than the real category, and the detection information comprises the confidence of the target object in the real category and the confidences of the target object in the other categories.
16. The apparatus of claim 15, wherein the construction unit comprises:
a first construction subunit, configured to construct an inter-class difference loss function according to the confidences of the target object in the other categories, wherein the inter-class difference loss function is the sum of the confidences of the target object in the other categories;
and a second construction subunit, configured to construct a loss function of the target object according to the confidence in the real category, the inter-class difference loss function, the original image, and the initial confrontation sample.
17. The apparatus of claim 16, wherein the first construction subunit comprises:
a first acquisition module, configured to acquire the N largest confidences from the confidences of the target object in the other categories, wherein N is a positive integer greater than 1;
and a first construction module, configured to construct the inter-class difference loss function according to the acquired N largest confidences.
18. The apparatus of claim 16 or 17, wherein the second construction subunit comprises:
a first determination module, configured to determine difference information between the original image and the initial confrontation sample;
and a second construction module, configured to construct a loss function of the target object according to the difference information and the detection information.
19. The apparatus of claim 18, further comprising:
a determination unit, configured to determine smoothing information of the initial confrontation sample, wherein the smoothing information characterizes the difference in confidence of the initial confrontation sample at the same pixel position across different categories and the difference in confidence within the same category across different pixel positions;
wherein the construction unit is configured to construct a loss function of the target object according to the difference information, the confidence of the target object in the real category, the inter-class difference loss function, and the smoothing information.
20. The apparatus of claim 19, wherein the construction unit comprises:
a first calculation subunit, configured to calculate the difference between the inter-class difference loss function and the confidence of the target object in the real category;
a second calculation subunit, configured to calculate the sum of the difference information and the smoothing information;
and a third construction subunit, configured to construct a loss function of the target object according to the difference between the inter-class difference loss function and the confidence of the target object in the real category and the sum of the difference information and the smoothing information.
21. The apparatus of any one of claims 14-20, wherein the detection unit comprises:
a third detection subunit, configured to perform target detection on the target object in the original image to obtain the real category of the target object and the confidence of the target object in each category;
and a fourth detection subunit, configured to perform target detection on the target object in the initial confrontation sample to obtain the confidences of the target object in categories other than the real category;
wherein the detection information comprises the confidence of a misjudged object corresponding to the target object in a preset category and the confidences of the target object in categories other than the real category.
22. The apparatus of claim 21, wherein the construction unit comprises:
a first determination subunit, configured to determine the confidence of the target object in the real category according to the confidence of the target object in each category;
a fourth construction subunit, configured to construct an inter-class difference loss function according to the confidences of the target object in categories other than the real category, wherein the inter-class difference loss function is the sum of the confidences of the target object in the other categories;
and a fifth construction subunit, configured to construct a loss function of the target object according to the confidence in the real category, the inter-class difference loss function, the original image, and the initial confrontation sample.
23. The apparatus of claim 22, wherein the first determination subunit comprises:
a second determination module, configured to determine the confidence of the target object in a preset misjudgment category according to the confidence of the target object in each category;
a second acquisition module, configured to acquire target confidences from the confidences of the target object in each category, wherein the target confidences are greater than the confidence in the misjudgment category;
and a third determination module, configured to determine the confidence of the target object in the real category according to the target confidences.
24. The apparatus of any one of claims 14-23, wherein the adjustment unit comprises:
an adjustment subunit, configured to adjust the initial confrontation sample until the loss function of the target object is smaller than a preset loss threshold;
and a second determination subunit, configured to determine the adjusted initial confrontation sample whose loss function is smaller than the loss threshold as the final confrontation sample.
25. The apparatus of any of claims 14-24, further comprising:
a second acquisition unit, configured to acquire an initial disturbance area of the original image;
and a replacement unit, configured to replace the initial disturbance area in the original image according to the target model applied by the final confrontation sample to obtain the initial confrontation sample.
26. The apparatus of claim 25, wherein the replacement unit comprises:
a processing subunit, configured to perform parameter initialization processing on the initial disturbance area according to the target model to obtain a target disturbance area;
and a replacement subunit, configured to replace the initial disturbance area in the original image with the target disturbance area to obtain the initial confrontation sample.
27. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-13.
28. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-13.
29. A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the method of any one of claims 1 to 13.
CN202210196939.4A 2022-03-01 2022-03-01 Method and device for generating confrontation sample Pending CN114648673A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210196939.4A CN114648673A (en) 2022-03-01 2022-03-01 Method and device for generating confrontation sample

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210196939.4A CN114648673A (en) 2022-03-01 2022-03-01 Method and device for generating confrontation sample

Publications (1)

Publication Number Publication Date
CN114648673A true CN114648673A (en) 2022-06-21

Family

ID=81992938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210196939.4A Pending CN114648673A (en) 2022-03-01 2022-03-01 Method and device for generating confrontation sample

Country Status (1)

Country Link
CN (1) CN114648673A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330579A (en) * 2022-08-03 2022-11-11 北京百度网讯科技有限公司 Model watermark construction method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN112949767B (en) Sample image increment, image detection model training and image detection method
CN112907552A (en) Robustness detection method, device and program product for image processing model
CN115294332B (en) Image processing method, device, equipment and storage medium
CN114792355B (en) Virtual image generation method and device, electronic equipment and storage medium
CN114565513A (en) Method and device for generating confrontation image, electronic equipment and storage medium
CN115359308B (en) Model training method, device, equipment, storage medium and program for identifying difficult cases
CN113591736A (en) Feature extraction network, training method of living body detection model and living body detection method
CN114882321A (en) Deep learning model training method, target object detection method and device
CN114648673A (en) Method and device for generating confrontation sample
CN114387642A (en) Image segmentation method, device, equipment and storage medium
CN113177497B (en) Training method of visual model, vehicle identification method and device
CN114821063A (en) Semantic segmentation model generation method and device and image processing method
CN113902899A (en) Training method, target detection method, device, electronic device and storage medium
CN113643260A (en) Method, apparatus, device, medium and product for detecting image quality
CN115330579B (en) Model watermark construction method, device, equipment and storage medium
CN114663980B (en) Behavior recognition method, and deep learning model training method and device
CN114549904B (en) Visual processing and model training method, device, storage medium and program product
CN114399513A (en) Method and device for training image segmentation model and image segmentation
CN113920273B (en) Image processing method, device, electronic equipment and storage medium
CN114581711A (en) Target object detection method, apparatus, device, storage medium, and program product
CN113947146A (en) Sample data generation method, model training method, image detection method and device
CN114707638A (en) Model training method, model training device, object recognition method, object recognition device, object recognition medium and product
CN115333783A (en) API call abnormity detection method, device, equipment and storage medium
CN115170919A (en) Image processing model training method, image processing device, image processing equipment and storage medium
CN113936158A (en) Label matching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination