CN114898091A - Image countermeasure sample generation method and device based on regional information
- Publication number
- CN114898091A (application number CN202210388711.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- loss
- interference
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/273—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion removing elements interfering with the pattern to be recognised
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The invention discloses a method and device for generating image adversarial samples based on regional information, wherein the method comprises the following steps: (1) inputting a given image into a pre-trained deep neural network, which outputs a corresponding predicted image; (2) calculating the loss L_RW between the predicted image and the target image at the region scale, and calculating the loss L_dice between the predicted image and the real label at the pixel scale; (3) calculating the gradient values of the two losses respectively, performing weighted addition to obtain an interference value, adding the interference value as noise to the original input image in the reverse direction, and iterating to obtain the target adversarial sample. Compared with traditional attack methods, the RDAG method, which combines the pixel scale and the region scale, achieves a markedly stronger attack effect on segmentation tasks.
Description
Technical Field
The invention belongs to the field of image segmentation, and particularly relates to a method and device for generating image adversarial samples based on regional information.
Background
Deep learning has achieved remarkable results in the medical field, such as medical image segmentation, lesion tissue detection, medical image reconstruction, computer-aided disease diagnosis, and the like. There is a prevailing trend of attempting to replace manual work with fully automated deep learning models in order to reduce the labor costs of medical institutions (doctors, nurses, domain experts, etc.). However, in most medical scenarios, such as the medical image segmentation task, only a small amount of data is available for training, while deep learning models have an extremely large number of parameters; as a result, the model memorizes the training data and generalizes poorly to test data or unseen data. Meanwhile, owing to the low interpretability of deep learning models, they suffer from the "black box" problem: the precise operating logic of the neural network cannot be obtained. This can be fatal in the medical field, because data acquired by medical devices is usually very noisy, which can lead to uninterpretable errors in the deep learning model and, in many cases, serious interference with medical diagnosis. Recent research also shows that deep learning models are easily misled by artificially crafted perturbations that are hard for humans to notice, i.e., adversarial samples, which poses major hidden dangers in medicine and may cause irreversible consequences.
To prevent a deep learning network from becoming a potential danger, a method is needed to evaluate the stability of the deep learning model, for example its robustness in the face of attack samples. Recently, some adversarial sample generation methods have been proposed in the segmentation field and used to evaluate the robustness of pre-trained deep learning models. Two widely used adversarial sample generation methods are the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). However, FGSM performs only a single pixel-wise update along the gradient direction, and such an attack tends to have limited impact on nonlinear models (e.g., most deep neural networks). PGD applies multi-step updates, but it is still weakly aggressive in the image segmentation domain. At present there is no strong attack method for the image segmentation field, so a more effective adversarial attack method aimed at image segmentation urgently needs to be developed, and corresponding defense means researched on that basis to improve model robustness.
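For reference, these two baseline attacks can be written in their standard form, with input X, label Y, network loss J(θ, X, Y), perturbation budget ε, step size α, and Π denoting projection back onto the ε-ball around X:

$$ X' = X + \epsilon \cdot \mathrm{sign}\big(\nabla_X J(\theta, X, Y)\big) \quad \text{(FGSM)} $$

$$ X^{t+1} = \Pi_{\epsilon}\Big(X^{t} + \alpha \cdot \mathrm{sign}\big(\nabla_X J(\theta, X^{t}, Y)\big)\Big) \quad \text{(PGD)} $$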
Disclosure of Invention
The purpose of the invention is as follows: aiming at the shortcomings of the prior art, the invention provides a method and device for generating image adversarial samples based on regional information, which can be used to evaluate and improve the robustness of deep learning networks.
The technical scheme is as follows: a method for generating image adversarial samples based on regional information comprises the following steps:
(1) inputting a given image into a pre-trained deep neural network, which outputs a corresponding predicted image;
(2) calculating the loss L_RW between the predicted image and the target image at the region scale, and calculating the loss L_dice between the predicted image and the real label at the pixel scale;
(3) calculating the gradient values of the two losses respectively, performing weighted addition to obtain an interference value, adding the interference value as noise to the original input image in the reverse direction, and iterating to obtain the target adversarial sample.
Further, the loss L_RW between the predicted image and the target image is calculated from a label matrix rrwmap generated from the input image, where C is the number of targets in the input image X, and each element of the matrix represents the distance from the pixel i at the corresponding position of the input image to the edge pixel b_ic of its nearest target region Ω_c.
Further, the loss L_dice between the predicted image and the real label is calculated at the pixel scale, where N denotes the number of pixels of the input image, p_cn denotes the predicted value of pixel n being classified as class c, and g_cn denotes the label value of pixel n for class c.
Further, the interference value is obtained by a reverse gradient calculation combining the two losses, where r_m denotes the interference value of the mth iteration, P_m denotes the input image of the mth iteration, G is the label, G′ is the desired adversarial target, and ∇ denotes calculating the gradient values with respect to the current image.
Further, the method further comprises: performing Frobenius regularization on the interference value, r′_m = γ · r_m / ||r_m||_F, where γ is the weight of the interference value r_m determined in each round; after one iteration, the input image P_m becomes P_{m+1} = P_m + r′_m.
Further, the method further comprises: clipping the pixels after each round of interference to ensure that they fall within the interval [0,1]; the final total interference is r = Σ_m r′_m.
Further, the deep neural network adopts CE-Net or U-Net.
The present invention also provides a computer apparatus comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and when executed by the processors the programs implement the steps of the image adversarial sample generation method based on regional information as described above.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image adversarial sample generation method based on regional information as described above.
Beneficial effects: at the pixel scale, the widely used Dice Loss function is applied, and disturbance is added through iterative back-propagation so that the deep learning model misclassifies each pixel in the image. Meanwhile, in order to further disturb the pixels inside the segmentation label, a region-scale loss function (Region-wise Loss) is used, which increases the interference weight of the pixels inside the label and degrades the overall segmentation result. Segmentation tests on the DRIVE and CELL data sets show that, compared with traditional attack methods, the method achieves a markedly stronger attack effect on segmentation tasks while keeping the disturbance to the image very small.
Drawings
FIG. 1 is a flowchart of the method for generating image adversarial samples based on regional information according to the present invention.
Detailed Description
Unlike the classification task, an adversarial attack on a deep learning model for segmentation must make the model misidentify every pixel of the input picture. To this end, the invention uses a dense adversarial sample generation method based on regional information. To make the technical solution of the invention clearer, the related concepts are first introduced.
Adversarial sample: an adversarial sample adds a fine disturbance, unrecognizable to human eyes, to an image so that the deep learning network makes a wrong prediction. Let the classifier of the deep learning network be F(X, θ), where X is the input picture and θ are the trainable network parameters. For an input picture with given label Y, an adversarial sample X′ = X + r can be generated such that F(X, θ) = Y, F(X′, θ) = Y′, and Y′ ≠ Y. At the same time, the interference r needs to be kept small enough that it is not easily perceived by human eyes.
Dense Adversarial Generation (DAG): in the field of image segmentation, the goal of an adversarial attack is to make a pre-trained segmentation model mispredict every pixel. At the pixel scale, DAG encourages the deep learning model to missegment by increasing the distance between each pixel's predicted value and its corresponding correct label value while decreasing the distance to a wrong label value. In its loss function, t_c denotes a segmentation target in the input image X, f_{l_c}(X, t_c) denotes the predicted value of target t_c being correctly classified as its corresponding correct label l_c, and f_{l′_c}(X, t_c) denotes the predicted value of target t_c being misclassified as its corresponding wrong label l′_c. By decreasing the value of this loss function, the deep learning model is encouraged to mispredict every pixel.
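As the formula is not reproduced here, the following is a reconstruction following the DAG loss of Xie et al. (2017), "Adversarial Examples for Semantic Segmentation and Object Detection" (cited below), using the notation just defined:

$$ L(X, T, \mathcal{L}, \mathcal{L}') = \sum_{c=1}^{C} \big[ f_{l_c}(X, t_c) - f_{l'_c}(X, t_c) \big] $$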
In order to further improve the interference performance, the invention proposes a Region-based Dense Adversarial Generation (RDAG) method built on regional information, which obtains information about the input image at two scales simultaneously and attacks by combining a region-scale and a pixel-scale loss function. Referring to fig. 1, the adversarial sample generation method specifically comprises the following steps:
(1) Input the given image into a pre-trained deep neural network, which outputs the corresponding prediction result. The deep neural network may be CE-Net or U-Net, both of which perform well on medical image segmentation. The training of the neural network is not the core of the invention and is not described in detail here. As shown in fig. 1, a fundus retinal image is passed through the neural network to obtain a predicted image.
(2) Calculate the Region-wise Loss between the predicted image and the target image, and the Dice Loss between the predicted image and the real label, respectively. The target image is the image toward which the user wishes to push the disturbed result; as shown in fig. 1, it may be a completely black image, indicating that the network is expected not to distinguish background pixels from the pixels to be segmented when predicting the input image. The real label refers to the label annotated by a doctor or expert on the original input image.
The invention adds a Region-wise Loss on top of the pixel scale. In this loss, C is the number of targets in the input image X, i.e., the number of label categories to be segmented in the input image; rrwmap is a label matrix generated from the input image, each element of which represents the distance from the pixel i at the corresponding position of the input image to the edge pixel b_ic of its nearest target region Ω_c. Meanwhile, in order to keep the gradient sign stable during optimization, rrwmap sets the pixels outside all regions to a constant value, which is set to 1 in the embodiment of the invention.
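The Region-wise Loss formula is likewise not reproduced here; one plausible reconstruction, assuming the common region-wise formulation in which the predicted class probabilities are weighted by the rrwmap distance values and summed over all N pixels, is:

$$ L_{RW} = \frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} p_{cn} \cdot \mathrm{rrwmap}_{cn} $$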
The adversarial sample generation method also improves the pixel-scale calculation, adopting the Dice Coefficient Loss to collect pixel-scale information. This loss mainly measures the degree of overlap between the label and the segmentation result generated by the deep learning network. Here, N denotes the number of pixels of the input image, p_cn denotes the predicted value (between 0 and 1) of pixel n being classified as class c, and g_cn denotes the label value (0 or 1) of pixel n for class c.
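The Dice Coefficient Loss formula is not reproduced here either; a common soft-Dice form consistent with the variables above is:

$$ L_{dice} = 1 - \frac{2 \sum_{c=1}^{C} \sum_{n=1}^{N} p_{cn}\, g_{cn}}{\sum_{c=1}^{C} \sum_{n=1}^{N} p_{cn}^{2} + \sum_{c=1}^{C} \sum_{n=1}^{N} g_{cn}^{2}} $$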
(3) Calculate the gradient values of the two losses respectively, perform weighted addition to obtain the interference value, add the interference value as noise to the original input image in the reverse direction, run T rounds of iteration, and finally obtain the target adversarial sample.
The interference value r is obtained by combining L_dice and L_RW in a reverse gradient calculation. In the mth iteration, the current input image (generated after m−1 rounds of interference) is denoted P_m and the interference of this iteration is denoted r_m, where G is the label, G′ is the adversarial target (such as a blank label) that the user intends the deep learning network to output, and ∇ denotes calculating the gradient values with respect to the current image.
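The exact combination formula is not reproduced here; one plausible reconstruction, with hypothetical weights α and β chosen so that the gradient step increases the Dice loss against the true label G while decreasing the region-wise loss toward the adversarial target G′, is:

$$ r_m = \alpha \cdot \nabla_{P_m} L_{dice}\big(F(P_m, \theta), G\big) - \beta \cdot \nabla_{P_m} L_{RW}\big(F(P_m, \theta), G'\big) $$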
In order to limit the magnitude of the noise, so that the adversarial sample does not differ obviously from the original image and the interference remains numerically stable, the invention also applies Frobenius regularization to the interference r_m: r′_m = γ · r_m / ||r_m||_F, where γ is the weight of the interference r_m determined in each round, usually set to 1. After one iteration, the input image P_m becomes P_{m+1} = P_m + r′_m.
In order to ensure that the disturbed pixels do not overflow, the invention also clips the pixels after each round of disturbance so that they fall within the interval [0,1]. The final total interference is r = Σ_m r′_m.
After T rounds of iteration, the target adversarial sample is finally obtained; when fed to the network for prediction, its prediction performance is significantly degraded compared with the original input image without noise.
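Putting steps (1)-(3) together, the following is a minimal PyTorch-style sketch of the RDAG iteration. It is an illustrative reconstruction rather than the patent's reference implementation: the helper functions dice_loss and region_wise_loss, the weights alpha and beta, and the sign convention for combining the two gradients are assumptions, and model stands for any pre-trained segmentation network such as CE-Net or U-Net.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss (one common variant): pred holds per-class
    # probabilities p_cn, target holds 0/1 label values g_cn.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / ((pred ** 2).sum() + (target ** 2).sum() + eps)

def region_wise_loss(pred, rrwmap):
    # Assumed region-wise loss: predicted probabilities weighted by
    # the rrwmap distance map and averaged over all pixels.
    return (pred * rrwmap).mean()

def rdag_attack(model, x, g, rrwmap, T=20, alpha=1.0, beta=1.0, gamma=1.0):
    """Iteratively perturb x (values in [0, 1]) into an adversarial sample.

    g      -- ground-truth label map
    rrwmap -- region-wise distance map generated from the input image
    """
    p_m = x.clone()
    total_r = torch.zeros_like(x)
    for _ in range(T):
        p_m = p_m.detach().requires_grad_(True)
        pred = model(p_m)
        # Weighted combination of the two losses; ascending this gradient
        # increases the Dice loss against the real label while decreasing
        # the region-wise loss toward the adversarial target (assumed signs).
        loss = alpha * dice_loss(pred, g) - beta * region_wise_loss(pred, rrwmap)
        r_m = torch.autograd.grad(loss, p_m)[0]
        # Frobenius regularization: r'_m = gamma * r_m / ||r_m||_F
        r_m = gamma * r_m / (r_m.flatten().norm() + 1e-12)
        # Add the interference and clip pixels back into [0, 1]
        p_m = (p_m + r_m).clamp(0.0, 1.0)
        total_r = total_r + r_m
    return p_m.detach(), total_r  # adversarial sample and total interference r
```

In this sketch the adversarial target G′ is implicit in rrwmap, which is precomputed from the input image as described above; the returned total_r corresponds to r = Σ_m r′_m.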
To verify the performance of the method of the invention, the following experiments were performed. The experiments used two open data sets, DRIVE and CELL, as test data. DRIVE is a set of fundus retina images in which the blood vessels were manually annotated by experts in the relevant field; it is widely used in medical image segmentation research and consists of twenty training images and twenty test images. CELL is a subset of the NuCLS data set covering thirty pathological image slices with manually annotated cancer cells, divided into twenty training images and ten test images. Both data sets were cropped to 448×448 and pre-processed with random rotation, flipping, translation, etc.
The experiments use CE-Net as the deep learning network, which has leading performance in the field of medical image segmentation. ADAM was used as the optimizer during training, and pre-training used the Dice loss. A dynamic learning rate was also used, decaying the learning rate every 10 epochs to accelerate convergence. The Dice coefficient serves as the main index for evaluating the prediction accuracy of the model, while the Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) are used as auxiliary measures of the attack capability of the adversarial samples. In order to make the degree of attack uniform, the interference r is limited: for an input picture with K pixels, the interference level p is defined in terms of the normalized interference r_k of each channel (an RGB image typically has three channels), and the interference level p per pixel is set to 2.6e-3 (DRIVE) and 3.5e-3 (CELL).
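The definition of p is not reproduced here; one plausible reading, averaging the normalized per-channel interference magnitudes over the K pixels, is:

$$ p = \frac{1}{K} \sum_{k=1}^{K} \lVert r_k \rVert $$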
Segmentation tests were performed on the DRIVE and CELL data sets, attacking the pre-trained model with the RDAG method of the invention and with traditional attack methods; the test results are shown in Table 1.
TABLE 1 DRIVE data set and CELL data set test results on CE-Net
It is evident that, when not under attack, the pre-trained model produced very good segmentation predictions on both data sets (Dice coefficients of 0.7675 and 0.7736, respectively). Correspondingly, the RDAG method of the invention achieves the most advanced attack effect to date: under our attack the segmentation quality drops sharply, to 0.0103 and 0.0945 respectively, significantly ahead of FGSM (0.6774 and 0.7372) and PGD (0.1804 and 0.3681), and also surpassing the original DAG approach (0.2411 and 0.2006). This proves that the RDAG method, which combines the pixel scale and the region scale, has a markedly stronger attack effect on segmentation tasks than traditional attack methods.
Meanwhile, the degree of the generated disturbance is also an important index for evaluating attack performance; SSIM, RMSE, and PSNR are used as the criteria for measuring it. As shown in Table 1, the PSNR (42.809 and 36.748), SSIM (0.9673 and 0.9896), and RMSE (0.0073 and 0.0146) values all indicate that the RDAG method of the invention has a significant effect on image segmentation while keeping the image disturbance within a very small range. The disturbance is smaller than that of the FGSM method, which uses only a single-iteration attack, while the attack effect is better than that of the multi-iteration PGD and DAG attacks; the disturbance is so small that it cannot be perceived by human eyes.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus (system), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Claims (9)
1. A method for generating image adversarial samples based on regional information, characterized by comprising the following steps:
(1) inputting a given image into a pre-trained deep neural network, which outputs a corresponding predicted image;
(2) calculating the loss L_RW between the predicted image and the target image at the region scale, and calculating the loss L_dice between the predicted image and the real label at the pixel scale;
(3) calculating the gradient values of the two losses respectively, performing weighted addition to obtain an interference value, adding the interference value as noise to the original input image in the reverse direction, and iterating to obtain the target adversarial sample.
2. The method for generating image adversarial samples according to claim 1, wherein the loss L_RW between the predicted image and the target image is calculated from a label matrix rrwmap generated from the input image, where C is the number of targets in the input image X, and each element of the matrix represents the distance from the pixel i at the corresponding position of the input image to the edge pixel b_ic of its nearest target region Ω_c.
3. The method for generating image adversarial samples according to claim 1, wherein the loss L_dice between the predicted image and the real label is calculated at the pixel scale, where N denotes the number of pixels of the input image, p_cn denotes the predicted value of pixel n being classified as class c, and g_cn denotes the label value of pixel n for class c.
4. The method for generating image adversarial samples according to claim 1, wherein the interference value r_m of the mth iteration is obtained by a reverse gradient calculation combining the two losses with respect to the current input image P_m, with G being the label and G′ the desired adversarial target.
5. The method for generating image adversarial samples according to claim 4, wherein the method further comprises: performing Frobenius regularization on the interference value, r′_m = γ · r_m / ||r_m||_F, where γ is the weight of the interference value r_m determined in each round; after one iteration, the input image P_m becomes P_{m+1} = P_m + r′_m.
6. The method for generating image adversarial samples according to claim 5, wherein the method further comprises: clipping the pixels after each round of interference to ensure that they fall within the interval [0,1]; the final total interference is r = Σ_m r′_m.
7. The method for generating image adversarial samples according to claim 1, wherein the deep neural network adopts CE-Net or U-Net.
8. A computer device, comprising:
one or more processors;
a memory;
and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and when executed by the processors the programs implement the steps of the image adversarial sample generation method based on regional information as recited in any one of claims 1-7.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image adversarial sample generation method based on regional information according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210388711.5A CN114898091A (en) | 2022-04-14 | 2022-04-14 | Image countermeasure sample generation method and device based on regional information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210388711.5A CN114898091A (en) | 2022-04-14 | 2022-04-14 | Image countermeasure sample generation method and device based on regional information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114898091A true CN114898091A (en) | 2022-08-12 |
Family
ID=82717806
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210388711.5A Pending CN114898091A (en) | 2022-04-14 | 2022-04-14 | Image countermeasure sample generation method and device based on regional information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114898091A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200380300A1 (en) * | 2019-05-30 | 2020-12-03 | Baidu Usa Llc | Systems and methods for adversarially robust object detection |
CN110503650A (en) * | 2019-07-08 | 2019-11-26 | 南京航空航天大学 | Optical fundus blood vessel image segmentation fights sample generating method, segmentation network security evaluation method |
CN110503654A (en) * | 2019-08-01 | 2019-11-26 | 中国科学院深圳先进技术研究院 | A kind of medical image cutting method, system and electronic equipment based on generation confrontation network |
CN111932646A (en) * | 2020-07-16 | 2020-11-13 | 电子科技大学 | Image processing method for resisting attack |
CN113780123A (en) * | 2021-08-27 | 2021-12-10 | 广州大学 | Countermeasure sample generation method, system, computer device and storage medium |
CN114330652A (en) * | 2021-12-22 | 2022-04-12 | 杭州师范大学 | Target detection attack method and device |
Non-Patent Citations (3)
Title |
---|
CIHANG XIE ET AL.: "Adversarial Examples for Semantic Segmentation and Object Detection", 2017 IEEE International Conference on Computer Vision, 22 October 2017 (2017-10-22) * |
YUTONG WANG ET AL.: "Adversarial attacks on Faster R-CNN object detector", Neurocomputing, 21 March 2020 (2020-03-21) * |
LI SHUAI: "Black-box attack method based on image latent space" (基于图像隐空间的黑盒攻击方法), Master's Theses Electronic Journals (硕士电子期刊), 15 January 2022 (2022-01-15) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117576522A (en) * | 2024-01-18 | 2024-02-20 | 之江实验室 | Model training method and device based on mimicry structure dynamic defense |
CN117576522B (en) * | 2024-01-18 | 2024-04-26 | 之江实验室 | Model training method and device based on mimicry structure dynamic defense |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||