CN110503650A - Adversarial example generation method for fundus blood vessel image segmentation, and segmentation network security evaluation method - Google Patents


Info

Publication number
CN110503650A
CN110503650A (application CN201910608656.4A)
Authority
CN
China
Prior art keywords
blood vessel
network
optical fundus
fundus blood
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910608656.4A
Other languages
Chinese (zh)
Inventor
张道强
徐梦婷
张涛
李仲年
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201910608656.4A priority Critical patent/CN110503650A/en
Publication of CN110503650A publication Critical patent/CN110503650A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses an adversarial example generation method for fundus blood vessel image segmentation and a segmentation network security evaluation method. The adversarial example generation method comprises the steps of: 1. establishing a fundus blood vessel image segmentation network; 2. acquiring original fundus blood vessel images, labeling the blood vessels in the acquired images, and constructing training samples to train the fundus blood vessel image segmentation network; 3. building a perturbation generation network to produce adversarial example images; 4. feeding the generated adversarial example images into the trained segmentation network to obtain segmentation results, computing an objective function, and updating the parameters of the perturbation generation network by minimizing that objective function to obtain an optimized perturbation generation network; 5. generating a perturbation with the optimized network and adding it to the original fundus blood vessel image to obtain the adversarial example image. The method yields adversarial example images that are indistinguishable from the original images to the human eye, and the resulting perturbation generation network learns the characteristics of the segmentation network well.

Description

Adversarial example generation method for fundus blood vessel image segmentation, and segmentation network security evaluation method
Technical field
The invention belongs to the technical field of medical image processing, and in particular relates to an adversarial example generation method for fundus blood vessel image segmentation and a segmentation network security evaluation method.
Background technique
In recent years, deep learning algorithms driven by new network architectures and advances in big data have shown surprisingly high performance in many artificial intelligence tasks, such as image recognition and semantic segmentation. Deep learning is also extremely exciting in clinical medicine: in medical diagnosis, deep learning algorithms appear to have reached a level comparable to physicians in radiology, pathology, dermatology and ophthalmology. In 2018, the U.S. Food and Drug Administration (FDA) approved the first autonomous artificial intelligence diagnostic system and indicated that it is actively developing a new regulatory framework to promote innovation in this field.
However, Szegedy et al. discovered a weakness of deep neural networks in image classification. Although deep learning algorithms achieve very high accuracy, current deep networks are vulnerable to adversarial attacks in the form of small perturbations that are almost imperceptible to the human visual system. Such attacks can make a deep neural network classifier completely change its prediction for an image; worse, the attacked model reports high confidence in the wrong prediction, and the same image perturbation can fool multiple networks. The profound implications of these results triggered broad interest in adversarial attacks and prompted researchers to reconsider the robustness of deep learning models and possible defenses.
Moosavi-Dezfooli et al. first proposed the concept of universal perturbations in 2016. A universal perturbation is a single fixed perturbation which, when added to most natural images, misleads a pre-trained model. Mopuri et al. proposed a data-independent method for generating universal perturbations, whose motivation is to maximize the mean activation of multiple network layers when the universal perturbation is given as input; although this method needs no information about the training data, its results are not as good as those of Moosavi-Dezfooli's method. Metzen et al. proposed a method of universal targeted attacks against semantic segmentation models. Their method is similar to that of Moosavi-Dezfooli: they create universal perturbations by adding image-dependent perturbations and clipping the result to satisfy a norm constraint. In 2017, Moosavi-Dezfooli et al. further proposed a method, based on the geometric properties of decision boundaries, for quantitatively analyzing the robustness of classifiers against universal perturbations.
Many methods have been proposed for creating image-dependent perturbations. Optimization-based methods, such as those of Szegedy et al. and Carlini et al., define a cost function from a perturbation norm constraint and the model loss; although these methods obtain better results than other approaches, their inference time is very slow. In 2015, Goodfellow et al. proposed the Fast Gradient Sign Method (FGSM) for generating adversarial examples: the gradient of the loss function is computed for each element, and a small step is taken along the gradient descent direction. Although this method is very fast, using only a single direction based on a linear approximation of the loss usually leads to sub-optimal results. Building on this work, Moosavi-Dezfooli et al. proposed an iterative algorithm that computes adversarial perturbations by assuming the loss function can be linearized around the current data point at each iteration. Kurakin et al. proposed an iterative least-likely-class method, an iterative gradient-based approach that selects the least likely predicted class as the attack target; they also discussed how adding adversarial examples during training can effectively improve model robustness.
However, the perturbation-creation methods proposed so far are mostly applied to natural images rather than medical images, which have higher segmentation requirements; deep learning models for medical images also have higher safety and robustness requirements, and how to evaluate the security of deep learning models for medical images is an urgent problem.
Summary of the invention
Purpose of the invention: the present invention aims to provide an adversarial example generation method for fundus blood vessel image segmentation, which generates adversarial examples for attack experiments on a segmentation network; it also provides a method of evaluating segmentation network security through attack experiments, which can intuitively evaluate the security of a fundus blood vessel image segmentation network.
Technical solution: in one aspect, the present invention discloses an adversarial example generation method for fundus blood vessel image segmentation, comprising the steps of:
(1) establishing a fundus blood vessel image segmentation network f;
(2) acquiring original fundus blood vessel images and labeling the blood vessels in the acquired images; training the segmentation network with the original fundus blood vessel images and the corresponding blood vessel labels (class labels are not used in the experiments) to obtain the trained segmentation network f(·);
(3) building a perturbation generation network G, the perturbation generation network generating an adversarial perturbation v; applying a norm constraint to v to obtain a small perturbation u; adding u to the original fundus blood vessel image x to obtain the adversarial example image a;
(4) feeding the generated adversarial example image a into the trained segmentation network f(·) to obtain the segmentation result f(a); computing the objective function and updating the parameters of G by minimizing the objective function, obtaining the optimized perturbation generation network G(·);
(5) generating the perturbation u with the optimized network G(·) and adding it to the original fundus blood vessel image x, obtaining the adversarial example image a = x + u.
The fundus blood vessel image segmentation network is a U-Net comprising four down-convolution blocks, four up-convolution blocks and one sigmoid layer; each down-convolution block comprises two convolutional layers and one pooling layer; each up-convolution block comprises two convolutional layers, one deconvolution layer and one concatenation operation.
The perturbation generation network in step (3) takes a ResNet as its backbone and comprises, connected in sequence: a first ReflectionPad layer, four convolutional layers, six residual blocks, two deconvolution layers, a second ReflectionPad layer and a fifth convolutional layer.
The objective function in step (4) is:
L(a) = -L_f(c_a, c_x) + d(a, x)
where c_x = f(x) ∈ {0,1}^d denotes the label of the original fundus blood vessel image x in the pixel space R^d; c_a = f(a) denotes the segmentation result label, in pixel space, produced by the segmentation network f(·) for the adversarial example a of x; d(·) is a distance metric; and L_f is the loss between the segmentation result label output by the segmentation network and the true class label.
As one option, the loss function L_f is:
L_f = L_non-targeted(θ) = log(E(c_a, c_x))
where θ denotes the parameters of the perturbation generation network; E(·) denotes BCELoss; c_x = f(x) ∈ {0,1}^d denotes the label of the original fundus blood vessel image x in the pixel space R^d; and c_a = f(a) denotes the segmentation result label of the adversarial example a of x in pixel space.
Alternatively, the loss function L_f is:
L_f = L_targeted(θ) = log(E(c_a, c_t))
where θ denotes the parameters of the perturbation generation network; E(·) denotes BCELoss; c_t denotes the specified class into which the adversarial example a is to be segmented in pixel space; and c_a = f(a) denotes the segmentation result label of the adversarial example a in pixel space.
The input of the perturbation generation network in the present invention is either the original fundus blood vessel image or a fixed noise z, from which the adversarial example image is obtained.
The norm constraint is ||u||_p ≤ ε, where ε is a preset distance threshold and ||·||_p is the L_p norm.
In another aspect, the invention discloses a fundus blood vessel image segmentation network security evaluation method, comprising:
(9.1) generating adversarial examples of original fundus blood vessel images using the above adversarial example generation method for fundus blood vessel image segmentation;
(9.2) segmenting the adversarial example images with the trained fundus blood vessel image segmentation network and computing the Dice coefficient; the higher the Dice coefficient, the higher the security of the segmentation network.
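A minimal sketch of the Dice coefficient used in step (9.2), assuming binary masks stored as NumPy arrays (the function name and the epsilon smoothing term are illustrative, not taken from the patent):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|P & T| / (|P| + |T|) for binary masks (1 = vessel pixel)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Identical masks give a Dice close to 1; disjoint masks give a Dice close to 0.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 0]])
# dice_coefficient(a, b) = 2*2 / (3 + 2) = 0.8
```

A sharp drop of this value after the attack is exactly the security signal the evaluation method looks for.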
Beneficial effects: compared with the prior art, the adversarial example generation method for fundus blood vessel image segmentation disclosed by the invention can obtain adversarial example images that are indistinguishable from the original images to the human eye, and the resulting perturbation generation network learns the characteristics of the segmentation network well, so that effective attacks can be mounted on the fundus blood vessel image segmentation network to verify its security.
Detailed description of the invention
Fig. 1 is a schematic diagram of the model framework for generating adversarial examples for fundus blood vessel image segmentation in embodiment 1;
Fig. 2 is a structural schematic diagram of the fundus blood vessel image segmentation network in embodiment 1;
Fig. 3 is a structural schematic diagram of the perturbation generation network in embodiment 1;
Fig. 4 is a structural schematic diagram of a residual block in the perturbation generation network;
Fig. 5 is a schematic diagram of the model framework for generating adversarial examples in embodiment 2;
Fig. 6 is a comparison of the effects of a non-targeted attack with an image-dependent perturbation in embodiment 1;
Fig. 7 is a comparison of the effects of a targeted attack with an image-dependent perturbation in embodiment 1;
Fig. 8 is a comparison of the effects of a non-targeted attack with a universal perturbation in embodiment 2;
Fig. 9 is a comparison of the effects of a targeted attack with a universal perturbation in embodiment 2.
Specific embodiment
The present invention is further elucidated below with reference to the accompanying drawings and specific embodiments.
Embodiment 1:
The invention discloses an adversarial example generation method for fundus blood vessel image segmentation, comprising the following steps:
Step 1: establish the fundus blood vessel image segmentation network f.
In this embodiment, the segmentation network is a U-Net, as shown in Fig. 2, comprising four down-convolution blocks D, four up-convolution blocks U and one sigmoid layer; each down-convolution block comprises two convolutional layers and one pooling layer; each up-convolution block comprises two convolutional layers, one deconvolution layer and one concatenation operation.
In the original U-Net structure, the present invention sets the kernel size to 3 with padding 1, so that the image size is unchanged by each convolution operation. The pooling operation uses max pooling with kernel size 2, and the deconvolution operation uses kernel size 2; the concatenation operation is the copy-and-crop of U-Net. At the end of the network, after a convolution with kernel size 1, a sigmoid operation is added so that the network outputs a probability prediction for each pixel.
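The U-Net described above can be sketched in PyTorch roughly as follows (four down blocks, four up blocks, 3x3 convolutions with padding 1, max pooling with kernel 2, 2x2 deconvolutions, copy-and-crop concatenation, and a final 1x1 convolution followed by sigmoid); the channel widths are illustrative assumptions, not values from the patent:

```python
import torch
import torch.nn as nn

def double_conv(cin, cout):
    # two 3x3 convolutions with padding 1 keep the spatial size unchanged
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class UNet(nn.Module):
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.downs = nn.ModuleList()
        prev = in_ch
        for c in chs:                        # four down-convolution blocks
            self.downs.append(double_conv(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)          # max pooling, kernel 2
        self.bottom = double_conv(chs[-1], chs[-1] * 2)
        self.upconvs, self.ups = nn.ModuleList(), nn.ModuleList()
        prev = chs[-1] * 2
        for c in reversed(chs):              # four up-convolution blocks
            self.upconvs.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.ups.append(double_conv(2 * c, c))  # channels doubled by concat
            prev = c
        self.head = nn.Conv2d(prev, 1, 1)    # final 1x1 convolution

    def forward(self, x):
        skips = []
        for down in self.downs:
            x = down(x)
            skips.append(x)                  # saved for copy-and-crop
            x = self.pool(x)
        x = self.bottom(x)
        for upconv, up, skip in zip(self.upconvs, self.ups, reversed(skips)):
            x = upconv(x)
            x = up(torch.cat([skip, x], dim=1))  # concatenation operation
        return torch.sigmoid(self.head(x))   # per-pixel probability map

# For a 64x64 input, the output is a one-channel probability map of the same size.
```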
Step 2: acquire original fundus blood vessel images and label the blood vessels in the acquired images; train the segmentation network with the original fundus blood vessel images and the corresponding blood vessel labels (class labels are not used in the experiments) to obtain the trained segmentation network f(·).
Let X denote the distribution of fundus blood vessel images in the pixel space R^d, so that an image can be written x = (x_1, ..., x_d). In a fundus image, the blood vessels to be segmented are labeled 1 and the remaining background is labeled 0. The segmentation network f(·) is trained on images x; the segmentation result is the predicted probability of each pixel of the image, f(x) = (f(x_1), ..., f(x_d)). Let c_x = f(x) denote the correct label of x; in this embodiment, elements with f(x_i) > 0.5 are labeled 1 and the others 0, so f(x) ∈ {0,1}^d.
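The binarization rule above (pixels with f(x_i) > 0.5 labeled 1, all others 0) can be sketched as a one-line helper; the function name is a hypothetical convenience, not from the patent:

```python
import numpy as np

def binarize(probs, threshold=0.5):
    """Turn a per-pixel probability map f(x) into a {0,1}-valued label c_x."""
    return (np.asarray(probs) > threshold).astype(np.uint8)

# Probabilities strictly above 0.5 become vessel pixels (label 1).
probs = np.array([[0.9, 0.4], [0.51, 0.1]])
# binarize(probs) -> [[1, 0], [1, 0]]
```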
Step 3: build the perturbation generation network G, the perturbation generation network generating an adversarial perturbation v; apply a norm constraint to v to obtain a small perturbation u; add u to the original fundus blood vessel image x to obtain the adversarial example image a. The structure of the perturbation generation network in this embodiment is shown in Fig. 3; it takes a ResNet as its backbone and comprises, connected in sequence, a first ReflectionPad layer, four convolutional layers, six residual blocks, two deconvolution layers, a second ReflectionPad layer and a fifth convolutional layer. The structure of a residual block is shown in Fig. 4. The ReflectionPad operations first enlarge the image so that its size is preserved by the subsequent convolution operations.
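A PyTorch sketch of a ResNet-style perturbation generation network of the kind described (first ReflectionPad, four convolutional layers, six residual blocks, two deconvolutions, a second ReflectionPad and a fifth, final convolution); the channel counts, normalization layers and tanh output are assumptions for illustration:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)  # identity shortcut of the residual block

class PerturbationGenerator(nn.Module):
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        layers = [nn.ReflectionPad2d(3),                     # first ReflectionPad
                  nn.Conv2d(in_ch, base, 7), nn.ReLU(inplace=True),
                  nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(base * 4, base * 4, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [ResidualBlock(base * 4) for _ in range(6)]  # six residual blocks
        layers += [nn.ConvTranspose2d(base * 4, base * 2, 3, stride=2,
                                      padding=1, output_padding=1), nn.ReLU(inplace=True),
                   nn.ConvTranspose2d(base * 2, base, 3, stride=2,
                                      padding=1, output_padding=1), nn.ReLU(inplace=True),
                   nn.ReflectionPad2d(3),                    # second ReflectionPad
                   nn.Conv2d(base, in_ch, 7), nn.Tanh()]     # fifth convolution
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)  # raw perturbation v, same shape as the input

# For a 64x64 input, the output perturbation v has the same spatial size.
```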
Step 4: feed the generated adversarial example image a into the trained segmentation network f(·) to obtain the segmentation result f(a); compute the objective function and update the parameters of G by minimizing the objective function, obtaining the optimized perturbation generation network G(·).
Step 5: generate the perturbation u with the optimized network G(·) and add it to the original fundus blood vessel image x, obtaining the adversarial example image a = x + u.
Fig. 1 is a schematic diagram of the model framework for generating adversarial examples for fundus blood vessel image segmentation. The original fundus image x is input to the perturbation generation network G, which generates the perturbation u; u is added to the original image x to obtain the adversarial example image a, which is segmented by the trained segmentation network f(·) to obtain the segmentation result f(a).
This embodiment takes image-dependent adversarial attacks as an example: the perturbation generation network G converts the original image domain X into the adversarial image domain A. This attack is defined as follows.
Let X denote the distribution of fundus blood vessel images in the pixel space R^d, and let f be a segmentation model with high accuracy on X. The goal of an image-dependent adversarial attack is to find a perturbation generation network G that converts an original image x into an adversarial image a, with parameters satisfying:
f(G(x) + x) ≠ f(x),  s.t.  d(G(x) + x, x) ≤ ε   (1)
The generated adversarial image a = G(x) + x is required to be as similar as possible to the original image x, so the upper bound ε on d(G(x) + x, x), where d(·) is a distance metric, must be sufficiently small. It is worth noting that with image-dependent perturbations, each image x has its own corresponding perturbation G(x).
There are two different ways of finding the parameters of the perturbation generation network so that it satisfies the constraint of formula (1).
The first method uses the objective function:
L(a) = -L_f(c_a, c_x) + d(a, x)   (2)
where d(·) is a distance metric and L_f is the loss between the segmentation result label output by the segmentation network and the true class label; BCELoss is used in this embodiment. c_x = f(x) ∈ {0,1}^d denotes the label of the original fundus blood vessel image x in the pixel space R^d, and c_a = f(a) denotes the segmentation result label of the adversarial example a in pixel space.
The first part of formula (2), -L_f(c_a, c_x), ensures that the adversarial example can fool the pre-trained segmentation model; the second part, d(a, x), ensures that the perturbation is invisible to the human eye, i.e. sufficiently small.
In the other method, the parameters θ of the perturbation generation network G are optimized to generate an adversarial perturbation v; an L_p norm constraint is applied to v to obtain the small perturbation u, and u is added to the original image x to obtain the adversarial example image a. In this embodiment the norm constraint is ||u||_p ≤ ε, where ε is a preset distance threshold and ||·||_p is the L_p norm.
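One way to enforce the L_p norm constraint ||u||_p ≤ ε on the raw perturbation v can be sketched as follows; the patent does not spell out the exact projection, so element-wise clipping for p = ∞ and min-scaling for finite p are common choices assumed here:

```python
import numpy as np

def constrain_lp(v, eps, p=np.inf):
    """Return u with ||u||_p <= eps, obtained from the raw perturbation v."""
    v = np.asarray(v, dtype=float)
    if p == np.inf:
        return np.clip(v, -eps, eps)   # element-wise clipping for L_inf
    norm = np.linalg.norm(v.ravel(), ord=p)
    if norm > eps:
        v = v * (eps / norm)           # rescale v onto the L_p ball
    return v

# Example with an L_inf budget of 10 intensity levels (pixel values in [0, 255]):
v = np.array([40.0, -3.0, 12.0])
# constrain_lp(v, 10) -> [10., -3., 10.]
```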
For a non-targeted adversarial attack, no specific post-attack category is given for the image; the result is only required to differ from the original, so the class label c_a must differ from the original label c_x. Letting E(·) denote BCELoss, the loss function of a non-targeted image-dependent attack can be defined by:
L_f = L_non-targeted(θ) = log(E(c_a, c_x))   (3)
For a targeted adversarial attack, the specific post-attack category c_t of the image pixels is given; the loss function of a targeted image-dependent attack can be defined by:
L_f = L_targeted(θ) = log(E(c_a, c_t))   (4)
For image-dependent perturbations, this embodiment uses the non-targeted and targeted loss functions defined by formulas (3) and (4), chooses L∞ = 5, 10, 20 as the norm constraint on pixel values in [0, 255], and uses Adam to update the parameters of the perturbation generation model. The generated adversarial examples are then applied to the previously trained segmentation network model.
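The training procedure of step 4 can be sketched as follows, with Adam updating only the generator's parameters while the segmentation network stays frozen; the tiny stand-in networks, learning rate and image sizes below are placeholders for illustration, not the patent's:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
eps = 10.0 / 255.0                               # L_inf budget, pixels scaled to [0, 1]
seg_net = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())  # stand-in f(.)
gen_net = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())     # stand-in G
for p in seg_net.parameters():                   # f(.) is trained beforehand and frozen
    p.requires_grad_(False)
optimizer = torch.optim.Adam(gen_net.parameters(), lr=1e-3)
bce = nn.BCELoss()

x = torch.rand(2, 3, 32, 32)                     # batch of fundus images
c_x = (seg_net(x) > 0.5).float()                 # labels of the clean images

for step in range(3):                            # a few parameter updates
    v = gen_net(x)                               # raw perturbation v
    u = torch.clamp(v, -eps, eps)                # norm constraint -> small perturbation u
    a = torch.clamp(x + u, 0.0, 1.0)             # adversarial example a = x + u
    loss = torch.log(bce(seg_net(a), c_x))       # non-targeted L_f, formula (3)
    optimizer.zero_grad()
    (-loss).backward()                           # objective (2) uses -L_f, so maximize log E
    optimizer.step()
```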
Fig. 6 compares the effects of a non-targeted attack with an image-dependent perturbation: Figs. 6(a)-(e) are, respectively, the original fundus blood vessel image, the segmentation of the original image, the noise perturbation image, the adversarial example after adding the perturbation, and the segmentation of the adversarial example. The image after adding the perturbation shows essentially no visible difference from the original, but its segmentation, Fig. 6(e), is far from the original segmentation, Fig. 6(b).
Fig. 7 compares the effects of a targeted attack with an image-dependent perturbation; the target image used by the present invention is shown in Fig. 7(b). Figs. 7(a)-(e) are, respectively, the original fundus blood vessel image, the target image, the noise perturbation image, the adversarial example after adding the perturbation, and the segmentation of the adversarial example.
Embodiment 2:
This embodiment generates adversarial examples with a universal perturbation. A universal perturbation is a single fixed perturbation; unlike the per-image perturbations above, one universal perturbation is applied to all original images to fool their pre-trained segmentation model.
Let X denote the distribution of fundus blood vessel images in the pixel space R^d, and let f be a segmentation model with high accuracy on X. The goal of a universal perturbation is to find a fixed pattern u ∈ R^d such that, for x ∈ X:
f(x + u) ≠ f(x),  s.t.  ||u||_p ≤ σ   (5)
The parameter σ is a preset threshold, the upper bound of the L_p norm constraint on the universal perturbation u. To make the perturbation imperceptible to the human eye, σ must be sufficiently small. Fig. 5 is a schematic diagram of the model framework for generating adversarial examples in this embodiment.
By training the parameters θ of the perturbation generation network G, a network is obtained that converts a fixed noise z into the universal perturbation u; u is then added to an original image x to generate the adversarial example a, which is input to the previously trained segmentation model to output c_a. The loss functions used here are identical to the image-dependent loss functions defined by formulas (3) and (4). In this embodiment the norm constraint is ||u||_p ≤ σ.
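In this embodiment the generator's input is a fixed noise z rather than each image, so a single perturbation u = G(z) is shared by all images; a minimal sketch with stand-in networks and illustrative sizes:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
sigma = 10.0 / 255.0                              # threshold sigma of the L_inf constraint
gen_net = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())  # stand-in G
z = torch.rand(1, 3, 32, 32)                      # fixed noise z, sampled once

u = torch.clamp(gen_net(z), -sigma, sigma)        # one universal perturbation u = G(z)
images = torch.rand(4, 3, 32, 32)                 # a batch of different fundus images
adv = torch.clamp(images + u, 0.0, 1.0)           # the SAME u is added to every image
```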
For the universal perturbation, this embodiment uses the same loss functions as for image-dependent perturbations, chooses L∞ = 5, 10, 20 as the norm constraint on pixel values in [0, 255], and uses Adam to update the parameters of the perturbation generation model. Fig. 8 compares the effects of a non-targeted attack with a universal perturbation: Figs. 8(a)-(e) are, respectively, the original image, the original segmentation, the noise perturbation image, the adversarial example after adding the perturbation, and the segmentation of the adversarial example after adding the perturbation.
For the targeted attack, the present invention uses the same target image as for the image-dependent perturbation. Fig. 9 compares the effects of a targeted attack with a universal perturbation: Figs. 9(a)-(e) are, respectively, the original image, the target image, the noise perturbation image, the adversarial example after adding the perturbation, and the segmentation of the adversarial example. As Fig. 9 shows, the generation network produces noise similar to the target image that deceives the segmentation network.
The results of embodiments 1 and 2 strongly demonstrate the effectiveness of the proposed adversarial example generation method for fundus blood vessel segmentation.
Embodiment 3:
This embodiment uses the Dice coefficient before and after the attack to evaluate the security of the fundus blood vessel image segmentation network. For non-targeted attacks, the results are shown in Table 1.
Table 1: Dice coefficients after non-targeted attacks on the U-Net network (initial value 68.5%)
As can be seen from Table 1, compared with the Dice coefficient of 68.5% between the segmentation model's output and the manually labeled images before the attack, the Dice coefficient drops significantly after the attack. This shows that the segmentation network produces very different segmentation results for input images carrying a perturbation invisible to the human eye; the U-Net segmentation network is thus very fragile to adversarial perturbations and highly vulnerable to attack, and once attacked, its Dice coefficient drops sharply. For example, at L∞ = 20, the Dice coefficient decreases from 68.5% to 25.6% for the universal perturbation, and from 68.5% to 10.4% for the image-dependent perturbation.
For targeted attacks, the Dice coefficients are shown in Table 2.
Table 2: Dice coefficients after targeted attacks on the segmentation model (initial value 68.5%)
It can be seen that, evaluating the similarity between the segmented and manually labeled images at L∞ = 20, the Dice coefficient decreases from 68.5% to 34.9% for the universal perturbation and from 68.5% to 19.7% for the image-dependent perturbation.
In addition, for targeted attacks the effectiveness of the perturbation generation network can also be evaluated with the accuracy, defined as the ratio of the overlapping area of the segmented image and the target image to the total area of the segmented image. As shown in Table 3, for targeted attacks, the accuracy between the segmented and target images rises from 63.7% to 91.9% for the universal perturbation, and from 63.7% to 90.2% for the image-dependent perturbation; that is, the adversarial example images generated by the method of the present invention produce segmentations similar to the target image, and the attack is effective.
Table 3: Accuracy after targeted attacks deceive the segmentation model (compared with the target class, initial value 63.7%)

Claims (9)

1. An adversarial example generation method for fundus blood vessel image segmentation, characterized by comprising the steps of:
(1) establishing a fundus blood vessel image segmentation network f;
(2) acquiring original fundus blood vessel images, adding class labels to the acquired images, and labeling the blood vessels in the images; training the fundus blood vessel image segmentation network with the original fundus blood vessel images and the corresponding class labels and blood vessel labels to obtain a trained segmentation network f(·);
(3) building a perturbation generation network G, the perturbation generation network generating an adversarial perturbation v; applying a norm constraint to v to obtain a small perturbation; adding the small perturbation to the original fundus blood vessel image x to obtain an adversarial example image a;
(4) feeding the generated adversarial example image a into the trained segmentation network f(·) to obtain a segmentation result f(a); computing an objective function, and updating the parameters of the perturbation generation network G by minimizing the objective function to obtain an optimized perturbation generation network G(·);
(5) generating a perturbation u with the optimized perturbation generation network G(·) and adding it to the original fundus blood vessel image x to obtain the adversarial example image a = x + u.
2. The adversarial example generation method for fundus blood vessel image segmentation according to claim 1, characterized in that the fundus blood vessel image segmentation network is a U-Net comprising four down-convolution blocks, four up-convolution blocks and one sigmoid layer; each down-convolution block comprises two convolutional layers and one pooling layer; each up-convolution block comprises two convolutional layers, one deconvolution layer and one concatenation operation.
3. The adversarial example generation method for fundus blood vessel image segmentation according to claim 1, characterized in that the perturbation generation network in step (3) takes a ResNet as its backbone and comprises, connected in sequence, a first ReflectionPad layer, four convolutional layers, six residual blocks, two deconvolution layers, a second ReflectionPad layer and a fifth convolutional layer.
4. The fundus blood vessel image segmentation adversarial sample generation method according to claim 3, wherein the objective function in step (4) is:
L(a) = -Lf(ca, cx) + d(a, x)
where cx denotes the label of the original fundus blood vessel image x in the pixel space Rd; ca denotes the segmentation result label, in the pixel space, produced by the segmentation network f(·) on the adversarial sample image a of x; d(·) is a distance metric; and Lf is the loss function between the segmentation result label output by the fundus blood vessel image segmentation network and the true class label.
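The objective can be written out numerically as follows (a sketch assuming the non-targeted BCE-based loss of claim 5 for Lf and an L2 distance for d(·, ·); the test inputs are arbitrary stand-ins):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy (BCELoss), averaged over pixels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def objective(a, x, c_a, c_x):
    """L(a) = -Lf(c_a, c_x) + d(a, x): minimizing it pushes the
    segmentation of a away from that of x (the BCE term grows) while
    keeping a close to the original image x (the distance term shrinks)."""
    L_f = np.log(bce(c_a, c_x))                 # Lf as in claim 5
    return -L_f + np.linalg.norm((a - x).ravel())

x = np.zeros((4, 4)); a = x + 0.01              # adversarial image close to x
c_x = np.zeros((4, 4)); c_x[0, 0] = 1.0         # clean segmentation label
c_a = np.full((4, 4), 0.5)                      # adversarial segmentation output
val = objective(a, x, c_a, c_x)
```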
5. The fundus blood vessel image segmentation adversarial sample generation method according to claim 4, wherein the loss function is:
Lf = Lnon-targeted(θ) = log(E(ca, cx))
where θ denotes the parameters of the perturbation generation network; E(·) denotes the BCELoss; cx denotes the label of the original fundus blood vessel image x in the pixel space Rd; and ca denotes the segmentation result label, in the pixel space, produced by the segmentation network f(·) on the adversarial sample image a of x.
6. The fundus blood vessel image segmentation adversarial sample generation method according to claim 4, wherein the loss function is:
Lf = Ltargeted(θ) = log(E(ca, ct))
where θ denotes the parameters of the perturbation generation network; E(·) denotes the BCELoss; ct denotes the specified class, in the pixel space, to which the adversarial sample image a is to be assigned; and ca denotes the segmentation result label, in the pixel space, produced by the segmentation network f(·) on the adversarial sample image a.
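Claims 5 and 6 differ only in the reference map fed to the loss; a minimal numpy sketch (the `bce` function is an assumption standing in for `torch.nn.BCELoss`):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy, averaged over pixels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def loss_non_targeted(c_a, c_x):
    # claim 5: Lf = log(E(c_a, c_x)); used with a minus sign in the
    # objective, so minimizing drives c_a away from the clean result c_x
    return np.log(bce(c_a, c_x))

def loss_targeted(c_a, c_t):
    # claim 6: Lf = log(E(c_a, c_t)); pulls the segmentation of the
    # adversarial image toward the specified target map c_t
    return np.log(bce(c_a, c_t))

c_x = np.zeros((2, 2))             # stand-in clean segmentation label
c_a = np.full((2, 2), 0.5)         # stand-in adversarial segmentation output
```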
7. The fundus blood vessel image segmentation adversarial sample generation method according to claim 1, wherein the input of the perturbation generation network is either an original fundus blood vessel image or fixed noise z, from which an adversarial sample image is obtained.
8. The fundus blood vessel image segmentation adversarial sample generation method according to claim 1, wherein the norm constraint is:
||v||p ≤ ε
where ε is a preset distance threshold and ||·||p is the Lp norm.
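One common way to realize the Lp-norm constraint is to rescale the raw perturbation onto the ε-ball (a sketch; the patent does not spell out the projection step, so the rescaling rule here is an assumption):

```python
import numpy as np

def constrain_norm(v, eps, p=2):
    """Rescale v so that ||v||_p <= eps; leave it unchanged if it
    already satisfies the constraint."""
    norm = np.linalg.norm(v.ravel(), ord=p)
    return v if norm <= eps else v * (eps / norm)

v = np.array([3.0, 4.0])              # ||v||_2 = 5
u = constrain_norm(v, eps=1.0)        # scaled onto the unit L2 ball
```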
9. A fundus blood vessel image segmentation network security evaluation method, comprising:
(9.1) generating an adversarial sample image of an original fundus blood vessel image using the fundus blood vessel image segmentation adversarial sample generation method according to any one of claims 1-8;
(9.2) segmenting the adversarial sample image with the trained fundus blood vessel image segmentation network and computing the Dice coefficient; the higher the Dice coefficient, the higher the security of the fundus blood vessel image segmentation network.
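The Dice coefficient in step (9.2) compares the segmentation of the adversarial image against the reference vessel mask; a minimal binary-mask version (the example masks are arbitrary stand-ins):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice = 2|P & T| / (|P| + |T|): 1.0 means the attack did not
    change the segmentation at all; lower values mean the network is
    easier to fool, i.e. less secure."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

pred = np.array([[1, 1], [0, 0]])     # segmentation of the adversarial image
truth = np.array([[1, 0], [0, 0]])    # reference vessel mask
score = dice_coefficient(pred, truth)
```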
CN201910608656.4A 2019-07-08 2019-07-08 Optical fundus blood vessel image segmentation fights sample generating method, segmentation network security evaluation method Pending CN110503650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910608656.4A CN110503650A (en) 2019-07-08 2019-07-08 Optical fundus blood vessel image segmentation fights sample generating method, segmentation network security evaluation method


Publications (1)

Publication Number Publication Date
CN110503650A true CN110503650A (en) 2019-11-26

Family

ID=68585479


Country Status (1)

Country Link
CN (1) CN110503650A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296692A (en) * 2016-08-11 2017-01-04 深圳市未来媒体技术研究院 Image significance detection method based on antagonism network
CN108322349A (en) * 2018-02-11 2018-07-24 浙江工业大学 The deep learning antagonism attack defense method of network is generated based on confrontation type
US20190005386A1 (en) * 2017-07-01 2019-01-03 Intel Corporation Techniques for training deep neural networks


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
OMID POURSAEED等: "Generative Adversarial Perturbations", 《IEEE》 *
田娟秀等: "医学图像分析深度学习方法研究与挑战", 《自动化学报》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750128A (en) * 2019-12-13 2021-05-04 腾讯科技(深圳)有限公司 Image semantic segmentation method and device, terminal and readable storage medium
CN112750128B (en) * 2019-12-13 2023-08-01 腾讯科技(深圳)有限公司 Image semantic segmentation method, device, terminal and readable storage medium
CN111340066A (en) * 2020-02-10 2020-06-26 电子科技大学 Confrontation sample generation method based on geometric vector
CN111340066B (en) * 2020-02-10 2022-05-31 电子科技大学 Confrontation sample generation method based on geometric vector
CN113378118A (en) * 2020-03-10 2021-09-10 百度在线网络技术(北京)有限公司 Method, apparatus, electronic device, and computer storage medium for processing image data
CN113378118B (en) * 2020-03-10 2023-08-22 百度在线网络技术(北京)有限公司 Method, apparatus, electronic device and computer storage medium for processing image data
CN114444509A (en) * 2022-04-02 2022-05-06 腾讯科技(深圳)有限公司 Method, device and equipment for testing named entity recognition model and storage medium
CN114444509B (en) * 2022-04-02 2022-07-12 腾讯科技(深圳)有限公司 Method, device and equipment for testing named entity recognition model and storage medium

Similar Documents

Publication Publication Date Title
CN110503650A (en) Optical fundus blood vessel image segmentation fights sample generating method, segmentation network security evaluation method
Waheed et al. Covidgan: data augmentation using auxiliary classifier gan for improved covid-19 detection
US11842487B2 (en) Detection model training method and apparatus, computer device and storage medium
CN105718952B (en) The system that lesion classification is carried out to tomography medical image using deep learning network
CN110516695A (en) Confrontation sample generating method and system towards Medical Images Classification
US20230360313A1 (en) Autonomous level identification of anatomical bony structures on 3d medical imagery
Sirish Kaushik et al. Pneumonia detection using convolutional neural networks (CNNs)
Aamir et al. An adoptive threshold-based multi-level deep convolutional neural network for glaucoma eye disease detection and classification
WO2017207138A1 (en) Method of training a deep neural network
Estrada et al. Exploratory Dijkstra forest based automatic vessel segmentation: applications in video indirect ophthalmoscopy (VIO)
Jin et al. Construction of retinal vessel segmentation models based on convolutional neural network
Jaszcz et al. Lung x-ray image segmentation using heuristic red fox optimization algorithm
Li et al. Superpixel-guided label softening for medical image segmentation
Jain et al. Lung nodule segmentation using salp shuffled shepherd optimization algorithm-based generative adversarial network
Liu et al. ELHnet: a convolutional neural network for classifying cochlear endolymphatic hydrops imaged with optical coherence tomography
Shrivastava et al. A Comprehensive Analysis of Machine Learning Techniques in Biomedical Image Processing Using Convolutional Neural Network
Popescu et al. Retinal blood vessel segmentation using pix2pix gan
Boutillon et al. Combining shape priors with conditional adversarial networks for improved scapula segmentation in MR images
Liu et al. TSSK-Net: Weakly supervised biomarker localization and segmentation with image-level annotation in retinal OCT images
Mujeeb Rahman et al. Automatic screening of diabetic retinopathy using fundus images and machine learning algorithms
Zheng et al. Deep level set method for optic disc and cup segmentation on fundus images
CN111080676B (en) Method for tracking endoscope image sequence feature points through online classification
Wu et al. Oval Shape Constraint based Optic Disc and Cup Segmentation in Fundus Photographs.
Chen et al. Region-segmentation strategy for Bruch’s membrane opening detection in spectral domain optical coherence tomography images
Stolte et al. DOMINO: Domain-aware model calibration in medical image segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191126