CN117994266A - Intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation - Google Patents

Intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation

Info

Publication number
CN117994266A
Authority
CN
China
Prior art keywords
domain
segmentation
image
picture
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311805255.0A
Other languages
Chinese (zh)
Inventor
徐晓
李甦雁
吴亮
杨旭
尹雨晴
牛亮
王奥运
牛强
余颖
廖弘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FIRST PEOPLE'S HOSPITAL OF XUZHOU
China University of Mining and Technology CUMT
Original Assignee
FIRST PEOPLE'S HOSPITAL OF XUZHOU
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FIRST PEOPLE'S HOSPITAL OF XUZHOU and China University of Mining and Technology CUMT
Priority to CN202311805255.0A
Publication of CN117994266A
Legal status: Pending


Landscapes

  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses an intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation. The method first performs data acquisition and preprocessing; it then constructs a generative adversarial network comprising generators and discriminators, used to map samples between domains and to judge authenticity, with adversarial training that optimizes several loss functions; finally, a UNet segmentation model is constructed, pre-trained, and fine-tuned to achieve high-precision fundus image segmentation. By combining adversarial domain adaptation with deep-learning segmentation, the method improves the adaptability and robustness of the model when processing images of different quality, as well as the accuracy and efficiency of segmentation. In practical applications, even images from lower-quality imaging devices can be segmented accurately, providing ophthalmologists with more reliable diagnostic information.

Description

Intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation
Technical Field
The invention relates to a fundus color photograph segmentation method, in particular to an intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation, and belongs to the technical field of medical image processing and computer vision.
Background
In recent years, significant advances have been made in medical artificial intelligence, and deep neural networks have matched or exceeded the accuracy of clinical professionals in a variety of applications. Convolutional neural networks in particular are widely applied: they achieve impressive results on tasks such as image classification, object detection, and segmentation, and also show strong potential in fundus image analysis. Medical image segmentation is widely recognized as a key step in clinical diagnosis, analysis, and treatment planning.
With the explosive growth of medical image data, traditional manual segmentation methods become increasingly impractical. Manual segmentation is not only time-consuming and labor-intensive but also subjective to some extent, being susceptible to operator skill and experience. Automated image segmentation not only saves a great deal of time but also provides more consistent and objective results. Neural networks play a key role in fundus photograph segmentation and recognition, and the segmentation task is essentially pixel-level classification. However, acquiring a training set typically requires a professional physician to label high-quality pictures, and labeling is labor-intensive.
Fundus color photographs are important medical images used for diagnosing and monitoring various fundus lesions. However, due to differences in imaging equipment and techniques, the resulting images often differ significantly in quality and characteristics. This discrepancy poses a significant challenge for automated image processing and lesion recognition. In particular, automatic fundus image segmentation has to handle images with different resolutions, contrasts, brightness, and color saturation, and these factors directly influence the effectiveness of a segmentation algorithm and therefore the detection and classification of lesions. On the other hand, in actual medical practice, high-quality fundus color photographs are often difficult to acquire, limited in quantity, and expensive to obtain, and physicians usually annotate such pictures; as a result, directly transferring a network model trained on high-quality pictures to segment low-quality pictures often gives unsatisfactory results. For example, fundus color photographs may come either from professional fundus cameras or from portable fundus cameras. Professional fundus cameras are expensive, are commonly found in tertiary Grade-A hospitals, provide a large number of samples for research, and produce higher-quality pictures. Portable fundus cameras are often used in community hospitals, and because the pictures they acquire are of low quality, it is usually difficult to train artificial intelligence models directly on them. In general, the segmentation network model can only be trained on pictures from professional fundus cameras, and if the trained model is applied directly to segment low-quality pictures, the effect is poor.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides an intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation, which can improve the efficiency and accuracy of fundus color photograph segmentation.
In order to achieve the above purpose, the intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation specifically comprises the following steps:
Step 1, preprocessing an input fundus image through an image preprocessing module;
Step 2, representing the data distribution of data source X as x ~ p_data(x), with training samples {x_i}_{i=1}^{N} from data source X, and the data distribution of data source Y as y ~ p_data(y), with training samples {y_j}_{j=1}^{M} from data source Y, and constructing the generative adversarial network;
Step 3, performing adversarial training to obtain the final adversarial training loss function;
Step 4, constructing a UNet segmentation model and carrying out segmentation training;
Step 5, joint fine-tuning to obtain the final segmentation loss function;
Step 6, putting into use: the doctor first runs whole-image recognition on pictures from the portable fundus camera to obtain a recognition result for the whole fundus image, then manually marks and redraws the small regions where the recognition result is problematic, and the corrected pictures are fed directly into the UNet part for relearning and fine-tuning.
Further, the specific process of Step 2 is as follows:
S2-1: building a generator G and a generator F, wherein the generator G aims at mapping the picture of the domain X to the domain Y, and the generator F aims at mapping the picture of the domain Y to the domain X;
S2-2: constructing discriminators D X and D Y, wherein D X is used for discriminating an image { X } and a domain generated image { F (y) }, and distinguishing whether a picture is from a picture generated by a generator F or from a picture of an original domain X; d Y is used to distinguish between the image { Y } and the domain-generated image { G (x) }, whether the picture is from the picture generated by generator G or from the picture of the original domain Y.
Further, the specific process of Step 3 is as follows:
S3-1, for the mapping generator G and its discriminator D_Y, the adversarial loss is constructed as follows:
L_G(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 - D_Y(G(x)))]
where G is used to generate an image G(x) similar to domain Y; D_Y is used to distinguish the generated sample G(x) from the real sample y; x represents a sample from domain X; y represents a sample from domain Y;
S3-2, for the mapping generator F and its discriminator D_X, the adversarial loss is constructed as follows:
L_G(F, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 - D_X(F(y)))]
where F is used to generate an image F(y) similar to domain X; D_X is used to distinguish the generated sample F(y) from the real sample x; x represents a sample from domain X; y represents a sample from domain Y;
S3-3, constructing the joint consistency loss:
L_C(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1]
where G and F denote the two generators; x represents a sample from domain X; y represents a sample from domain Y; ||·||_1 denotes the L1 norm;
S3-4, the final loss function of the adversarial training is as follows:
L(G, F, D_X, D_Y) = L_G(G, D_Y, X, Y) + L_G(F, D_X, Y, X) + λ L_C(G, F)
where λ is a hyper-parameter used to control the importance of the joint consistency loss;
then, the final goal of the adversarial training is:
G*, F* = arg min_{G,F} max_{D_X,D_Y} L(G, F, D_X, D_Y)
where G and F denote the initial two generators; D_X is used to distinguish the image {x} from the domain-generated image {F(y)}; D_Y is used to distinguish the image {y} from the domain-generated image {G(x)}; G* and F* denote the resulting two generators.
Further, the specific process of Step 4 is as follows:
S4-1, constructing a UNet segmentation model, denoted h(·, ω), where ω is the weight of the segmentation model;
S4-2, constructing the cross-entropy loss function: L_h(x, y) = -y log(h(x, ω))
where x represents an original picture from domain X and y represents the segmentation ground-truth picture corresponding to x;
S4-3, segmentation training: pictures from domain X are fed into the UNet for pre-training.
Further, the specific process of Step 5 is as follows:
S5-1, feeding pictures from domain X into generator G to obtain G(x), feeding G(x) into the UNet for segmentation, and finally obtaining the segmentation result h(G(x), ω);
S5-2, defining the cross-entropy loss: L_h(G(x), y) = -y log(h(G(x), ω))
where x represents an original picture from domain X; G(x) is the image generated by G to resemble domain Y; y represents the segmentation ground-truth picture corresponding to x;
S5-3, defining the combined loss function
L_total = L(G, F, D_X, D_Y) + α L_h(G(x), y)
        = L_G(G, D_Y, X, Y) + L_G(F, D_X, Y, X) + λ L_C(G, F) - α y log(h(G(x), ω))
where L(G, F, D_X, D_Y) represents the adversarial training loss, including the adversarial losses and the joint consistency loss; L_h(G(x), y) is the cross-entropy loss of the segmentation model; α is a hyper-parameter controlling the weight of the UNet segmentation loss.
Compared with the prior art, the intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation has the following advantages:
1. Enhanced generalization capability: through domain adaptation, the method can effectively process fundus color photographs from different devices and of different quality, which not only improves the flexibility of the model on diverse data but also keeps its performance stable in different clinical environments.
2. Improved segmentation precision: the segmentation algorithm designed for fundus photographs identifies blood vessels and other critical structures with high accuracy, which is essential for early diagnosis of fundus lesions and the formulation of treatment plans.
3. Reduced reliance on high-quality equipment: the method lowers the dependence on high-end fundus photography equipment, so that high-quality fundus lesion detection and analysis can also be performed in resource-limited environments.
Drawings
FIG. 1 is a flow chart of a network architecture of the present invention;
FIG. 2 is a diagram of data preprocessing of the present invention;
FIG. 3 is a block diagram of the generative adversarial network model of the present invention;
FIG. 4 is a diagram of the UNet model structure of the present invention;
FIG. 5 is a diagram showing the recognition effect of the present invention.
Detailed Description
Using adversarial domain adaptation within a generative adversarial network (GAN) framework, the invention solves the problem that a segmentation network model trained on high-quality pictures obtained from a professional fundus camera performs poorly when applied directly to the segmentation of low-quality pictures. The adversarial network comprises two main components: a generator and a discriminator. The goal of the generator is to produce realistic images that the discriminator cannot tell apart, while the discriminator attempts to distinguish real images from those produced by the generator. In fundus photograph applications, this technique can be used to generate pictures closer to the desired domain, i.e., more consistent images, thereby improving the effectiveness of the subsequent segmentation algorithm.
The intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation comprises two modules: a generative adversarial learning module and a UNet instance segmentation module. The generative adversarial learning module adopts generative adversarial network (GAN) technology, applying adversarial learning to a generation task by constructing two generators, each taking fundus color photograph images from a different domain as input. This technology is mainly used to convert pictures between the high-quality fundus color photograph domain and the low-quality fundus color photograph domain, and the joint consistency loss is used to achieve pixel-level correspondence. It applies in particular to conversion between different picture domains: in short, a high-quality picture is converted by one generator into a low-quality picture and then converted back into a high-quality picture, where the resulting picture needs to be as consistent as possible with the original. Because of differences in imaging devices and technologies, generative adversarial network technology is particularly helpful in improving picture-domain conversion; by considering the distribution of the images, it helps to better capture the features of these domains, thereby improving the overall quality of the converted fundus photograph images. The UNet instance segmentation module adopts UNet, a convolutional neural network architecture designed specifically for medical image segmentation. UNet is characterized by its symmetrical structure, which facilitates accurate capture of image details in segmentation tasks. In fundus photograph segmentation, UNet can effectively identify and segment different retinal structures, including blood vessels and lesions. Its efficient feature extraction allows high segmentation accuracy to be maintained even when the quality of the processed images varies greatly.
The present invention is further described below with reference to the accompanying drawings, taking the segmentation of fundus blood vessels in fundus color photographs as an example.
As shown in FIG. 1, the intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation first performs data acquisition and preprocessing, including fundus reflection removal, histogram equalization, denoising, and color normalization, in order to improve image quality; it then constructs a generative adversarial network comprising generators and discriminators, used to map samples between domains and to judge authenticity, with adversarial training that optimizes several loss functions; a UNet blood vessel segmentation model is then constructed, pre-trained, and fine-tuned, finally achieving high-precision fundus blood vessel segmentation. The method comprises the following steps:
Step 1, data acquisition and preprocessing
S1-1, collecting two types of fundus color photograph image data: one type comes from a professional fundus camera and the other from a portable fundus camera. Pictures from the professional fundus camera are called data source X, and pictures from the portable fundus camera are called data source Y.
S1-2, performing expert segmentation labeling on the images from data source X, as shown in FIG. 2, and then performing the following data preprocessing on the images of data source Y to improve image quality before network training.
S1-2-1, removing fundus reflection: the optic disc (central portion of the fundus) in a fundus photograph typically introduces large brightness variations, and the influence of this area sometimes needs to be removed or attenuated in order to better analyze other parts of the retina.
S1-2-2, histogram equalization: improving the contrast of the image. Histogram equalization (in particular on the luminance channel) is applied to improve the global contrast of the image.
S1-2-3, denoising: fundus photographs may be affected by various kinds of noise, such as lighting noise and artifacts. Denoising methods include median filtering, Gaussian filtering, and wavelet denoising.
S1-2-4, color normalization: the color and brightness of fundus images may differ depending on photographing conditions, so color normalization is required to keep color and brightness consistent across images; an illustrative preprocessing sketch is given below.
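The following Python sketch illustrates one possible form of the preprocessing just described (histogram equalization on the luminance channel, median-filter denoising, and simple per-channel color normalization). It relies on OpenCV and NumPy; the function name, parameter choices, and file path are illustrative assumptions and not part of the patent.

```python
import cv2
import numpy as np

def preprocess_fundus(bgr_image: np.ndarray) -> np.ndarray:
    """Illustrative preprocessing for a low-quality fundus photograph (data source Y)."""
    # Histogram equalization on the luminance channel only (S1-2-2).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # Median filtering to suppress shot noise and small artifacts (S1-2-3).
    denoised = cv2.medianBlur(equalized, 3)

    # Simple per-channel normalization to zero mean / unit variance,
    # then rescaling back to the 8-bit range (S1-2-4).
    img = denoised.astype(np.float32)
    mean, std = img.mean(axis=(0, 1)), img.std(axis=(0, 1)) + 1e-6
    normalized = (img - mean) / std
    normalized = cv2.normalize(normalized, None, 0, 255, cv2.NORM_MINMAX)
    return normalized.astype(np.uint8)

# Example usage (the file name is hypothetical):
# out = preprocess_fundus(cv2.imread("fundus_y_0001.png"))
```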
Step 2, as shown in FIG. 3, constructing the generative adversarial network
The data distribution of data source X is represented as x ~ p_data(x), with training samples {x_i}_{i=1}^{N} from data source X; the data distribution of data source Y is represented as y ~ p_data(y), with training samples {y_j}_{j=1}^{M} from data source Y.
S2-1: generators G and F are constructed. The goal of generator G is to map the pictures of domain X to domain Y, and the goal of generator F is to map the pictures of domain Y to domain X.
S2-2: discriminators D X and D Y are constructed. Wherein D X is intended to distinguish between the image { X } and the domain-generated image { F (y) }, whether the picture comes from the picture generated by generator F or from the picture of original domain X; similarly, D Y is intended to distinguish between an image { Y } and a domain-generated image { G (x) }, whether the picture is from the picture generated by generator G or from the picture of the original domain Y.
Note that: the generation network contains three convolutions, several residual blocks, two fractional order convolutions with steps of 1/2, and one convolution mapping features to RGB. We use 9 blocks for training images of 256 x 256 and higher resolution. Example normalization was used. For the arbiter network we use 70 x 70PATCHGANS, the purpose of which is to classify the true or false of 70 x 70 overlapping image patches. Such a patch level discriminator architecture has fewer parameters than a full image discriminator and can process images of arbitrary size in a full convolutional network.
Step 3, adversarial training
S3-1, for the mapping generator G and its discriminator D_Y, the adversarial loss is constructed as follows:
L_G(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 - D_Y(G(x)))]
where G is used to generate an image G(x) similar to domain Y, and D_Y aims to distinguish the generated sample G(x) from the real sample y; x represents a sample from domain X, i.e., a picture from the professional fundus camera; y represents a sample from domain Y, i.e., a picture from the portable fundus camera.
S3-2, for the mapping generator F and its discriminator D_X, the adversarial loss is constructed as follows:
L_G(F, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 - D_X(F(y)))]
where F is used to generate an image F(y) similar to domain X, and D_X aims to distinguish the generated sample F(y) from the real sample x; x represents a sample from domain X, i.e., a picture from the professional fundus camera; y represents a sample from domain Y, i.e., a picture from the portable fundus camera.
S3-3, constructing the joint consistency loss:
L_C(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1]
where G and F denote the two generators; x represents a sample from domain X, i.e., a picture from the professional fundus camera; y represents a sample from domain Y, i.e., a picture from the portable fundus camera; ||·||_1 denotes the L1 norm.
S3-4, the final loss function of the adversarial training is as follows:
L(G, F, D_X, D_Y) = L_G(G, D_Y, X, Y) + L_G(F, D_X, Y, X) + λ L_C(G, F)
where λ is a hyper-parameter used to control the importance of the joint consistency loss.
Thus, the final goal of the adversarial training is:
G*, F* = arg min_{G,F} max_{D_X,D_Y} L(G, F, D_X, D_Y)
where G and F denote the initial two generators; D_X is used to distinguish the image {x} from the domain-generated image {F(y)}; D_Y is used to distinguish the image {y} from the domain-generated image {G(x)}; G* and F* denote the resulting two generators.
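A minimal PyTorch sketch of these loss terms is given below. It assumes generator and discriminator modules such as those sketched after Step 2; the function names, the use of BCEWithLogitsLoss for the log-likelihood terms, and the default value lam=10.0 for λ are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # realizes the log D(.) and log(1 - D(.)) terms
l1 = nn.L1Loss()               # realizes the || . ||_1 terms

def adversarial_d_loss(disc, real, fake):
    """Discriminator side of the adversarial loss: log D(real) + log(1 - D(fake))."""
    pred_real = disc(real)
    pred_fake = disc(fake.detach())
    return bce(pred_real, torch.ones_like(pred_real)) + \
           bce(pred_fake, torch.zeros_like(pred_fake))

def adversarial_g_loss(disc, fake):
    """Generator side: push the discriminator toward labeling fake samples as real."""
    pred_fake = disc(fake)
    return bce(pred_fake, torch.ones_like(pred_fake))

def joint_consistency_loss(G, F, x, y):
    """L_C(G, F) = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1."""
    return l1(F(G(x)), x) + l1(G(F(y)), y)

def generator_objective(G, F, D_X, D_Y, x, y, lam=10.0):
    """Generator part of L(G, F, D_X, D_Y); lam corresponds to the hyper-parameter lambda."""
    fake_y, fake_x = G(x), F(y)
    return (adversarial_g_loss(D_Y, fake_y)
            + adversarial_g_loss(D_X, fake_x)
            + lam * joint_consistency_loss(G, F, x, y))
```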
Step 4, constructing the blood vessel segmentation model
S4-1, as shown in FIG. 4, a UNet segmentation model is constructed, denoted h(·, ω), where ω is the weight of the segmentation model.
S4-2, the cross-entropy loss function is constructed: L_h(x, y) = -y log(h(x, ω))
where x represents an original picture from domain X and y represents the segmentation ground-truth picture corresponding to x.
S4-3, segmentation training: pictures from domain X (i.e., from the professional fundus camera) are fed into the UNet for pre-training so that the model has a good starting point on the segmentation task.
Note that: the split network changes the encoder layer to conventional VGGs 16 and ResNet on the Unet basis, thereby increasing the ability and depth of the network to extract features.
Step 5, joint fine-tuning
S5-1, a picture from domain X (i.e., from the professional fundus camera) is fed into generator G to obtain G(x); G(x) is fed into the UNet for segmentation, finally giving the segmentation result h(G(x), ω).
S5-2, the cross-entropy loss is defined: L_h(G(x), y) = -y log(h(G(x), ω))
where x represents an original picture from domain X; G(x) is the image generated by G to resemble domain Y; y represents the segmentation ground-truth picture corresponding to x.
S5-3, the combined loss function is defined and consists of the following parts:
Adversarial loss: evaluates the authenticity of the generated images.
Joint consistency loss: ensures that the original style of an image can be restored after style conversion.
Segmentation loss: evaluates the accuracy of the fundus vessel segmentation.
The final loss function is defined as:
L_total = L(G, F, D_X, D_Y) + α L_h(G(x), y)
        = L_G(G, D_Y, X, Y) + L_G(F, D_X, Y, X) + λ L_C(G, F) - α y log(h(G(x), ω))
where L(G, F, D_X, D_Y) represents the adversarial training loss, including the adversarial losses and the joint consistency loss; L_h(G(x), y) is the cross-entropy loss of the segmentation model; α is a hyper-parameter controlling the weight of the UNet segmentation loss.
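Building on the sketches given after Steps 3 and 4 (reusing adversarial_d_loss, generator_objective, and criterion), the following illustrative training step combines the adversarial objective with the segmentation term α·L_h(G(x), y). Alternating the discriminator and generator updates, the separate optimizers, and the defaults lam=10.0 and alpha=1.0 are assumptions for illustration, not values stated in the patent.

```python
def joint_finetune_step(G, F, D_X, D_Y, seg_model, x, y_img, y_mask,
                        opt_gen, opt_disc, opt_seg, lam=10.0, alpha=1.0):
    """One joint fine-tuning step: x and y_img are image batches from domains X and Y,
    y_mask is the vessel ground truth for x (labels exist only for domain X)."""
    # 1. Update the discriminators with L_G(G, D_Y, X, Y) and L_G(F, D_X, Y, X).
    fake_y, fake_x = G(x), F(y_img)
    d_loss = adversarial_d_loss(D_Y, y_img, fake_y) + adversarial_d_loss(D_X, x, fake_x)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2. Update generators and segmentation model with
    #    L_total = L(G, F, D_X, D_Y) + alpha * L_h(G(x), y_mask).
    fake_y = G(x)
    gen_loss = generator_objective(G, F, D_X, D_Y, x, y_img, lam)
    seg_loss = criterion(seg_model(fake_y), y_mask)   # cross-entropy on h(G(x), w)
    total = gen_loss + alpha * seg_loss
    opt_gen.zero_grad(); opt_seg.zero_grad()
    total.backward()
    opt_gen.step(); opt_seg.step()
    return total.item()
```

Here opt_gen is assumed to hold the parameters of G and F, opt_disc those of D_X and D_Y, and opt_seg those of the UNet.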
Step6: put into use
As shown in FIG. 5, which illustrates the recognition effect of the invention, the model can continue to learn from pictures of domain Y after the method is put into use, so that its segmentation and recognition accuracy keeps improving during use. The doctor first runs whole-image recognition on pictures from the portable fundus camera to obtain the recognition result for the whole fundus image; the doctor then manually marks and redraws the small regions where the recognition result is problematic, and these corrected pictures can be fed directly into the UNet part of the network structure for relearning and fine-tuning, so that the model segments pictures from domain Y better and better.
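As an illustration of this clinician-in-the-loop relearning, the sketch below fine-tunes only the UNet part on (domain-Y image, physician-corrected mask) pairs, reusing criterion and torch from the earlier sketches; the learning rate, epoch count, and data format are assumptions.

```python
def relearn_from_corrections(seg_model, corrected_loader, epochs: int = 5,
                             lr: float = 1e-5, device: str = "cuda"):
    """Step 6: fine-tune the UNet on physician-corrected masks from domain Y."""
    optimizer = torch.optim.Adam(seg_model.parameters(), lr=lr)
    seg_model.to(device).train()
    for _ in range(epochs):
        for img_y, corrected_mask in corrected_loader:
            img_y, corrected_mask = img_y.to(device), corrected_mask.to(device)
            loss = criterion(seg_model(img_y), corrected_mask)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```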
The intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation is a fundus image processing method that combines GAN and U-Net technology. The method first uses the GAN to process fundus color photographs from different imaging devices, improving image quality and adapting them for subsequent segmentation. By learning the mapping between two different domains (high-quality and low-quality images), the GAN can convert a low-quality fundus image into a style close to that of a high-quality image, a step that is critical for subsequent image segmentation.
The U-Net model is then used to precisely segment the images processed by the GAN. The U-Net model is known for its excellent image segmentation capability, especially in the field of medical image processing. Through its distinctive symmetrical structure and skip connections, the model can effectively capture fine features in the image, yielding more accurate segmentation results. This is particularly important for identifying lesion areas in fundus images, as these areas often contain critical diagnostic information.
In addition, the method combines adversarial domain adaptation with deep-learning segmentation. This combination not only improves the adaptability and robustness of the model when processing images of different quality but also improves the accuracy and efficiency of segmentation; in practical applications it means that even images from lower-quality imaging devices can be segmented accurately, providing ophthalmologists with more reliable diagnostic information. With this method, doctors can identify common fundus diseases such as diabetic retinopathy and macular degeneration more quickly and accurately, which is of great significance for early diagnosis and timely treatment, because early detection of fundus diseases is usually closely linked to better treatment outcomes. The method is suitable not only for the high-end imaging equipment of hospitals and clinics but also for the ordinary imaging equipment of primary medical and health institutions; especially in resource-limited areas, it can provide accurate fundus lesion detection and analysis for a wider patient population, and thus has important practical value in clinical application.

Claims (5)

1. An intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation, characterized by comprising the following steps:
Step 1, preprocessing an input fundus image through an image preprocessing module;
Step 2, representing the data distribution of data source X as x ~ p_data(x), with training samples {x_i}_{i=1}^{N} from data source X, and the data distribution of data source Y as y ~ p_data(y), with training samples {y_j}_{j=1}^{M} from data source Y, and constructing the generative adversarial network;
Step 3, performing adversarial training to obtain the final adversarial training loss function;
Step 4, constructing a UNet segmentation model and carrying out segmentation training;
Step 5, joint fine-tuning to obtain the final segmentation loss function;
Step 6, putting into use: the doctor first runs whole-image recognition on pictures from the portable fundus camera to obtain a recognition result for the whole fundus image, then manually marks and redraws the small regions where the recognition result is problematic, and the corrected pictures are fed directly into the UNet part for relearning and fine-tuning.
2. The intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation according to claim 1, wherein the specific process of Step 2 is as follows:
S2-1: building a generator G and a generator F, wherein the generator G aims at mapping the picture of the domain X to the domain Y, and the generator F aims at mapping the picture of the domain Y to the domain X;
S2-2: constructing discriminators D X and D Y, wherein D X is used for discriminating an image { X } and a domain generated image { F (y) }, and distinguishing whether a picture is from a picture generated by a generator F or from a picture of an original domain X; d Y is used to distinguish between the image { Y } and the domain-generated image { G (x) }, whether the picture is from the picture generated by generator G or from the picture of the original domain Y.
3. The intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation according to claim 2, wherein the specific process of Step 3 is as follows:
S3-1, for the mapping generator G and its discriminator D_Y, the adversarial loss is constructed as follows:
L_G(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 - D_Y(G(x)))]
where G is used to generate an image G(x) similar to domain Y; D_Y is used to distinguish the generated sample G(x) from the real sample y; x represents a sample from domain X; y represents a sample from domain Y;
S3-2, for the mapping generator F and its discriminator D_X, the adversarial loss is constructed as follows:
L_G(F, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 - D_X(F(y)))]
where F is used to generate an image F(y) similar to domain X; D_X is used to distinguish the generated sample F(y) from the real sample x; x represents a sample from domain X; y represents a sample from domain Y;
S3-3, constructing the joint consistency loss:
L_C(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1]
where G and F denote the two generators; x represents a sample from domain X; y represents a sample from domain Y; ||·||_1 denotes the L1 norm;
S3-4, the final loss function of the adversarial training is as follows:
L(G, F, D_X, D_Y) = L_G(G, D_Y, X, Y) + L_G(F, D_X, Y, X) + λ L_C(G, F)
where λ is a hyper-parameter used to control the importance of the joint consistency loss;
then, the final goal of the adversarial training is:
G*, F* = arg min_{G,F} max_{D_X,D_Y} L(G, F, D_X, D_Y)
where G and F denote the initial two generators; D_X is used to distinguish the image {x} from the domain-generated image {F(y)}; D_Y is used to distinguish the image {y} from the domain-generated image {G(x)}; G* and F* denote the resulting two generators.
4. The intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation according to claim 3, wherein the specific process of Step 4 is as follows:
S4-1, constructing a UNet segmentation model, denoted h(·, ω), where ω is the weight of the segmentation model;
S4-2, constructing the cross-entropy loss function: L_h(x, y) = -y log(h(x, ω))
where x represents an original picture from domain X and y represents the segmentation ground-truth picture corresponding to x;
S4-3, segmentation training: pictures from domain X are fed into the UNet for pre-training.
5. The intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation according to claim 4, wherein the specific process of Step 5 is as follows:
S5-1, feeding pictures from domain X into generator G to obtain G(x), feeding G(x) into the UNet for segmentation, and finally obtaining the segmentation result h(G(x), ω);
S5-2, defining the cross-entropy loss: L_h(G(x), y) = -y log(h(G(x), ω))
where x represents an original picture from domain X; G(x) is the image generated by G to resemble domain Y; y represents the segmentation ground-truth picture corresponding to x;
S5-3, defining the combined loss function
L_total = L(G, F, D_X, D_Y) + α L_h(G(x), y)
        = L_G(G, D_Y, X, Y) + L_G(F, D_X, Y, X) + λ L_C(G, F) - α y log(h(G(x), ω))
where L(G, F, D_X, D_Y) represents the adversarial training loss, including the adversarial losses and the joint consistency loss; L_h(G(x), y) is the cross-entropy loss of the segmentation model; α is a hyper-parameter controlling the weight of the UNet segmentation loss.
CN202311805255.0A 2023-12-26 2023-12-26 Intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation Pending CN117994266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311805255.0A CN117994266A (en) 2023-12-26 2023-12-26 Intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311805255.0A CN117994266A (en) 2023-12-26 2023-12-26 Intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation

Publications (1)

Publication Number Publication Date
CN117994266A 2024-05-07

Family

ID=90896720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311805255.0A Pending CN117994266A (en) 2023-12-26 2023-12-26 Intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation

Country Status (1)

Country Link
CN (1) CN117994266A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476771A (en) * 2020-04-03 2020-07-31 中山大学 Domain self-adaptive method and system for generating network based on distance countermeasure
CN114372985A (en) * 2021-12-17 2022-04-19 中山大学中山眼科中心 Diabetic retinopathy focus segmentation method and system adapting to multi-center image
CN115731178A (en) * 2022-11-21 2023-03-03 华东师范大学 Cross-modal unsupervised domain self-adaptive medical image segmentation method
CN116563398A (en) * 2023-05-15 2023-08-08 北京石油化工学院 Low-quality fundus color photograph generation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Lin et al., "Retinal vessel segmentation based on adaptive compensation network", Acta Optica Sinica, vol. 43, no. 14, 10 May 2023 (2023-05-10) *

Similar Documents

Publication Publication Date Title
US11151721B2 (en) System and method for automatic detection, localization, and semantic segmentation of anatomical objects
CN105513077B (en) A kind of system for diabetic retinopathy screening
Lu et al. Automatic optic disc detection from retinal images by a line operator
CN107169998B (en) A kind of real-time tracking and quantitative analysis method based on hepatic ultrasound contrast enhancement image
CN106650794A (en) Method and system for eliminating highlight of image affected by highlight reflection on object surface
CN107358612A (en) A kind of retinal vessel segmenting system combined based on fractal dimension with gaussian filtering and method
CN112102332A (en) Cancer WSI segmentation method based on local classification neural network
CN113012093B (en) Training method and training system for glaucoma image feature extraction
CN109087310A (en) Dividing method, system, storage medium and the intelligent terminal of Meibomian gland texture region
CN115965607A (en) Intelligent traditional Chinese medicine tongue diagnosis auxiliary analysis system
Yang et al. Unsupervised domain adaptation for cross-device OCT lesion detection via learning adaptive features
Zhao et al. Attention residual convolution neural network based on U-net (AttentionResU-Net) for retina vessel segmentation
CN117557840B (en) Fundus lesion grading method based on small sample learning
CN106960199A Complete extraction method for the white-of-the-eye region in RGB eye images
CN110766665A (en) Tongue picture data analysis method based on strong supervision algorithm and deep learning network
CN109711306B (en) Method and equipment for obtaining facial features based on deep convolutional neural network
CN111640127A (en) Accurate clinical diagnosis navigation method for orthopedics department
CN112634221A (en) Image and depth-based cornea level identification and lesion positioning method and system
CN116452855A (en) Wound image classification and laser assisted treatment method based on deep learning
CN117994266A Intelligent segmentation method for low-quality fundus color photographs based on adversarial domain adaptation
CN109816665A (en) A kind of fast partition method and device of optical coherence tomographic image
CN111640126B (en) Artificial intelligent diagnosis auxiliary method based on medical image
CN115456974A (en) Strabismus detection system, method, equipment and medium based on face key points
CN114972881A (en) Image segmentation data labeling method and device
CN113796850A (en) Parathyroid MIBI image analysis system, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination