CN110517235B - OCT image choroid automatic segmentation method based on GCS-Net

OCT image choroid automatic segmentation method based on GCS-Net

Info

Publication number
CN110517235B
CN110517235B (granted publication of application CN201910762318.6A)
Authority
CN
China
Prior art keywords
gcs
layer
grouped
inter
feature maps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910762318.6A
Other languages
Chinese (zh)
Other versions
CN110517235A (en)
Inventor
石霏 (Shi Fei)
陈新建 (Chen Xinjian)
成雪娜 (Cheng Xuena)
朱伟芳 (Zhu Weifang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN201910762318.6A priority Critical patent/CN110517235B/en
Publication of CN110517235A publication Critical patent/CN110517235A/en
Application granted granted Critical
Publication of CN110517235B publication Critical patent/CN110517235B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Abstract

The invention discloses a GCS-Net-based method for automatic choroid segmentation in OCT images, which comprises the following steps: data acquisition and preprocessing; construction of a GCS-Net network model, which adopts a U-Net structure as the basic network, connects each skip-connection layer between the encoding path and the decoding path through an inter-group channel expansion module (GCD), and connects the deconvolution layers in the decoding path layer-to-layer through an inter-group spatial expansion module (GSD); and testing of the trained GCS-Net network model, in which the image to be segmented is input into the constructed model and the corresponding choroid segmentation map is output. The two modules adaptively select inter-group multi-scale information in two different ways, so that the accuracy of automatic choroid segmentation is significantly improved, and the method can be extended to retinal images with pathological myopia or containing the optic nerve head.

Description

OCT image choroid automatic segmentation method based on GCS-Net
Technical Field
The invention relates to a GCS-Net-based automatic OCT image choroid segmentation method, and belongs to the technical field of fundus image segmentation.
Background
The choroid is a complex vascular layer between the retinal pigment epithelium (RPE) and the sclera and has very important physiological functions. The distribution of choroidal thickness and changes in choroidal volume in OCT images have become important indicators for managing retinal diseases. Many diseases are closely related to the morphology of the choroid, such as pathological myopia (PM), glaucoma, age-related macular degeneration (AMD), central serous chorioretinopathy (CSC), myopic maculopathy, choroiditis, and others. Automatic segmentation of the choroid in OCT images is therefore of great significance for the discovery of early lesions, the observation of disease progression, and the study of pathology.
Swept-source optical coherence tomography (SS-OCT) centered at 1050 nm is a high-resolution, non-contact, non-invasive biological tissue imaging technique that performs tomographic imaging of anterior- and posterior-segment tissues at micron resolution to generate cross-sectional images of ocular tissues, including the macula and the optic disc, in which the entire choroid and part of the sclera are visible.
At present, both traditional algorithms and deep learning can segment the choroid in OCT images, but both have certain limitations and disadvantages: (1) traditional algorithms are complex, and the accuracy of detection of the lower choroidal boundary is not high. (2) Many traditional algorithms are only applicable to normal retinas, or only to images centered at the macula; for retinas with pathological myopia or containing the optic nerve head (ONH), detection becomes more complex and difficult. (3) Although deep learning can make up for the shortcomings of traditional algorithms, most existing networks process all feature maps of the same layer uniformly, so that the receptive fields within the same layer are identical and only a single scale of local information is captured. (4) With continuous down-sampling and strided convolution operations, the drawback of obtaining only single-scale information at the same layer becomes increasingly evident, resulting in insufficiently accurate segmentation of the choroid.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an automatic choroid segmentation method for OCT images that has higher segmentation accuracy and can be applied to the segmentation of retinal images with pathological myopia or containing the optic nerve head.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
an automatic OCT image choroid segmentation method based on GCS-Net, wherein GCS-Net is defined as an inter-group context selection network, the method comprises the following steps:
acquiring and preprocessing data, wherein a data set is a three-dimensional OCT image of human eyes, and preprocessing the data to be used as a training set sample;
constructing a GCS-Net network model and training the GCS-Net network with the training set samples, wherein the GCS-Net network adopts a U-Net structure as the basic network, each skip-connection layer between the encoding path and the decoding path is connected through an inter-group channel expansion module (GCD), and the deconvolution layers in the decoding path are connected layer-to-layer through an inter-group spatial expansion module (GSD);
the inter-group channel expansion module divides the input feature maps into a plurality of groups and performs expansion convolution operations with different expansion rates on the grouped feature maps; at the same time, it performs feature extraction on the grouped feature maps to obtain the channel information of all groups, multiplies the channel information as channel weights correspondingly with the grouped feature maps obtained after the expansion convolution operations to yield all groups of weighted feature maps, concatenates all groups of feature maps, performs a residual operation between the concatenated feature maps and the input feature maps, and outputs the prediction map of the module;
the inter-group spatial expansion module divides the input feature maps into a plurality of groups and performs expansion convolution operations with different expansion rates on the grouped feature maps; at the same time, it performs feature extraction on the grouped feature maps to obtain the spatial information of all groups, multiplies the spatial information as spatial weights correspondingly with the grouped feature maps obtained after the expansion convolution operations to yield all groups of weighted feature maps, concatenates all groups of feature maps, performs a residual operation between the concatenated feature maps and the input feature maps, and outputs the prediction map of the module;
testing the trained GCS-Net network model;
and inputting the image to be segmented into the constructed model, and outputting a corresponding choroid segmentation map.
Further, the GCS-Net network model adopts a joint loss function composed of a binary cross entropy (BCE) loss and a Dice loss. The Dice loss L_Dice and the BCE loss L_BCE are given by:
L_Dice = 1 - (2·Σ_i p_i·g_i + ε) / (Σ_i p_i + Σ_i g_i + ε)  (1)
L_BCE = -(1/N)·Σ_i [g_i·log(p_i) + (1 - g_i)·log(1 - p_i)]  (2)
where N is the total number of pixels in the prediction map, p_i ∈ [0, 1] and g_i ∈ {0, 1} respectively denote the probability that pixel i is predicted as target foreground and the gold-standard ground-truth label of pixel i, and ε is a smoothing factor;
the final joint loss function L_Total is:
L_Total = L_Dice + λ·L_BCE  (3)
where λ is a balance coefficient between the Dice loss and the BCE loss.
Further, the data preprocessing method is to down-sample the original OCT image of size 512 (B-scan width) × 992 (B-scan depth) to 256 (B-scan width) × 512 (B-scan depth) by bilinear interpolation, and to perform data enhancement by randomly flipping the image left-right to simulate left- and right-eye images.
Furthermore, the GCS-Net network structure is a U-Net network with a 4-layer structure; each layer of the encoding path adopts two 3 × 3 convolutional layers and a max-pooling layer for feature extraction, and each layer of feature maps is connected with the corresponding decoding layer through the inter-group channel expansion module; each layer of the decoding path adopts a 3 × 3 convolutional layer, an upsampling layer and a 1 × 1 convolutional layer for recovery, and each layer of feature maps is connected with the corresponding decoding layer through the inter-group spatial expansion module; the output layer adopts a 3 × 3 convolutional layer and a 1 × 1 convolutional layer for output, and batch normalization (BN) and rectified linear units (ReLU) are added to all convolutional layers except the output layer to normalize the data distribution.
Further, the inter-group channel expansion module evenly divides the feature maps into 4 groups, and the groups are respectively subjected to expansion convolution operations with expansion rates of 1, 2, 4 and 6.
Further, the step of performing feature extraction and output on the grouped feature maps in the inter-group channel expansion module includes: after average pooling, convolution, batch normalization and activation operations, the channel information of all groups is obtained by softmax regression.
Further, the inter-group spatial expansion module evenly divides the feature maps into 3 groups, and the groups are respectively subjected to expansion convolution operations with expansion rates of 1, 3 and 5.
Further, the step of performing feature extraction and output on the grouped feature maps in the inter-group spatial expansion module includes: after downsampling, convolution, batch normalization, activation, upsampling, convolution, batch normalization and activation operations, the spatial information of all groups is obtained by softmax regression.
Further, the segmentation method is used for segmenting three-dimensional retinal OCT images with pathological myopia or containing the optic nerve head (ONH).
The invention achieves the following beneficial effects:
1) The invention designs an end-to-end network, GCS-Net, and trains it to obtain a choroid segmentation model whose receptive field is consistent with the choroid segmentation target region; an OCT image of a normal or highly myopic eye is then fed into the trained choroid segmentation model to obtain the corresponding choroid segmentation image;
2) an inter-group channel expansion module (GCD) is designed at the skip-connection layers of GCS-Net; under the guidance of channel information, the GCD can select between groups the multi-scale information obtained by convolutions with different expansion rates, thereby enhancing the consistency between the receptive field and the choroid segmentation target region;
3) an inter-group spatial expansion module (GSD) is designed in the decoding path of GCS-Net; under the guidance of spatial information, the GSD can select between groups the multi-scale information obtained by convolutions with different expansion rates, thereby enhancing the consistency between the receptive field and the choroid segmentation target region;
the invention provides a feasible and effective automatic segmentation method for normal and high-myopia choroids in a large-field three-dimensional OCT image acquired by a sweep optical coherence tomography scanner with a central wavelength of 1050 nanometers, which can automatically and accurately segment Bruch's Membrane (BM) defined as an upper choroid boundary and a choroid-sclera interface (CSI) defined as a lower boundary. The quantitative analysis of the thickness and the shape of the normal or pathological choroid plays an important role.
Drawings
FIG. 1 shows the internal structure of the inter-group channel expansion module (GCD) employed in the embodiment;
FIG. 2 shows the internal structure of the inter-group spatial expansion module (GSD) employed in the embodiment;
FIG. 3 is an overall network structure of GCS-Net constructed by the embodiment;
FIG. 4 is a choroidal segmentation result; wherein (a) and (b) are the segmentation results of the transverse scanning image of the normal human eye without the optic disc and with the optic disc respectively; (c) is the segmentation result of the horizontal scanning image of the highly myopic human eye; (d) is the result of the segmentation of the transversely scanned image of a pathologically highly myopic eye.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
An OCT image choroid automatic segmentation method based on GCS-Net comprises the following steps:
Acquiring and preprocessing data: the data set consists of three-dimensional OCT images of human eyes, which are preprocessed to serve as training set samples; the data set may be collected from retinal images of normal eyes, pathologically myopic eyes, or images containing the optic nerve head, and the acquired data are processed to a uniform specification for model input.
Constructing a GCS-Net network model and training the GCS-Net network with the training set samples, wherein the GCS-Net network adopts a U-Net structure as the basic network, each skip-connection layer between the encoding path and the decoding path is connected through an inter-group channel expansion module (GCD), and the deconvolution layers in the decoding path are connected layer-to-layer through an inter-group spatial expansion module (GSD);
the inter-group channel expansion module divides the input feature maps into a plurality of groups and performs expansion convolution operations with different expansion rates on the grouped feature maps; at the same time, it performs feature extraction on the grouped feature maps to obtain the channel information of all groups, multiplies the channel information as channel weights correspondingly with the grouped feature maps obtained after the expansion convolution operations to yield all groups of weighted feature maps, concatenates all groups of feature maps, performs a residual operation between the concatenated feature maps and the input feature maps, and outputs the prediction map of the module;
the inter-group spatial expansion module divides the input feature maps into a plurality of groups and performs expansion convolution operations with different expansion rates on the grouped feature maps; at the same time, it performs feature extraction on the grouped feature maps to obtain the spatial information of all groups, multiplies the spatial information as spatial weights correspondingly with the grouped feature maps obtained after the expansion convolution operations to yield all groups of weighted feature maps, concatenates all groups of feature maps, performs a residual operation between the concatenated feature maps and the input feature maps, and outputs the prediction map of the module;
and testing the trained GCS-Net network model, inputting the image to be segmented into the constructed model, and outputting a corresponding choroid segmentation map.
Examples
1) Data pre-processing
The experimental data set consisted of large-field-of-view three-dimensional OCT images acquired by a Topcon DRI-OCT scanner centered at 1050 nm, scanning a region encompassing the center of the macula and the optic nerve head (ONH). The image size was 512 (B-scan width) × 256 (number of B-scans) × 992 (B-scan depth), corresponding to a volume of 12 × 9 × 2.6 mm³. The data set consisted of 1650 B-scan OCT images labeled with gold standards by a specialist, of which 1150 were from 115 normal eyes and 500 were from 50 highly myopic eyes, with 10 evenly spaced B-scan OCT images taken from each eye.
During training and validation, to improve the computational efficiency of the model, the original OCT images of size 512 (B-scan width) × 992 (B-scan depth) were down-sampled to 256 (B-scan width) × 512 (B-scan depth) by bilinear interpolation, and data enhancement was performed by randomly flipping the images left-right to simulate left- and right-eye images.
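As an illustration of this preprocessing step, the following is a minimal PyTorch sketch (the tensor layout, function name and flip probability are assumptions; only the bilinear down-sampling from 512 × 992 to 256 × 512 and the random left-right flip follow the text):

import torch
import torch.nn.functional as F

def preprocess_bscan(bscan, train=True):
    """bscan: float tensor of shape (1, 992, 512), i.e. (channels, depth, width)."""
    # bilinear down-sampling to 512 (depth) x 256 (width)
    x = F.interpolate(bscan.unsqueeze(0), size=(512, 256),
                      mode='bilinear', align_corners=False).squeeze(0)
    if train and torch.rand(1).item() < 0.5:
        x = torch.flip(x, dims=[-1])  # left-right flip simulates the fellow eye
    return x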
2) Design of network model
a) Inter-group channel expansion module (GCD)
Fig. 1 shows the structure of the inter-group channel expansion module (GCD) in the network model of the invention. The module equally divides an input set of feature maps into four groups, and each group undergoes an expansion (dilated) convolution operation with expansion rates of 1, 2, 4 and 6, respectively. The choice of expansion rate is determined by the size of the choroid segmentation target region. Four groups of feature maps with different scales are obtained after the expansion convolution operations. Meanwhile, global average pooling, convolution and activation operations are performed on the input feature maps, and four groups of channel information are finally obtained through softmax regression. It should be noted that the softmax regression is performed group-wise: the four groups of channel information undergo the softmax operation in the vertical (inter-group) direction so that different scale information can be selected adaptively. The four groups of channel information are then used as weights and multiplied with the four groups of multi-scale feature maps obtained in the previous step to produce four weighted groups of feature maps; the larger the channel weight of a group, the larger the contribution of that group's receptive field to the final network prediction. Finally, the four groups of feature maps are concatenated and a residual operation with the input feature maps is performed to output the prediction map of the module. In this way, the module uses the idea of grouping to automatically select inter-group multi-scale information under the guidance of channel information.
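A minimal PyTorch sketch of the GCD module as described above follows (the kernel sizes, the channel-reduction ratio in the attention branch and the class name are assumptions; the four equal groups, the expansion rates 1, 2, 4 and 6, the group-wise softmax on the channel information and the residual connection follow the text):

import torch
import torch.nn as nn
import torch.nn.functional as F

class GCD(nn.Module):
    """Inter-group channel expansion module: grouped dilated convolutions whose
    outputs are re-weighted by group-wise channel information (see Fig. 1)."""
    def __init__(self, channels, dilations=(1, 2, 4, 6)):
        super().__init__()
        assert channels % len(dilations) == 0
        self.groups = len(dilations)
        g = channels // self.groups
        # one dilated 3x3 convolution per group; padding keeps the spatial size
        self.dilated = nn.ModuleList(
            [nn.Conv2d(g, g, 3, padding=d, dilation=d) for d in dilations])
        # channel branch: global average pooling -> conv -> BN -> ReLU -> conv
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.BatchNorm2d(channels // 4),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1))

    def forward(self, x):
        parts = torch.chunk(x, self.groups, dim=1)                  # 4 equal groups
        feats = [conv(p) for conv, p in zip(self.dilated, parts)]   # multi-scale features
        w = self.attn(x)                                            # (B, C, 1, 1)
        w = torch.stack(torch.chunk(w, self.groups, dim=1), dim=0)  # (G, B, C/G, 1, 1)
        w = F.softmax(w, dim=0)            # softmax across groups ("vertical" direction)
        out = torch.cat([wi * fi for wi, fi in zip(w, feats)], dim=1)
        return out + x                                              # residual connection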
b) Inter-group space expansion module (GSD)
Fig. 2 shows the structure of the inter-group spatial expansion module (GSD). Like the GCD module, the GSD module designed by the invention realizes the selection of inter-group multi-scale information in another way and enhances the consistency between the receptive field and the segmentation target region. The module divides a set of input feature maps into three groups, where the number of channels in each group is not required to be exactly the same, and each group then undergoes an expansion convolution operation with expansion rates of 1, 3 and 5, yielding three groups of feature maps with different scales. Meanwhile, the input feature maps undergo a series of operations to obtain three spatial weight maps; in these operations, the purpose of down-sampling is to acquire more global information, the purpose of up-sampling is to restore the size of the feature maps, and softmax regression enables the module to automatically select multi-scale information. The three spatial weight maps are then multiplied with the three groups of multi-scale feature maps obtained in the previous step to produce three weighted groups of feature maps. Finally, the three groups of feature maps are concatenated and a residual operation with the input feature maps is performed to output the prediction map of the module. In this way, the module selects inter-group multi-scale information for a set of feature maps under the guidance of spatial information.
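A corresponding PyTorch sketch of the GSD module follows (the pooling factor, kernel sizes and intermediate channel count are assumptions, and equal group sizes are used for simplicity although the text allows unequal groups; the three groups, expansion rates 1, 3 and 5, the down-sample/up-sample spatial-weight branch with a softmax across groups, and the residual connection follow the text):

import torch
import torch.nn as nn
import torch.nn.functional as F

class GSD(nn.Module):
    """Inter-group spatial expansion module: grouped dilated convolutions whose
    outputs are re-weighted by per-pixel spatial weight maps (see Fig. 2)."""
    def __init__(self, channels, dilations=(1, 3, 5)):
        super().__init__()
        assert channels % len(dilations) == 0   # equal groups, for simplicity
        self.groups = len(dilations)
        g = channels // self.groups
        self.dilated = nn.ModuleList(
            [nn.Conv2d(g, g, 3, padding=d, dilation=d) for d in dilations])
        # spatial branch: down-sample -> conv -> BN -> ReLU, then (after up-sampling)
        # conv -> BN -> ReLU, producing one weight map per group
        self.down = nn.Sequential(
            nn.AvgPool2d(2),
            nn.Conv2d(channels, channels // 4, 3, padding=1),
            nn.BatchNorm2d(channels // 4),
            nn.ReLU(inplace=True))
        self.up = nn.Sequential(
            nn.Conv2d(channels // 4, self.groups, 3, padding=1),
            nn.BatchNorm2d(self.groups),
            nn.ReLU(inplace=True))

    def forward(self, x):
        parts = torch.chunk(x, self.groups, dim=1)
        feats = [conv(p) for conv, p in zip(self.dilated, parts)]
        a = self.down(x)                                  # gather more global context
        a = F.interpolate(a, size=x.shape[2:], mode='bilinear', align_corners=False)
        a = F.softmax(self.up(a), dim=1)                  # (B, G, H, W) spatial weights
        out = torch.cat([a[:, i:i + 1] * feats[i] for i in range(self.groups)], dim=1)
        return out + x                                    # residual connection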
c) Overall framework of the network model
Based on the GCD and GSD, the invention designs a lightweight deep learning network framework (GCS-Net) to automatically segment the choroid in OCT images; the overall framework of the network model is shown in Fig. 3. Owing to the strong multi-scale information selection capability of the GCD and GSD, the invention uses only a 4-layer U-Net-based encoding-decoding architecture as the basic network. In the encoding path, only two 3 × 3 convolutions and max pooling are used to rapidly acquire feature maps at different resolutions; in the decoding path, several simple decoding blocks are used to rapidly and effectively recover high-resolution feature maps. Except for the last two convolutional layers that output the final prediction map, batch normalization (BN) and rectified linear units (ReLU) are added to all other convolutional layers in the network to normalize the data distribution. The network model places the GCD module in the skip-connection layers of the network, so that the GCD automatically selects multi-scale information at each layer of the encoding path, overcoming the drawback that the original encoder passes only single-scale information to the decoder. Meanwhile, the GSD is placed in the decoder part, so that the GSD selects multi-scale information and compensates for the loss of global information during up-sampling.
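To illustrate how the two modules could be wired into the 4-layer encoder-decoder described above, the following sketch reuses the GCD and GSD classes from the previous sketches (the channel widths, sigmoid output and exact placement of up-sampling are assumptions; the placement of GCD on the skip connections, GSD between decoder layers, the two-convolution encoder blocks and the BN/ReLU-free output head follow the text):

import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    # two 3x3 convolutions, each followed by batch normalization and ReLU
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class GCSNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=1, widths=(24, 48, 96, 192)):
        super().__init__()
        w = list(widths)                      # widths divisible by 4 (GCD) and 3 (GSD)
        self.enc = nn.ModuleList([conv_block(ci, co)
                                  for ci, co in zip([in_ch] + w[:-1], w)])
        self.gcd = nn.ModuleList([GCD(c) for c in w[:-1]])   # GCD on each skip connection
        self.dec = nn.ModuleList([
            nn.Sequential(nn.Conv2d(w[i + 1] + w[i], w[i], 3, padding=1),
                          nn.BatchNorm2d(w[i]), nn.ReLU(inplace=True),
                          nn.Conv2d(w[i], w[i], 1),
                          nn.BatchNorm2d(w[i]), nn.ReLU(inplace=True))
            for i in reversed(range(len(w) - 1))])
        self.gsd = nn.ModuleList([GSD(w[i]) for i in reversed(range(len(w) - 1))])
        self.pool = nn.MaxPool2d(2)
        # output head: 3x3 conv + 1x1 conv, with no BN/ReLU on these final layers
        self.head = nn.Sequential(nn.Conv2d(w[0], w[0], 3, padding=1),
                                  nn.Conv2d(w[0], n_classes, 1))

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x if i == 0 else self.pool(x))
            skips.append(x)
        x = skips.pop()                                   # deepest encoder features
        for dec, gsd in zip(self.dec, self.gsd):
            skip = self.gcd[len(skips) - 1](skips.pop())  # multi-scale skip via GCD
            x = F.interpolate(x, size=skip.shape[2:], mode='bilinear', align_corners=False)
            x = gsd(dec(torch.cat([x, skip], dim=1)))     # GSD between decoder layers
        return torch.sigmoid(self.head(x))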
d) Loss function
In pixel-level binary segmentation tasks, the most commonly used loss is the binary cross entropy (BCE) loss, which can alleviate the problem of data imbalance in medical images, whereas the Dice loss focuses more on the segmentation of small objects, such as the highly myopic choroid considered here. The BCE loss and the Dice loss are combined into a joint loss function to address both the data imbalance and the nonuniformity of the choroid segmentation target region. The Dice loss L_Dice and the BCE loss L_BCE are defined as follows:
L_Dice = 1 - (2·Σ_i p_i·g_i + ε) / (Σ_i p_i + Σ_i g_i + ε)  (1)
L_BCE = -(1/N)·Σ_i [g_i·log(p_i) + (1 - g_i)·log(1 - p_i)]  (2)
where N is the total number of pixels in the prediction map, p_i ∈ [0, 1] and g_i ∈ {0, 1} respectively denote the probability that pixel i is predicted as target foreground and the gold-standard ground-truth label of pixel i, and ε is a smoothing factor with a value range of [0.1, 1].
The final joint loss function L_Total is:
L_Total = L_Dice + λ·L_BCE  (3)
where λ is a balance coefficient between the Dice loss and the BCE loss; here λ is set to 1, and for fairness all experiments use this joint loss function.
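A minimal sketch of the joint loss in equations (1)-(3) follows, assuming sigmoid probabilities pred and binary gold-standard masks target of the same shape; the values eps = 1 and lam = 1 follow the settings stated in the text:

import torch
import torch.nn.functional as F

def joint_loss(pred, target, eps=1.0, lam=1.0):
    pred, target = pred.flatten(1), target.flatten(1).float()
    # equation (1): Dice loss with smoothing factor eps
    inter = (pred * target).sum(dim=1)
    dice = 1.0 - (2.0 * inter + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    # equation (2): binary cross entropy averaged over all N pixels
    bce = F.binary_cross_entropy(pred, target, reduction='mean')
    # equation (3): L_Total = L_Dice + lambda * L_BCE
    return dice.mean() + lam * bce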
3) Training and testing of models
Five-fold cross-validation was performed on the 1650 gold-standard-labeled images. In each fold, 1320 OCT images were used to train the network end-to-end from OCT images without the upper and lower choroid boundaries to OCT images with the upper and lower choroid boundaries. During training, a stochastic gradient descent (SGD) optimizer with an initial learning rate of 0.01, momentum of 0.9 and weight decay coefficient of 0.0001 was used to optimize the network; the number of images fed to the network each time (the batch size) was 8, and the network model was trained for 60 epochs. After training, the 330 validation images were fed into the trained model for prediction to obtain the corresponding 330 choroid segmentation maps.
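The training configuration described above can be sketched as follows (the dataset object, device handling and function name are assumptions; the SGD settings, batch size of 8 and 60 epochs follow the text):

import torch
from torch.utils.data import DataLoader

def train_model(model, train_set, epochs=60, device='cuda'):
    loader = DataLoader(train_set, batch_size=8, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=1e-4)
    model.to(device).train()
    for _ in range(epochs):
        for image, mask in loader:
            image, mask = image.to(device), mask.to(device)
            loss = joint_loss(model(image), mask)  # joint Dice + BCE loss sketched above
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model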
4) Results of the experiment
To quantitatively evaluate the performance of the invention, the choroid segmentation results predicted on the test set were compared with the gold standard labeled by doctors using four segmentation evaluation indices: intersection over union (IoU), Dice coefficient (DSC), sensitivity (Sen) and specificity (Spe); the specific formulas are shown in Table 1. In the comparison with the FCN network, three additional indices were evaluated: the relative upper boundary error, the relative lower boundary error and the thickness error of the choroid.
TABLE 1 Formulas of the evaluation indices
IoU TP/(TP+FP+FN)
DSC 2TP/(2TP+FP+FN)
Sen TP/(TP+FN)
Spe TN/(FP+TN)
(TP: true positive; FP: false positive; TN: true negative; FN: false negative)
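For reference, a minimal sketch of the four indices in Table 1, computed from a binarized prediction and the gold-standard mask (both 0/1 arrays; the function name is an assumption):

import numpy as np

def segmentation_metrics(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    return {
        'IoU': tp / (tp + fp + fn),
        'DSC': 2 * tp / (2 * tp + fp + fn),
        'Sen': tp / (tp + fn),
        'Spe': tn / (fp + tn),
    }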
The contributions of the two modules of the invention to the final result were quantified by ablation experiments. As shown in Table 2, in terms of the IoU index, adding the GCD module to the basic network improves performance by 0.67%, and adding the GSD module to the basic network improves performance by 0.94%, indicating that the contribution of the GSD is slightly larger. Adding both GCD and GSD to the basic network forms GCS-Net, whose number of parameters is only 6.7M while performance improves by 1.21%, which shows that the lightweight deep learning network framework (GCS-Net) constructed by the invention has superior performance.
Table 2 results of ablation experiments
Method IoU(%) DSC(%) Sen(%) Spe(%) Parameters
Basic network 90.38±0.80 94.67±0.59 95.36±0.95 94.31±1.27 4.88M
Basic network + GCD 91.05±0.90 95.06±0.62 95.40±0.47 94.95±1.17 6.02M
Basic network + GSD 91.32±1.10 95.16±0.80 96.10±0.59 94.64±0.93 5.56M
GCS-Net 91.59±0.83 95.36±0.58 96.02±0.46 95.01±0.55 6.70M
Table 3 is a comparison with a traditional fully convolutional network (FCN), and it can be seen that the choroidal thickness error is smaller for highly myopic eyes than for normal eyes. This is because, in general, the choroid of highly myopic eyes is much thinner than that of normal eyes. In terms of both segmentation indices and error indices, the segmentation results of the proposed method are better than those of the traditional FCN for both normal and highly myopic choroids. The visualization results in Fig. 4 also show that the proposed GCS-Net performs better in both edges and details.
TABLE 3 analysis of choroidal segmentation results in comparison to FCN networks
(The detailed comparison values of Table 3 are presented as an image in the original publication.)
5) Summary of the invention
A lightweight deep learning network framework (GCS-Net) for automatic choroid segmentation in OCT images has been implemented and verified. The network can automatically select inter-group multi-scale information and automatically adjust the size of its receptive field according to the size of the segmentation target region. The method of the invention compensates for the complexity and difficulty of segmenting highly myopic choroids with traditional algorithms, while also addressing the shortcomings of some existing deep learning networks. The number of network parameters is very small, and the two modules, GCD and GSD, automatically select inter-group multi-scale information in two different ways. In addition, the invention employs a joint loss function to address the problems of data imbalance and nonuniformity of the choroid segmentation target region.
Experimental results show that the GCS-Net network, which automatically selects inter-group multi-scale information, can accurately segment both normal and highly myopic choroids, and can automatically avoid the ONH region and partially folded invalid retinal regions without preprocessing, thereby improving the accuracy of quantitative choroid analysis and allowing the morphological information of the choroid in three-dimensional large-field data to be acquired comprehensively.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (9)

1. An OCT image choroid automatic segmentation method based on GCS-Net is characterized by comprising the following steps:
acquiring and preprocessing data, wherein a data set is a three-dimensional OCT image of human eyes, and preprocessing the data to be used as a training set sample;
constructing a GCS-Net network model and training the GCS-Net network with the training set samples, wherein the GCS-Net network adopts a U-Net structure as the basic network, each skip-connection layer between the encoding path and the decoding path is connected through an inter-group channel expansion module (GCD), and the deconvolution layers in the decoding path are connected layer-to-layer through an inter-group spatial expansion module (GSD);
the inter-group channel expansion module divides the input feature maps into a plurality of groups, respectively performs expansion convolution operations with different expansion rates on the grouped feature maps, simultaneously performs feature extraction and output on the grouped feature maps to obtain the channel information of all groups, multiplies the channel information as channel weights correspondingly with the grouped feature maps obtained after the expansion convolution operations to obtain all groups of feature maps, and performs a residual operation between the concatenated grouped feature maps and the input feature maps to output the prediction map of the module;
the inter-group spatial expansion module divides the input feature maps into a plurality of groups, respectively performs expansion convolution operations with different expansion rates on the grouped feature maps, simultaneously performs feature extraction and output on the grouped feature maps to obtain the spatial information of all groups, multiplies the spatial information as spatial weights correspondingly with the grouped feature maps obtained after the expansion convolution operations to obtain all groups of feature maps, and performs a residual operation between the concatenated grouped feature maps and the input feature maps to output the prediction map of the module;
testing the trained GCS-Net network model;
and inputting the image to be segmented into the constructed model, and outputting a corresponding choroid segmentation map.
2. The method for OCT image choroid automatic segmentation based on GCS-Net according to claim 1, wherein the GCS-Net network model adopts a joint loss function composed of a binary cross entropy (BCE) loss and a Dice loss, the Dice loss L_Dice and the binary cross entropy loss L_BCE being given by:
L_Dice = 1 - (2·Σ_i p_i·g_i + ε) / (Σ_i p_i + Σ_i g_i + ε)  (1)
L_BCE = -(1/N)·Σ_i [g_i·log(p_i) + (1 - g_i)·log(1 - p_i)]  (2)
where N is the total number of pixels in the prediction map, p_i ∈ [0, 1] and g_i ∈ {0, 1} respectively denote the probability that pixel i is predicted as target foreground and the gold-standard ground-truth label of pixel i, and ε is a smoothing factor;
the final joint loss function L_Total is:
L_Total = L_Dice + λ·L_BCE  (3)
where λ is a balance coefficient between the Dice loss and the BCE loss.
3. The method as claimed in claim 1, wherein the data preprocessing method is bilinear interpolation down-sampling of the original OCT image with size 512 x 992 to 256 x 512, and performing data enhancement by randomly left-right flipping the image to simulate left-right eye image.
4. The method as claimed in claim 1, wherein the GCS-Net network structure is a 4-layer U-Net network, each layer of the encoding path uses two 3 × 3 convolutional layers and a max-pooling layer for feature extraction, each layer of feature maps is connected to the corresponding decoding layer through the inter-group channel expansion module, each layer of the decoding path uses a 3 × 3 convolutional layer, an upsampling layer and a 1 × 1 convolutional layer for restoration, each layer of feature maps is connected to the corresponding decoding layer through the inter-group spatial expansion module, the output layer uses a 3 × 3 convolutional layer and a 1 × 1 convolutional layer for output, and batch normalization (BN) and rectified linear units (ReLU) are added to all convolutional layers except the output layer to normalize the data distribution.
5. The method of claim 4, wherein the inter-group channel dilation module averagely divides the feature map into 4 groups, and each group performs dilation convolution operations with dilation rates of 1, 2, 4, and 6 respectively.
6. The method for performing OCT image choroid automatic segmentation based on GCS-Net according to claim 1, wherein the step of performing feature extraction and output on the grouped feature maps in the inter-group channel expansion module comprises: after average pooling, convolution, batch normalization and activation operations, the channel information of all groups is obtained by softmax regression.
7. The method of claim 4, wherein the inter-group spatial dilation module averagely divides the feature map into 3 groups, and each group performs dilation convolution operations with dilation rates of 1, 3, and 5 respectively.
8. The method for performing OCT image choroid automatic segmentation based on GCS-Net according to claim 1, wherein the step of performing feature extraction and output on the grouped feature maps in the inter-group spatial expansion module comprises: after downsampling, convolution, batch normalization, activation, upsampling, convolution, batch normalization and activation operations, the spatial information of all groups is obtained by softmax regression.
9. The method for automatic segmentation of the choroid in OCT images based on GCS-Net according to claim 1, wherein the segmentation method is used for segmenting three-dimensional retinal OCT images with pathological myopia or containing the optic nerve head (ONH).
CN201910762318.6A 2019-08-19 2019-08-19 OCT image choroid automatic segmentation method based on GCS-Net Active CN110517235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910762318.6A CN110517235B (en) 2019-08-19 2019-08-19 OCT image choroid automatic segmentation method based on GCS-Net

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910762318.6A CN110517235B (en) 2019-08-19 2019-08-19 OCT image choroid automatic segmentation method based on GCS-Net

Publications (2)

Publication Number Publication Date
CN110517235A CN110517235A (en) 2019-11-29
CN110517235B true CN110517235B (en) 2021-10-19

Family

ID=68625762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910762318.6A Active CN110517235B (en) 2019-08-19 2019-08-19 OCT image choroid automatic segmentation method based on GCS-Net

Country Status (1)

Country Link
CN (1) CN110517235B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096166B (en) * 2019-12-17 2023-08-18 上海美杰医疗科技有限公司 Medical image registration method and device
CN111242928A (en) * 2020-01-14 2020-06-05 中国人民解放军陆军军医大学第二附属医院 Atrial full-automatic segmentation tracking and positioning method based on multi-view learning
CN111325755B (en) * 2020-01-21 2024-04-09 苏州大学 Method for segmenting nerve fibers in U-shaped network and cornea image
CN111444957B (en) * 2020-03-25 2023-11-07 腾讯科技(深圳)有限公司 Image data processing method, device, computer equipment and storage medium
CN112168211A (en) * 2020-03-26 2021-01-05 成都思多科医疗科技有限公司 Fat thickness and muscle thickness measuring method and system of abdomen ultrasonic image
CN111815569B (en) * 2020-06-15 2024-03-29 广州视源电子科技股份有限公司 Image segmentation method, device, equipment and storage medium based on deep learning
CN111862114A (en) * 2020-07-10 2020-10-30 温州医科大学 Choroidal three-dimensional blood vessel imaging and quantitative analysis method and device based on optical coherence tomography system
CN112348825A (en) * 2020-10-16 2021-02-09 佛山科学技术学院 DR-U-net network method and device for retinal blood flow image segmentation
CN112489048B (en) * 2020-12-01 2024-04-16 浙江工业大学 Automatic optic nerve segmentation method based on depth network
CN112541878A (en) * 2020-12-24 2021-03-23 北京百度网讯科技有限公司 Method and device for establishing image enhancement model and image enhancement
CN112750141A (en) * 2020-12-28 2021-05-04 中国科学院宁波材料技术与工程研究所慈溪生物医学工程研究所 3D iris surface reconstruction and quantification method based on AS-OCT image and segmentation network
CN112712520A (en) * 2021-01-18 2021-04-27 佛山科学技术学院 Choroid layer segmentation method based on ARU-Net
CN112884775B (en) * 2021-01-20 2022-02-22 推想医疗科技股份有限公司 Segmentation method, device, equipment and medium
CN112819831B (en) * 2021-01-29 2024-04-19 北京小白世纪网络科技有限公司 Segmentation model generation method and device based on convolution Lstm and multi-model fusion
CN113610842B (en) * 2021-08-27 2023-04-07 苏州大学 OCT image retina detachment and splitting automatic segmentation method based on CAS-Net

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648196A (en) * 2018-03-22 2018-10-12 广州多维魔镜高新科技有限公司 Image partition method and storage medium based on recurrence linking convolutional neural networks
CN109509178A (en) * 2018-10-24 2019-03-22 苏州大学 A kind of OCT image choroid dividing method based on improved U-net network
CN109685813A (en) * 2018-12-27 2019-04-26 江西理工大学 A kind of U-shaped Segmentation Method of Retinal Blood Vessels of adaptive scale information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648196A (en) * 2018-03-22 2018-10-12 广州多维魔镜高新科技有限公司 Image partition method and storage medium based on recurrence linking convolutional neural networks
CN109509178A (en) * 2018-10-24 2019-03-22 苏州大学 A kind of OCT image choroid dividing method based on improved U-net network
CN109685813A (en) * 2018-12-27 2019-04-26 江西理工大学 A kind of U-shaped Segmentation Method of Retinal Blood Vessels of adaptive scale information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Choroid segmentation in OCT images based on improved U-net; Xuena Cheng et al.; Proceedings of SPIE; 2019-03-31; pp. 1-8 *
Large Kernel Matters - Improve Semantic Segmentation by Global Convolutional Network; Chao Peng et al.; arXiv:1703.02719v1 [cs.CV]; 2017-03-08; pp. 1-13 *

Also Published As

Publication number Publication date
CN110517235A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110517235B (en) OCT image choroid automatic segmentation method based on GCS-Net
CN109345469B (en) Speckle denoising method in OCT imaging based on condition generation countermeasure network
Sadda et al. Consensus definition for atrophy associated with age-related macular degeneration on OCT: classification of atrophy report 3
Gholami et al. OCTID: Optical coherence tomography image database
Abràmoff et al. Retinal imaging and image analysis
Martinez-Enriquez et al. OCT-based full crystalline lens shape change during accommodation in vivo
Norman et al. Dimensions of the human sclera: thickness measurement and regional changes with axial length
US20220230300A1 (en) Using Deep Learning to Process Images of the Eye to Predict Visual Acuity
CN109509178A (en) A kind of OCT image choroid dividing method based on improved U-net network
CN108618749B (en) Retina blood vessel three-dimensional reconstruction method based on portable digital fundus camera
CN111292338A (en) Method and system for segmenting choroidal neovascularization from fundus OCT image
CA2776437C (en) Diagnostic method and apparatus for predicting potential preserved visual acuity
EP2830480A2 (en) Volumetric analysis of pathologies
US20190209006A1 (en) Segmentation-based corneal mapping
Sleman et al. A novel 3D segmentation approach for extracting retinal layers from optical coherence tomography images
Chen et al. Application of artificial intelligence and deep learning for choroid segmentation in myopia
CN115004222A (en) Neural network processing of OCT data to generate predictions of geographic atrophy growth rate
Rao et al. Deep learning based sub-retinal fluid segmentation in central serous chorioretinopathy optical coherence tomography scans
Liu et al. Semi-supervised automatic layer and fluid region segmentation of retinal optical coherence tomography images using adversarial learning
JP2023523245A (en) OCT EN FACE lesion segmentation using channel-coded slabs
Breher et al. Direct modeling of foveal pit morphology from distortion-corrected OCT images
Hassan et al. Analysis of optical coherence tomography images using deep convolutional neural network for maculopathy grading
Liu et al. A curriculum learning-based fully automated system for quantification of the choroidal structure in highly myopic patients
Alsaih et al. Retinal fluids segmentation using volumetric deep neural networks on optical coherence tomography scans
KR102482680B1 (en) Apparatus and method for predicting biometry based on fundus image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant