CN111598894B - Retina blood vessel image segmentation system based on global information convolution neural network - Google Patents
- Publication number
- CN111598894B (application CN202010309418.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- main module
- training
- neural network
- feature
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Abstract
A retinal blood vessel image segmentation system based on a global-information convolutional neural network. The invention relates to a retinal vessel image segmentation system and aims to solve two problems of existing convolutional-neural-network retinal vessel segmentation: limited use of global information, and the easy loss of important features. The system of the invention comprises an image processing main module, a neural network main module, a training main module and a detection main module. The image processing main module is used for acquiring an original retinal image, preprocessing it, and inputting the processed image into the training main module and the detection main module; the neural network main module is used for establishing a convolutional neural network that can extract global information and reinforce features; the training main module is used for initializing network parameters and training to obtain a trained convolutional neural network model; the detection main module is used for testing with the trained model and calculating its performance indexes. The invention belongs to the field of retinal vessel image segmentation systems.
Description
Technical Field
The invention belongs to the field of medical image processing, and particularly relates to a retinal blood vessel image segmentation system.
Background
Fundus images contain rich pathological information: by observing the length, width and curvature of retinal blood vessels, doctors can diagnose various cardiovascular and ophthalmic diseases such as glaucoma, hypertension and arteriosclerosis. Because fundus images can be acquired easily with a fundus camera, without complicated procedures such as angiography, they are widely used in clinical disease analysis. For more accurate quantitative assessment of the relevant diseases, the retinal vessel image must be properly segmented to eliminate interference from other tissues of the eye, so high-precision automatic retinal vessel image segmentation has broad application prospects. Existing retinal image segmentation techniques fall mainly into three categories. The first is unsupervised algorithms, mostly traditional image processing techniques such as matched filtering, multi-threshold vessel detection and mathematical morphology. The second is supervised algorithms, which train on samples with corresponding labels; the process usually extracts features from retinal image pixels and then classifies them into the two classes vessel and non-vessel. The third is deep learning algorithms: thanks to the great improvement in computer storage and computing capability, deep learning is currently the most accurate approach.
Deep learning algorithms for retinal vessel image segmentation mostly adopt a U-shaped convolutional neural network structure: the original image is encoded into abstract deep semantic features and then decoded into a segmentation result. However, considering the characteristics of the convolution operation together with the overall network structure, the prior art still has several problems:
(1) At the core of a convolutional neural network, a convolution kernel (typically 3 × 3 or 5 × 5) is applied to the original image or a feature map, and the whole image or feature map is traversed by sliding this window; global information is linked only through that traversal. Clearly, such a modeling approach makes limited use of global information.
(2) Performing retinal image segmentation in an encode-then-decode manner requires the intermediate feature maps to undergo multiple dimensional transformations, implemented by transposed convolution or up/down sampling. Retinal vessels vary in shape, thickness and length, and the narrowest part is only 3 pixels wide, so these dimensional transformations are likely to lose features that are important for vessel segmentation.
Disclosure of Invention
The invention aims to solve the problems that existing convolutional-neural-network retinal vessel segmentation makes limited use of global information and easily loses important features, and provides a retinal vessel image segmentation system based on a global-information convolutional neural network.
The retina blood vessel image segmentation system based on the global information convolution neural network comprises:
the device comprises an image processing main module, a neural network main module, a training main module and a detection main module;
the image processing main module is used for acquiring an original retina image, preprocessing the acquired original retina image to obtain a processed image, and inputting the preprocessed image to the training main module and the detection main module;
the neural network main module is used for establishing a convolutional neural network which can extract global information and strengthen characteristics;
the training main module is used for initializing network parameters, setting the hyper-parameters involved in training, and training the network to obtain a trained convolutional neural network model;
and the detection main module is used for testing by using the trained model and calculating the performance index of the model.
The most prominent characteristics and remarkable beneficial effects of the invention are as follows:
The retinal blood vessel segmentation system based on a global-information convolutional neural network according to the invention has the following advantages:
1. The invention applies a global-information convolutional neural network to retinal vessel segmentation for the first time. The method takes an encoder-decoder network as the general framework and, through the proposed feature remapping, feature similarity calculation and feature reactivation process, simultaneously exploits global features and reinforces the features that are easily lost during encoding and decoding.
2. Experiments show that the invention performs well: the accuracy reaches 95.98%, the sensitivity 77.92% and the specificity 98.37%, representing an advanced level.
3. The proposed process of feature remapping, feature similarity calculation and feature reactivation has good universality: it can be conveniently added to various encoder-decoder networks without special adjustment of the network structure, and it can be used in other types of networks as long as similar features exist.
Drawings
FIG. 1 is a flow chart of a design method according to an embodiment;
FIG. 2 is a schematic diagram of an overall codec network architecture;
FIG. 3 is a schematic diagram of a process of feature remapping, feature similarity calculation and feature reactivation;
FIG. 4a is a partial retinal image artwork;
FIG. 4b is a gold standard graph of a portion of a retinal image;
FIG. 4c is a graph of a partial retinal image segmentation result;
Detailed Description
The first embodiment is as follows: the retinal blood vessel image segmentation system based on the global information convolution neural network in the embodiment comprises:
the specific embodiments of the present invention are described in conjunction with the DRIVE public data set:
the DRIVE public data set contains 40 color retinal images, 7 of which are lesion images; all fundus images in DRIVE are 565 × 584 pixel resolution; averagely dividing the data set into a training set and a testing set;
preprocessing an original retina image, and constructing a training data loader;
establishing a convolutional neural network capable of extracting global information and reinforcing features, wherein the overall network uses an encoder-decoder framework and, during decoding, the feature maps undergo the operations of feature remapping → feature similarity calculation → feature reactivation, so that global information is integrated and easily lost features are reinforced;
initializing network parameters, setting the hyper-parameters related to training the neural network, and starting training;
testing and calculating performance indexes by using the trained model;
the flow chart of the invention is shown in fig. 1, and specifically as follows:
the device comprises an image processing main module, a neural network main module, a training main module and a detection main module;
the image processing main module is used for acquiring an original retina image, preprocessing the acquired original retina image to obtain a processed image, and inputting the preprocessed image to the training main module and the detection main module;
the neural network main module is used for establishing a convolutional neural network which can extract global information and strengthen characteristics;
the training main module is used for initializing network parameters, setting the hyper-parameters involved in training, and training the network to obtain a trained convolutional neural network model;
and the detection main module is used for testing by using the trained model and calculating the performance index of the model.
The second embodiment is as follows: this embodiment differs from the first embodiment in that the image processing main module is configured to collect an original retinal image, preprocess it to obtain a processed image, and input the preprocessed image to the training main module and the detection main module; the specific process is as follows:
the DRIVE public data set contains 40 color retinal images, 7 of which are lesion images; all fundus images in DRIVE are 565 × 584 pixel resolution; averagely dividing the data set into a training set and a testing set;
a1, acquiring training data and reading 20 images of a training set, wherein the 20 images are RGB three-channel color images, converting the 20 training images into single-channel gray images, performing histogram equalization on the converted images aiming at the problem that the contrast between the retinal background and blood vessels of the converted images is not obvious enough, enhancing the local contrast without influencing the overall contrast, and then performing gamma correction on the images to highlight the blood vessel structure;
a2, constructing a training data loader (image processing process including preprocessing, cutting and the like, the data loader is responsible for processing data of the convolutional neural network), in order to enable the trained network to be segmented aiming at images with different sizes, cutting the images after gamma correction, wherein the size of the images is 64 x 64, and as the proportion of blood vessel pixel points of the whole image to background pixel points is very small, in order to balance blood vessel and non-blood vessel training data, the image cutting process is not uniform cutting but selecting one blood vessel pixel point by probability p, cutting a region with the size of 64 x 64 around the selected pixel point as a center, and the value of p is 0.33; the training data loader continuously generates data through iterative loop (the number of iteration rounds in the hyper-parameter controls how many times the network is trained, and the training is stopped when the number of rounds reaches the network, and the data loader is also stopped).
Other steps and parameters are the same as those in the first embodiment.
The third embodiment is as follows: this embodiment differs from the first or second embodiment in that the neural network main module is used for establishing a convolutional neural network capable of extracting global information and reinforcing features; the specific process is as follows:
b1, constructing a convolutional neural coding network, wherein the general graph network structure is shown in FIG. 2, the first stage of a coding part is a convolutional layer with 3 convolutional cores of size 3 × 3, a ReLU (normalized Linear Unit) activation function and batch normalization are used after the convolutional layer, and the depth of a feature graph is 32; the second stage of the coding part is also a convolution layer with 3 convolution kernels of size 3 x 3, a ReLU activation function and batch normalization are used after the convolution layer, and the depth of a feature map is 64;
the third stage of encoding and the first stage of decoding are convolutional layers with a kernel size of 3 × 3; the second and third stages of decoding combine the global feature fusion module with convolutional layers of kernel size 3 × 3;
b2, performing global feature fusion and feature-loss enhancement on symmetric features in the joint encoding and decoding process, wherein the process can be specifically described as the process of feature remapping → feature similarity calculation → feature reactivation, as shown in fig. 3;
the convolutional neural network coding and decoding process is characterized by symmetry, has the same dimensionality and represents semantic information of the same level, and can be regarded as approximately meeting the same distribution; if the encoding characteristic and the decoding characteristic are mapped once, the similarity between the two characteristics can be calculated after the mapped characteristics also approximately meet the same distribution, the original characteristics are strengthened by using the similarity, and a 'characteristic remapping-characteristic similarity calculation-characteristic reactivation process' (the three processes are all executed twice for strengthening the characteristics) is executed twice in the whole network, and the first stage of encoding and the third stage of decoding, the second stage of encoding and the second stage of decoding are respectively aimed at; firstly, acquiring a coding process characteristic U and a decoding process characteristic D, and using convolution operation to complete remapping, namely:
wherein, the dimensionalities of U and D are both RW×H×CR is a real number; w represents the length of the feature map, H represents the width of the feature map, and C represents the number of channels of the feature map;for the features after remapping, the dimensions of the feature map after mapping are all three-dimensional, with dimension RW×H×C/4Changing the three-dimensional feature map into two-dimensional using a shape reshaping operation; after this stepHas a dimension of RN ×C/4Wherein N ═ W × H, N is an intermediate variable;
the calculation method of the similarity matrix between the features can be realized by a functionThe realization is as follows:
where T is the transpose of the matrix byCan obtain the vitaminDegree is RN×NUsing Softmax function to compress the feature similarity matrix to [0,1 ]]Obtaining the strengthening coefficient, and the specific strengthening coefficient can be expressed as:
wherein, aijRepresenting the value of the ith row and the jth column in the characteristic similarity matrix; the characteristics of the encoding and decoding process are superposed on the channel dimension and are arranged into a dimension R by using a convolution operationW×H×CThe feature map of (1) arranges the three-dimensional features into two dimensions through shape reshaping operation, multiplies the two dimensions by a similarity matrix, and adds the multiplication result and the feature superposition result in the encoding and decoding process to complete feature reinforcement;
the proposed feature remapping → feature similarity calculation → feature reactivation process uses all corresponding feature points of the encoding and decoding process, unlike a convolution operation in which each calculation covers only a small region, and it reinforces the original features through feature similarity, thereby avoiding the loss of important features;
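One way to realize the feature remapping → feature similarity → feature reactivation chain is sketched below in NumPy. This is an interpretation, not the author's exact network code: the learned convolutions are replaced by plain weight matrices (`Wf_u`, `Wf_d`, `Wg` are assumed stand-ins), and in a real network these would be trainable convolutional layers:

```python
import numpy as np

def global_feature_fusion(U, D, Wf_u, Wf_d, Wg):
    """Sketch of B2: feature remapping -> feature similarity ->
    feature reactivation. U, D: (W, H, C) encoder/decoder features with
    matching shapes. Wf_u, Wf_d: (C, C//4) remapping weights standing in
    for learned convolutions; Wg: (2*C, C) fusion weights applied to the
    channel-concatenated features."""
    Wdim, H, C = U.shape
    N = Wdim * H

    # Feature remapping: channels reduced to C/4, then reshaped to (N, C/4)
    U_hat = U.reshape(N, C) @ Wf_u
    D_hat = D.reshape(N, C) @ Wf_d

    # Feature similarity: S = U_hat . D_hat^T in R^{N x N}; a row-wise
    # Softmax compresses it to [0, 1], giving reinforcement coefficients
    S = U_hat @ D_hat.T
    S -= S.max(axis=1, keepdims=True)            # numerical stability
    A = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)

    # Feature reactivation: concatenate U and D on the channel axis,
    # project back to C channels, weight by A, and add as a residual
    F = np.concatenate([U, D], axis=-1).reshape(N, 2 * C) @ Wg
    return (A @ F + F).reshape(Wdim, H, C)
```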
b3, finally, dividing all pixel points of the retinal image into two types of blood vessels and non-blood vessels, compressing the feature map into two layers by using a convolution layer by an output part of the network, calculating probability values of the two types of blood vessels and non-blood vessels by using a Softmax function, and calculating a loss function, wherein the loss function is as follows:
wherein the content of the first and second substances,probability value, y, representing prediction of class kkRepresenting the true value of class k.
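The cross-entropy loss reconstructed above can be checked numerically; the helper name and the epsilon guard are illustrative additions:

```python
import math

def cross_entropy(y_hat, y):
    """Two-class cross-entropy as reconstructed from B3:
    L = -sum_k y_k * log(y_hat_k), where y_hat holds the Softmax
    probabilities for the vessel / non-vessel classes and y is the
    one-hot ground truth."""
    eps = 1e-12                                   # guard against log(0)
    return -sum(yk * math.log(pk + eps) for pk, yk in zip(y_hat, y))
```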
Other steps and parameters are the same as those in the first or second embodiment.
The fourth embodiment is as follows: this embodiment differs from the first to third embodiments in that the training main module is configured to initialize network parameters, set the hyper-parameters involved in training, and train the network; the specific process is as follows:
c1, initializing network parameters, setting the learning rate to approximately meet the fitting process of the network, wherein the learning rate is usually higher first and lower later; setting the number of training rounds and the number of times of each iteration, setting the learning rate to be 0.001, decreasing the learning rate by 0.00002 in each iteration, and starting training after all relevant hyper-parameters of the training are set;
the network parameters are initialized and the hyper-parameters of the training process are set; training uses batch iteration with a batch size of 16, 50 rounds in total and 6000 images iterated per round; the learning rate decreases by rounds: 0.001 for rounds 1-15, 0.0005 for rounds 16-40, and 0.0001 for the last 10 rounds; the optimizer is Adam;
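The stepwise learning-rate schedule maps directly to a small helper; the function name and 1-indexed round numbering are assumptions:

```python
def learning_rate(epoch):
    """Step schedule from C1 (rounds 1-indexed): 0.001 for rounds 1-15,
    0.0005 for rounds 16-40, 0.0001 for the last 10 rounds (41-50)."""
    if epoch <= 15:
        return 0.001
    if epoch <= 40:
        return 0.0005
    return 0.0001
```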
and C2, when the network reaches the set number of training rounds, or its accuracy does not improve within 5 rounds, it is considered to have converged, and the trained network parameters are saved.
Other steps and parameters are the same as those in one of the first to third embodiments.
The fifth embodiment is as follows: this embodiment differs from the first to fourth embodiments in that the detection main module is configured to test with the trained model and calculate the model performance indexes; the specific process is as follows:
d1, executing the image preprocessing process of the first step on the test set image, uniformly cutting the test set image into 64 x 64 image blocks and inputting the image blocks into the trained network test, wherein the process does not execute the updating of the network parameters; after testing of all image blocks is completed, splicing the segmentation images output by the network into the size of the original image;
d2, calculating performance indexes such as accuracy, specificity, sensitivity and the like of the test result.
Other steps and parameters are the same as those in one of the first to fourth embodiments.
Examples
TABLE 1 segmentation comparison results
 | Accuracy | Sensitivity | Specificity | Area under curve |
---|---|---|---|---|
The method of the invention | 95.98% | 77.92% | 98.37% | 0.9842 |
Generic encoder-decoder network | 95.38% | 77.41% | 97.41% | 0.9783 |
As the table shows, the method provided by the invention improves all four evaluation indexes of retinal vessel image segmentation: accuracy, sensitivity, specificity and area under the curve.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.
Claims (4)
1. A retinal blood vessel image segmentation system based on a global-information convolutional neural network, characterized by comprising:
the device comprises an image processing main module, a neural network main module, a training main module and a detection main module;
the image processing main module is used for acquiring an original retina image, preprocessing the acquired original retina image to obtain a processed image, and inputting the preprocessed image to the training main module and the detection main module;
the neural network main module is used for establishing a convolutional neural network which can extract global information and strengthen characteristics; the specific process is as follows:
b1, constructing a convolutional neural coding network, wherein the first stage of a coding part is a convolutional layer with 3 convolutional cores of size 3 multiplied by 3, a ReLU activation function and batch normalization are used after the convolutional layer, and the depth of a feature map is 32; the second stage of the coding part is also a convolution layer with 3 convolution kernels of size 3 x 3, a ReLU activation function and batch normalization are used after the convolution layer, and the depth of a feature map is 64;
the third stage of encoding and the first stage of decoding are convolutional layers with a kernel size of 3 × 3; the second and third stages of decoding combine the global feature fusion module with convolutional layers of kernel size 3 × 3;
b2, carrying out global feature fusion and easy-loss feature reinforcement on symmetrical features in the joint encoding and decoding process, and specifically describing the process of feature remapping → feature similarity calculation → feature reactivation;
the original features are reinforced using the similarity; the process of feature remapping, feature similarity calculation and feature reactivation is executed twice in the whole network, once between the first stage of encoding and the third stage of decoding and once between the second stage of encoding and the second stage of decoding; first, the encoding feature U and the decoding feature D are acquired, and a convolution operation f(·) completes the remapping, namely:

Û = f(U), D̂ = f(D)

wherein the dimensions of U and D are both R^(W×H×C), R being the real numbers; W represents the length of the feature map, H its width, and C its number of channels; Û and D̂ are the remapped features; after mapping, the feature maps are still three-dimensional, with dimension R^(W×H×C/4); a shape-reshaping operation flattens each three-dimensional feature map to two dimensions, after which Û and D̂ have dimension R^(N×C/4), wherein N = W × H, N being an intermediate variable;

the similarity matrix between the features is computed by the function F(Û, D̂):

S = F(Û, D̂) = Û · D̂^T

wherein T denotes the matrix transpose; Û · D̂^T yields a matrix of dimension R^(N×N); a Softmax function compresses the feature similarity matrix into [0, 1] to obtain the reinforcement coefficients, expressed as:

a_ij = exp(s_ij) / Σ_j exp(s_ij)

wherein a_ij is the value in row i, column j of the feature similarity matrix; the encoding and decoding features are concatenated along the channel dimension and arranged by a convolution operation into a feature map of dimension R^(W×H×C); a shape-reshaping operation arranges this three-dimensional feature into two dimensions, which is multiplied by the similarity matrix, and the product is added to the concatenated encoding-decoding feature to complete the feature reinforcement;
B3, compressing the feature map to two channels with a convolutional layer, computing the probability values of the vessel and non-vessel classes with a Softmax function, and computing the loss, which is the cross-entropy:

L = −Σ_k y_k · log(ŷ_k)

wherein ŷ_k represents the predicted probability of class k and y_k the true value of class k;
the training main module is used for initializing network parameters, setting the hyper-parameters involved in training, and training the network to obtain a trained convolutional neural network model;
and the detection main module is used for testing by using the trained model and calculating the performance index of the model.
2. The retinal blood vessel image segmentation system based on the global information convolutional neural network as claimed in claim 1, wherein the image processing main module is configured to collect an original retinal image, pre-process the collected original retinal image to obtain a processed image, and input the pre-processed image to the training main module and the detection main module; the specific process is as follows:
the DRIVE public data set contains 40 color retinal images, 7 of which are lesion images; all fundus images in DRIVE have a resolution of 565 × 584 pixels; the data set is divided evenly into a training set and a test set;
a1, acquiring training data and reading 20 images of a training set, wherein the 20 images are RGB three-channel color images, converting the 20 training images into single-channel gray images, performing histogram equalization on the converted images, and performing gamma correction on the images;
a2, constructing a training data loader, cutting the image subjected to gamma correction, wherein the size of the image is 64 x 64, in the image cutting process, a blood vessel pixel point is selected according to probability p, a region with the size of 64 x 64 around the selected pixel point is cut by taking the selected pixel point as a center, and the value of p is 0.33; the training data loader continuously generates data through iterative loop, and the training is stopped when the number of iterative rounds is reached.
3. The retinal vessel image segmentation system based on the global-information convolutional neural network as claimed in claim 2, wherein the training main module is used for initializing network parameters, setting the hyper-parameters involved in training, and training the network; the specific process is as follows:
c1, initializing network parameters, setting hyper-parameters of a training process, wherein the training adopts a batch iteration method, the batch size is set to 16, 50 iterations are performed in total, and 6000 images are iterated in each iteration; the setting of the learning rate is decreased according to the number of rounds, the learning rate of the 1 st to 15 th rounds is set to 0.001, the learning rate of the 16 th to 40 th rounds is set to 0.0005, and the learning rate of the last 10 rounds is set to 0.0001; the optimizer uses an Adam optimizer;
and C2, when the network reaches the set number of training rounds, or its accuracy does not improve within 5 rounds, it is considered to have converged, and the trained network parameters are saved.
4. The retinal vessel image segmentation system based on the global information convolutional neural network as claimed in claim 3, wherein the detection main module is configured to perform a test using a trained model and calculate a model performance index; the specific process is as follows:
d1, executing the image preprocessing process of the first step on the test set image, uniformly cutting the test set image into 64 x 64 image blocks and inputting the image blocks into the trained network test, wherein the process does not execute the updating of the network parameters; after testing of all image blocks is completed, splicing the segmentation images output by the network into the size of the original image;
d2, calculating the accuracy, specificity and sensitivity performance indexes of the test result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010309418.6A CN111598894B (en) | 2020-04-17 | 2020-04-17 | Retina blood vessel image segmentation system based on global information convolution neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010309418.6A CN111598894B (en) | 2020-04-17 | 2020-04-17 | Retina blood vessel image segmentation system based on global information convolution neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111598894A CN111598894A (en) | 2020-08-28 |
CN111598894B true CN111598894B (en) | 2021-02-09 |
Family
ID=72190377
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010309418.6A Active CN111598894B (en) | 2020-04-17 | 2020-04-17 | Retina blood vessel image segmentation system based on global information convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111598894B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112215840A (en) * | 2020-10-30 | 2021-01-12 | 上海商汤临港智能科技有限公司 | Image detection method, image detection device, driving control method, driving control device, electronic equipment and storage medium |
CN114170150B (en) * | 2021-11-17 | 2023-12-19 | 西安交通大学 | Retina exudates full-automatic segmentation method based on curvature loss function |
CN114663421B (en) * | 2022-04-08 | 2023-04-28 | 皖南医学院第一附属医院(皖南医学院弋矶山医院) | Retina image analysis system and method based on information migration and ordered classification |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106408562A (en) * | 2016-09-22 | 2017-02-15 | 华南理工大学 | Fundus image retinal vessel segmentation method and system based on deep learning |
CN107016676A (en) * | 2017-03-13 | 2017-08-04 | 三峡大学 | Retinal vascular image segmentation method and system based on PCNN |
CN107292887A (en) * | 2017-06-20 | 2017-10-24 | 电子科技大学 | Retinal blood vessel segmentation method based on deep-learning adaptive weighting |
CN108510473A (en) * | 2018-03-09 | 2018-09-07 | 天津工业大学 | FCN retinal image vessel segmentation combining depthwise separable convolution and channel weighting |
CN109726743A (en) * | 2018-12-12 | 2019-05-07 | 苏州大学 | Retinal OCT image classification method based on three-dimensional convolutional neural network |
CN110148111A (en) * | 2019-04-01 | 2019-08-20 | 江西比格威医疗科技有限公司 | Automatic detection method for multiple retinal lesions in retinal OCT images |
CN110930418A (en) * | 2019-11-27 | 2020-03-27 | 江西理工大学 | Retinal vessel segmentation method fusing W-net and conditional generative adversarial network |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108986124A (en) * | 2018-06-20 | 2018-12-11 | 天津大学 | Retinal vascular image segmentation method using a multi-scale feature convolutional neural network |
CN109087302A (en) * | 2018-08-06 | 2018-12-25 | 北京大恒普信医疗技术有限公司 | Fundus image blood vessel segmentation method and device |
CN109345538B (en) * | 2018-08-30 | 2021-08-10 | 华南理工大学 | Retinal vessel segmentation method based on convolutional neural network |
CN109685813B (en) * | 2018-12-27 | 2020-10-13 | 江西理工大学 | U-shaped retinal vessel segmentation method capable of adapting to scale information |
CN110473188B (en) * | 2019-08-08 | 2022-03-11 | 福州大学 | Fundus image blood vessel segmentation method based on Frangi enhancement and attention mechanism UNet |
- 2020-04-17: CN application CN202010309418.6A granted as patent CN111598894B (active)
Also Published As
Publication number | Publication date |
---|---|
CN111598894A (en) | 2020-08-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111598894B (en) | Retina blood vessel image segmentation system based on global information convolution neural network | |
CN107578416B (en) | Full-automatic heart left ventricle segmentation method for coarse-to-fine cascade deep network | |
CN110969626B (en) | Method for extracting hippocampus of human brain nuclear magnetic resonance image based on 3D neural network | |
CN109584254A (en) | Left-ventricle segmentation method based on deep fully convolutional neural networks | |
CN109461495A (en) | Medical image recognition method, model training method and server | |
CN111798464A (en) | Lymphoma pathological image intelligent identification method based on deep learning | |
CN110533683B (en) | Image omics analysis method fusing traditional features and depth features | |
CN115205300B (en) | Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion | |
CN111738363B (en) | Alzheimer disease classification method based on improved 3D CNN network | |
CN110570432A (en) | CT image liver tumor segmentation method based on deep learning | |
CN113344864A (en) | Ultrasonic thyroid nodule benign and malignant prediction method based on deep learning | |
CN113610859B (en) | Automatic thyroid nodule segmentation method based on ultrasonic image | |
CN114864076A (en) | Multi-modal breast cancer classification training method and system based on graph attention network | |
CN113393469A (en) | Medical image segmentation method and device based on cyclic residual convolutional neural network | |
CN112785593A (en) | Brain image segmentation method based on deep learning | |
CN113269799A (en) | Cervical cell segmentation method based on deep learning | |
CN111242949B (en) | Fundus image blood vessel segmentation method based on full convolution neural network multi-scale features | |
CN115375711A (en) | Image segmentation method of global context attention network based on multi-scale fusion | |
CN113344933B (en) | Glandular cell segmentation method based on multi-level feature fusion network | |
CN114943721A (en) | Neck ultrasonic image segmentation method based on improved U-Net network | |
CN114565601A (en) | Improved liver CT image segmentation algorithm based on DeepLabV3+ | |
CN113421250A (en) | Intelligent fundus disease diagnosis method based on lesion-free image training | |
CN112863650A (en) | Cardiomyopathy identification system based on convolution and long-short term memory neural network | |
CN117036288A (en) | Tumor subtype diagnosis method for full-slice pathological image | |
CN116758336A (en) | Medical image intelligent analysis system based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |