CN112085745A - Retinal vessel image segmentation method of multi-channel U-shaped full convolution neural network based on balanced sampling splicing


Info

Publication number: CN112085745A
Authority: CN (China)
Prior art keywords: image, channel, segmentation, network, full convolution
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010931829.9A
Other languages: Chinese (zh)
Inventors: 魏丽芳, 张天一, 张婷, 杨长才, 周术诚, 陈日清
Current assignee: Fujian Agriculture and Forestry University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Fujian Agriculture and Forestry University
Priority date: 2020-09-07 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2020-09-07
Publication date: 2020-12-15
Application filed by: Fujian Agriculture and Forestry University
Priority application: CN202010931829.9A
Publication: CN112085745A
Current legal status: Pending

Classifications

    • G06T 7/11 — Image analysis; segmentation: region-based segmentation
    • G06N 3/045 — Neural networks; architecture: combinations of networks
    • G06N 3/084 — Neural networks; learning methods: backpropagation, e.g. using gradient descent
    • G06T 5/40 — Image enhancement or restoration using histogram techniques
    • G06T 7/194 — Segmentation; edge detection involving foreground-background segmentation
    • G06T 2207/10024 — Image acquisition modality: color image
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30041 — Subject of image: eye; retina; ophthalmic
    • G06T 2207/30101 — Subject of image: blood vessel; artery; vein; vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a retinal vessel image segmentation method using a multi-channel U-shaped full convolution neural network based on balanced sampling splicing, which comprises the following steps. Step S1: preprocess the original color retinal image by three-channel histogram equalization combined with gamma correction using an image-specific gamma value. Step S2: construct multi-scale balanced-division sampling points, then randomly splice the image blocks to expand the data samples. Step S3: normalize the image data. Step S4: input the normalized image data into a retinal vessel segmentation model to segment the retinal vessel network, the retinal vessel segmentation model being obtained by training a color-sensitive three-channel U-Net full convolution network on the normalized image data generated from the training set. The method offers clear advantages in accuracy, sensitivity, specificity and AUC.

Description

Retinal vessel image segmentation method of multi-channel U-shaped full convolution neural network based on balanced sampling splicing
Technical Field
The invention relates to the field of biomedical image processing, in particular to a retinal vessel image segmentation method of a multi-channel U-shaped full convolution neural network based on balanced sampling splicing.
Background
By observing the state of the retinal blood vessels, doctors can obtain partial information about the state of the human body non-invasively and without harm, which in turn assists the diagnosis of disease. This vascular information generally has to be extracted with a retinal image segmentation technique, which is an image processing problem.
Retinal image segmentation methods in the prior art fall roughly into two categories. (1) Unsupervised retinal image segmentation: unsupervised methods look for the inherent patterns of blood vessels in the fundus image to decide whether a pixel belongs to a vessel, and the algorithms do not rely on training data or manually labeled gold standards; examples include multi-scale matched filtering and clustering algorithms. (2) Supervised retinal image segmentation: in supervised methods, the rules for vessel extraction are learned by an algorithm from a training set with manual labels and are then used to segment test images, the vascular structures serving as the gold standard being precisely labeled by an ophthalmologist. Supervised classification methods are designed on pre-classified data, generally perform better than unsupervised methods, and yield very good segmentation results on healthy retinal images; examples include algorithms based on SVM classifiers, AdaBoost classifiers and deep convolutional neural networks. Achieving high-precision retinal vessel segmentation by training a segmentation model with a supervised machine learning algorithm has become a research hotspot in recent years.
Disclosure of Invention
To fill this gap in the prior art, the invention aims to provide a retinal vessel segmentation method using a multi-channel U-shaped full convolution neural network based on balanced sampling splicing. A multi-scale balanced-sampling random-splicing strategy expands the number of samples, improves the diversity and robustness of the training data, avoids the over-training and over-fitting problems caused by an insufficient amount of sample data, and thereby provides the basis for high-precision vessel segmentation.
First, the data are preprocessed with histogram equalization and gamma correction using an image-specific gamma value to increase the color difference between vessels and background; then the images are divided into balanced image blocks around multi-scale fixed center points and the blocks are randomly spliced to expand the image data; finally, a color-sensitive three-channel U-Net network is trained to produce a retinal vessel segmentation model that segments the retinal vessels accurately. For the segmentation network, a U-Net framework that accepts the three color components as input is chosen, giving a color-sensitive multi-channel U-Net full convolution network; accordingly, in the preprocessing the three color components are histogram-equalized and the equalized histograms are compressed by adjusting the gamma value, which increases the color difference. The multi-scale balanced sampling further enhances the diversity and robustness of the training set and prevents the over-fitting caused by an insufficient amount of data, so that the algorithm gains clear advantages in accuracy, sensitivity, specificity and AUC.
The invention specifically adopts the following technical scheme:
a retinal vessel image segmentation method of a multi-channel U-shaped full convolution neural network based on balanced sampling splicing is characterized by comprising the following steps:
step S1: preprocessing the original color retinal image by three-channel histogram equalization combined with gamma correction using an image-specific gamma value;
step S2: constructing multi-scale balanced-division sampling points and then randomly splicing the image blocks to expand the data samples;
step S3: normalizing the image data;
step S4: inputting the normalized image data into a retinal vessel segmentation model to perform retinal vessel network segmentation, wherein the retinal vessel segmentation model is obtained by training a color-sensitive three-channel U-Net full convolution network on the normalized image data generated from the training set.
Preferably, step S2 specifically comprises the following steps:
step S21: determining, from the given image block size, the number of non-overlapping image blocks into which an image can be divided;
step S22: calculating the fixed center-point coordinates of each image block and dividing the image into equal-sized image blocks;
step S23: randomly taking a certain number of image blocks from the image block list and splicing them into a new image of the same size as the original image.
Preferably, the color-sensitive three-channel U-Net full convolution network builds on the U-Net model architecture and adopts three-channel input to learn the information in each color component, with padding added so that input and output images keep the same structure and size; the network has independent convolution kernels for the three color components and contains convolutional layers, ReLU activation functions, pooling layers, upsampling layers and skip connections. After an image passes through the network, a predicted segmentation map with 1 channel and the same size as the original image is produced and is used together with the ground-truth segmentation map to compute the loss function.
Preferably, the color-sensitive three-channel U-Net full convolution network uses a binary (two-class) cross-entropy loss function and maps pixel values with a Sigmoid function; given the symbol definitions below, the loss function takes the standard binary cross-entropy form

Loss = -\frac{1}{N} \sum_{i=1}^{N} \left[ P_0^{(i)} \log \hat{P}_0^{(i)} + P_1^{(i)} \log \hat{P}_1^{(i)} \right]

wherein P_0 denotes the probability that a pixel belongs to the vessel structure in the ground-truth map, P_1 denotes the probability that the pixel belongs to a non-vessel structure in the ground-truth map, N is the total number of pixels, \hat{P}_0 is the probability that the network outputs the pixel as a vessel pixel after Sigmoid mapping, and \hat{P}_1 is the probability that the network outputs the pixel as a non-vessel pixel after Sigmoid mapping.
Compared with the prior art, the invention and the preferred scheme thereof have the following beneficial effects:
1. The method preprocesses the image data with histogram equalization and gamma correction using an image-specific gamma value, which enhances the contrast between the background and the vessel structure and lays the foundation for the subsequent accurate segmentation of the vessels;
2. training data are generated by balanced image-block sampling around multi-scale fixed center points followed by random splicing, which increases the number of images in the data set without reducing the robustness of the training samples and prevents the model misjudgment and over-fitting caused by a non-uniform training set and an insufficient amount of data;
3. the multi-channel U-Net network adopted by the invention makes full use of the color information of the retinal image and, combined with the preprocessing and sampling strategies, achieves high-precision prediction of the retinal vessel image.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a schematic overall flow diagram of an embodiment of the present invention;
Detailed Description
In order to make the features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail as follows:
As shown in FIG. 1, the training process of the retinal vessel image segmentation method based on the multi-channel U-shaped full convolution neural network with balanced sampling splicing provided by this embodiment includes the following steps.
First, the training-set images are enhanced by histogram equalization and gamma correction preprocessing with an image-specific gamma value; then the images are divided into blocks around multi-scale fixed center points and the data samples are randomly spliced, a balanced-sampling and data-expansion strategy. Finally, model training is performed with a color-sensitive three-channel-input U-Net full convolution network to generate the retinal vessel segmentation model.
In the testing step, after the same preprocessing, the trained retinal vessel segmentation model is used to predict and segment the retinal vessel network accurately.
In this embodiment, the retinal image is enhanced by histogram equalization and gamma correction preprocessing with an image-specific gamma value as follows. Histogram equalization is applied to the color components of all 3 channels in order to obtain a large color difference. Since the color difference of the image can still be enhanced further after histogram equalization, the equalized image is corrected with a gamma value selected according to the image characteristics so as to compress the histogram, which increases the visibility of the vessel structure and improves the contrast between the target retinal vessel structure and the background.
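A minimal sketch of this preprocessing step is given below, assuming OpenCV and NumPy; the gamma heuristic is hypothetical, since the patent states only that the gamma value is selected according to the image characteristics.

```python
import cv2
import numpy as np

def preprocess_retina(img_bgr, gamma=None):
    """Equalize each of the three color channels, then gamma-correct.

    img_bgr : uint8 color fundus image of shape (H, W, 3).
    gamma   : correction exponent; if None, a simple image-dependent
              heuristic is used (an assumption -- the patent only says the
              gamma value is chosen from the image characteristics).
    """
    # Three-channel histogram equalization.
    equalized = cv2.merge([cv2.equalizeHist(c) for c in cv2.split(img_bgr)])

    # Hypothetical heuristic: pick gamma so the mean gray level maps near 0.5.
    if gamma is None:
        mean = equalized.mean() / 255.0
        gamma = np.log(0.5) / np.log(max(mean, 1e-6))

    # Gamma correction via a lookup table compresses the equalized histogram.
    table = (np.linspace(0.0, 1.0, 256) ** gamma * 255.0).astype(np.uint8)
    return cv2.LUT(equalized, table)
```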
The implementation mode of the equalization sampling and data expansion strategy of randomly splicing data samples after the multi-scale fixed central point image block is divided is as follows: firstly, determining how many image blocks can be repeatedly divided in an image according to the sizes of the image blocks, calculating the coordinates of a fixed central point of each image block in the image, dividing the image into equivalent image blocks, and randomly taking out a certain number of image blocks from an image block list to splice into new images with the same size as the original size. Therefore, even though the image block division is not randomized, the image block samples with huge number can be randomly extracted and spliced, and the number of images in the data set can still be amplified on the premise of not reducing the robustness. Meanwhile, the sampling of the equalized image blocks is realized in a fixed center point mode, any position of each image is guaranteed to have sampling, and misjudgment of the model due to the fact that masks appear in training samples and the number of junctions of the masks is small and overfitting is caused is prevented. On this basis, the present embodiment selects to sample image blocks of different scales: the method comprises the steps of forming a new image by 7 scale image blocks of 4, 16, 25, 49, 64, 100, 196 and the like, randomly splicing 15 new data in each scale, enhancing training samples in different scales, adding training images of an original data set into the training samples to form 125 pieces of training image data, and preventing overfitting caused by insufficient data volume and simultaneously enhancing the robustness of the training samples.
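The block division and random splicing can be sketched as follows. This is a simplified illustration: whether blocks are drawn with or without replacement, and whether blocks from several source images are pooled, is not specified in the patent, so both choices here are assumptions.

```python
import numpy as np

def split_into_blocks(img, n_blocks):
    """Divide an image into a fixed grid of n_blocks equal-sized blocks.

    n_blocks must be a perfect square (4, 16, 25, 49, 64, 100 or 196 in the
    embodiment); the block centers are fixed by the grid, so every position
    of the image is covered.
    """
    g = int(round(np.sqrt(n_blocks)))          # blocks per side
    h, w = img.shape[0] // g, img.shape[1] // g
    blocks = [img[i * h:(i + 1) * h, j * w:(j + 1) * w]
              for i in range(g) for j in range(g)]
    return blocks, g, h, w

def random_splice(blocks, g, h, w, rng):
    """Draw g*g blocks at random (with replacement -- an assumption) and
    splice them into a new image of the same size as the divided region."""
    canvas = np.zeros((g * h, g * w) + blocks[0].shape[2:], dtype=blocks[0].dtype)
    for i in range(g):
        for j in range(g):
            canvas[i * h:(i + 1) * h, j * w:(j + 1) * w] = blocks[rng.integers(len(blocks))]
    return canvas

# Usage sketch: 15 spliced images per scale over 7 scales (105 images), plus
# the original training images (20 by the arithmetic in the text), gives the
# 125 training images mentioned above.
# rng = np.random.default_rng(0)
# blocks, g, h, w = split_into_blocks(train_img, 16)
# new_img = random_splice(blocks, g, h, w, rng)
```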
In this embodiment, the data normalization strategy is implemented as follows. Because three-channel data input is used, each color component of the three channels is normalized for every image, for consistency of data processing. This normalization is the last processing step before an image is fed into the network; a normalization is also performed in the earlier preprocessing step that applies the image-specific gamma correction, but there the image is restored to its original range when it is saved.
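A short sketch of the per-channel normalization; zero-mean/unit-variance scaling is an assumption, since the patent only says each color component is normalized, and min-max scaling to [0, 1] would be an equally plausible reading.

```python
import numpy as np

def normalize_channels(img):
    """Normalize each of the three color channels independently."""
    img = img.astype(np.float32)
    for c in range(img.shape[2]):
        channel = img[..., c]
        # Zero mean, unit variance per channel (assumed scheme).
        img[..., c] = (channel - channel.mean()) / (channel.std() + 1e-8)
    return img
```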
The color-sensitive three-channel-input U-Net full convolution network of this embodiment follows the U-Net model architecture. It differs from the original network in that 3-channel input is used so that the information in each color component is learned, and padding is added so that the input and output images keep the same structure and size. In addition, the network keeps some convolution kernels independent for the three color components, which makes the U-Net sensitive to color differences. The network contains convolutional layers, ReLU activation functions, pooling layers, upsampling layers and skip connections. After an image passes through the network, a predicted binary segmentation map with 1 channel and the same size as the original image is produced; this prediction is used together with the ground-truth segmentation map to compute the loss function, which in turn drives further improvement of the network.
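A compact PyTorch sketch of such a network is given below. The depth, channel widths and the use of a grouped first convolution (one independent kernel set per color component) are assumptions; the patent specifies only 3-channel input, padding that preserves the image size, convolution with ReLU, pooling, upsampling, skip connections and a 1-channel output.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolutions with padding=1 (sizes are preserved) + ReLU."""
    def __init__(self, in_ch, out_ch, groups=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class ColorUNet(nn.Module):
    """Small U-Net-style network with 3-channel input and 1-channel output.

    The first convolution uses groups=3, i.e. an independent kernel set per
    color component -- one possible reading of the patent's "independent
    convolution kernels for the three color components". Input height and
    width are assumed divisible by 4.
    """
    def __init__(self):
        super().__init__()
        self.enc1 = DoubleConv(3, 24, groups=3)   # per-channel kernels
        self.enc2 = DoubleConv(24, 48)
        self.enc3 = DoubleConv(48, 96)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(96, 48, 2, stride=2)
        self.dec2 = DoubleConv(96, 48)
        self.up1 = nn.ConvTranspose2d(48, 24, 2, stride=2)
        self.dec1 = DoubleConv(48, 24)
        self.out = nn.Conv2d(24, 1, 1)            # 1-channel prediction (logits)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.out(d1)   # apply Sigmoid outside, e.g. inside the loss
```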
The retinal vessel segmentation model is generated by training the color-sensitive three-channel-input U-Net full convolution network as follows. At the start of training, the preset parameter information is read, and then the input image list is read; the preprocessed images are normalized as they enter the network. After the optimizer has adjusted the learning rate, a training sample is passed through the network to predict a segmentation map, and the loss value between the prediction and the ground-truth map is computed. Because the ground-truth segmentation map is fixed, its distribution entropy is also fixed, so the network uses a binary (two-class) cross-entropy loss (Cross Entropy Loss); at the same time, to mitigate the class imbalance of the samples, a Sigmoid function is used to map the pixel values. Given the symbol definitions below, the loss function takes the standard binary cross-entropy form

Loss = -\frac{1}{N} \sum_{i=1}^{N} \left[ P_0^{(i)} \log \hat{P}_0^{(i)} + P_1^{(i)} \log \hat{P}_1^{(i)} \right]

where P_0 denotes the probability that a pixel belongs to the vessel structure in the label (ground-truth) map, P_1 denotes the probability that the pixel belongs to a non-vessel structure in the label map, N is the total number of pixels, \hat{P}_0 is the probability that the network outputs the pixel as a vessel pixel after Sigmoid mapping, and \hat{P}_1 is the probability that the network outputs the pixel as a non-vessel pixel after Sigmoid mapping.
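Assuming the ground-truth labels are binary (so that P_1 = 1 − P_0), this loss reduces to the standard binary cross-entropy on Sigmoid outputs, which PyTorch provides directly:

```python
import torch
import torch.nn.functional as F

def vessel_bce_loss(logits, target):
    """Pixel-wise binary cross-entropy for vessel segmentation.

    logits : raw 1-channel network output of shape (B, 1, H, W).
    target : ground-truth vessel map with values in {0, 1}, same shape.

    With P0 = target, P1 = 1 - target and P0_hat = sigmoid(logits), this is
    exactly -(1/N) * sum[ P0 * log(P0_hat) + P1 * log(P1_hat) ].
    """
    return F.binary_cross_entropy_with_logits(logits, target.float())
```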
The network is then updated with the back-propagation algorithm, and these steps are repeated until training is complete, producing the network model. A test image is given the same preprocessing and sampling as the training samples and is then fed into the resulting network model to obtain the predicted image of the vessel network.
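The training loop described above can be sketched as follows, reusing the hypothetical ColorUNet and vessel_bce_loss from the earlier sketches; the Adam optimizer and the number of epochs are assumptions, since the patent only says that an optimizer adjusts the learning rate.

```python
import torch

def train_model(model, loader, epochs=50, lr=1e-3, device="cpu"):
    """Minimal training loop: read batches, predict, compute the BCE loss,
    back-propagate and update until training finishes."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # optimizer choice is an assumption
    for _ in range(epochs):
        for images, labels in loader:      # normalized spliced patches + vessel maps
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = vessel_bce_loss(model(images), labels)
            loss.backward()                # back-propagation step
            optimizer.step()
    return model

# Testing sketch: apply the same preprocessing/normalization to a test image,
# then threshold the Sigmoid output, e.g.:
# pred = torch.sigmoid(model(test_batch)) > 0.5   # predicted vessel map
```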
The present invention is not limited to the preferred embodiments described above; according to the teaching of the present invention, other forms of retinal vessel image segmentation methods based on a multi-channel U-shaped full convolution neural network with balanced sampling splicing can be obtained.

Claims (4)

1. A retinal vessel image segmentation method using a multi-channel U-shaped full convolution neural network based on balanced sampling splicing, characterized by comprising the following steps:
step S1: preprocessing the original color retinal image by three-channel histogram equalization combined with gamma correction using an image-specific gamma value;
step S2: constructing multi-scale balanced-division sampling points and then randomly splicing the image blocks to expand the data samples;
step S3: normalizing the image data;
step S4: inputting the normalized image data into a retinal vessel segmentation model to perform retinal vessel image segmentation, wherein the retinal vessel segmentation model is obtained by training a color-sensitive three-channel U-Net full convolution network on the normalized image data generated from the training set.
2. The retinal vessel image segmentation method based on the multi-channel U-shaped full convolution neural network with balanced sampling splicing as claimed in claim 1, characterized in that step S2 specifically comprises the following steps:
step S21: determining, from the given image block size, the number of non-overlapping image blocks into which an image can be divided;
step S22: calculating the fixed center-point coordinates of each image block and dividing the image into equal-sized image blocks;
step S23: randomly taking a certain number of image blocks from the image block list and splicing them into a new image of the same size as the original image.
3. The retinal vessel image segmentation method based on the multi-channel U-shaped full convolution neural network with balanced sampling splicing as claimed in claim 1, characterized in that:
the color-sensitive three-channel U-Net full convolution network builds on the U-Net model architecture and adopts three-channel input to learn the information in each color component, with padding added so that input and output images keep the same structure and size; the network has independent convolution kernels for the three color components and contains convolutional layers, ReLU activation functions, pooling layers, upsampling layers and skip connections; after an image passes through the network, a predicted segmentation map with 1 channel and the same size as the original image is produced and is used together with the ground-truth segmentation map to compute the loss value.
4. The retinal vessel image segmentation method based on the multi-channel U-shaped full convolution neural network with balanced sampling splicing as claimed in claim 3, characterized in that:
the color-sensitive three-channel U-Net full convolution network uses a binary (two-class) cross-entropy loss function and maps pixel values with a Sigmoid function; given the symbol definitions below, the loss function takes the standard binary cross-entropy form

Loss = -\frac{1}{N} \sum_{i=1}^{N} \left[ P_0^{(i)} \log \hat{P}_0^{(i)} + P_1^{(i)} \log \hat{P}_1^{(i)} \right]

wherein P_0 denotes the probability that a pixel belongs to the vessel structure in the ground-truth map, P_1 denotes the probability that the pixel belongs to a non-vessel structure in the ground-truth map, N is the total number of pixels, \hat{P}_0 is the probability that the network outputs the pixel as a vessel pixel after Sigmoid mapping, and \hat{P}_1 is the probability that the network outputs the pixel as a non-vessel pixel after Sigmoid mapping.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010931829.9A CN112085745A (en) 2020-09-07 2020-09-07 Retinal vessel image segmentation method of multi-channel U-shaped full convolution neural network based on balanced sampling splicing

Publications (1)

Publication Number Publication Date
CN112085745A true CN112085745A (en) 2020-12-15

Family

ID=73732060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010931829.9A Pending CN112085745A (en) 2020-09-07 2020-09-07 Retinal vessel image segmentation method of multi-channel U-shaped full convolution neural network based on balanced sampling splicing

Country Status (1)

Country Link
CN (1) CN112085745A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127709A (en) * 2016-06-24 2016-11-16 East China Normal University Low-luminance color fundus image assessment and enhancement method
CN108510532A (en) * 2018-03-30 2018-09-07 Xidian University Optical and SAR image registration method based on a deep convolutional GAN
CN108986124A (en) * 2018-06-20 2018-12-11 Tianjin University Retinal vessel image segmentation method using a convolutional neural network combined with multi-scale features
CN109345538A (en) * 2018-08-30 2019-02-15 South China University of Technology Retinal blood vessel segmentation method based on convolutional neural networks
CN109859146A (en) * 2019-02-28 2019-06-07 University of Electronic Science and Technology of China Color fundus image vessel segmentation method based on a U-Net convolutional neural network
CN110706233A (en) * 2019-09-30 2020-01-17 University of Science and Technology Beijing Retinal fundus image segmentation method and device
CN111563528A (en) * 2020-03-31 2020-08-21 Northwestern Polytechnical University SAR image classification method based on a multi-scale feature learning network and bilateral filtering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李大湘; 张振: "Retinal vessel image segmentation algorithm based on an improved U-Net", Acta Optica Sinica (光学学报), no. 10, 25 May 2020 (2020-05-25) *
鲍文霞 et al.: "Image recognition of wheat Fusarium head blight in the field based on a multi-path convolutional neural network", Transactions of the Chinese Society of Agricultural Engineering (农业工程学报), 8 June 2020 (2020-06-08), pages 1 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256743A (en) * 2021-06-16 2021-08-13 图兮数字科技(北京)有限公司 Image processing method and device, electronic equipment and readable storage medium
CN113671660A (en) * 2021-08-13 2021-11-19 Oppo广东移动通信有限公司 Image generation method and device, electronic equipment and storage medium
CN114648814A (en) * 2022-02-25 2022-06-21 北京百度网讯科技有限公司 Face living body detection method, training method, device, equipment and medium of model
CN114708266A (en) * 2022-06-07 2022-07-05 青岛通产智能科技股份有限公司 Tool, method and device for detecting card defects and medium
CN115690143A (en) * 2022-09-26 2023-02-03 推想医疗科技股份有限公司 Image segmentation method and device, electronic equipment and storage medium
CN115690143B (en) * 2022-09-26 2023-07-11 推想医疗科技股份有限公司 Image segmentation method, device, electronic equipment and storage medium
CN116109518A (en) * 2023-03-30 2023-05-12 之江实验室 Data enhancement and segmentation method and device for metal rust image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination