CN111489328A - Fundus image quality evaluation method based on blood vessel segmentation and background separation - Google Patents
Fundus image quality evaluation method based on blood vessel segmentation and background separation
- Publication number: CN111489328A
- Application number: CN202010149988.3A
- Authority: CN (China)
- Prior art keywords: blood vessel; image; quality evaluation; neural network; fundus image
- Prior art date: 2020-03-06
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Eye Examination Apparatus (AREA)
Abstract
A fundus image quality evaluation method based on blood vessel segmentation and background separation comprises the following steps: 1) first, blood vessel segmentation is performed on the input image by a U-Net model pre-trained on the DRIVE public fundus image dataset; 2) the blood vessel feature map obtained in step 1) is multiplied element by element with the original image to obtain two images, one containing only blood vessel information and one containing only background information; 3) the extracted feature images are fed into separate convolutional neural network branches for training to obtain the model parameters; 4) the quality of a test picture is evaluated with the trained convolutional neural network model. The invention achieves higher evaluation accuracy, reduces the rate at which doctors must re-examine images, and avoids the treatment delays that repeated examinations can cause. The proposed model is general: it can be embedded into a variety of advanced convolutional neural network structures to improve their performance, and it provides a method for fusing vascular prior knowledge with end-to-end neural network feature extraction.
Description
Technical Field
The invention relates to the field of medical image processing and computer vision, in particular to an image quality evaluation method based on a convolutional neural network.
Background
Fundus images are captured by a dedicated fundus camera and are important medical images: they contain key physiological structures of the retina such as the optic disc, the macula, and the blood vessels. The optic disc appears in a normal fundus image as an approximately circular bright region with the strongest contrast against the background; it is where the optic nerve and the blood vessels originate. The macula, because it is rich in lutein, appears as a dark region without vascular structure; at its center lies an inward depression called the fovea. The blood vessels start from the optic disc region and extend over the whole eyeball in a tree-like pattern; they are thickest and densest in the optic disc region and extend largely in the vertical direction.
Although fundus image quality evaluation methods have matured, the quality of most fundus images is still judged subjectively by medical professionals, and such subjective judgments are dominated by large-scale, global characteristics such as wide-area overexposure and blurring of the field of view. Fundus images play an important role in tasks such as extracting blood vessel information, locating the optic disc, and locating the macular region, and these applications provide a wealth of information for the diagnosis and treatment of ophthalmic diseases. A study involving large databases of retinal images has shown that more than 25% of the images are of insufficient quality for reliable medical diagnosis. Beyond the financial cost of re-acquiring a poor-quality picture, returning to the medical center for a repeated fundus photographic examination is inconvenient for the patient. More seriously, a misdiagnosis caused by poor image quality is hard to detect and delays treatment. It is therefore important to develop an automatic fundus image quality evaluation system: with automatic quality evaluation, a second photograph can be taken immediately after a poor-quality one, alleviating the drawbacks of ophthalmic examination described above. To guarantee fundus image quality while reducing the time and effort of manual screening, automatic and objective quality evaluation during acquisition is both necessary and urgent.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a fundus image quality evaluation method that splits the fundus input image into a blood vessel part and a background part so that the network can attend to the features of each part separately. A dual-branch attention structure is designed that extracts the global features affecting image quality while suppressing the interference of blood vessels and local textures with the key features of quality evaluation. The proposed model is general: it can be embedded into a variety of advanced network structures to improve their performance. The invention also provides a path for fusing vascular prior knowledge with end-to-end neural network feature extraction.
In order to solve the technical problems, the invention provides the following technical scheme:
a background-separation fundus image quality evaluation method guided by blood vessel segmentation comprises the following steps:
1) first, performing blood vessel segmentation on the input image with a U-Net model pre-trained on the DRIVE public fundus image dataset;
2) multiplying the blood vessel feature map obtained in step 1) element by element with the original image to obtain two images, one containing only blood vessel information and one containing only background information;
3) respectively inputting the extracted feature images into the network branches for training to obtain the model parameters;
4) evaluating the quality of a test picture with the trained convolutional neural network model.
Further, the network structure used in step 3) consists of two parts: a dual-branch feature extraction path and a channel feature strengthening structure. The dual-branch path comprises two feature extraction branches, each containing two convolution blocks with identical internal structure; shallow features are extracted and learned inside the convolution blocks. The outputs of the last convolution block of each branch are fused, integrating the features learned from the different regions. Finally, a global pooling layer and a fully connected layer are added after the original convolution blocks to enhance the feature representation within the channels.
The invention has the following beneficial effects: it weakens the sensitivity of a conventional convolutional neural network to fundus blood vessels, whose local gradients change sharply; it strengthens the larger-scale background features that affect image quality, such as regional overexposure and blurring; and it improves the network's ability to represent image quality features.
Drawings
FIG. 1 is an overall flow chart of the method employed in the present invention;
FIG. 2 is a schematic diagram of the network structure adopted in the present invention.
Detailed Description
The invention is further described below with reference to the schematic drawings.
Referring to FIG. 1 and FIG. 2, a background-separation fundus image quality evaluation method guided by blood vessel segmentation includes the following steps:
1) first, performing blood vessel segmentation on the input image with a U-Net model pre-trained on the DRIVE public fundus image dataset;
2) multiplying the blood vessel feature map obtained in step 1) element by element with the original image to obtain two images, one containing only blood vessel information and one containing only background information;
3) respectively inputting the extracted feature images into the network branches for training to obtain the model parameters;
4) evaluating the quality of a test picture with the trained convolutional neural network model.
Further, the network structure used in step 3) consists of two parts: a dual-branch feature extraction path and a channel feature strengthening structure. The dual-branch path comprises two feature extraction branches, each containing two convolution blocks with identical internal structure; shallow features are extracted and learned inside the convolution blocks. The outputs of the last convolution block of each branch are fused, integrating the features learned from the different regions. Finally, a global pooling layer and a fully connected layer are added after the original convolution blocks to enhance the feature representation within the channels.
One embodiment of the invention includes the following steps:
first step, image preprocessing
The quality of the input image directly affects the accuracy of the result. The purpose of image preprocessing is to eliminate irrelevant information in the image, simplify the data as far as possible, and suppress interference. First, each color retinal image is cropped: the redundant black borders of the fundus image are removed, only the fundus itself is retained, and the region of interest is extracted.
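The border-cropping step can be sketched as follows. The thresholding rule and the cutoff value are illustrative assumptions, since the patent does not specify how the black edges are detected:

```python
import numpy as np

def crop_fundus(img, threshold=10):
    """Crop away near-black borders, keeping only the fundus region.

    `threshold` is an illustrative intensity cutoff, not a value
    given in the patent.
    """
    # A pixel belongs to the fundus if any color channel exceeds the cutoff.
    mask = img.max(axis=-1) > threshold
    rows = np.flatnonzero(mask.any(axis=1))   # rows that contain fundus pixels
    cols = np.flatnonzero(mask.any(axis=0))   # columns that contain fundus pixels
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Toy image: a bright 2x2 fundus patch surrounded by black padding.
img = np.zeros((6, 6, 3), dtype=np.uint8)
img[2:4, 2:4] = 200
cropped = crop_fundus(img)   # only the bright region remains
```

In practice the fundus is a disc rather than a rectangle, so a circular region-of-interest mask could follow this bounding-box crop.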
Second step, data expansion
Because the dataset contains few pictures while the network needs a large amount of data to drive model training, effective data expansion both avoids overfitting and improves model performance. The expansion method proposed by the invention selects randomly among left-right flips, up-down flips, and rotations by 90, 180, and 270 degrees to create differently transformed images. Following the class distribution of the training data, good-quality images are multiplied 4-fold and poor-quality images 8-fold, which effectively expands the training data.
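A minimal sketch of the described transformations, using NumPy flips and rotations. The random selection and the per-class 4x/8x multipliers are omitted, and the function name is ours:

```python
import numpy as np

def augment(img):
    """Generate the flipped/rotated copies named in the expansion method."""
    return [
        np.fliplr(img),       # left-right flip
        np.flipud(img),       # up-down flip
        np.rot90(img, k=1),   # rotate 90 degrees
        np.rot90(img, k=2),   # rotate 180 degrees
        np.rot90(img, k=3),   # rotate 270 degrees
    ]

img = np.arange(12).reshape(3, 4)
variants = augment(img)
```

A training pipeline would sample randomly from these transforms, drawing more samples per poor-quality image than per good-quality one to realize the 8x versus 4x expansion.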
Third step, model training
The processed image blocks are fed into the neural network for training; the flow of the proposed fundus image quality evaluation method based on blood vessel segmentation and background separation is shown in FIG. 1. First, blood vessel segmentation is performed on the input image by a U-Net model pre-trained on the DRIVE public fundus image dataset. The number of channels of the last convolution layer of the pre-trained U-Net is set to 2: one channel judges that a pixel is not a blood vessel, the other that it is. The resulting two-channel feature map is separated into masks that are multiplied element by element with the input image, yielding one image containing only blood vessel information and one containing only background information, which are fed into the dual-branch structure for feature extraction.
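The mask-and-multiply separation can be sketched as follows. Here `vessel_prob` stands in for the vessel channel of the U-Net output after softmax, and the 0.5 cutoff is our assumption rather than a value stated in the patent:

```python
import numpy as np

def separate(img, vessel_prob):
    """Split an image into a vessel-only image and a background-only image."""
    vessel_mask = (vessel_prob > 0.5).astype(img.dtype)
    vessel_img = img * vessel_mask[..., None]             # keeps only vessel pixels
    background_img = img * (1 - vessel_mask)[..., None]   # keeps only background pixels
    return vessel_img, background_img

img = np.full((2, 2, 3), 100, dtype=np.int64)
prob = np.array([[0.9, 0.1],
                 [0.2, 0.8]])        # stand-in for the U-Net vessel channel
vessel_img, background_img = separate(img, prob)
```

Note that the two outputs are complementary: every pixel of the original image appears in exactly one of them, which is what lets each branch focus on one region type.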
In the feature extraction stage, the two branches process the vessel-only image and the background-only image respectively. The outputs of the last convolution block of each path branch are fused, integrating the features learned from the different regions; three 1 × 1 convolutions then perform scale correction on the shallow and deep layers of the network, each 1 × 1 convolution using the same number of channels. Finally, a global pooling layer and a fully connected layer are added after the original convolution blocks to enhance the feature representation across all channels.
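One possible reading of the channel feature strengthening block is a squeeze-and-excitation-style gating: global average pooling followed by two fully connected layers that reweight the fused channels. The sketch below is under that assumption and is not the patent's exact structure; the weight shapes and the bottleneck size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_enhance(features, w1, w2):
    """Reweight channels via global pooling + two fully connected layers.

    features: (H, W, C) fused output of the two branches.
    w1, w2:   illustrative FC weights forming a bottleneck.
    """
    squeeze = features.mean(axis=(0, 1))            # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0)            # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid weights in (0, 1)
    return features * gate                          # rescale each channel

vessel_feat = rng.standard_normal((4, 4, 8))
background_feat = rng.standard_normal((4, 4, 8))
fused = vessel_feat + background_feat               # fuse the two branch outputs
w1 = rng.standard_normal((8, 2))
w2 = rng.standard_normal((2, 8))
out = channel_enhance(fused, w1, w2)
```

Because the gate lies in (0, 1), the block can only attenuate channels relative to the fused input, which matches the stated goal of suppressing vessel/texture interference while keeping the informative channels.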
The feature map output by the last convolution layer is connected to a Softmax classification layer, which converts it into a probability vector representing whether each image is of high or low quality. Training uses a cross-entropy loss function and a stochastic gradient descent optimizer; at the start of training, the internal parameters of the convolution kernels are randomly initialized. Iterative training finally yields a model that can be used for fundus image quality evaluation.
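The Softmax layer and the cross-entropy loss can be sketched in a few lines; this is a minimal NumPy version of the standard formulas, not the patent's training code:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class for each image.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

logits = np.array([[2.0, 0.0],    # two images, classes {good quality, poor quality}
                   [0.0, 3.0]])
labels = np.array([0, 1])
probs = softmax(logits)
loss = cross_entropy(probs, labels)
```

Stochastic gradient descent would then update the network weights along the negative gradient of this loss, batch by batch.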
Fourth step, model test
The image dataset used for testing is preprocessed in the same way as the training set, preserving the class proportions of the original data. It is then fed into the model for prediction, and the class with the maximum probability is taken as the prediction result.
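The prediction rule of the test step, taking the class of maximum probability, can be sketched as:

```python
import numpy as np

def predict(probs):
    """Return the maximum-probability class and its probability per image."""
    return probs.argmax(axis=-1), probs.max(axis=-1)

# Stand-in Softmax outputs for two test images (values are illustrative).
probs = np.array([[0.92, 0.08],
                  [0.30, 0.70]])
classes, confidence = predict(probs)   # classes: predicted labels per image
```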
Claims (2)
1. A fundus image quality evaluation method based on blood vessel segmentation and background separation is characterized by comprising the following steps:
1) first, performing blood vessel segmentation on the input image with a U-Net model pre-trained on the DRIVE public fundus image dataset;
2) multiplying the blood vessel feature map obtained in step 1) element by element with the original image to obtain two images, one containing only blood vessel information and one containing only background information;
3) respectively inputting the extracted feature images into convolutional neural network branches for training to obtain model parameters;
4) evaluating the quality of a test picture with the trained convolutional neural network model.
2. The fundus image quality evaluation method based on blood vessel segmentation and background separation according to claim 1, wherein the network structure implemented in step 3) comprises two parts: a dual-branch feature extraction path and a channel feature strengthening structure; the dual-branch feature extraction path is composed of two feature extraction branches, each comprising two convolution blocks with identical internal structure, in which shallow features are extracted and learned; the outputs of the last convolution block of each branch are fused, integrating the features learned from the different regions; and finally a global pooling layer and a fully connected layer are added after the original convolution blocks, thereby enhancing the feature representation capability within the channels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010149988.3A CN111489328B (en) | 2020-03-06 | 2020-03-06 | Fundus image quality evaluation method based on blood vessel segmentation and background separation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010149988.3A CN111489328B (en) | 2020-03-06 | 2020-03-06 | Fundus image quality evaluation method based on blood vessel segmentation and background separation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111489328A true CN111489328A (en) | 2020-08-04 |
CN111489328B CN111489328B (en) | 2023-06-30 |
Family
ID=71798605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010149988.3A Active CN111489328B (en) | 2020-03-06 | 2020-03-06 | Fundus image quality evaluation method based on blood vessel segmentation and background separation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111489328B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150104087A1 (en) * | 2013-10-10 | 2015-04-16 | University Of Rochester | Automated Fundus Image Field Detection and Quality Assessment |
US20180181833A1 (en) * | 2014-08-25 | 2018-06-28 | Agency For Science, Technology And Research | Methods and systems for assessing retinal images, and obtaining information from retinal images |
CN108921228A (en) * | 2018-07-12 | 2018-11-30 | 成都上工医信科技有限公司 | A kind of evaluation method of eye fundus image blood vessel segmentation |
CN109377472A (en) * | 2018-09-12 | 2019-02-22 | 宁波大学 | A kind of eye fundus image quality evaluating method |
CN109671094A (en) * | 2018-11-09 | 2019-04-23 | 杭州电子科技大学 | A kind of eye fundus image blood vessel segmentation method based on frequency domain classification |
Non-Patent Citations (1)
Title |
---|
盛韩伟; 戴培山; 刘智航; 张文妙韵; 赵亚丽; 范敏: "A new topology-based evaluation method for fundus image segmentation" (基于拓扑结构的眼底图像分割评价新方法) * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021135552A1 (en) * | 2020-06-28 | 2021-07-08 | 平安科技(深圳)有限公司 | Segmentation effect assessment method and apparatus based on deep learning, and device and medium |
CN112233066A (en) * | 2020-09-16 | 2021-01-15 | 南京理工大学 | Eye bulbar conjunctiva image quality evaluation method based on gradient activation map |
CN112233066B (en) * | 2020-09-16 | 2022-09-27 | 南京理工大学 | Eye bulbar conjunctiva image quality evaluation method based on gradient activation map |
CN112598633A (en) * | 2020-12-17 | 2021-04-02 | 中南大学 | Fundus image quality evaluation method based on dark channel and bright channel |
CN113362332A (en) * | 2021-06-08 | 2021-09-07 | 南京信息工程大学 | Depth network segmentation method for coronary artery lumen contour under OCT image |
CN113537298A (en) * | 2021-06-23 | 2021-10-22 | 广东省人民医院 | Retina image classification method and device |
CN114882014A (en) * | 2022-06-16 | 2022-08-09 | 深圳大学 | Dual-model-based fundus image quality evaluation method and device and related medium |
CN114882014B (en) * | 2022-06-16 | 2023-02-03 | 深圳大学 | Dual-model-based fundus image quality evaluation method and device and related medium |
Also Published As
Publication number | Publication date |
---|---|
CN111489328B (en) | 2023-06-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||