CN115249248A - Retinal artery and vein blood vessel direct identification method and system based on fundus image - Google Patents
Retinal artery and vein blood vessel direct identification method and system based on fundus image
- Publication number
- CN115249248A (Application No. CN202110468433.XA)
- Authority
- CN
- China
- Prior art keywords
- module
- feature map
- map
- blood vessel
- arteriovenous
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/14—Arrangements specially adapted for eye photography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- General Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Ophthalmology & Optometry (AREA)
- Public Health (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Veterinary Medicine (AREA)
- Animal Behavior & Ethology (AREA)
- Surgery (AREA)
- Medical Informatics (AREA)
- Heart & Thoracic Surgery (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Eye Examination Apparatus (AREA)
Abstract
Provided is a retinal artery and vein blood vessel direct identification method based on a fundus image, comprising the following steps: processing the input fundus image with a basic segmentation network and outputting a 64-channel feature map; processing the 64-channel feature map with a blood vessel constraint module and outputting a first result map; applying a 1 × 1 convolution to the first result map to generate a 4-channel arteriovenous feature map covering background, artery, vein and unknown blood vessel; and passing the 4-channel arteriovenous feature map through a first Sigmoid function to generate the final retinal arteriovenous blood vessel identification map. The arteriovenous blood vessel classification results obtained by the method have higher precision, efficiency and robustness.
Description
Technical Field
The invention belongs to the field of fundus image detection, and particularly relates to a retinal artery and vein blood vessel direct identification method and system based on a fundus image.
Background
Retinal blood vessels are the only internal vascular tissue of the human body that can be observed under non-invasive conditions. Many systemic diseases, such as diabetes, hypertension and cardiovascular disease, cause changes in retinal vascular structure and morphology, and different diseases and disease stages affect arteries and veins differently; for example, arterial narrowing is considered a phenomenon associated with hypertension, while venous widening is associated with stroke and cardiovascular disease. To analyze the morphological characteristics of retinal arteries and veins, the retinal arteriovenous vessels must be accurately segmented from fundus images. The traditional approach to retinal vessel segmentation relies on manual work by professional doctors with extensive expertise and accumulated experience, and suffers from drawbacks such as inconsistent standards between operators, poor repeatability and low efficiency, making it difficult to meet the huge demand for diagnosis and treatment. Automatic arteriovenous identification of retinal vessels and acquisition of the related quantitative parameters can therefore greatly reduce medical costs, assist medical research, and promote the development and popularization of fundus screening technology.
In recent years, a number of automated techniques for retinal arteriovenous classification have been proposed; they can be broadly categorized into two types, graph-based techniques and feature-based techniques. However, these techniques mainly rely on a prior binary vessel-background segmentation, from which the vessel centerline is extracted to build a vessel graph or to extract features. On one hand the whole classification process is slow; on the other hand the arteriovenous classification result depends heavily on the accuracy of the vessel segmentation, and if the quality of the first-stage vessel segmentation is low, the accuracy of the second-stage arteriovenous classification naturally decreases. The retinal artery and vein automatic analysis software based on fundus images proposed so far adopts a two-stage method of first performing vessel segmentation and then classifying arteries and veins according to their color, brightness, morphological characteristics and the like; it can only analyze a limited set of fundus image data sets, so its application range is not wide enough, and its vessel segmentation precision and efficiency are low.
In addition, existing analysis software is also limited to extraction of geometric parameters of retinal vessel morphology, most of which only include vessel diameter and tortuosity measurements.
Therefore, it is necessary to research a direct identification method of retinal artery and vein blood vessels based on a fundus image to solve one or more of the technical problems described above.
Disclosure of Invention
To solve at least one of the above technical problems, according to an aspect of the present invention, there is provided a retinal artery and vein blood vessel direct identification method based on fundus images, which employs a semantic segmentation network for directly performing artery-vein-background classification without depending on the result of a blood vessel segmentation binary image, the input of the network being a photographed original fundus image, and the output being directly an arteriovenous segmentation result.
Furthermore, the invention introduces the measurement of more retinal vascular morphology geometric parameters, so that the application range and applicability of the system are broader. Moreover, the invention extracts parameters for only a few main blood vessels in the region of interest, which improves the working efficiency of the system while still meeting clinical requirements.
Specifically, the retinal artery and vein blood vessel direct identification method based on the fundus image is characterized by comprising the following steps of:
processing the input fundus image by using a basic segmentation network, and outputting a feature map of 64 channels;
processing the feature map of the 64 channels by using a blood vessel constraint module, and outputting a first result map;
the first result map is passed through a 1 × 1 convolution to generate a 4-channel arteriovenous feature map containing background, artery, vein and unknown blood vessel;
the arteriovenous characteristic diagrams of the 4 channels are subjected to a first Sigmoid function to generate a final retinal arteriovenous blood vessel identification diagram;
wherein the vessel-constraining module comprises two parallel first and second branches, the first branch comprising: two 3 x 3 convolution modules and one 1x1 convolution module, which are used for generating a blood vessel segmentation feature map according to the feature map of the 64 channels; the second Sigmoid function module is used for converting the blood vessel segmentation feature map into a probability map; and a Gaussian activation function module for generating a blood vessel activation map according to the probability map;
the second branch comprises: two 3 x 3 convolution modules for generating an arteriovenous characteristic diagram according to the characteristic diagram of the 64 channels; the first matrix multiplication module is used for multiplying the probability map and the arteriovenous characteristic map and outputting the result map, and the second matrix multiplication module is used for multiplying the output of the first matrix multiplication module and the blood vessel activation map and outputting the first result map.
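For illustration only, the following is a minimal PyTorch-style sketch of a vessel constraint module with this structure. The class and function names, channel sizes and the value of α are assumptions not taken from the patent, and the "matrix multiplication" of the maps is realized here as element-wise multiplication of feature maps.

```python
# Minimal PyTorch-style sketch of the vessel constraint (VC) module described above.
# Names, channel sizes and alpha are illustrative assumptions; only the overall
# structure (two parallel branches, a Sigmoid, a Gaussian activation and two
# element-wise multiplications of the maps) follows the text.
import math
import torch
import torch.nn as nn


def conv3x3(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class VesselConstraintModule(nn.Module):
    def __init__(self, in_ch=64, alpha=1.0):
        super().__init__()
        self.alpha = alpha
        # first branch: two 3x3 convolutions and one 1x1 convolution -> vessel segmentation feature map
        self.seg_branch = nn.Sequential(conv3x3(in_ch, in_ch), conv3x3(in_ch, in_ch),
                                        nn.Conv2d(in_ch, 1, 1))
        # second branch: two 3x3 convolutions -> arteriovenous feature map
        self.av_branch = nn.Sequential(conv3x3(in_ch, in_ch), conv3x3(in_ch, in_ch))

    def gaussian_activation(self, p):
        # F(x) = alpha * (exp(-|x - 0.5|) - exp(-0.5)) + 1, as defined in the description
        return self.alpha * (torch.exp(-torch.abs(p - 0.5)) - math.exp(-0.5)) + 1.0

    def forward(self, feat64):
        vessel_prob = torch.sigmoid(self.seg_branch(feat64))   # probability map (second Sigmoid)
        vessel_act = self.gaussian_activation(vessel_prob)     # blood vessel activation map
        av_feat = self.av_branch(feat64)                        # arteriovenous feature map
        out = av_feat * vessel_prob                             # first multiplication
        out = out * vessel_act                                  # second multiplication
        return out                                              # first result map
```

Applied to the 64-channel feature map of the basic segmentation network, the output of such a module would then be passed through the 1 × 1 convolution and the first Sigmoid function described above.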
According to yet another aspect of the present invention, the basic segmentation network is a U-shaped segmentation network.
According to another aspect of the invention, the U-shaped segmentation network comprises a left-side down-sampling part and a right-side up-sampling part. The left-side down-sampling part applies two 3 × 3 convolutions to the fundus image to obtain a feature map and then down-samples the feature map to half of its original size through a 2 × 2 max-pooling layer; in the next 3 feature layers it continues to perform multi-scale feature extraction with a multi-scale feature module and halves the size of the input feature map through a 3 × 3 convolution with a stride of 2; finally, after a further multi-scale feature module and 3 × 3 convolution, the resulting feature map is passed to the right-side up-sampling part. The right-side up-sampling part comprises 3 up-sample-and-merge feature map modules; each module doubles the size of the input feature map through a 2 × 2 up-sampling operation, merges it with the corresponding feature map from the left-side down-sampling part, and outputs a feature map after two 3 × 3 convolutions (see the block sketch below); a final 2 × 2 up-sampling operation then yields a 192-channel feature map, from which a 64-channel feature map is output through a 3 × 3 convolution.
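A minimal sketch of one such up-sample-and-merge block is given below; bilinear up-sampling and channel-wise concatenation are assumptions, since the text only specifies the 2 × 2 up-sampling, the merge with the corresponding left-side feature map, and the two 3 × 3 convolutions.

```python
# Minimal sketch of one "up-sample and merge feature map" block of the U-shaped
# network described above. Bilinear up-sampling and channel concatenation are
# assumptions of this sketch, not statements from the patent.
import torch
import torch.nn as nn

class UpMergeBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x, skip):
        x = self.up(x)                      # double the spatial size (2 x 2 up-sampling)
        x = torch.cat([x, skip], dim=1)     # merge with the left-side feature map
        return self.convs(x)                # two 3 x 3 convolutions
```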
According to another aspect of the invention, the multi-scale feature module uses a pre-trained Res2Net: the feature map obtained after a 1 × 1 convolution is first divided evenly into k subsets according to the number of channels, the i-th subset being x_i, i ∈ {1, 2, ..., k}. Except for x_1, each x_i has a corresponding 3 × 3 convolution, denoted F_i(·), and the output y_i is expressed as:

y_i = x_i, i = 1;
y_i = F_i(x_i), i = 2;
y_i = F_i(x_i + y_{i-1}), 2 < i ≤ k.

In this way outputs with different receptive field sizes are obtained, and finally the k outputs y_1 to y_k are fused and subjected to a 1 × 1 convolution operation.
According to still another aspect of the present invention, there is also provided a system for direct retinal arteriovenous vessel identification based on a fundus image, characterized by comprising:
the basic segmentation network module is used for processing the input fundus images and outputting 64-channel characteristic maps;
the blood vessel constraint module is used for processing the feature map of the 64 channels and outputting a first result map;
a 1 × 1 convolution module, configured to generate a 4-channel arteriovenous feature map including a background, an artery, a vein, and an unknown blood vessel according to the first result map; and
the first Sigmoid function module is used for generating a final retina arteriovenous blood vessel identification image according to the 4-channel arteriovenous characteristic image;
wherein the vessel confinement module comprises two parallel first and second branches, the first branch comprising: two 3 x 3 convolution modules and one 1x1 convolution module, which are used for generating a blood vessel segmentation characteristic map according to the characteristic map of the 64 channels; the second Sigmoid function module is used for converting the blood vessel segmentation feature map into a probability map; and a Gaussian activation function module for generating a blood vessel activation map according to the probability map;
the second branch comprises: two 3 x 3 convolution modules for generating an arteriovenous characteristic diagram according to the characteristic diagram of the 64 channels; the first matrix multiplication module is used for multiplying the probability map and the arteriovenous characteristic map and outputting the result map, and the second matrix multiplication module is used for multiplying the output of the first matrix multiplication module and the blood vessel activation map and outputting the first result map.
According to yet another aspect of the present invention, the basic segmentation network module is a U-shaped segmentation network module.
According to another aspect of the invention, the U-shaped segmentation network module comprises a left-side down-sampling part and a right-side up-sampling part, wherein the left-side down-sampling part performs 3 × 3 convolution on the fundus image twice to obtain a feature map, then the feature map is down-sampled to half of the original size through a 2 × 2 maximum pooling layer, then a multi-scale feature module is continuously utilized in 3 feature layers to perform multi-scale feature extraction, the size of the input feature map is reduced by half through 3 × 3 convolution operation with the step length of 2, and finally the obtained feature map is output to the right-side up-sampling part after the multi-scale feature module and the 3 × 3 convolution operation; the right upsampling part comprises 3 upsampling and merging feature map modules, each upsampling and merging feature map module expands the size of a feature map to be 2 times of that of an input feature map through 2 x 2 upsampling operation, the feature map is merged with the feature map corresponding to the left downsampling part and then subjected to 3 x 3 convolution operation twice to output the feature map, a 2 x 2 upsampling operation is next performed to obtain a feature map of 192 channels, and then a feature map of 64 channels is output through 3 x 3 convolution operation.
According to another aspect of the invention, the multi-scale feature module uses a pre-trained Res2Net: the feature map obtained after a 1 × 1 convolution is first divided evenly into k subsets according to the number of channels, the i-th subset being x_i, i ∈ {1, 2, ..., k}. Except for x_1, each x_i has a corresponding 3 × 3 convolution, denoted F_i(·), and the output y_i is expressed as:

y_i = x_i, i = 1;
y_i = F_i(x_i), i = 2;
y_i = F_i(x_i + y_{i-1}), 2 < i ≤ k.

In this way outputs with different receptive field sizes are obtained, and finally the k outputs y_1 to y_k are fused and subjected to a 1 × 1 convolution operation.
According to another aspect of the present invention, there is also provided a retinal artery and vein blood vessel automatic analysis method based on a fundus image, characterized by comprising the steps of:
acquiring a fundus image to be analyzed;
automatic arteriovenous identification of retinal blood vessels;
post-processing repair of arteriovenous vessels;
extracting a central line and a boundary of the blood vessel;
identifying the intersection point of the center lines of the blood vessels;
detecting an optic disc in the fundus image;
positioning a region of interest;
selecting an artery and vein vessel to be analyzed in the region of interest;
acquiring morphological geometric parameters;
wherein the automatic arteriovenous identification of the retinal blood vessels is performed by direct identification using the method described above.
According to still another aspect of the present invention, there is also provided a retinal artery and vein blood vessel automatic analysis system based on a fundus image, characterized by comprising:
a first module for acquiring a fundus image to be analyzed;
the second module is used for automatic arteriovenous identification of retinal blood vessels;
the third module is used for post-processing repair of arteriovenous vessels;
a fourth module for vessel centerline and boundary extraction;
a fifth module for intersection identification of vessel centerlines;
a sixth module for detecting an optic disc in the fundus image;
a seventh module for locating a region of interest;
the eighth module is used for selecting the artery and vein blood vessels to be analyzed in the region of interest;
a ninth module for obtaining morphological geometric parameters;
the second module is specifically the system for directly identifying the retinal arteriovenous vessels based on the fundus image.
The invention can obtain one or more of the following technical effects:
1. the invention realizes the direct identification of the retinal arteriovenous vessels (one-step direct identification) through the vessel constraint module, the vessel constraint module relieves the problem of unbalanced positive and negative samples in the retinal vessel segmentation, and simultaneously enhances the characteristic expression of the edges of microvessels and main vessels;
2. the arteriovenous blood vessel classification result obtained by the method is higher in precision, more efficient and higher in robustness;
3. an annular area around the optic disc, within 0.5-2 times the optic disc diameter, is selected as the region of interest for morphological parameter extraction, which further improves the working efficiency of the system while remaining clinically meaningful;
4. the design of the multi-scale characteristic module can adapt to and automatically analyze fundus pictures acquired by cameras with different resolutions and different models, and the compatibility of the system is improved.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a schematic view of a vascular restriction module in accordance with a preferred embodiment of the present invention.
Fig. 2 is a schematic diagram of a U-shaped segmentation network module according to another preferred embodiment of the present invention.
Fig. 3 is a schematic view of the arteriovenous identification result of fundus images with different resolutions according to still another preferred embodiment of the present invention.
Fig. 4 is a flowchart of a retinal artery and vein blood vessel automatic analysis method based on a fundus image according to still another preferred embodiment of the present invention.
Fig. 5 is a schematic diagram of optic disc detection, namely positioning of a region of interest (white circles are regions of interest) according to another preferred embodiment of the present invention.
Fig. 6 is a schematic diagram of the extraction result of the arteriovenous in the region of interest in fig. 5.
Detailed Description
The best mode for carrying out the present invention will be described in detail with reference to the accompanying drawings, wherein the detailed description is for the purpose of illustrating the invention in detail, and is not to be construed as limiting the invention, as various changes and modifications can be made therein without departing from the spirit and scope thereof, which are intended to be encompassed within the appended claims.
Example 1
According to a preferred embodiment of the present invention, referring to fig. 1, there is provided a retinal artery and vein blood vessel direct identification method based on a fundus image, characterized by comprising the steps of:
processing the input fundus image by using a basic segmentation network, and outputting a feature map of 64 channels;
processing the feature map of the 64 channels by using a blood vessel constraint module, and outputting a first result map;
the first result map generates, through a 1 × 1 convolution, a 4-channel arteriovenous feature map including background, artery, vein and unknown blood vessel;
and the 4-channel arteriovenous feature map generates the final retinal arteriovenous blood vessel identification map through a first Sigmoid function.
Wherein the vessel-constraining module comprises two parallel first and second branches, the first branch comprising: two 3 x 3 convolution modules and one 1x1 convolution module, which are used for generating a blood vessel segmentation characteristic map according to the characteristic map of the 64 channels; the second Sigmoid function module is used for converting the blood vessel segmentation feature map into a probability map; and a Gaussian activation function module for generating a blood vessel activation map according to the probability map.
Preferably, the second branch comprises: two 3 × 3 convolution modules for generating an arteriovenous feature map according to the 64-channel feature map; a first matrix multiplication module for multiplying the probability map and the arteriovenous feature map and outputting a result map, and a second matrix multiplication module for multiplying the output of the first matrix multiplication module and the blood vessel activation map and outputting the first result map.
Referring to fig. 3, the results of arteriovenous identification of fundus images at different resolutions by the method of the present invention are shown.
According to another preferred embodiment of the present invention, the basic segmentation network is a U-shaped segmentation network.
According to another preferred embodiment of the present invention, referring to fig. 2, the U-shaped segmentation network includes a left-side down-sampling part and a right-side up-sampling part. The left-side down-sampling part applies two 3 × 3 convolutions to the fundus image to obtain a feature map and then down-samples the feature map to half of its original size through a 2 × 2 max-pooling layer; in the next 3 feature layers it continues to perform multi-scale feature extraction with a multi-scale feature module and halves the size of the input feature map through a 3 × 3 convolution with a stride of 2; finally, after a further multi-scale feature module and 3 × 3 convolution, the resulting feature map is passed to the right-side up-sampling part. The right-side up-sampling part comprises 3 up-sample-and-merge feature map modules; each module doubles the size of the input feature map through a 2 × 2 up-sampling operation, merges it with the corresponding feature map from the left-side down-sampling part, and outputs a feature map after two 3 × 3 convolutions; a final 2 × 2 up-sampling operation then yields a 192-channel feature map, from which a 64-channel feature map is output through a 3 × 3 convolution.
According to another preferred embodiment of the present invention, the multi-scale feature module uses a pre-trained Res2Net: the feature map obtained after a 1 × 1 convolution is first divided evenly into k subsets according to the number of channels, the i-th subset being x_i, i ∈ {1, 2, ..., k}. Except for x_1, each x_i has a corresponding 3 × 3 convolution, denoted F_i(·), and the output y_i is expressed as:

y_i = x_i, i = 1;
y_i = F_i(x_i), i = 2;
y_i = F_i(x_i + y_{i-1}), 2 < i ≤ k.

In this way outputs with different receptive field sizes are obtained, and finally the k outputs y_1 to y_k are fused and subjected to a 1 × 1 convolution operation.
There is also provided in accordance with still another preferred embodiment of the present invention a system for direct retinal arteriovenous vessel identification based on fundus images, including:
the basic segmentation network module is used for processing the input fundus images and outputting 64-channel characteristic maps;
the blood vessel constraint module is used for processing the feature map of the 64 channels and outputting a first result map;
a 1 × 1 convolution module, configured to generate a 4-channel arteriovenous feature map including a background, an artery, a vein, and an unknown blood vessel according to the first result map; and
the first Sigmoid function module is used for generating a final retina arteriovenous blood vessel identification image according to the 4-channel arteriovenous characteristic image;
wherein the vessel confinement module comprises two parallel first and second branches, the first branch comprising: two 3 x 3 convolution modules and one 1x1 convolution module, which are used for generating a blood vessel segmentation feature map according to the feature map of the 64 channels; the second Sigmoid function module is used for converting the blood vessel segmentation feature map into a probability map; and a Gaussian activation function module for generating a blood vessel activation map according to the probability map;
the second branch comprises: two 3 x 3 convolution modules for generating an arteriovenous characteristic diagram according to the characteristic diagram of the 64 channels; the first matrix multiplication module is used for multiplying the probability map and the arteriovenous characteristic map and outputting the result map, and the second matrix multiplication module is used for multiplying the output of the first matrix multiplication module and the blood vessel activation map and outputting the first result map.
According to another preferred embodiment of the present invention, the basic segmentation network module is a U-shaped segmentation network module.
According to another preferred embodiment of the present invention, the U-shaped segmentation network module includes a left downsampling portion and a right upsampling portion, the left downsampling portion performs two times of 3 × 3 convolution on the fundus image to obtain a feature map, then performs one 2 × 2 maximum pooling layer to downsample the feature map to half of the original size, then continuously performs multi-scale feature extraction using a multi-scale feature module in 3 feature layers and reduces the size of the input feature map by half through 3 × 3 convolution operation with a step size of 2, and finally outputs the obtained feature map to the right upsampling portion after the multi-scale feature module and the 3 × 3 convolution operation; the right upsampling part comprises 3 upsampling and merging feature map modules, each upsampling and merging feature map module expands the size of a feature map to be 2 times of that of an input feature map through 2 x 2 upsampling operation, the feature map is merged with the feature map corresponding to the left downsampling part and then subjected to 3 x 3 convolution operation twice to output the feature map, a 2 x 2 upsampling operation is next performed to obtain a feature map of 192 channels, and then a feature map of 64 channels is output through 3 x 3 convolution operation.
According to another preferred embodiment of the present invention, the multi-scale feature module uses a pre-trained Res2Net: the feature map obtained after a 1 × 1 convolution is first divided evenly into k subsets according to the number of channels, the i-th subset being x_i, i ∈ {1, 2, ..., k}. Except for x_1, each x_i has a corresponding 3 × 3 convolution, denoted F_i(·), and the output y_i is expressed as:

y_i = x_i, i = 1;
y_i = F_i(x_i), i = 2;
y_i = F_i(x_i + y_{i-1}), 2 < i ≤ k.

In this way outputs with different receptive field sizes are obtained, and finally the k outputs y_1 to y_k are fused and subjected to a 1 × 1 convolution operation.
According to another preferred embodiment of the present invention, referring to fig. 4 to 6, there is also provided a retinal artery and vein blood vessel automatic analysis method based on fundus images, characterized by comprising the steps of:
acquiring a fundus image to be analyzed;
automatic arteriovenous identification of retinal blood vessels;
post-processing repair of arteriovenous vessels;
extracting a central line and a boundary of the blood vessel;
identifying the intersection point of the center lines of the blood vessels;
detecting optic discs in the fundus images;
positioning a region of interest;
selecting an artery and vein vessel to be analyzed in the region of interest;
acquiring morphological geometric parameters;
the automatic artery and vein recognition of the retinal blood vessels is specifically direct recognition by adopting the method.
The respective steps will be described in detail below.
Acquiring a fundus image to be analyzed. A doctor captures fundus images of the patient with a fundus camera and then performs quality analysis; if the image quality does not reach the standard, acquisition continues, and the images are used for the subsequent analysis and processing only once the acquired fundus image quality reaches the standard. There is no requirement on the type of camera used to take the fundus picture.
Retinal blood vessel and arteriovenous identification. Arteriovenous identification is performed on the fundus picture acquired in the first step. The invention provides a vessel-constrained network (VC-Net) for classifying retinal vessels into arteries and veins. Preferably, a U-shaped network may be adopted as the basic network architecture for arteriovenous identification, as shown in fig. 2. Meanwhile, the invention introduces a multi-scale feature module in the down-sampling process to learn features of retinal blood vessels at different scales, because vessels of different scales exist in the fundus image: for example, the diameter of a vein is larger than that of an artery, and the diameter of a main vessel is larger than that of a capillary. Preferably, the multi-scale feature module uses a pre-trained Res2Net. The feature map obtained after a 1 × 1 convolution is first divided evenly into k subsets according to the number of channels, the i-th subset being x_i, i ∈ {1, 2, ..., k}. Except for x_1, each x_i has a corresponding 3 × 3 convolution, denoted F_i(·), and the output y_i is expressed as:

y_i = x_i, i = 1;
y_i = F_i(x_i), i = 2;
y_i = F_i(x_i + y_{i-1}), 2 < i ≤ k.

In this way outputs with different receptive field sizes are obtained; finally the outputs, for example four of them (k = 4, but not limited thereto), are fused and passed through a 1 × 1 convolution (a sketch of such a block is given below). Preferably, k is a control parameter that divides the input channels evenly into several feature subsets; the larger k is, the stronger the multi-scale capability.
Advantageously, a vessel constraint module (VC) is designed at the end of the basic segmentation network. After passing through the basic segmentation network, the fundus image feature map is fed to two parallel branches: one branch generates a blood vessel segmentation feature map and the other generates an arteriovenous feature map. The blood vessel segmentation feature map is converted into a probability map through a Sigmoid function, and this probability map is then multiplied with the arteriovenous feature map for further arteriovenous classification; by combining local and global vessel features, this part generates a highly plausible vessel activation map to constrain the arteriovenous features, i.e., features that tend towards the background (negative samples) are suppressed and more attention is paid to vessel (positive sample) features, which effectively alleviates the problem of positive/negative sample imbalance. The blood vessels in a fundus image account for only about 15% of the whole image, with arteries and veins each accounting for about 7.5%, which makes direct classification of the fundus image into background, artery and vein very challenging; the designed VC module focuses the arteriovenous classification task on the vessels and pays more attention to vessel features, so that direct arteriovenous classification can be realized. Meanwhile, after the blood vessel segmentation probability map, a Gaussian kernel function is designed to map the probabilities so as to increase the feature weights of vessel edge regions and capillaries, thereby enhancing the feature expression of the edges of micro vessels and main vessels. The Gaussian activation function in the present invention is defined as F(x) = α(e^(−|x−0.5|) − e^(−0.5)) + 1, where x is the whole vessel segmentation probability map, x ∈ [0, 1], and α is a fixed parameter greater than 0 (set to 1 in this study). Based on existing research and experimental observation, the probabilities of capillary and boundary pixels are essentially centered around 0.5, while main vessel and background pixels are close to 1 and 0 respectively. F(x) constrains the activation weights to the range [1, α(1 − e^(−0.5)) + 1]. Pixels with probability close to 0.5 are assigned a higher weight (close to α(1 − e^(−0.5)) + 1) by F(x), while background and main vessel pixels are assigned a lower weight (close to 1); in this process the potential capillaries are activated. The result of multiplying the vessel probability map with the arteriovenous feature map is then multiplied by the vessel activation map, a 4-channel (background, artery, vein, unknown vessel) arteriovenous feature map is generated through a 1 × 1 convolution, and finally a Sigmoid function generates the final retinal vessel arteriovenous identification probability map. The network structure of the whole retinal vessel arteriovenous identification is shown in figure 1.
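For intuition, a quick numerical check of the Gaussian activation defined above (with α = 1, as in this study) shows the intended weighting; the small script below is purely illustrative.

```python
# Quick check of F(x) = alpha*(exp(-|x-0.5|) - exp(-0.5)) + 1 with alpha = 1 (illustrative).
import math

def F(x, alpha=1.0):
    return alpha * (math.exp(-abs(x - 0.5)) - math.exp(-0.5)) + 1.0

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}  ->  weight = {F(x):.3f}")
# x = 0.00 and x = 1.00 (background / main vessel pixels) map to 1.000, while
# x = 0.50 (capillary / boundary pixels) maps to the maximum 1 + (1 - e^-0.5) ≈ 1.393
```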
The input of the network is the original fundus image collected in the first step, and the output is the classified artery-vein-background feature map. The retinal vessel arteriovenous identification network provided by the invention is suitable for fundus images with different resolutions, such as the DRIVE, LES and HRF fundus image data sets, whose resolutions are 584 × 565, 1444 × 1620 and 3304 × 2336 respectively. The arteriovenous identification results for these three data sets of different resolutions are shown in fig. 3: the first row shows the original fundus images (input of the network) corresponding to the three data sets, and the second row shows the arteriovenous identification results (output of the network) obtained with the method proposed by the invention. Compared with a two-stage method that first performs vessel segmentation and then identifies arteries and veins from vessel characteristics such as color and morphology, the direct arteriovenous identification method provided by the invention is more efficient, more robust, and generalizes better. However, even the most advanced deep learning frameworks cannot achieve completely accurate arteriovenous vessel identification, and the result always has some flaws; therefore, the output of the deep learning network can be repaired through a series of post-processing algorithms to refine the prediction. The post-processing mentioned here mainly refers to traditional algorithms based on the network topology of the vessels and the like.
Vessel centerline and boundary extraction. After arteriovenous identification of the fundus image, the vessel centerline and the vessel boundary are extracted; the centerline and boundary information assists the acquisition of vessel parameters. For example, to measure vessel diameter, an edge detector based on information fusion can be adopted to obtain the vessel edge information, the vessel path of interest is identified, and the initial diameter of the vessel of interest is calculated by automatically generating cross lines perpendicular to the vessel centerline: the distance between the two points where each cross line meets the vessel edges is taken as the vessel diameter at that location, and for a given vessel segment the average of these distances over several cross lines is taken as the diameter of that segment.
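A simplified sketch of this cross-line measurement, assuming a binary vessel mask and a known tangent direction at each centerline point, is given below; the information-fusion edge detector itself is not reproduced, and all names are illustrative.

```python
# Simplified sketch of diameter measurement with cross lines perpendicular to the
# centerline. It assumes a binary vessel mask and a unit tangent direction at each
# centerline point; the information-fusion edge detector is not shown.
import numpy as np

def diameter_at_point(vessel_mask, point, tangent, max_half_width=20):
    """Width of the vessel along the normal direction at one centerline point."""
    normal = np.array([-tangent[1], tangent[0]], dtype=float)  # perpendicular to the centerline
    hits = []
    for t in np.arange(-max_half_width, max_half_width, 0.25):
        y, x = np.round(point + t * normal).astype(int)
        if 0 <= y < vessel_mask.shape[0] and 0 <= x < vessel_mask.shape[1] and vessel_mask[y, x]:
            hits.append(t)
    return (max(hits) - min(hits)) if hits else 0.0

def segment_diameter(vessel_mask, centerline_pts, tangents):
    """Average of the cross-line widths along one vessel segment."""
    widths = [diameter_at_point(vessel_mask, p, d) for p, d in zip(centerline_pts, tangents)]
    return float(np.mean(widths)) if widths else 0.0
```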
Intersection points of the vessel centerlines are identified. An intersection point may be a crossing of two arterial vessels, a branch point of an arterial vessel, a crossing of two venous vessels, a branch point of a venous vessel, or a crossing of an arterial and a venous vessel; vessel parameters at these intersection points are not taken into account, so as to reduce the influence of artery-vein classification errors.
Detection of the optic disc in the fundus image. For the fundus image acquired in the first step, rapid and accurate optic disc detection can be performed with mainstream deep learning models for object detection. Because the vessels within the optic disc are intricately entangled and not conducive to analysis, and because the calculation of vessel parameters mainly concerns the main vessels, whose analysis value decreases with distance from the optic disc, information acquisition and quantification of the related morphological parameters are performed only for vessels within a certain range around the optic disc.
Positioning the region of interest. Taking the detected optic disc center as the origin, an annular region spanning 0.5 to 2 optic-disc diameters is selected as the region of interest for morphological parameter extraction, as shown in fig. 5.
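A possible way to build such an annular mask is sketched below; measuring the ring radii from the disc center in units of the disc diameter is an assumption of this sketch, as are all names.

```python
# Sketch of locating the annular region of interest: 0.5 to 2 optic-disc diameters
# around the detected disc center. The disc detector is assumed to provide the
# center (cy, cx) and the disc diameter in pixels.
import numpy as np

def annular_roi_mask(image_shape, disc_center, disc_diameter):
    cy, cx = disc_center
    yy, xx = np.ogrid[:image_shape[0], :image_shape[1]]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    inner, outer = 0.5 * disc_diameter, 2.0 * disc_diameter
    return (dist >= inner) & (dist <= outer)   # boolean mask of the ring
```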
And selecting the artery and vein blood vessels to be analyzed in the region of interest. According to clinical requirements, K widest artery and vein blood vessels in the annular region are usually selected for parameter extraction, and doctors can select the number of the blood vessels according to the clinical requirements, wherein K is an integer of 4, 5, 6, 7 and the like. Compared with the method for calculating the morphological geometric parameters of all blood vessels in the annular region, the method can greatly reduce the workload and improve the working efficiency of the system under the condition of meeting the requirement.
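A trivial sketch of this selection step, assuming per-vessel mean diameters have already been measured as above and that K vessels of each kind are kept (an assumption, since the text does not specify whether K counts arteries and veins jointly), is:

```python
# Sketch of selecting the K widest artery and vein vessels inside the annular ROI.
# The vessel records and their fields are illustrative assumptions.
def select_widest(vessels, K=6):
    """vessels: list of dicts like {"id": ..., "kind": "artery" or "vein", "diameter": ...}"""
    arteries = sorted((v for v in vessels if v["kind"] == "artery"),
                      key=lambda v: v["diameter"], reverse=True)[:K]
    veins = sorted((v for v in vessels if v["kind"] == "vein"),
                   key=lambda v: v["diameter"], reverse=True)[:K]
    return arteries, veins
```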
Acquiring morphological geometric parameters. The morphological geometric parameters of the retinal blood vessels are quantified for the K artery and vein vessels selected in the previous step; the parameters mainly include vessel diameter (caliber), tortuosity, arteriovenous ratio, fractal dimension, branching angle, branching coefficient and the like. Compared with extracting geometric parameters of all blood vessels from the whole fundus image, the method provided by the invention reduces computational complexity while still meeting clinical requirements.
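By way of example, two of the listed parameters could be computed from the extracted centerlines and diameters as sketched below; the arc-to-chord tortuosity and the plain mean-diameter arteriovenous ratio are common definitions assumed here, since the patent does not fix the exact formulas.

```python
# Illustrative calculation of tortuosity and arteriovenous ratio (AVR) from
# already-extracted centerlines and diameters; definitions are assumptions.
import numpy as np

def tortuosity(centerline_pts):
    pts = np.asarray(centerline_pts, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))  # path length along the vessel
    chord = np.linalg.norm(pts[-1] - pts[0])                     # straight-line end-to-end distance
    return arc / chord if chord > 0 else 1.0

def arteriovenous_ratio(artery_diameters, vein_diameters):
    return float(np.mean(artery_diameters)) / float(np.mean(vein_diameters))
```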
According to still another preferred embodiment of the present invention, there is provided a retinal artery and vein blood vessel automatic analysis system based on a fundus image, characterized by comprising:
a first module for acquiring a fundus image to be analyzed;
the second module is used for automatic arteriovenous identification of retinal blood vessels;
the third module is used for post-processing repair of arteriovenous vessels;
the fourth module is used for extracting the center line and the boundary of the blood vessel;
a fifth module for intersection identification of vessel centerlines;
a sixth module for detecting an optic disc in the fundus image;
a seventh module for locating a region of interest;
the eighth module is used for selecting the artery and vein blood vessels to be analyzed in the region of interest;
a ninth module for obtaining morphological geometric parameters;
the second module is specifically the system for directly identifying the retinal arteriovenous vessels based on the fundus image.
The invention can obtain one or more of the following technical effects:
1. the invention realizes the direct identification of the retinal arteriovenous vessels (one-step direct identification) through the vessel constraint module, the vessel constraint module relieves the problem of unbalanced positive and negative samples in the retinal vessel segmentation, and simultaneously enhances the characteristic expression of the edges of microvessels and main vessels;
2. the arteriovenous blood vessel classification result obtained by the method is higher in precision, more efficient and higher in robustness;
3. an annular area around the optic disc, within 0.5-2 times the optic disc diameter, is selected as the region of interest for morphological parameter extraction, which further improves the working efficiency of the system while remaining clinically meaningful;
4. the design of the multi-scale characteristic module can adapt to and automatically analyze fundus pictures acquired by cameras with different resolutions and different models, and the compatibility of the system is improved.
It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are described in the specification and illustrated only to illustrate the principle of the present invention, but that various changes and modifications may be made therein without departing from the spirit and scope of the present invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (10)
1. A direct retinal artery and vein blood vessel identification method based on fundus images is characterized by comprising the following steps:
processing the input fundus images by using a basic segmentation network, and outputting 64-channel feature maps;
processing the feature map of the 64 channels by using a blood vessel constraint module, and outputting a first result map;
the first result graph can generate an arteriovenous characteristic graph of 4 channels including background, artery, vein and unknown blood vessel through 1 × 1 convolution;
the arteriovenous characteristic diagram of the 4 channels generates a final retina arteriovenous blood vessel identification diagram through a first Sigmoid function;
wherein the vessel-constraining module comprises two parallel first and second branches, the first branch comprising: two 3 x 3 convolution modules and one 1x1 convolution module, which are used for generating a blood vessel segmentation feature map according to the feature map of the 64 channels; the second Sigmoid function module is used for converting the blood vessel segmentation feature map into a probability map; and a Gaussian activation function module for generating a blood vessel activation map according to the probability map;
the second branch comprises: two 3 x 3 convolution modules for generating an arteriovenous characteristic diagram according to the characteristic diagram of the 64 channels; the first matrix multiplication module is used for multiplying the probability map and the arteriovenous characteristic map and outputting the result map, and the second matrix multiplication module is used for multiplying the output of the first matrix multiplication module and the blood vessel activation map and outputting the first result map.
2. The direct retinal artery and vein blood vessel identification method based on a fundus image as described in claim 1, wherein said basic segmentation network is a U-shaped segmentation network.
3. A direct retinal artery and vein blood vessel identification method based on a fundus image as claimed in claim 2, wherein the U-shaped segmentation network comprises a left-side down-sampling part and a right-side up-sampling part, the left-side down-sampling part performs two times of 3 x 3 convolution on the fundus image to obtain a feature map, then the feature map size is down-sampled to half of the original size through a 2 x 2 maximum pooling layer, then multi-scale feature extraction is continuously performed in 3 feature layers by using a multi-scale feature module and the size of the input feature map is reduced by half through 3 x 3 convolution operation with a step size of 2, and finally the obtained feature map is output to the right-side up-sampling part after the multi-scale feature module and the 3 x 3 convolution operation; the right upsampling part comprises 3 upsampling and merging feature map modules, each upsampling and merging feature map module expands the size of a feature map to be 2 times of that of an input feature map through 2 x 2 upsampling operation, the feature map is merged with the feature map corresponding to the left downsampling part and then subjected to 3 x 3 convolution operation twice to output the feature map, a 2 x 2 upsampling operation is next performed to obtain a feature map of 192 channels, and then a feature map of 64 channels is output through 3 x 3 convolution operation.
4. The direct retinal arteriovenous vessel identification method based on a fundus image according to claim 3, characterized in that the multi-scale feature module adopts a pre-trained Res2Net, the feature map after a 1 × 1 convolution is first divided evenly into k subsets according to the number of channels, the i-th subset being x_i, i ∈ {1, 2, ..., k}; except for x_1, each x_i has a corresponding 3 × 3 convolution, denoted F_i(·), and the output y_i is expressed as:
y_i = x_i, i = 1;
y_i = F_i(x_i), i = 2;
y_i = F_i(x_i + y_{i-1}), 2 < i ≤ k;
thus outputs with different receptive field sizes are obtained, and finally the k outputs y_1 to y_k are fused and subjected to a 1 × 1 convolution operation.
5. A system for direct retinal arteriovenous vessel identification based on a fundus image, characterized by comprising:
the basic segmentation network module is used for processing the input fundus image and outputting a feature map of 64 channels;
the blood vessel constraint module is used for processing the feature map of the 64 channels and outputting a first result map;
a 1 × 1 convolution module, configured to generate a 4-channel arteriovenous feature map including a background, an artery, a vein, and an unknown blood vessel according to the first result map; and
the first Sigmoid function module is used for generating a final retina arteriovenous blood vessel identification image according to the 4-channel arteriovenous characteristic image;
wherein the vessel-constraining module comprises two parallel first and second branches, the first branch comprising: two 3 x 3 convolution modules and one 1x1 convolution module, which are used for generating a blood vessel segmentation feature map according to the feature map of the 64 channels; the second Sigmoid function module is used for converting the blood vessel segmentation feature map into a probability map; and a Gaussian activation function module for generating a blood vessel activation map according to the probability map;
the second branch comprises: two 3 x 3 convolution modules for generating an arteriovenous characteristic diagram according to the characteristic diagram of the 64 channels; the first matrix multiplication module is used for multiplying the probability map and the arteriovenous characteristic map and outputting the result map, and the second matrix multiplication module is used for multiplying the output of the first matrix multiplication module and the blood vessel activation map and outputting the first result map.
6. The system for retinal arteriovenous vessel direct identification based on a fundus image according to claim 5, characterized in that the basic segmentation network module is a U-shaped segmentation network module.
7. A system for retinal arteriovenous vessel direct identification based on a fundus image as claimed in claim 6 wherein the U-shaped segmentation network module comprises a left downsampling part and a right upsampling part, the left downsampling part performs two times of 3 x 3 convolution on the fundus image to obtain a feature map, then the feature map size is downsampled to half of the original size through a 2 x 2 maximum pooling layer, then multi-scale feature extraction is continuously performed by using a multi-scale feature module in the 3 feature layers, the size of the input feature map is halved through the 3 x 3 convolution operation with the step length of 2, and finally the obtained feature map is output to the right upsampling part after the multi-scale feature module and the 3 x 3 convolution operation; the right side up-sampling part comprises 3 up-sampling and merging feature map modules, each up-sampling and merging feature map module expands the size of a feature map to be 2 times of that of an input feature map through 2 x 2 up-sampling operation, the feature map is merged with the feature map corresponding to the left side down-sampling part and then subjected to 3 x 3 convolution operation twice to output the feature map, next, 2 x 2 up-sampling operation is carried out once to obtain a feature map of 192 channels, and then, a feature map of 64 channels is output through 3 x 3 convolution operation.
8. The system for direct retinal arteriovenous vessel identification based on a fundus image according to claim 7, characterized in that the multi-scale feature module adopts a pre-trained Res2Net, the feature map after a 1 × 1 convolution is first divided evenly into k subsets according to the number of channels, the i-th subset being x_i, i ∈ {1, 2, ..., k}; except for x_1, each x_i has a corresponding 3 × 3 convolution, denoted F_i(·), and the output y_i is expressed as:
y_i = x_i, i = 1;
y_i = F_i(x_i), i = 2;
y_i = F_i(x_i + y_{i-1}), 2 < i ≤ k;
thus outputs with different receptive field sizes are obtained, and finally the k outputs y_1 to y_k are fused and subjected to a 1 × 1 convolution operation.
9. A retinal artery and vein blood vessel automatic analysis method based on fundus images is characterized by comprising the following steps:
acquiring a fundus image to be analyzed;
automatic arteriovenous identification of retinal blood vessels;
post-processing repair of arteriovenous vessels;
extracting the central line and the boundary of the blood vessel;
identifying the intersection point of the center lines of the blood vessels;
detecting optic discs in the fundus images;
positioning a region of interest;
selecting an artery and vein vessel to be analyzed in the region of interest;
acquiring morphological geometric parameters;
wherein, the automatic artery and vein recognition of the retinal blood vessel is directly recognized by adopting the method of any one of claims 1 to 4.
10. An automatic retinal artery and vein blood vessel analysis system based on a fundus image, characterized by comprising:
a first module for acquiring a fundus image to be analyzed;
the second module is used for automatic arteriovenous identification of retinal blood vessels;
the third module is used for post-processing repair of arteriovenous vessels;
a fourth module for vessel centerline and boundary extraction;
a fifth module for intersection identification of vessel centerlines;
a sixth module for detecting an optic disc in the fundus image;
a seventh module for locating a region of interest;
the eighth module is used for selecting the artery and vein blood vessels to be analyzed in the region of interest;
a ninth module for obtaining morphological geometric parameters;
wherein the second module is the system for direct retinal arteriovenous vessel identification based on fundus images of any of claims 5-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110468433.XA CN115249248A (en) | 2021-04-28 | 2021-04-28 | Retinal artery and vein blood vessel direct identification method and system based on fundus image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110468433.XA CN115249248A (en) | 2021-04-28 | 2021-04-28 | Retinal artery and vein blood vessel direct identification method and system based on fundus image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115249248A true CN115249248A (en) | 2022-10-28 |
Family
ID=83696593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110468433.XA Pending CN115249248A (en) | 2021-04-28 | 2021-04-28 | Retinal artery and vein blood vessel direct identification method and system based on fundus image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115249248A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115511883A (en) * | 2022-11-10 | 2022-12-23 | 北京鹰瞳科技发展股份有限公司 | Method, apparatus and storage medium for determining curvature of retinal fundus blood vessel |
CN115511883B (en) * | 2022-11-10 | 2023-04-18 | 北京鹰瞳科技发展股份有限公司 | Method, apparatus and storage medium for determining curvature of retinal fundus blood vessel |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111340789B (en) | Fundus retina blood vessel identification and quantification method, device, equipment and storage medium | |
CN108764286B (en) | Classification and identification method of feature points in blood vessel image based on transfer learning | |
Abramoff et al. | The automatic detection of the optic disc location in retinal images using optic disc location regression | |
Liu et al. | A framework of wound segmentation based on deep convolutional networks | |
CN109685809B (en) | Liver infusorian focus segmentation method and system based on neural network | |
CN110751636B (en) | Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network | |
CN111222361A (en) | Method and system for analyzing hypertension retina vascular change characteristic data | |
CN114758137B (en) | Ultrasonic image segmentation method and device and computer readable storage medium | |
CN112884788B (en) | Cup optic disk segmentation method and imaging method based on rich context network | |
CN115578783B (en) | Device and method for identifying eye diseases based on eye images and related products | |
CN113205524A (en) | Blood vessel image segmentation method, device and equipment based on U-Net | |
CN113408647B (en) | Extraction method of cerebral small blood vessel structural characteristics | |
CN117058676B (en) | Blood vessel segmentation method, device and system based on fundus examination image | |
CN113724203B (en) | Model training method and device applied to target feature segmentation in OCT image | |
CN113516678A (en) | Eye fundus image detection method based on multiple tasks | |
Zhao et al. | Attention residual convolution neural network based on U-net (AttentionResU-Net) for retina vessel segmentation | |
CN117611824A (en) | Digital retina image segmentation method based on improved UNET | |
CN116758336A (en) | Medical image intelligent analysis system based on artificial intelligence | |
CN115249248A (en) | Retinal artery and vein blood vessel direct identification method and system based on fundus image | |
CN118071688A (en) | Real-time cerebral angiography quality assessment method | |
CN116740076B (en) | Network model design method for pigment segmentation in retinal pigment degeneration fundus image | |
CN115908795A (en) | Fundus arteriovenous segmentation method, blood vessel parameter calculation method, device and equipment | |
CN115410032A (en) | OCTA image classification structure training method based on self-supervision learning | |
CN114998582A (en) | Coronary artery blood vessel segmentation method, device and storage medium | |
Wang et al. | Segmentation of intravascular ultrasound images based on convex–concave adjustment in extreme regions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |