CN109615614B - Method for extracting blood vessels in fundus image based on multi-feature fusion and electronic equipment

Info

Publication number
CN109615614B
Authority
CN
China
Prior art keywords
image
fundus
blood vessel
extracting
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811418985.4A
Other languages
Chinese (zh)
Other versions
CN109615614A (en)
Inventor
李建强
李鹏智
解黎阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201811418985.4A
Publication of CN109615614A
Application granted
Publication of CN109615614B
Legal status: Active (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Abstract

The embodiment of the invention provides a method for extracting blood vessels from a fundus image based on multi-feature fusion, and an electronic device. The method comprises the following steps: based on the fundus image to be processed, extracting a plurality of different features of the fundus blood vessels with a plurality of different classifiers and performing feature fusion on these features; obtaining a segmented image of the fundus blood vessels with a trained denseCRF model based on the comprehensive feature obtained after feature fusion; and performing morphological analysis on the segmented image to extract a binary image of the fundus blood vessels. By fusing multiple features of the fundus blood vessels, the embodiment of the invention extracts the vessel image from the fundus image and can extract vessel-edge images more effectively and accurately, including the vessel-edge images of blurred fundus images.

Description

Method for extracting blood vessels in fundus image based on multi-feature fusion and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a method for extracting blood vessels in a fundus image based on multi-feature fusion and electronic equipment.
Background
For the analysis of fundus images, image semantic segmentation methods such as threshold segmentation, edge-based segmentation, genetic-coding segmentation, wavelet-transform segmentation and clustering segmentation are attracting increasing attention from researchers. With the development of artificial intelligence recognition technology, convolutional neural networks (CNNs) have also attracted extensive attention and have been applied in the field of image segmentation.
At present, most image semantic segmentation methods based on a fully convolutional network combined with a conditional random field (FCN + CRF) are represented by FCN + denseCRF: an FCN model is trained on samples to obtain fundus blood vessel spatial-shape segmentation features containing non-sharp boundaries, and these segmentation features are then used to train a denseCRF to obtain a fundus blood vessel segmentation map. Because only a single type of feature is trained, the classification result of the trained model is not fine enough and is insensitive to details in the image; the extraction precision and edge precision are therefore not high, and in particular the vessel-edge details of blurred images are insufficient.
Disclosure of Invention
In order to overcome the above problems or at least partially solve the above problems, embodiments of the present invention provide a method and an electronic device for extracting blood vessels in a fundus image based on multi-feature fusion, so as to extract blood vessel edge images, including a fundus blood vessel edge image of a blurred image, more efficiently and accurately.
In a first aspect, an embodiment of the present invention provides a method for extracting blood vessels in a fundus image based on multi-feature fusion, including:
based on the fundus image to be processed, respectively extracting a plurality of different features of the fundus blood vessel by utilizing a plurality of different classifiers, and performing feature fusion on the plurality of different features;
acquiring a segmented image of the fundus blood vessel by utilizing a trained denseCRF model based on the comprehensive characteristics after the characteristic fusion;
and performing morphological analysis processing on the segmentation image, and extracting a binary image of the fundus blood vessel.
In a second aspect, an embodiment of the present invention provides an apparatus for extracting blood vessels in a fundus image based on multi-feature fusion, including:
the comprehensive characteristic extraction module is used for respectively extracting a plurality of different characteristics of the fundus blood vessel by utilizing a plurality of different classifiers based on the fundus image to be processed and carrying out characteristic fusion on the different characteristics;
the fundus blood vessel image segmentation module is used for acquiring a segmentation image of the fundus blood vessel by utilizing a trained denseCRF model based on the comprehensive characteristics after the characteristic fusion;
and the extraction output module is used for performing morphological analysis processing on the segmentation image and extracting a binary image of the fundus blood vessel.
In a third aspect, an embodiment of the present invention provides an electronic device, including: at least one memory, at least one processor, a communication interface, and a bus; the memory, the processor and the communication interface complete mutual communication through the bus, and the communication interface is used for information transmission between the electronic equipment and the fundus image equipment; the memory stores therein a computer program executable on the processor, and the processor implements the method for extracting blood vessels in a fundus image based on multi-feature fusion as described in the first aspect above when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for extracting blood vessels in a fundus image based on multi-feature fusion as described in the first aspect above.
According to the method and the electronic equipment for extracting the blood vessels in the fundus image based on the multi-feature fusion, provided by the embodiment of the invention, the complementarity between the identification results obtained by different classifiers is considered, the multiple classifiers are fused, the multiple feature fusion is combined with the FCN + densecrF model, the blood vessel image in the fundus image is extracted, the segmentation and extraction effect of the fundus blood vessels is better, partial blood vessels can be extracted from the image with fuzzy edges, and the extraction precision is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for extracting blood vessels from a fundus image based on multi-feature fusion according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of multi-feature fusion in the method for extracting blood vessels from fundus images based on multi-feature fusion according to the embodiment of the present invention;
FIG. 3 is a schematic flow chart of morphological analysis of segmentation images in the method for extracting blood vessels from fundus images based on multi-feature fusion according to the embodiment of the present invention;
fig. 4 is a schematic flowchart of a method for extracting blood vessels from a fundus image based on multi-feature fusion according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram of an extraction device for blood vessels in a fundus image based on multi-feature fusion according to an embodiment of the present invention;
fig. 6 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without any creative efforts belong to the protection scope of the embodiments of the present invention.
Because neural networks offer strong adaptivity, learning capability, nonlinear mapping capability, robustness and fault tolerance, they have attracted more and more attention. However, for image recognition with high-dimensional features, it is difficult for a conventional single classifier to reach a satisfactory recognition rate. Since the recognition results obtained by different classifiers are often highly complementary, fusing multiple classifiers and training a network model on the fused features increases the amount of usable recognition information and reduces its uncertainty, which is an effective way to improve the recognition rate and accuracy of the whole system. The embodiment of the invention therefore combines multi-feature fusion with the FCN + denseCRF model, so that the segmentation and extraction of the fundus blood vessels is better, partial blood vessels can be extracted even from images with blurred edges, and the precision is improved. Embodiments of the present invention will be described and illustrated below with reference to various embodiments.
Fig. 1 is a schematic flowchart of a method for extracting blood vessels from a fundus image based on multi-feature fusion according to an embodiment of the present invention, as shown in fig. 1, the method includes:
s101, based on the fundus image to be processed, a plurality of different classifiers are utilized to respectively extract a plurality of different features of the fundus blood vessel, and feature fusion is carried out on the plurality of different features.
The embodiment of the invention fuses a plurality of classifiers by utilizing the complementarity between the identification results obtained by different classifiers. That is, different classifiers are respectively used to correspondingly extract different features of the fundus blood vessel from the fundus image to be processed, and the different features represent the shape of the fundus blood vessel from different angles. Optionally, the different features may be edge features, texture features and spatial shape features of the fundus blood vessel.
Then, to characterize the fundus blood vessel image more accurately, feature fusion is performed on these different features to obtain a comprehensive feature. It should be understood that existing data fusion algorithms, or improved algorithms, can be used when fusing the different features.
And S102, acquiring a segmented image of the fundus blood vessel by using the trained densecrF model based on the comprehensive characteristics after the characteristic fusion.
After the comprehensive feature of the fundus blood vessel image is obtained as described above, the embodiment of the invention obtains the segmented image of the fundus blood vessels from the fundus image to be processed according to this comprehensive feature. In particular, the segmentation may be achieved with a fully connected conditional random field (denseCRF): each pixel is taken as a node and the relation between pixels as an edge, forming a conditional random field. By using the comprehensive feature obtained from the fusion of different features and taking the global class-transfer influence into account through the denseCRF structure, the segmentation of the fundus blood vessel image is realized with improved accuracy, and an optimized fundus blood vessel segmentation image is obtained.
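As a minimal sketch of how such a denseCRF refinement step might be implemented, the fragment below uses the pydensecrf package; the package choice, the construction of the unary term from a per-pixel vessel probability map derived from the fused features, and all pairwise parameter values are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def densecrf_segment(rgb_image, vessel_prob, iters=5):
    """Refine a per-pixel vessel probability map with a fully connected CRF.

    rgb_image:   H x W x 3 uint8 fundus image
    vessel_prob: H x W array in [0, 1], vessel probability assumed to be
                 derived from the fused comprehensive features
    Returns an H x W label map (1 = vessel, 0 = background).
    """
    h, w = vessel_prob.shape
    # Background/vessel probabilities stacked as a softmax-like tensor.
    probs = np.stack([1.0 - vessel_prob, vessel_prob], axis=0)

    d = dcrf.DenseCRF2D(w, h, 2)                  # width, height, number of labels
    d.setUnaryEnergy(unary_from_softmax(probs))   # unary term from the classifier output

    # Pairwise terms: spatial smoothness plus appearance-aware (bilateral) coupling.
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=60, srgb=10, compat=5,
                           rgbim=np.ascontiguousarray(rgb_image))

    q = d.inference(iters)                        # mean-field inference
    return np.argmax(np.array(q), axis=0).reshape(h, w)
```

In this sketch the fused comprehensive feature is assumed to have already been turned into a vessel probability map; the denseCRF then only refines that map with smoothness and appearance terms.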
S103, morphological analysis processing is carried out on the segmentation images, and a binary image of the fundus blood vessel is extracted.
In the embodiment of the invention, on the basis of obtaining the segmentation image of the fundus blood vessel according to the processing, in order to visually represent the shape, the position and the like of the fundus blood vessel, morphological analysis processing is carried out according to the segmentation image of the fundus blood vessel, and finally a binary image of the fundus blood vessel to be segmented is obtained.
It can be understood that, among other things, morphological analysis is mainly used to extract image components from the image that are meaningful in expressing and describing the shape of the region, so that subsequent recognition can grasp the shape features of the target object, such as the boundary and connected regions, which have the most distinguishing capability.
According to the method for extracting the blood vessels in the fundus image based on the multi-feature fusion, provided by the embodiment of the invention, the complementarity between the identification results obtained by different classifiers is considered, the multiple classifiers are fused, the multiple feature fusion is combined with the FCN + densecrF model, the blood vessel image in the fundus image is extracted, the segmentation and extraction effect of the fundus blood vessels is better, partial blood vessels can be extracted from the image with fuzzy edges, and the extraction precision is improved.
Optionally, the step of extracting the edge feature, the texture feature, and the spatial shape feature of the fundus blood vessel respectively includes: analyzing the fundus image to be processed by using a Canny algorithm classifier, and extracting edge characteristics; analyzing the fundus image to be processed by using an LBP algorithm classifier, and extracting texture features; and analyzing the fundus image to be processed by using an FCN algorithm classifier, and extracting spatial shape characteristics.
It can be understood that, in consideration of the different characteristics of the fundus blood vessels, i.e. their edge, texture and spatial-shape features, the above embodiments select a suitable classifier for each of these features and extract them from the fundus image to be processed. Specifically, the texture features of the fundus blood vessels are extracted based on LBP, the vessel edge features are extracted based on Canny, and the spatial-shape segmentation features containing non-sharp boundaries are extracted by the FCN. The fully convolutional network (FCN) model operates directly on the pixels of the image: the segmentation features are obtained through a series of convolutional layers and a deconvolution layer, and a final softmax layer outputs the class probabilities.
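The following sketch illustrates how the Canny edge features and LBP texture features might be computed per pixel with OpenCV and scikit-image; the green-channel choice, the thresholds and the LBP parameters are illustrative assumptions, and the spatial-shape features are assumed to come from a separately trained FCN, represented here only by a hypothetical fcn_model.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def edge_feature(gray, low=50, high=150):
    """Canny edge map of the fundus image, scaled to [0, 1]."""
    return cv2.Canny(gray, low, high).astype(np.float32) / 255.0

def texture_feature(gray, radius=2, n_points=16):
    """Per-pixel LBP code, normalized to [0, 1] as a rough texture descriptor."""
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    return (lbp / max(lbp.max(), 1.0)).astype(np.float32)

# Example: build per-pixel feature maps from the green channel, which usually
# shows the best vessel contrast in fundus photographs.
bgr = cv2.imread("fundus.png")              # hypothetical input path
gray = bgr[:, :, 1]                         # green channel
edges = edge_feature(gray)
texture = texture_feature(gray)
# shape_prob = fcn_model.predict(bgr)       # assumed: vessel probability map from a trained FCN
```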
It can be understood that, in order to realize accurate fundus blood vessel image segmentation, it is necessary to train and optimally update each classifier, i.e. the processing model, with a certain amount of training samples before performing data processing with the classifier, i.e. the processing model, i.e. before the step of extracting edge features by analyzing the fundus image to be processed with the Canny algorithm classifier, the method of the embodiment of the present invention further includes: obtaining a certain number of fundus image training samples, performing iterative training on the established basic Canny algorithm classifier, basic LBP algorithm classifier, basic FCN algorithm classifier and basic densecrF model by using the fundus image training samples, and obtaining the Canny algorithm classifier, the LBP algorithm classifier, the FCN algorithm classifier and the densecrF model.
It can be understood that the basic Canny algorithm classifier, basic LBP algorithm classifier, basic FCN algorithm classifier and basic denseCRF model can be network models generated in advance according to the application requirements, either randomly or automatically after initial setting of the model parameters. A certain number of training samples are then needed to iteratively train and update these network models until model parameters satisfying the set conditions are obtained, i.e. the optimal models are obtained; these are then used as the processing models for the final fundus blood vessel image segmentation, namely the Canny algorithm classifier, LBP algorithm classifier, FCN algorithm classifier and denseCRF model applied in the above embodiments.
Optionally, according to the foregoing embodiments, the step of performing feature fusion on a plurality of different features specifically includes: extracting feature vectors based on the edge features, the texture features and the space shape features respectively, and cascading the extracted feature vectors to form comprehensive feature vectors; and carrying out normalization processing on the comprehensive characteristic vector by using a probability normalization algorithm, and processing the normalized comprehensive characteristic vector by using a principal component analysis algorithm to realize characteristic fusion.
Fig. 2 is a schematic flow diagram of the multi-feature fusion performed in the method for extracting blood vessels from a fundus image based on multi-feature fusion according to an embodiment of the present invention. As shown in Fig. 2, when performing multi-feature fusion of the fundus blood vessel image, the feature vectors of the different features are first extracted from the edge, texture and spatial-shape features obtained by the different classifiers and concatenated into a comprehensive feature vector X = (X1, ..., Xn). The comprehensive feature vector X is then normalized with a probability normalization algorithm to obtain the normalized vector X' = (X'1, ..., X'n), so that the magnitudes of the individual components are close to one another. Finally, the autocorrelation matrix of the normalized comprehensive feature vector is computed and the corresponding K-L transform is applied by principal component analysis, which realizes the fusion of the different features.
Fig. 3 is a schematic flow chart of the morphological analysis performed on the segmented image in the method for extracting blood vessels from a fundus image based on multi-feature fusion according to the embodiment of the present invention. The step of processing the normalized comprehensive feature vector with a principal component analysis algorithm to realize feature fusion specifically includes: computing the autocorrelation matrix of the normalized comprehensive feature vector and decomposing it with a principal component analysis algorithm to obtain an eigenvector matrix and an eigenvalue matrix; and applying a K-L transform to the normalized comprehensive feature vector to obtain a transformed eigenvalue matrix, and selecting the eigenvector corresponding to the largest eigenvalue in the transformed eigenvalue matrix as the comprehensive feature.
According to the above embodiment, when the fusion of the different fundus blood vessel features is realized by principal component analysis on the normalized comprehensive feature vector, the autocorrelation matrix R of the normalized vector X' = (X'1, ..., X'n) is first computed and eigen-decomposed according to the principle of principal component analysis (PCA), yielding an eigenvector matrix A and an eigenvalue matrix U. A K-L transform is then applied to X': letting Y' = U'X', Y' is the K-L transform of X' = (X'1, ..., X'n), where each column vector of U' contains information of the comprehensive feature vector X = (X1, ..., Xn), so Y' is taken as the fused feature of X. Alternatively, the eigenvector corresponding to the largest eigenvalue can be selected as the comprehensive feature of X. In this way the purpose of feature fusion is achieved, and a comprehensive feature comprising the texture, edge and spatial shape of the fundus blood vessels is obtained.
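A minimal numerical sketch of this normalization and K-L (PCA) fusion step is given below, assuming the concatenated edge/texture/shape features are arranged as one row per sample; interpreting the probability normalization as a z-score style scaling and using numpy's eigendecomposition are assumptions made for illustration.

```python
import numpy as np

def fuse_features(features):
    """K-L (PCA) fusion of concatenated feature vectors.

    features: (num_samples, n) array, each row a concatenated
              edge/texture/shape feature vector X = (X1, ..., Xn).
    Returns the full K-L projection (Y' = U'X') and the single
    component associated with the largest eigenvalue.
    """
    # Normalize each component so the magnitudes are comparable
    # (interpreted here as a z-score; the text calls it probability normalization).
    x = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)

    # Autocorrelation matrix R of the normalized vectors.
    r = (x.T @ x) / x.shape[0]

    # Eigendecomposition of R: eigenvalues (matrix U) and eigenvectors (matrix A).
    eigvals, eigvecs = np.linalg.eigh(r)
    order = np.argsort(eigvals)[::-1]        # sort by decreasing eigenvalue
    eigvecs = eigvecs[:, order]

    y = x @ eigvecs                          # K-L transform of X'
    return y, y[:, 0]                        # fused features, max-eigenvalue component
```

Here the full projection corresponds to Y' = U'X', and the first returned column is the component associated with the largest eigenvalue, matching the alternative described above.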
Optionally, according to the above embodiments, the step of performing morphological analysis processing on the segmented image and extracting the binary image of the fundus blood vessels specifically includes: performing grayscale inversion on the segmented image, and applying morphological top-hat filtering to the gray-inverted image to obtain a top-hat transformed image; and performing binarization on the top-hat transformed image with a threshold analysis method to obtain the binary image of the fundus blood vessels.
In the embodiment of the present invention, the segmented image obtained according to the above embodiments is first converted to grayscale and its gray levels are inverted. After this processing the blood vessel regions have larger (brighter) gray values and the background has smaller (darker) gray values, which facilitates the subsequent top-hat filtering.
Morphological top-hat filtering is then applied to the gray-inverted image, i.e. the result of a morphological opening is subtracted from the image to obtain the top-hat transformed image. Top-hat filtering can be applied to either binary or grayscale images; it enhances image contrast and highlights bright objects against a dark background.
And finally, thresholding the image after the top hat transformation. That is, the difference between the gray characteristics of the target area to be extracted in the image and the background thereof is utilized, the image is regarded as the combination of two types of areas (the target area and the background area) with different gray levels, and a reasonable threshold is selected to determine whether each pixel point in the image belongs to the target area or the background area, so that a corresponding binary image is generated. By setting the threshold parameter, a binary image of the fundus blood vessel is extracted.
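A short sketch of this grayscale inversion, top-hat filtering and thresholding chain with OpenCV is shown below; the elliptical structuring element, its size, and the use of Otsu's method to pick the "reasonable threshold" are illustrative assumptions.

```python
import cv2

def vessels_to_binary(segmentation_gray, kernel_size=15):
    """Grayscale inversion + morphological top-hat + threshold -> binary vessel map.

    segmentation_gray: H x W uint8 grayscale rendering of the denseCRF
                       segmentation, with vessels darker than the background.
    """
    inverted = cv2.bitwise_not(segmentation_gray)       # vessels become bright

    # Top-hat: the image minus its morphological opening, keeps bright, thin vessels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(inverted, cv2.MORPH_TOPHAT, kernel)

    # Binarization; Otsu stands in for the "reasonable threshold" of the text.
    _, binary = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```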
To further illustrate the technical solutions of the embodiments of the present invention, the embodiments of the present invention provide the following processing flows of the embodiments according to the above embodiments, but do not limit the scope of the embodiments of the present invention.
Fig. 4 is a schematic flow chart of a method for extracting blood vessels from a fundus image based on multi-feature fusion according to another embodiment of the present invention, and as shown in fig. 4, the method for extracting a blood vessel in a fundus image according to an embodiment of the present invention includes the following processing procedures:
firstly, an FCN algorithm classifier, an LBP algorithm classifier and a Canny algorithm classifier are trained with fundus image training samples to obtain the fundus blood vessel spatial-shape segmentation features containing non-sharp boundaries, the fundus blood vessel texture features and the fundus blood vessel edge features, and multi-feature fusion is then performed on these fundus blood vessel features. That is to say:
extracting the texture features of the fundus blood vessels based on LBP, the vessel edge features based on Canny, and the spatial-shape segmentation features containing non-sharp boundaries with the FCN, respectively;
extracting the feature vectors of these three kinds of features and concatenating them into a vector X = (X1, ..., Xn);
then normalizing the feature vector X = (X1, ..., Xn) with a probability normalization method so that the magnitudes of the individual components are close to one another;
finally, computing the autocorrelation matrix R of the normalized feature vector X' = (X'1, ..., X'n) and eigen-decomposing R according to the principle of PCA to obtain an eigenvector matrix A and an eigenvalue matrix U, then applying a K-L transform to X', in which each column vector of U' contains the information of X, and taking the transformed Y' as the fused feature of X. The eigenvector corresponding to the largest eigenvalue can also be selected as the comprehensive feature of X, so that the purpose of feature fusion is achieved and a comprehensive feature including the texture, edge and spatial shape of the fundus blood vessels is obtained.
Secondly, training a denseCRF model by using the obtained comprehensive characteristics to obtain a segmented image of the fundus blood vessel. It is understood that the feature fusion for fundus image vessel segmentation described above can also be used in other network models.
And finally, performing morphological processing operation on the fundus blood vessel segmentation image obtained by the denseCRF model to finally obtain a fundus blood vessel binary image.
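Putting the stages together, a hedged end-to-end sketch could look as follows; edge_feature, texture_feature, fuse_features, densecrf_segment and vessels_to_binary are the illustrative helpers sketched earlier, and fcn_model is an assumed, separately trained FCN whose predict method returns a per-pixel vessel probability map.

```python
import cv2
import numpy as np

def extract_vessels(image_path, fcn_model):
    """Illustrative end-to-end flow: features -> fusion -> denseCRF -> morphology."""
    bgr = cv2.imread(image_path)
    gray = bgr[:, :, 1]                                  # green channel

    # 1. Per-pixel features from the three classifiers.
    edges = edge_feature(gray)
    texture = texture_feature(gray)
    shape_prob = fcn_model.predict(bgr)                  # assumed FCN output, H x W in [0, 1]

    # 2. Concatenate per pixel (rows = pixels, columns = features) and fuse.
    stacked = np.stack([edges, texture, shape_prob], axis=-1).reshape(-1, 3)
    _, fused = fuse_features(stacked)
    vessel_prob = fused.reshape(gray.shape)
    vessel_prob = (vessel_prob - vessel_prob.min()) / (vessel_prob.ptp() + 1e-8)

    # 3. denseCRF segmentation, then morphological post-processing.
    labels = densecrf_segment(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB), vessel_prob)
    seg_gray = ((1 - labels) * 255).astype(np.uint8)     # vessels dark, as assumed above
    return vessels_to_binary(seg_gray)
```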
In the embodiment of the invention, PCA processing is performed on the texture features of the fundus blood vessels extracted based on LBP, the vessel edge features extracted based on Canny and the spatial-shape segmentation features containing non-sharp boundaries extracted by the FCN, and a denseCRF model is trained with the resulting comprehensive feature including the texture, edge and spatial shape of the fundus blood vessels. Because the recognition results obtained by different classifiers are often highly complementary, fusing multiple classifiers and training the network model on the fused features increases the amount of usable recognition information and reduces its uncertainty, which is an effective way to improve the recognition rate and accuracy of the whole system. By combining multi-feature fusion with the FCN + denseCRF model, the invention achieves better segmentation and extraction of the fundus blood vessels, partial vessel edges can be extracted even from blurred pathological images, and the extraction precision of the vessels and vessel edges is improved.
As another aspect of the embodiments of the present invention, the embodiments of the present invention provide an apparatus for extracting blood vessels in a fundus image based on multi-feature fusion according to the above-described embodiments, which is used for realizing the extraction of blood vessels in a fundus image based on multi-feature fusion in the above-described embodiments. Therefore, the description and definition in the method for extracting blood vessels from fundus images based on multi-feature fusion in the embodiments described above can be used for understanding the execution modules in the embodiments of the present invention, and specific reference may be made to the embodiments described above, which are not described herein again.
According to an embodiment of the present invention, the structure of the extraction device for blood vessels in fundus images based on multi-feature fusion is shown in fig. 5, which is a schematic structural diagram of the extraction device for blood vessels in fundus images based on multi-feature fusion provided by the embodiment of the present invention, and the device can be used for realizing the extraction of blood vessels in fundus images based on multi-feature fusion in the above-mentioned method embodiments, and the device comprises: an integrated feature extraction module 501, a fundus blood vessel image segmentation module 502 and an extraction output module 503. Wherein:
the comprehensive feature extraction module 501 is configured to extract a plurality of different features of the fundus blood vessel based on the fundus image to be processed, respectively, using a plurality of different classifiers, and perform feature fusion on the plurality of different features; the fundus blood vessel image segmentation module 502 is used for acquiring a segmentation image of a fundus blood vessel by utilizing a trained denseCRF model based on the comprehensive characteristics after the characteristic fusion; the extraction output module 503 is configured to perform morphological analysis processing on the segmented image, and extract a binary image of the fundus blood vessel.
Specifically, the comprehensive feature extraction module 501 uses different classifiers to extract the corresponding different features of the fundus blood vessels from the fundus image to be processed, the different features representing the shape of the fundus blood vessels from different angles. Optionally, the different features may be the edge features, texture features and spatial-shape features of the fundus blood vessels. Then, to characterize the fundus blood vessel image more accurately, the comprehensive feature extraction module 501 performs feature fusion on these different features to obtain a comprehensive feature.
The fundus blood vessel image segmentation module 502 may use a conditional random field (denseCRF) to achieve segmentation. Each pixel point is taken as a node, and the relation between the pixels is taken as an edge, so that a conditional random field is formed. The fundus blood vessel image segmentation module 502 realizes segmentation of the fundus blood vessel image by utilizing the integrated characteristics after fusion of different characteristics and considering the global class transfer influence by combining with the denseCRF structure.
In order to visually represent the shape, position, and the like of the fundus blood vessel, the extraction and output module 503 performs morphological analysis processing on the segmented image of the fundus blood vessel to finally obtain a binary image of the fundus blood vessel to be segmented.
According to the extraction device of the blood vessels in the fundus image based on the multi-feature fusion, provided by the embodiment of the invention, the corresponding execution module is arranged, the complementarity between the identification results obtained by different classifiers is considered, the multiple classifiers are fused, the multiple feature fusion is combined with the FCN + densecrF model, the extraction of the blood vessel image in the fundus image is carried out, the segmentation and extraction effects of the fundus blood vessels can be better, partial blood vessels can be extracted from the image with fuzzy edges, and the extraction precision is improved.
It is understood that, in the embodiment of the present invention, each relevant program module in the apparatus of each of the above embodiments may be implemented by a hardware processor (hardware processor). Moreover, the device for extracting blood vessels in fundus images based on multi-feature fusion according to the embodiments of the present invention can implement the flow of extracting blood vessels in fundus images based on multi-feature fusion according to the above-mentioned method embodiments by using the above-mentioned program modules, and when the device is used for implementing the extraction of blood vessels in fundus images based on multi-feature fusion according to the above-mentioned method embodiments, the beneficial effects produced by the device according to the embodiments of the present invention are the same as those of the above-mentioned method embodiments, and reference may be made to the above-mentioned method embodiments, and no further description is given here.
As another aspect of the embodiment of the present invention, in this embodiment, an electronic device is provided according to the above embodiments, and with reference to fig. 6, an entity structure diagram of the electronic device provided in the embodiment of the present invention includes: at least one memory 601, at least one processor 602, a communication interface 603, and a bus 604.
Wherein, the memory 601, the processor 602 and the communication interface 603 complete mutual communication through a bus 604, and the communication interface 603 is used for information transmission between the electronic device and the fundus image device; the memory 601 stores a computer program operable on the processor 602, and the processor 602 executes the computer program to implement the method for extracting blood vessels in a fundus image based on multi-feature fusion as described in the above embodiments.
It is understood that the electronic device at least includes a memory 601, a processor 602, a communication interface 603 and a bus 604, and the memory 601, the processor 602 and the communication interface 603 form a communication connection with each other through the bus 604, and can complete communication with each other, such as the processor 602 reading program instructions of the extraction method of blood vessels in the fundus image based on multi-feature fusion from the memory 601. In addition, the communication interface 603 can also realize communication connection between the electronic device and the fundus image device, and can complete mutual information transmission, such as extraction of blood vessels in the fundus image based on multi-feature fusion through the communication interface 603.
When the electronic device is running, the processor 602 calls the program instructions in the memory 601 to execute the methods provided by the above-mentioned method embodiments, including for example: based on the fundus image to be processed, respectively extracting a plurality of different features of the fundus blood vessel by utilizing a plurality of different classifiers, and performing feature fusion on the plurality of different features; acquiring a segmented image of the fundus blood vessel by utilizing a trained denseCRF model based on the comprehensive characteristics after the characteristic fusion; morphological analysis processing is performed on the segmented image, and a binary image of the fundus blood vessel and the like are extracted.
The program instructions in the memory 601 may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product. Alternatively, all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, where the program may be stored in a computer-readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Embodiments of the present invention also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute a method for extracting a blood vessel in a fundus image based on multi-feature fusion as described in the above embodiments, for example, including: based on the fundus image to be processed, respectively extracting a plurality of different features of the fundus blood vessel by utilizing a plurality of different classifiers, and performing feature fusion on the plurality of different features; acquiring a segmented image of the fundus blood vessel by utilizing a trained denseCRF model based on the comprehensive characteristics after the characteristic fusion; morphological analysis processing is performed on the segmented image, and a binary image of the fundus blood vessel and the like are extracted.
According to the electronic device and the non-transitory computer readable storage medium provided by the embodiments of the invention, by executing the method for extracting blood vessels in the fundus image based on multi-feature fusion described in the embodiments, the complementarity between the recognition results obtained by different classifiers is considered, multiple classifiers are fused, and the multiple feature fusion is combined with the FCN + densecrF model to extract the blood vessel image in the fundus image, so that the segmentation and extraction effects of fundus blood vessels can be better, partial blood vessels can be extracted from the image with blurred edges, and the extraction accuracy is improved.
It is to be understood that the above-described embodiments of the apparatus, the electronic device and the storage medium are merely illustrative, and that elements described as separate components may or may not be physically separate, may be located in one place, or may be distributed on different network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the technical solutions mentioned above may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a usb disk, a removable hard disk, a ROM, a RAM, a magnetic or optical disk, etc., and includes several instructions for causing a computer device (such as a personal computer, a server, or a network device, etc.) to execute the methods described in the method embodiments or some parts of the method embodiments.
In addition, it should be understood by those skilled in the art that in the specification of the embodiments of the present invention, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the description of the embodiments of the invention, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects.
However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of an embodiment of this invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, and not to limit the same; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A method for extracting blood vessels in a fundus image based on multi-feature fusion is characterized by comprising the following steps:
based on the fundus image to be processed, respectively extracting a plurality of different features of the fundus blood vessel by utilizing a plurality of different classifiers, and performing feature fusion on the plurality of different features;
acquiring a segmented image of the fundus blood vessel by utilizing a trained denseCRF model based on the comprehensive characteristics after the characteristic fusion;
performing morphological analysis processing on the segmentation image, and extracting a binary image of the fundus blood vessel;
the step of extracting a plurality of different features of the fundus blood vessel respectively specifically includes:
respectively extracting edge features, texture features and space shape features of the fundus blood vessels from the fundus image to be processed;
the step of performing feature fusion on the plurality of different features specifically includes:
extracting feature vectors based on the edge features, the texture features and the space shape features respectively, and cascading the extracted feature vectors to form comprehensive feature vectors;
and carrying out normalization processing on the comprehensive characteristic vector by using a probability normalization algorithm, and processing the normalized comprehensive characteristic vector by using a principal component analysis algorithm to realize the characteristic fusion.
2. The method according to claim 1, wherein the step of extracting the edge feature, texture feature and spatial shape feature of the fundus blood vessel respectively specifically comprises:
analyzing the fundus image to be processed by using a Canny algorithm classifier, and extracting the edge characteristics; analyzing the fundus image to be processed by using an LBP algorithm classifier, and extracting the texture features; and analyzing the fundus image to be processed by using an FCN algorithm classifier, and extracting the spatial shape characteristics.
3. The method according to claim 1, wherein the step of processing the normalized integrated feature vector by using a principal component analysis algorithm to realize the feature fusion specifically comprises:
solving an autocorrelation matrix of the normalized comprehensive eigenvector, decomposing the autocorrelation matrix by utilizing a principal component analysis algorithm, and respectively obtaining an eigenvector matrix and an eigenvalue matrix;
and performing K-L conversion on the normalized comprehensive characteristic vector to obtain a converted characteristic value matrix, and selecting the characteristic vector corresponding to the maximum characteristic value in the converted characteristic value matrix as the comprehensive characteristic.
4. The method according to any one of claims 1 to 3, wherein the step of performing morphological analysis processing on the segmented image and extracting the binary image of the fundus blood vessel specifically comprises:
carrying out grayscale inversion on the segmentation image, and carrying out morphological top-hat filtering processing on the gray-inverted image to obtain a top-hat transformed image;
and performing binarization processing on the top-hat transformed image by using a threshold analysis method to obtain a binary image of the fundus blood vessel.
5. The method according to claim 2, characterized in that before the step of analyzing the fundus image to be processed by means of a Canny algorithm classifier to extract the edge features, it further comprises:
obtaining a certain number of fundus image training samples, performing iterative training on the established basic Canny algorithm classifier, basic LBP algorithm classifier, basic FCN algorithm classifier and basic densecrF model by using the fundus image training samples, and obtaining the Canny algorithm classifier, the LBP algorithm classifier, the FCN algorithm classifier and the densecrF model.
6. An extraction device of blood vessels in fundus images based on multi-feature fusion is characterized by comprising:
the comprehensive characteristic extraction module is used for respectively extracting a plurality of different characteristics of the fundus blood vessel by utilizing a plurality of different classifiers based on the fundus image to be processed and carrying out characteristic fusion on the different characteristics;
the fundus blood vessel image segmentation module is used for acquiring a segmentation image of the fundus blood vessel by utilizing a trained denseCRF model based on the comprehensive characteristics after the characteristic fusion;
the extraction output module is used for performing morphological analysis processing on the segmentation image and extracting a binary image of the fundus blood vessel;
the step of extracting a plurality of different features of the fundus blood vessel respectively specifically includes:
respectively extracting edge features, texture features and space shape features of the fundus blood vessels from the fundus image to be processed;
the step of performing feature fusion on the plurality of different features specifically includes:
extracting feature vectors based on the edge features, the texture features and the space shape features respectively, and cascading the extracted feature vectors to form comprehensive feature vectors;
and carrying out normalization processing on the comprehensive characteristic vector by using a probability normalization algorithm, and processing the normalized comprehensive characteristic vector by using a principal component analysis algorithm to realize the characteristic fusion.
7. An electronic device, comprising: at least one memory, at least one processor, a communication interface, and a bus;
the memory, the processor and the communication interface complete mutual communication through the bus, and the communication interface is also used for information transmission between the electronic equipment and the fundus image equipment;
the memory has stored therein a computer program operable on the processor, which when executed by the processor, implements the method of any of claims 1 to 5.
8. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1-5.
CN201811418985.4A 2018-11-26 2018-11-26 Method for extracting blood vessels in fundus image based on multi-feature fusion and electronic equipment Active CN109615614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811418985.4A CN109615614B (en) 2018-11-26 2018-11-26 Method for extracting blood vessels in fundus image based on multi-feature fusion and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811418985.4A CN109615614B (en) 2018-11-26 2018-11-26 Method for extracting blood vessels in fundus image based on multi-feature fusion and electronic equipment

Publications (2)

Publication Number Publication Date
CN109615614A CN109615614A (en) 2019-04-12
CN109615614B true CN109615614B (en) 2020-08-18

Family

ID=66003908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811418985.4A Active CN109615614B (en) 2018-11-26 2018-11-26 Method for extracting blood vessels in fundus image based on multi-feature fusion and electronic equipment

Country Status (1)

Country Link
CN (1) CN109615614B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009631A (en) * 2019-04-15 2019-07-12 唐晓颖 Vascular quality appraisal procedure, device, equipment and the medium of eye fundus image
CN110349170B (en) * 2019-07-13 2022-07-08 长春工业大学 Full-connection CRF cascade FCN and K mean brain tumor segmentation algorithm
CN112257791A (en) * 2020-10-26 2021-01-22 重庆邮电大学 Classification method of multi-attribute classification tasks based on CNN and PCA
CN112734723B (en) * 2021-01-08 2023-06-30 温州医科大学 Multi-source data-oriented breast tumor image classification prediction method and device
CN114220054B (en) * 2021-12-15 2023-04-18 北京中科智易科技股份有限公司 Method for analyzing tactical action of equipment and synchronously displaying equipment based on equipment bus data

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106530283A (en) * 2016-10-20 2017-03-22 北京工业大学 SVM (support vector machine)-based medical image blood vessel recognition method
CN107248161A (en) * 2017-05-11 2017-10-13 江西理工大学 Retinal vessel extracting method is supervised in a kind of having for multiple features fusion
CN107704886A (en) * 2017-10-20 2018-02-16 北京工业大学 A kind of medical image hierarchy system and method based on depth convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803055B (en) * 2015-11-26 2019-10-25 腾讯科技(深圳)有限公司 Face identification method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106530283A (en) * 2016-10-20 2017-03-22 北京工业大学 SVM (support vector machine)-based medical image blood vessel recognition method
CN107248161A (en) * 2017-05-11 2017-10-13 江西理工大学 Retinal vessel extracting method is supervised in a kind of having for multiple features fusion
CN107704886A (en) * 2017-10-20 2018-02-16 北京工业大学 A kind of medical image hierarchy system and method based on depth convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Exploiting ensemble learning for automatic cataract detection and grading";Ji-Jiang Yang等;《Computer Methods and Programs in Biomedicine》;20160229;第124卷;第45-57页,图7-2 *

Also Published As

Publication number Publication date
CN109615614A (en) 2019-04-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant