CN112734828B - Method, device, medium and equipment for determining the centerline of a fundus blood vessel

Info

Publication number: CN112734828B (granted publication of application CN202110116256.9A; earlier publication: CN112734828A)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: image, blood vessel, seed, fundus, region
Inventors: 柯鑫, 董洲, 凌赛广
Assignee (current and original): Yiwei Science And Technology Beijing Co., Ltd.
Application filed by Yiwei Science And Technology Beijing Co., Ltd., with priority to CN202110116256.9A
Legal status: Active (granted)

Classifications

    • G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T7/00: Image analysis
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/0012: Biomedical image inspection
    • G06T7/11: Region-based segmentation
    • G06T7/136: Segmentation or edge detection involving thresholding
    • G06T2207/10024: Color image (image acquisition modality)
    • G06T2207/20101: Interactive definition of point of interest, landmark or seed
    • G06T2207/20104: Interactive definition of region of interest [ROI]
    • G06T2207/30041: Eye; retina; ophthalmic
    • G06T2207/30101: Blood vessel; artery; vein; vascular

Abstract

The invention provides a method, a device, a medium and equipment for determining the centerline of a fundus blood vessel, wherein the method comprises the following steps: obtaining a blood vessel pre-selection region image from a fundus image; calculating a feature value and a feature vector for each pixel point in the blood vessel pre-selection region image according to a feature extraction operator; calculating the sub-pixel-level displacement of each pixel point in the direction perpendicular to the blood vessel according to the feature value and feature vector of each pixel point; obtaining a first seed point image comprising a plurality of seed points according to the feature vector and the sub-pixel-level displacement perpendicular to the blood vessel; obtaining a second seed point image comprising a plurality of seed points by screening the first seed point image; and obtaining the blood vessel centerline of the fundus image according to the plurality of seed points on the second seed point image. The method can considerably improve the extraction accuracy of the blood vessel centerline.

Description

Method, device, medium and equipment for determining centerline of fundus blood vessel
Technical Field
The invention relates to the technical field of image processing, and in particular to a method, a device, a medium and equipment for determining the centerline of a fundus blood vessel.
Background
With the rapid development of artificial intelligence in computer technology, computer-aided diagnosis technology has also developed rapidly. Computer-aided diagnosis combines imaging technology, medical image processing technology and other possible physiological and biochemical means with computer analysis and calculation to assist imaging doctors in finding lesions and to improve diagnostic accuracy.
In medical examination, the eye is the only organ that can be examined non-invasively and that is rich in information. Research indicates that fundus retinal signs such as focal arteriolar narrowing, diffuse narrowing, arteriovenous crossing compression, changes in vessel course, copper-wire arteries, hemorrhage, cotton-wool spots, hard exudates and retinal nerve fiber layer defects are closely correlated with cardiovascular and cerebrovascular diseases and other chronic diseases.
Intelligent computer-based processing of fundus images can greatly reduce doctors' image-reading time and help them understand the current state and progression of a disease in finer detail. In the process of implementing the invention, the inventors found that the prior art has at least the following problems:
in the prior art, the extraction accuracy of the blood vessel centerline of a fundus image is low, or large-scale sample labeling and training are required; however, because fundus blood vessels are fine, the labeling work is complex, the workload is huge, and the results are strongly affected by subjective factors, so these methods have great limitations in practice.
Disclosure of Invention
The embodiments of the invention provide a method, a device, a medium and equipment for determining the centerline of a fundus blood vessel, so as to improve the extraction accuracy of the blood vessel centerline of a fundus image and/or reduce the workload of sample labeling.
According to a first aspect of the present disclosure, there is provided a method of fundus vessel centerline determination, comprising:
obtaining a blood vessel pre-selection area image according to the fundus image;
calculating a characteristic value and a characteristic vector of each pixel point in the blood vessel preselected region image according to a characteristic extraction operator;
calculating the sub-pixel level displacement of each pixel point in the direction vertical to the blood vessel according to the characteristic value and the characteristic vector of each pixel point;
obtaining a first seed point image comprising a plurality of seed points according to the feature vector and the sub-pixel level displacement in the vertical blood vessel direction;
obtaining a second seed point image comprising a plurality of seed points according to the screening processing of the first seed point image;
and obtaining a blood vessel central line of the fundus image according to the plurality of seed points on the second seed point image.
In some possible embodiments, the obtaining a first seed point image including a plurality of seed points according to the feature vector and the sub-pixel level displacement in the vertical blood vessel direction may specifically include:
and obtaining a first blood vessel seed image comprising a plurality of seed points based on a non-maximum suppression algorithm according to the feature vector and the sub-pixel level displacement in the direction vertical to the blood vessel.
In some possible embodiments, the obtaining a second seed point image including a plurality of seed points according to the screening processing on the first seed point image may specifically include:
and screening the seed points on the first seed point image according to the first seed point image and a preset gradient threshold value to obtain a second seed point image comprising a plurality of seed points.
In some possible embodiments, the feature extraction operator may include any one or a combination of any of the following: a Laplace operator, a corner detection algorithm, the Zuniga-Haralick localization operator, the Hessian matrix, and the LoG operator (Laplacian of Gaussian).
In some possible embodiments, calculating the feature value and the feature vector of each pixel point in the blood vessel pre-selection region image according to a feature extraction operator, and calculating the sub-pixel-level displacement of each pixel point in the direction perpendicular to the blood vessel according to the feature value and the feature vector of each pixel point, may specifically include:
the sub-pixel level displacement in the vertical blood vessel direction is calculated by adopting the following formula:
t = -\frac{n_x f_x + n_y f_y}{n_x^2 f_{xx} + 2 n_x n_y f_{xy} + n_y^2 f_{yy}}

where n = (n_x, n_y) is the feature vector of the pixel point, n_x and n_y are its components on the x-axis and the y-axis, f_x and f_y are the first-order partial derivatives of the image, and f_xx, f_xy, f_yy are the second-order partial derivatives; (t·n_x, t·n_y) is the sub-pixel-level displacement in the direction perpendicular to the blood vessel.
In some possible embodiments, after obtaining a first seed point image including a plurality of seed points according to the feature vector and the sub-pixel level displacement of the vertical blood vessel direction, the method may further include:
and removing the virtual detection area with consistent gray scale contained in the first sub-point image.
In some possible embodiments, the obtaining a blood vessel centerline of the fundus image according to a plurality of seed points on the second seed point image may specifically include:
and connecting the plurality of seed points on the second seed point image into a blood vessel central line by adopting a minimized cost function and/or an interpolation function.
In some possible embodiments, the obtaining of the image of the preselected region of the blood vessel from the fundus image may specifically include:
separating a single channel image or a combined channel image of a plurality of channels from the fundus image;
obtaining a basic blood vessel region image by applying a threshold segmentation method to the single-channel image or the combined-channel image of a plurality of channels; the threshold segmentation method comprises one method or a combination of more of the following: a point-based global threshold method, a region-based global threshold method, a dynamic threshold segmentation method, a local threshold segmentation method, a multi-threshold segmentation method, an adaptive threshold segmentation method, and an OTSU threshold segmentation method (Otsu's method);
based on blob analysis, removing error regions in the basic blood vessel region image to obtain a blood vessel pre-selection region image including main blood vessels and capillary vessels.
According to a second aspect of the present disclosure, there is provided an apparatus for fundus blood vessel centerline determination, comprising:
the preliminary processing module is used for obtaining a blood vessel preselected region image according to the fundus image;
the first calculation module is used for calculating the characteristic value and the characteristic vector of each pixel point in the blood vessel preselected region image according to the characteristic extraction operator;
the second calculation module is used for calculating the sub-pixel level displacement of each pixel point in the direction vertical to the blood vessel according to the characteristic value and the characteristic vector of each pixel point;
a first seed point image determining module, configured to obtain a first seed point image including a plurality of seed points according to the feature vector and the sub-pixel-level displacement in the vertical blood vessel direction;
the second seed point image determining module is used for obtaining a second seed point image comprising a plurality of seed points according to the screening processing of the first seed point image;
and the blood vessel central line determining module is used for obtaining the blood vessel central line of the fundus image according to the plurality of seed points on the second seed point image.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any one of the methods for determination of a centerline of a blood vessel of the fundus as described above.
According to a fourth aspect of the present disclosure, there is provided an apparatus for fundus blood vessel centerline determination, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the methods of fundus vessel centerline determination described above.
The technical scheme has the following beneficial effects:
the technical scheme can improve the extraction precision of the fundus image blood vessel central line, and at least reaches the extraction precision of a sub-pixel level. The method for extracting the center line of the blood vessel can greatly save the workload of labeling and simultaneously achieve high extraction precision. The embodiment can save the workload, the labeling time and the complexity of sample labeling.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1A is a flow chart of a method of fundus vessel centerline determination according to an embodiment of the present invention;
FIG. 1B is a flow chart of another method of fundus blood vessel centerline determination in accordance with an embodiment of the present invention;
FIG. 2 is a detailed flowchart of step S110 according to an embodiment of the present invention;
FIG. 3A is an original fundus image of an embodiment of the present invention as an example;
FIG. 3B is a schematic diagram of extracting a region of interest ROI, as an example, according to an embodiment of the present invention;
FIG. 3C is a diagram illustrating an enhanced blood vessel after an enhancement treatment according to an exemplary embodiment of the present invention;
fig. 3D is an image of a preselected region of a blood vessel obtained by threshold segmentation and blob analysis, as an example, according to an embodiment of the present invention;
FIG. 3E is an overall vessel region image, as an example, according to an embodiment of the present invention;
FIG. 3F is a partially enlarged image of a vessel centerline extracted at sub-pixel level accuracy as an example in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of connecting vessel centerline segments as one example of an embodiment of the present invention;
FIG. 5 is a functional block diagram of an apparatus for determining a centerline of a blood vessel at a fundus of an eye according to an embodiment of the present invention;
FIG. 6 is a detailed functional block diagram of the preliminary processing module 210 according to an embodiment of the present invention;
FIG. 7 is a functional block diagram of a computer-readable storage medium of an embodiment of the present invention;
fig. 8 is a functional block diagram of an apparatus for determination of a centerline of a blood vessel at a fundus of an eye according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1A is a flowchart of a method for determining a centerline of a blood vessel in a fundus of an eye according to an embodiment of the present invention. As shown in fig. 1A, the method of fundus blood vessel centerline determination includes the steps of:
s110: and obtaining a blood vessel pre-selection area image according to the fundus image. Alternatively, this step may perform preprocessing such as ROI extraction, enhancement processing, normalization processing, and/or dessication processing on the fundus image to obtain a blood vessel pre-selected region image. Alternatively, in this step, after one or more of the above-described preprocessing processes are performed on the fundus image, a threshold segmentation process and a blob analysis process may be performed to obtain a blood vessel pre-selection region image.
S120: and calculating the characteristic value and the characteristic vector of each pixel point in the blood vessel preselected region image according to the characteristic extraction operator. Wherein, the characteristic value may include: a gradient value.
S130: and calculating the sub-pixel level displacement of each pixel point in the direction vertical to the blood vessel according to the characteristic value and the characteristic vector of each pixel point.
S140: a first seed point image including a plurality of seed points is obtained according to the feature vector and the sub-pixel level displacement perpendicular to the blood vessel direction.
S150: and obtaining a second seed point image comprising a plurality of seed points according to the screening processing of the first seed point image.
S160: from the plurality of seed points on the second seed point image, a blood vessel center line of the fundus image is obtained.
In some variant embodiments, step S150-160 may not be performed, but step S150': from a first seed point image including a plurality of seed points, a blood vessel centerline of a fundus image or a blood vessel centerline image corresponding to the fundus image is obtained.
FIG. 1B is a flow chart of another method of fundus blood vessel centerline determination in accordance with an embodiment of the present invention. As shown in fig. 1B, the method of fundus blood vessel centerline determination differs from that of fig. 1A in that it includes the steps of:
s115: and carrying out image segmentation processing on the obtained blood vessel preselected region image to obtain an integral blood vessel region image.
In this step, various non-vessel structures in the blood vessel pre-selection region image can be removed by the image segmentation processing, including but not limited to: hemorrhage spots, streak hemorrhages, exudates, optic disc edges, other non-linear structures, other isolated and scattered abnormal regions, and other noise or interference.
In some embodiments, the image segmentation processing is performed on the blood vessel pre-selected region image in S115 to obtain an overall blood vessel region image, which may specifically include the following steps:
carrying out multi-scale feature analysis on the blood vessels in a plurality of regions of the blood vessel pre-selection region image; the multi-scale feature analysis means selecting a plurality of different image features and identifying them in combination to extract the corresponding blood vessels;
and combining the blood vessels of the plurality of regions, or of different types, extracted by the multi-scale feature analysis into an overall blood vessel region image (a sketch of this combination step is given after the scale-specific analyses listed below). The multi-scale feature analysis facilitates further fine extraction of the blood vessels from the blood vessel pre-selection region image, which improves the blood vessel segmentation accuracy and lays a foundation for the subsequent sub-pixel, high-accuracy extraction of the blood vessel centerline.
In some embodiments, the multiscale feature analysis of the blood vessel in a plurality of regions in the image of the preselected region of the blood vessel may specifically include any one or more of the following steps:
extracting a main blood vessel of the optic disc region based on the first scale feature analysis; by way of example, the first scale features include any one or more of the following in combination: the maximum caliber width, area, line length, angle, roundness and other image characteristics of the blood vessel.
Extracting capillary vessels of the macular region based on the second scale feature analysis; as an example, deep learning target detection may be employed to identify capillaries of the macular region.
Extracting blood vessels of the edge region of the blood vessel pre-selection region image based on the third scale feature analysis; as an example, the edge region includes a region that is relatively far from the center of the image, and extracting the edge region image can achieve more accurate calculation of the blood vessel density.
Extracting abnormal blood vessels based on fourth scale feature analysis; as an example, a blood vessel with abnormal curvature or abnormal color is identified and extracted. The blood vessels of hypertensive patients are usually whitish and bright, and the blood vessels are normally reddish and dark against the background.
And removing noise and/or streak-hemorrhage interference based on a fifth-scale feature analysis. For example, noise and streak hemorrhages are filtered out according to their non-linear shape characteristics, such as roundness and rectangularity.
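The sketch below illustrates the combination step referred to above: each scale-specific analysis is assumed to produce a binary mask (optic-disc trunk vessels, macular capillaries, edge-region vessels, abnormal vessels, plus masks of regions identified as noise or streak hemorrhages), and the overall blood vessel region image is formed by merging them. The function and argument names are illustrative; the patent does not prescribe this particular representation.

import numpy as np

def combine_multiscale_results(vessel_masks, noise_masks=()):
    """Merge per-scale vessel masks (e.g. optic-disc trunk vessels, macular
    capillaries, edge-region vessels, abnormal vessels) into one overall
    blood vessel region image, then subtract regions identified as noise or
    streak hemorrhages by the fifth-scale analysis."""
    overall = np.zeros_like(vessel_masks[0], dtype=bool)
    for mask in vessel_masks:
        overall |= mask.astype(bool)
    for mask in noise_masks:
        overall &= ~mask.astype(bool)
    return overall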
And S120': and calculating the characteristic value and the characteristic vector of each pixel point in the whole blood vessel region image according to the characteristic extraction operator.
S130': and calculating the sub-pixel level displacement of each pixel point in the direction vertical to the blood vessel according to the characteristic value and the characteristic vector of each pixel point in the whole blood vessel region image.
In some embodiments, the obtaining a first seed point image including a plurality of seed points according to the feature vector and the sub-pixel level displacement in the vertical blood vessel direction in step S140 may specifically include:
a first vessel seed image including a plurality of seed points is obtained based on a Non-Maximum suppression (NMS) algorithm according to the feature vector and a sub-pixel level displacement perpendicular to the vessel direction. The NMS algorithm suppresses elements that are not maxima, which can be understood as local maximum search, obtains local maxima, and screens out (suppresses) the rest of the values in the neighborhood. Non-maximal suppression may suppress all gradient values except local maxima or sub-pixel level displacements in the vertical vessel direction, which indicate the locations with the most intense intensity value changes. Non-maxima inhibit NMS can more accurately locate seed points of vessel centerlines. In the above-described first seed point image including a plurality of seed points, each first seed point has a direction represented by a feature vector.
In some embodiments, the obtaining a second seed point image including a plurality of seed points according to the screening processing on the first seed point image in step S150 may specifically include:
and screening the seed points on the first seed point image according to the first seed point image and a preset gradient threshold value to obtain a second seed point image comprising a plurality of seed points. In this step, a gradient threshold is selected, and the seed points on the first seed point image are subjected to screening processing to obtain seed points with gradient values meeting a preset condition, for example, seed points or pixel points with gradient values below the preset gradient threshold are removed, seed points with gradient values greater than or equal to the gradient threshold are obtained, and a plurality of screened seed points form a second seed point image. This step is beneficial to obtaining seed points meeting the requirements and removing some non-centerline points. The gradient value threshold value can be directly given, for example, the gradient value distribution of the image is observed by pure people, a proper characteristic value is selected, or the gradient threshold value is obtained by analyzing the image by using methods such as histogram analysis and the like in the image processing process. And each feature vector is used for indicating the direction of the corresponding first seed point or second seed point, and all the second seed points are connected according to the plurality of seed points on the second seed point image and the directions respectively indicated by the feature vectors corresponding to the plurality of seed points, so that the blood vessel central line of the fundus image is obtained.
In some embodiments, the feature extraction operator in step S120 may include any one or a combination of any of the following: a Laplace operator, a corner detection algorithm, the Zuniga-Haralick positioning operator, the Hessian matrix, and the LoG operator (Laplacian of Gaussian).
In some embodiments, calculating the feature value and the feature vector of each pixel point in the blood vessel pre-selection region image according to the feature extraction operator, and calculating the sub-pixel-level displacement of each pixel point in the direction perpendicular to the blood vessel according to the feature value and the feature vector of each pixel point (steps S120 to S130), may specifically include:
the sub-pixel level displacement in the vertical blood vessel direction is calculated by adopting the following formula:
t = -\frac{n_x f_x + n_y f_y}{n_x^2 f_{xx} + 2 n_x n_y f_{xy} + n_y^2 f_{yy}}

where n = (n_x, n_y) is the feature vector of the pixel point, n_x and n_y are its components on the x-axis and the y-axis, f_x and f_y are the first-order partial derivatives of the image, and f_xx, f_xy, f_yy are the second-order partial derivatives; (t·n_x, t·n_y) is the sub-pixel-level displacement in the direction perpendicular to the blood vessel.
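The following Python sketch shows one way to evaluate this formula over a whole image, assuming NumPy/SciPy: Gaussian derivatives provide f_x, f_y, f_xx, f_xy, f_yy, the Hessian is eigen-decomposed at every pixel, the eigenvector of the largest-magnitude eigenvalue is taken as the normal n perpendicular to the vessel, and t then gives the sub-pixel displacement (t·n_x, t·n_y). The smoothing scale sigma and the sign conventions are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def subpixel_displacement(img, sigma=1.5):
    """Per-pixel Hessian eigen-analysis and sub-pixel shift along the
    direction perpendicular to the vessel. Returns the dominant eigenvalue,
    the normal components (n_x, n_y) and the displacement (t*n_x, t*n_y)."""
    img = img.astype(np.float64)
    # Gaussian derivatives; axis 0 is y (rows), axis 1 is x (columns).
    fx = gaussian_filter(img, sigma, order=(0, 1))
    fy = gaussian_filter(img, sigma, order=(1, 0))
    fxx = gaussian_filter(img, sigma, order=(0, 2))
    fyy = gaussian_filter(img, sigma, order=(2, 0))
    fxy = gaussian_filter(img, sigma, order=(1, 1))

    # 2x2 Hessian at every pixel, shape (H, W, 2, 2); index 0 = x, 1 = y.
    hess = np.stack([np.stack([fxx, fxy], axis=-1),
                     np.stack([fxy, fyy], axis=-1)], axis=-2)
    eigvals, eigvecs = np.linalg.eigh(hess)            # ascending eigenvalues
    idx = np.argmax(np.abs(eigvals), axis=-1)          # strongest curvature
    n = np.take_along_axis(eigvecs, np.expand_dims(idx, (-2, -1)), axis=-1)[..., 0]
    lam = np.take_along_axis(eigvals, idx[..., None], axis=-1)[..., 0]

    nx, ny = n[..., 0], n[..., 1]
    # t = -(n_x f_x + n_y f_y) / (n_x^2 f_xx + 2 n_x n_y f_xy + n_y^2 f_yy)
    denom = nx * nx * fxx + 2.0 * nx * ny * fxy + ny * ny * fyy
    denom = np.where(np.abs(denom) < 1e-12, 1e-12, denom)
    t = -(nx * fx + ny * fy) / denom
    return lam, nx, ny, t * nx, t * ny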
In some embodiments, after obtaining the first seed point image including a plurality of seed points according to the feature vector and the sub-pixel level displacement in the vertical blood vessel direction in step S140, the method may further include:
and removing the virtual detection area with consistent gray level contained in the first sub-point image.
In some embodiments, the obtaining the blood vessel centerline of the fundus image from the plurality of seed points on the second seed point image in step S160 may specifically include:
and connecting the plurality of seed points on the second seed point image into the vessel central line by adopting a cost function and/or an interpolation function. The interpolation function may comprise any one or a combination of any plurality of the following functions: linear interpolation, cubic convolution interpolation, least square interpolation, newton interpolation, lagrange interpolation. The blood vessel central line with the accuracy of sub-pixel level or above can be obtained by the method.
FIG. 4 is a schematic diagram of connecting vessel centerline segments as an example of an embodiment of the present invention. As shown in fig. 4, two tangent lines T1 and T2 are respectively made at adjacent end points of two adjacent vessel centerline segments CLS1 and CLS2, an included angle between the tangent lines T1 and T2 is denoted as θ, and a distance between the two adjacent end points is denoted as d. In another embodiment, the connecting the plurality of seed points on the second seed point image into the final complete blood vessel centerline specifically includes the following steps: the method comprises the steps of firstly performing preliminary connection to form a plurality of separated or separated vessel centerline segments, and then performing further screening connection on the extracted plurality of vessel centerline segments according to the characteristics of one or more of the extending direction (for example, the tangential direction of the end points of two adjacent vessel centerline segments in fig. 4), the angle theta (included angle) between the end point tangents T1 and T2, the distance d between the adjacent end points, and the like, so as to obtain the final complete vessel centerline. Specifically, if the distance d between the current vessel centerline segment CLS1 and the adjacent vessel centerline segment CLS2 is less than or equal to the preset distance threshold, it is determined to connect the current vessel centerline segment CLS1 with the adjacent vessel centerline segment CLS2, otherwise, no connection is made. If the included angle theta between the current blood vessel centerline segment CLS1 and the adjacent blood vessel centerline segment CLS2 or the blood vessel centerline segment to be connected is within a preset included angle range (i.e. smaller than a preset included angle threshold), determining to connect the current blood vessel centerline segment CLS1 with the adjacent blood vessel centerline segment CLS2 or the blood vessel centerline segment to be connected, otherwise, not connecting. Or when the two conditions are simultaneously met, connecting the two vessel centerline segments. In other cases, if it is identified that the current vessel centerline segment CLS1 is a main vessel and the adjacent vessel centerline segment CLS2 is a branch vessel of the main vessel, even if the angle θ between the end point tangents T1, T2 is greater than the above-mentioned angle threshold, the CLS2 as the branch vessel is connected to the adjacent end point of the CLS1 as the main vessel.
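A sketch of the connection decision described above follows, assuming each centerline segment is an ordered list of (x, y) points. The tangent at an end point is estimated from the last few points of the segment; the distance threshold, angle threshold and the number of points used for the tangent are placeholders rather than values from the patent, and the sketch implements the variant in which both the distance and the angle condition must hold, with the branch-vessel exception noted above.

import numpy as np

def endpoint_tangent(segment, k=5):
    """Tangent direction at the tail of an ordered centerline segment,
    estimated from its last k points (k is an illustrative choice)."""
    pts = np.asarray(segment[-k:], dtype=float)
    d = pts[-1] - pts[0]
    return d / (np.linalg.norm(d) + 1e-12)

def should_connect(seg_a, seg_b, max_dist=15.0, max_angle_deg=30.0,
                   b_is_branch_of_a=False):
    """Decide whether to join the tail of seg_a to the head of seg_b using
    the distance-d / angle-theta rule; max_dist and max_angle_deg are
    placeholder thresholds, not values from the patent."""
    end_a = np.asarray(seg_a[-1], dtype=float)
    start_b = np.asarray(seg_b[0], dtype=float)
    d = float(np.linalg.norm(start_b - end_a))

    t1 = endpoint_tangent(seg_a)
    t2 = endpoint_tangent(list(reversed(seg_b)))   # tangent at the head of seg_b
    cos_theta = np.clip(abs(float(np.dot(t1, t2))), 0.0, 1.0)
    theta = float(np.degrees(np.arccos(cos_theta)))

    if b_is_branch_of_a:
        # A recognised branch is attached to its main vessel even when the
        # tangent angle exceeds the threshold, provided it is close enough.
        return d <= max_dist
    return d <= max_dist and theta <= max_angle_deg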
In some embodiments, after extracting the vessel centerline of the vessel pre-selection region image or fundus image, the method may further include the steps of: combining the extracted blood vessel central lines; and smoothing the combined vessel central line to remove error vessel central line segments caused by noise. The combination and smoothing means that screening and connection are performed, and then smoothing is performed based on an interpolation algorithm, so that the image becomes smooth.
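As one possible realization of the combination and smoothing step, the sketch below fits a parametric smoothing spline (SciPy) through an ordered centerline. The choice of a smoothing spline, the smoothing factor and the number of resampled points are illustrative assumptions; any of the interpolation functions listed earlier could be substituted.

import numpy as np
from scipy.interpolate import splprep, splev

def smooth_centerline(points, smoothing=2.0, n_samples=200):
    """Smooth one ordered centerline (a sequence of (x, y) seed points) with
    a parametric smoothing spline; the smoothing factor and the number of
    resampled points are illustrative."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 4:          # a cubic spline needs at least k + 1 = 4 points
        return pts
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smoothing)
    u = np.linspace(0.0, 1.0, n_samples)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])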
Fig. 2 is a detailed flowchart of step S110 according to an embodiment of the present invention. As shown in fig. 2, in some embodiments, obtaining the image of the preselected region of blood vessels from the fundus image in step S110 may specifically include:
s112: a single channel image, or a combined channel image of a plurality of channels is separated from the fundus image.
The fundus image may be an original fundus image or a pre-processed fundus image obtained by pre-processing the original fundus image. By way of example, the type of fundus image may include any of: color images, black and white images, hyperspectral or multispectral images. The color image can extract an R channel image, a G channel image or a B channel image, and can also extract any one of an H channel image, an I channel image and an S channel image; or extracting an R channel image and a G channel image, or an R channel image and a B channel image, or a B channel image and a G channel image, or a combined channel image formed by weighted combination of the R channel image, the G channel image and the B channel image; or any two channels of the H channel, the I channel and the S channel are combined to form a combined channel image. For a multispectral image or a hyperspectral image, which may include more than 3 channels, for example, 10 channels, one channel or multiple channels are extracted from the more than 3 channels and weighted-combined to obtain a single-channel image, or a combined-channel image. In other cases, the separated single channel image or the combined channel image can be normalized or enhanced to achieve the purpose of image feature enhancement.
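The sketch below illustrates step S112 for a color fundus image, assuming OpenCV/NumPy: it returns either a single channel (R, G or B, or an approximate HSI intensity channel) or a weighted combination of the three channels. The channel weights and the mode names are illustrative assumptions, not values given in the patent.

import cv2
import numpy as np

def separate_channels(fundus_bgr, mode="green", weights=(0.25, 0.60, 0.15)):
    """Return a single-channel image or a weighted combined-channel image
    from a color fundus image (OpenCV BGR order).

    mode: "red", "green", "blue", "intensity" or "combined"; the weights for
    the combined channel are illustrative."""
    b, g, r = cv2.split(fundus_bgr)
    if mode == "red":
        return r
    if mode == "green":
        return g      # the green channel usually shows the best vessel contrast
    if mode == "blue":
        return b
    if mode == "intensity":
        # HSI intensity channel approximated as the mean of R, G and B.
        return ((r.astype(np.float32) + g + b) / 3.0).astype(np.uint8)
    w_r, w_g, w_b = weights
    combined = w_r * r.astype(np.float32) + w_g * g + w_b * b
    return np.clip(combined, 0, 255).astype(np.uint8)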
S114: and obtaining a basic blood vessel region image by using a threshold segmentation method on a single channel image or a combined channel image of a plurality of channels. The thresholding method may include one or more of the following: a point-based global threshold method, a region-based global threshold method, a dynamic threshold segmentation method, a local threshold segmentation method, a multi-threshold segmentation method, an adaptive threshold segmentation method, an OTSU threshold segmentation method (the grand threshold segmentation method).
S116: based on the blob analysis, removing error regions in the basic blood vessel region image to obtain a blood vessel pre-selection region image at least comprising main blood vessels and capillary vessels.
Specifically, in this step, part of the error regions may be removed using threshold ranges on any one feature or a combination of several features of each blob, such as contour length, width, rectangularity, circularity, chromaticity, luminance, saturation, contrast and area, so that a pre-selection region including at least the main blood vessels and the capillary vessels is distinguished. Streak hemorrhages, noise and the like can be removed through the blob analysis.
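A minimal sketch of such blob analysis, assuming OpenCV/NumPy: connected blobs are kept when their shape looks line-like (low circularity or high elongation) and dropped otherwise. The specific features and all thresholds are illustrative placeholders; the patent only requires that some combination of blob features with threshold ranges be used.

import cv2
import numpy as np

def remove_error_regions(vessel_mask, min_area=30, max_circularity=0.5,
                         min_elongation=4.0):
    """Illustrative blob analysis for step S116: keep elongated, line-like
    blobs (vessels) and drop compact blobs such as hemorrhage spots and
    noise. All thresholds are placeholders."""
    contours, _ = cv2.findContours(vessel_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cleaned = np.zeros_like(vessel_mask)
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area < min_area:
            continue
        perimeter = cv2.arcLength(cnt, True)
        circularity = 4.0 * np.pi * area / (perimeter * perimeter + 1e-12)
        (_, _), (w, h), _ = cv2.minAreaRect(cnt)
        elongation = max(w, h) / (min(w, h) + 1e-12)
        # Vessel blobs are thin and elongated: low circularity or high elongation.
        if circularity <= max_circularity or elongation >= min_elongation:
            cv2.drawContours(cleaned, [cnt], -1, 255, thickness=cv2.FILLED)
    return cleaned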
FIG. 3A is an original fundus image of an embodiment of the present invention as an example; FIG. 3B is a schematic diagram of extracting a region of interest ROI, as an example, according to an embodiment of the present invention; FIG. 3C is a diagram illustrating an enhanced blood vessel after an enhancement treatment according to an exemplary embodiment of the present invention; fig. 3D is an image of a preselected region of a blood vessel obtained by threshold segmentation and blob analysis, as an example, according to an embodiment of the present invention. FIG. 3E is an overall vessel region image, as an example, of an embodiment of the present invention; fig. 3F is an exemplary enlarged partial image of the vessel centerline extracted at sub-pixel level accuracy according to an embodiment of the present invention. With reference to fig. 3A to 3F, the fundus vascular centerline with a sub-pixel level or higher accuracy can be obtained by the above processing procedure of the embodiment of the present invention, which is beneficial to subsequent high-accuracy quantitative measurement of fundus vascular information (index).
The technical scheme of the method for determining the fundus blood vessel center line can improve the extraction precision of the fundus image blood vessel center line, and at least achieves the extraction precision of a sub-pixel level. The method for extracting the center line of the blood vessel provided by the embodiment of the invention can greatly save the workload of labeling and simultaneously achieve high extraction precision. The technical scheme of the embodiment can save the workload, the labeling time and the complexity of the sample labeling and avoid the adverse effect of subjective factors on the sample labeling.
Fig. 5 is a functional block diagram of an apparatus for determining a centerline of a blood vessel in a fundus of an eye according to an embodiment of the present invention. As shown in fig. 5, the fundus blood vessel center line determining apparatus 200 includes:
a preliminary processing module 210 for obtaining a blood vessel pre-selection region image from the fundus image;
the first calculating module 220 is configured to calculate a feature value and a feature vector of each pixel point in the blood vessel preselected region image according to the feature extraction operator;
the second calculating module 230 is configured to calculate a sub-pixel level displacement of each pixel point in the direction perpendicular to the blood vessel according to the feature value and the feature vector of each pixel point;
a first seed point image determining module 240, configured to obtain a first seed point image including a plurality of seed points according to the feature vector and the sub-pixel-level displacement in the direction perpendicular to the blood vessel;
a second seed point image determining module 250, configured to obtain a second seed point image including a plurality of seed points according to the screening processing on the first seed point image;
a blood vessel centerline determining module 260 for obtaining a blood vessel centerline of the fundus image from the plurality of seed points on the second seed point image.
In some embodiments, the apparatus 200 for fundus blood vessel centerline determination may further include:
and the fine extraction module 115 is configured to perform image segmentation on the obtained blood vessel preselected region image to obtain an entire blood vessel region image.
The first calculating module 220' replaces the first calculating module 220, and is configured to calculate a feature value and a feature vector of each pixel point in the whole blood vessel region image according to the feature extraction operator.
The fine extraction module 115 may remove various non-vessel structures in the blood vessel pre-selection region image through the image segmentation processing, including but not limited to: hemorrhage spots, streak hemorrhages, exudates, optic disc edges, other non-linear structures, other isolated and scattered abnormal regions, and other noise or interference.
A fine extraction module 115, specifically configured to perform multi-scale feature analysis on blood vessels in multiple regions in the blood vessel pre-selection region image; the multi-scale feature analysis is to select a plurality of different image features to carry out combined recognition to extract corresponding blood vessels; and combining a plurality of regions or different types of blood vessels extracted after the multi-scale feature analysis into an integral blood vessel region image. The multi-scale feature analysis is favorable for further fine blood vessel extraction of the image of the blood vessel preselected area so as to improve the blood vessel segmentation precision and lay a foundation for subsequent extraction of the sub-pixel high-precision blood vessel center line.
In some embodiments, the multi-scale feature analysis is performed on the blood vessels in a plurality of regions in the blood vessel pre-selection region image, which may specifically include, but is not limited to, any one or any plurality of the following processes:
extracting a main blood vessel of the optic disc region based on the first scale feature analysis; by way of example, the first scale features include any one or more of the following in combination: the maximum diameter width, area, line length, angle, roundness of the blood vessel and other image characteristics. Extracting capillary vessels of the macular region based on the second scale feature analysis; as an example, the macular region may be identified using deep learning target detection. Extracting blood vessels of the edge region of the blood vessel pre-selection region image based on the third scale feature analysis; as an example, the edge region includes a region that is relatively far from the center of the image, and extracting the edge region image can achieve more accurate calculation of the blood vessel density. Extracting abnormal blood vessels based on fourth scale feature analysis; as an example, a blood vessel with abnormal curvature or abnormal color is identified and extracted. The blood vessels of hypertensive patients are usually whitish and bright, and the blood vessels are normally reddish and dark against the background. And removing noise and/or interference of bleeding streaks based on the fifth scale feature analysis. For example, the noise and bleeding streaks are filtered according to their non-linear characteristics, including roundness, rectangularity, etc.
In some embodiments, the first seed point image determining module 240 may be specifically configured to: obtain a first blood vessel seed image comprising a plurality of seed points based on a non-maximum suppression algorithm according to the feature vector and the sub-pixel-level displacement perpendicular to the blood vessel direction.
In some embodiments, the second seed point image determination module 250 may be specifically configured to: and screening the seed points on the first seed point image according to the first seed point image and a preset gradient threshold value to obtain a second seed point image comprising a plurality of seed points.
In some embodiments, the feature extraction operator may include any one or a combination of any of the following: a Laplace operator, a corner detection algorithm, the Zuniga-Haralick positioning operator, the Hessian matrix, and the LoG operator (Laplacian of Gaussian).
In some embodiments, the second calculation module 230 may be specifically configured to: the sub-pixel level displacement in the vertical vessel direction is calculated by adopting the following formula:
t = -\frac{n_x f_x + n_y f_y}{n_x^2 f_{xx} + 2 n_x n_y f_{xy} + n_y^2 f_{yy}}

where n = (n_x, n_y) is the feature vector of the pixel point, n_x and n_y are its components on the x-axis and the y-axis, f_x and f_y are the first-order partial derivatives of the image, and f_xx, f_xy, f_yy are the second-order partial derivatives; (t·n_x, t·n_y) is the sub-pixel-level displacement in the direction perpendicular to the blood vessel.
In some embodiments, the first seed point image determining module 240 may be further configured to: remove false detection regions with uniform gray level contained in the first seed point image.
In some embodiments, the vessel centerline determination module 260 may be specifically configured to: and connecting the plurality of seed points on the second seed point image into a blood vessel central line by adopting a minimum cost function and/or an interpolation function. The interpolation function may comprise any one or a combination of any plurality of the following functions: linear interpolation, cubic convolution interpolation, least square interpolation, newton interpolation, lagrange interpolation.
Fig. 6 is a detailed functional block diagram of the preliminary processing module 210 according to an embodiment of the present invention. As shown in fig. 6, in some embodiments, the preliminary processing module 210 may specifically include:
a channel separation unit 212 for separating a single channel image, or a combined channel image of a plurality of channels, from the fundus image;
a threshold segmentation unit 214, configured to obtain a basic blood vessel region image from the single-channel image or the combined-channel image of a plurality of channels by using a threshold segmentation method, where the threshold segmentation method comprises one method or a combination of more of the following: a point-based global threshold method, a region-based global threshold method, a dynamic threshold segmentation method, a local threshold segmentation method, a multi-threshold segmentation method, an adaptive threshold segmentation method, and an OTSU threshold segmentation method (Otsu's method);
and the error removing unit 216 is configured to remove the error region in the basic blood vessel region image based on the blob analysis, and obtain a blood vessel pre-selection region image including the main blood vessels and the capillary vessels.
The device for determining the fundus blood vessel center line can improve the extraction precision of the fundus image blood vessel center line, and at least reaches the extraction precision of a sub-pixel level. The method for extracting the center line of the blood vessel can greatly save the workload of labeling and simultaneously achieve high extraction precision. The technical scheme of the embodiment can save the workload, the labeling time and the complexity of the sample labeling and avoid the adverse effect of subjective factors on the sample labeling.
FIG. 7 is a functional block diagram of a computer-readable storage medium of an embodiment of the present invention. As shown in fig. 7, an embodiment of the present invention further provides a computer-readable storage medium 300, a computer program 310 is stored in the computer-readable storage medium 300, and when executed by a processor, the computer program 310 implements the following steps:
obtaining a blood vessel pre-selection area image according to the fundus image;
calculating the characteristic value and the characteristic vector of each pixel point in the blood vessel preselection area image according to the characteristic extraction operator;
calculating the sub-pixel level displacement of each pixel point in the direction vertical to the blood vessel according to the characteristic value and the characteristic vector of each pixel point in the blood vessel pre-selection area image;
obtaining a first seed point image comprising a plurality of seed points according to the feature vector and the sub-pixel level displacement in the direction vertical to the blood vessel;
obtaining a second seed point image comprising a plurality of seed points according to the screening processing of the first seed point image;
from the plurality of seed points on the second seed point image, a blood vessel center line of the fundus image is obtained.
Alternatively, the computer program 310 realizes the following steps when executed by a processor:
obtaining a blood vessel pre-selection area image according to the fundus image;
performing image segmentation processing on the obtained blood vessel preselected area image to obtain an integral blood vessel area image;
calculating the characteristic value and the characteristic vector of each pixel point in the whole blood vessel region image according to the characteristic extraction operator;
calculating the sub-pixel level displacement of each pixel point in the direction vertical to the blood vessel according to the characteristic value and the characteristic vector of each pixel point in the whole blood vessel region image;
obtaining a first seed point image comprising a plurality of seed points according to the feature vector and the sub-pixel level displacement in the direction vertical to the blood vessel;
obtaining a second seed point image comprising a plurality of seed points according to the screening processing of the first seed point image;
from the plurality of seed points on the second seed point image, a blood vessel center line of the fundus image is obtained.
In some embodiments, the storage medium is further configured to store program code for performing the steps of: and obtaining a first blood vessel seed image comprising a plurality of seed points based on a non-maximum suppression algorithm according to the feature vector and the sub-pixel level displacement vertical to the blood vessel direction.
In some embodiments, the storage medium is further configured to store program code for performing the steps of: and screening the seed points on the first seed point image according to the first seed point image and a preset gradient threshold value to obtain a second seed point image comprising a plurality of seed points.
In some embodiments, the storage medium is further configured to store program code for performing the steps of: the feature extraction operator may comprise any one or a combination of any number of the following: a Laplace operator, a corner detection algorithm, the Zuniga-Haralick positioning operator, the Hessian matrix, and the LoG operator (Laplacian of Gaussian).
In some embodiments, the storage medium is further configured to store program code for performing the steps of: the sub-pixel level displacement in the vertical blood vessel direction is calculated by adopting the following formula:
t = -\frac{n_x f_x + n_y f_y}{n_x^2 f_{xx} + 2 n_x n_y f_{xy} + n_y^2 f_{yy}}

where n = (n_x, n_y) is the feature vector of the pixel point, n_x and n_y are its components on the x-axis and the y-axis, f_x and f_y are the first-order partial derivatives of the image, and f_xx, f_xy, f_yy are the second-order partial derivatives; (t·n_x, t·n_y) is the sub-pixel-level displacement in the direction perpendicular to the blood vessel.
In some embodiments, the storage medium is further configured to store program code for performing the steps of: removing false detection regions with uniform gray level contained in the first seed point image.
In some embodiments, the storage medium is further configured to store program code for performing the steps of: and connecting the plurality of seed points on the second seed point image into a blood vessel central line by adopting a minimum cost function and/or an interpolation function.
In some embodiments, the storage medium is further configured to store program code for performing the steps of:
separating a single channel image or a combined channel image of a plurality of channels from the fundus image;
obtaining a basic blood vessel region image by applying a threshold segmentation method to the single-channel image or the combined-channel image of a plurality of channels, wherein the threshold segmentation method comprises one method or a combination of more of the following: a point-based global threshold method, a region-based global threshold method, a dynamic threshold segmentation method, a local threshold segmentation method, a multi-threshold segmentation method, an adaptive threshold segmentation method, and an OTSU threshold segmentation method (Otsu's method);
based on blob analysis, removing error regions in the basic blood vessel region image to obtain a blood vessel pre-selection region image including main blood vessels and capillary vessels.
The computer readable storage medium may include physical means for storing information, typically by digitizing the information for storage on a medium using electrical, magnetic or optical means. The computer-readable storage medium according to this embodiment may include: devices that store information using electrical energy, such as various types of memory, e.g., RAM, ROM, etc.; devices that store information using magnetic energy such as hard disks, floppy disks, tapes, core memories, bubble memories, and usb disks; devices that store information optically, such as CDs or DVDs. Of course, there are other ways of storing media that can be read, such as quantum memory, graphene memory, and so forth.
Fig. 8 is a functional block diagram of an apparatus for determination of a centerline of a blood vessel at a fundus of an eye according to an embodiment of the present invention. As shown in fig. 8, the device includes one or more processors, a communication interface, a memory, and a communication bus, wherein the processors, the communication interface, and the memory communicate with each other through the communication bus.
A memory for storing a computer program;
one or more processors configured to execute the program stored in the memory, the one or more processors configured to perform the steps of:
obtaining a blood vessel pre-selection area image according to the fundus image;
calculating a characteristic value and a characteristic vector of each pixel point in the blood vessel preselected region image according to the characteristic extraction operator;
calculating the sub-pixel level displacement of each pixel point in the direction vertical to the blood vessel according to the characteristic value and the characteristic vector of each pixel point in the blood vessel preselected region image;
obtaining a first seed point image comprising a plurality of seed points according to the feature vector and the sub-pixel level displacement in the direction vertical to the blood vessel;
obtaining a second seed point image comprising a plurality of seed points according to the screening processing of the first seed point image;
from the plurality of seed points on the second seed point image, a blood vessel center line of the fundus image is obtained.
Or, one or more processors, when executing the program stored in the memory, implement the following steps:
obtaining a blood vessel pre-selection area image according to the fundus image;
carrying out image segmentation processing on the obtained blood vessel preselected area image to obtain an integral blood vessel area image;
calculating the characteristic value and the characteristic vector of each pixel point in the whole blood vessel region image according to the characteristic extraction operator;
calculating the sub-pixel level displacement of each pixel point in the direction vertical to the blood vessel according to the characteristic value and the characteristic vector of each pixel point in the whole blood vessel region image;
obtaining a first seed point image comprising a plurality of seed points according to the feature vector and the sub-pixel level displacement in the direction vertical to the blood vessel;
obtaining a second seed point image comprising a plurality of seed points according to the screening processing of the first seed point image;
from the plurality of seed points on the second seed point image, a blood vessel center line of the fundus image is obtained.
In some exemplary embodiments, the processor may further execute the program code of the following steps: and obtaining a first blood vessel seed image comprising a plurality of seed points based on a non-maximum suppression algorithm according to the feature vector and the sub-pixel level displacement vertical to the blood vessel direction.
In some exemplary embodiments, the processor may further execute the program code of the following steps: and screening the seed points on the first seed point image according to the first seed point image and a preset gradient threshold value to obtain a second seed point image comprising a plurality of seed points.
In some exemplary embodiments, the processor may further execute the program code of the following steps: the feature extraction operator comprises any one or any combination of the following: a Laplace operator, a corner detection algorithm, the Zuniga-Haralick positioning operator, the Hessian matrix, and the LoG operator (Laplacian of Gaussian).
In some exemplary embodiments, the sub-pixel level displacement in the direction perpendicular to the blood vessel is calculated using the following formula:
(formula image omitted from the extracted text; a reconstruction is given below)
where n is the feature vector of each pixel point, n_x is the component of the feature vector n on the x-axis, and n_y is the component of the feature vector n on the y-axis; f_x and f_y are the first-order partial derivatives, and f_xx, f_xy and f_yy are the second-order partial derivatives.
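The embedded formula image did not survive the text extraction. The symbols defined around it (n_x, n_y, f_x, f_y, f_xx, f_xy, f_yy) match the standard sub-pixel line-point formula used in Steger-style curvilinear-structure detection, so the missing expression is most likely of the following form; this is a reconstruction offered as an assumption, not a verbatim copy of the patent formula:

$$t = -\frac{n_x f_x + n_y f_y}{n_x^2 f_{xx} + 2\,n_x n_y f_{xy} + n_y^2 f_{yy}}, \qquad (p_x,\, p_y) = (t\,n_x,\; t\,n_y)$$

Here t is the scalar offset along the unit eigenvector n, and (p_x, p_y) is the resulting sub-pixel displacement perpendicular to the blood vessel; consistent with the non-maximum suppression step above, a pixel would be accepted as a centerline seed when both |p_x| ≤ 0.5 and |p_y| ≤ 0.5.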
In some exemplary embodiments, the processor may further execute the program code of the following step: removing falsely detected regions of uniform gray level contained in the first seed point image.
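A possible implementation of this false-detection removal is sketched below, using local gray-level standard deviation as the measure of "uniform gray level"; the window size, the threshold, and the SciPy-based implementation are illustrative assumptions rather than details given by the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def drop_flat_seeds(seed_mask, image, win=7, std_thresh=2.0):
    """Discard seed points lying in regions of near-uniform gray level."""
    img = image.astype(np.float32)
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img * img, size=win)
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))   # local variance -> std
    return seed_mask & (local_std >= std_thresh)
```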
In some exemplary embodiments, the processor may further execute the program code of the following step: connecting the plurality of seed points on the second seed point image into a blood vessel centerline using a minimum cost function and/or an interpolation function.
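The sketch below shows one simple way this linking step could look for a single, unbranched vessel segment: greedy nearest-neighbour ordering as a rudimentary minimum-cost rule, followed by cubic-spline interpolation with SciPy. The function name, the cost definition, and the parameters are assumptions; the patent leaves the exact cost function and interpolation scheme open.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def link_seeds_to_centerline(seed_xy, n_samples=200):
    """seed_xy: (N, 2) array of seed coordinates for one vessel segment."""
    pts = [tuple(p) for p in np.asarray(seed_xy, dtype=float)]
    chain, rest = [pts[0]], pts[1:]
    while rest:                                   # greedy minimum-cost linking
        last = np.asarray(chain[-1])
        dists = [np.linalg.norm(np.asarray(p) - last) for p in rest]
        chain.append(rest.pop(int(np.argmin(dists))))
    chain = np.asarray(chain)
    if len(chain) < 2:                            # nothing to interpolate
        return chain
    k = min(3, len(chain) - 1)                    # spline degree
    tck, _ = splprep([chain[:, 0], chain[:, 1]], s=0, k=k)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    return np.column_stack([x, y])                # densely sampled centerline
```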
In some exemplary embodiments, the processor may further execute the program code of the following steps:
separating a single channel image or a combined channel image of a plurality of channels from the fundus image;
obtaining a basic blood vessel region image by applying a threshold segmentation method to the single-channel image or the combined channel image of a plurality of channels, wherein the threshold segmentation method comprises one or a combination of the following: a point-based global threshold method, a region-based global threshold method, a dynamic threshold segmentation method, a local threshold segmentation method, a multi-threshold segmentation method, an adaptive threshold segmentation method, and the Otsu (maximum between-class variance) threshold segmentation method;
based on blob analysis, removing erroneous regions in the basic blood vessel region image to obtain a blood vessel pre-selection region image including main blood vessels and capillary vessels (a sketch of this pre-selection pipeline follows).
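As a concrete illustration of this pre-selection pipeline, the OpenCV sketch below takes the green channel, which usually carries the strongest vessel contrast in color fundus images, applies Otsu thresholding, and removes small connected components by blob (connected-component) analysis. The CLAHE enhancement, the single-channel choice, and the area threshold are illustrative assumptions rather than requirements of the patent.

```python
import cv2
import numpy as np

def vessel_preselection(fundus_bgr, min_area=50):
    """Return a binary mask (0/255) of candidate vessel regions."""
    green = fundus_bgr[:, :, 1]                              # green channel of a BGR image
    green = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(green)
    # vessels are darker than the background, so invert before Otsu
    _, binary = cv2.threshold(255 - green, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # blob analysis: keep only connected components above a minimum area
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    keep = np.zeros_like(binary)
    for i in range(1, n):                                    # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 255
    return keep
```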
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus. The communication interface is used for communication between the above electronic device and other devices.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, such as a central processing unit (CPU) or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in this specification are described in a related manner; the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus, electronic device, and readable storage medium embodiments are substantially similar to the method embodiments, their description is relatively brief, and the relevant points can be found in the corresponding parts of the method embodiments.
There are many hardware description languages (HDLs), such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL, HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that the hardware circuitry implementing the logical method flow can be readily obtained simply by programming the method flow in one of the above hardware description languages and compiling it into an integrated circuit.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although the present application provides the method steps described in the embodiments or flowcharts, more or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. When an actual apparatus or end product is implemented, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or figures (for example, in a parallel-processor or multi-threaded environment, or even in a distributed data processing environment). The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded.
For convenience of description, the above apparatuses are described as being divided into various modules by function, each module being described separately. Of course, when the present application is implemented, the functions of the modules may be realized in one or more pieces of software and/or hardware, or a module realizing one function may be implemented by a combination of a plurality of sub-modules or sub-units. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is only a logical functional division, and other divisions may be used in practice: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (8)

1. A method of fundus vessel centerline determination, comprising:
obtaining a blood vessel pre-selection region image according to the fundus image;
performing image segmentation processing on the obtained blood vessel pre-selection region image to obtain a whole blood vessel region image;
calculating a feature value and a feature vector of each pixel point in the whole blood vessel region image according to a feature extraction operator; calculating the sub-pixel level displacement of each pixel point in the direction perpendicular to the blood vessel according to the feature value and the feature vector of each pixel point in the whole blood vessel region image;
obtaining a first seed point image comprising a plurality of seed points based on a non-maximum suppression algorithm, according to the feature vector and the sub-pixel level displacement in the direction perpendicular to the blood vessel; removing falsely detected regions of uniform gray level contained in the first seed point image;
screening the seed points on the first seed point image according to the first seed point image and a preset gradient threshold to obtain a second seed point image comprising a plurality of seed points, wherein a gradient value of each seed point in the second seed point image is greater than or equal to the gradient threshold;
connecting the plurality of seed points on the second seed point image into a blood vessel centerline of the fundus image using a minimum cost function and/or an interpolation function;
wherein the performing of image segmentation processing on the obtained blood vessel pre-selection region image to obtain the whole blood vessel region image specifically comprises:
performing multi-scale feature analysis on blood vessels of a plurality of regions in the blood vessel pre-selection region image, which specifically comprises: extracting main blood vessels of the optic disc region based on a first scale feature analysis, the first scale features comprising any combination of the following image features: maximum caliber width, area, line length, angle, and roundness of the blood vessel; extracting capillary vessels of the macular region based on a second scale feature analysis; extracting blood vessels of the edge region of the blood vessel pre-selection region image based on a third scale feature analysis; identifying and extracting blood vessels with abnormal curvature or abnormal color based on a fourth scale feature analysis; and, based on a fifth scale feature analysis, screening out and filtering noise and strip-shaped hemorrhages according to the non-linear features of the noise and the strip-shaped hemorrhages;
and combining the plurality of regions or different types of blood vessels extracted after the multi-scale feature analysis into a whole blood vessel region image.
2. The method of claim 1, wherein the feature extraction operator comprises any one or a combination of any plurality of the following: the Laplacian operator, a corner detection algorithm, the Zuniga-Haralick localization operator, the Hessian matrix, and the LoG operator.
3. The method according to claim 2, wherein the calculating of the feature value and the feature vector of each pixel point according to the feature extraction operator, and the calculating of the sub-pixel level displacement of each pixel point in the direction perpendicular to the blood vessel according to the feature value and the feature vector of each pixel point, specifically comprise:
calculating the sub-pixel level displacement in the direction perpendicular to the blood vessel using the following formula:
(formula image omitted from the extracted text)
wherein n is the feature vector of each pixel point, n_x is the component of the feature vector n on the x-axis, and n_y is the component of the feature vector n on the y-axis; f_x and f_y are the first-order partial derivatives, and f_xx, f_xy and f_yy are the second-order partial derivatives.
4. The method according to claim 1, wherein the obtaining of the blood vessel pre-selection region image according to the fundus image comprises:
separating a single channel image or a combined channel image of a plurality of channels from the fundus image;
obtaining a basic blood vessel region image by using a threshold segmentation method for the single channel image or the combined channel image of a plurality of channels;
based on blob analysis, removing erroneous regions in the basic blood vessel region image to obtain a blood vessel pre-selection region image including main blood vessels and capillary vessels.
5. The method of claim 1, wherein the interpolation function comprises any one or a combination of any plurality of the following functions: linear interpolation, cubic convolution interpolation, least-squares interpolation, Newton interpolation, and Lagrange interpolation.
6. An apparatus for fundus vessel centerline determination, comprising:
a preliminary processing module, configured to obtain a blood vessel pre-selection region image according to the fundus image;
a fine extraction module, configured to perform image segmentation processing on the obtained blood vessel pre-selection region image to obtain a whole blood vessel region image; the fine extraction module is specifically configured to perform multi-scale feature analysis on blood vessels of a plurality of regions in the blood vessel pre-selection region image, which specifically comprises: extracting main blood vessels of the optic disc region based on a first scale feature analysis, the first scale features comprising any combination of the following image features: maximum caliber width, area, line length, angle, and roundness of the blood vessel; extracting capillary vessels of the macular region based on a second scale feature analysis; extracting blood vessels of the edge region of the blood vessel pre-selection region image based on a third scale feature analysis; identifying and extracting blood vessels with abnormal curvature or abnormal color based on a fourth scale feature analysis; based on a fifth scale feature analysis, screening out and filtering noise and strip-shaped hemorrhages according to the non-linear features of the noise and the strip-shaped hemorrhages; and combining the plurality of regions or different types of blood vessels extracted after the multi-scale feature analysis into a whole blood vessel region image;
a first calculation module, configured to calculate a feature value and a feature vector of each pixel point in the whole blood vessel region image according to a feature extraction operator; a second calculation module, configured to calculate the sub-pixel level displacement of each pixel point in the direction perpendicular to the blood vessel according to the feature value and the feature vector of each pixel point in the whole blood vessel region image;
a first seed point image determining module, configured to obtain a first seed point image comprising a plurality of seed points based on a non-maximum suppression algorithm, according to the feature vector and the sub-pixel level displacement in the direction perpendicular to the blood vessel, and to remove falsely detected regions of uniform gray level contained in the first seed point image;
a second seed point image determining module, configured to screen the seed points on the first seed point image according to the first seed point image and a preset gradient threshold to obtain a second seed point image comprising a plurality of seed points, wherein a gradient value of each seed point in the second seed point image is greater than or equal to the gradient threshold;
a blood vessel centerline determining module, configured to connect the plurality of seed points on the second seed point image into a blood vessel centerline of the fundus image using a minimum cost function and/or an interpolation function.
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of fundus blood vessel centerline determination according to any one of claims 1 to 5.
8. An apparatus for fundus blood vessel centerline determination, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method of fundus blood vessel centerline determination as claimed in any of claims 1 to 5.
CN202110116256.9A 2021-01-28 2021-01-28 Method, device, medium and equipment for determining center line of fundus blood vessel Active CN112734828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110116256.9A CN112734828B (en) 2021-01-28 2021-01-28 Method, device, medium and equipment for determining center line of fundus blood vessel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110116256.9A CN112734828B (en) 2021-01-28 2021-01-28 Method, device, medium and equipment for determining center line of fundus blood vessel

Publications (2)

Publication Number Publication Date
CN112734828A CN112734828A (en) 2021-04-30
CN112734828B true CN112734828B (en) 2023-02-24

Family

ID=75594354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110116256.9A Active CN112734828B (en) 2021-01-28 2021-01-28 Method, device, medium and equipment for determining center line of fundus blood vessel

Country Status (1)

Country Link
CN (1) CN112734828B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344895A (en) * 2021-06-23 2021-09-03 依未科技(北京)有限公司 High-precision fundus blood vessel diameter measuring method, device, medium and equipment
CN113470102B (en) * 2021-06-23 2024-06-11 依未科技(北京)有限公司 Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision
CN113781405B (en) * 2021-08-19 2024-03-22 上海联影医疗科技股份有限公司 Vessel centerline extraction method, apparatus, computer device, and readable storage medium
CN113610841B (en) * 2021-08-26 2022-07-08 首都医科大学宣武医院 Blood vessel abnormal image identification method and device, electronic equipment and storage medium


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899862A (en) * 2015-04-01 2015-09-09 武汉工程大学 Retinal vessel segmentation algorithm based on global or local threshold
CN106407917B (en) * 2016-09-05 2017-07-25 山东大学 The retinal vessel extracting method and system distributed based on Dynamic Multi-scale
WO2018116321A2 (en) * 2016-12-21 2018-06-28 Braviithi Technologies Private Limited Retinal fundus image processing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104573712A (en) * 2014-12-31 2015-04-29 浙江大学 Arteriovenous retinal blood vessel classification method based on eye fundus image
CN106651846A (en) * 2016-12-20 2017-05-10 中南大学湘雅医院 Method for segmenting vasa sanguinea retinae image
CN109410191A (en) * 2018-10-18 2019-03-01 中南大学 Optical fundus blood vessel localization method and its anaemia screening method based on OCT image
CN109166124A (en) * 2018-11-20 2019-01-08 中南大学 A kind of retinal vascular morphologies quantization method based on connected region
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Blood Vessel Segmentation Technology in Fundus Images; Zhou Lin; China Master's Theses Full-text Database, Information Science and Technology Series; 2011-11-15; page 6, paragraph 1, and Chapter 3, pages 23-37 *

Also Published As

Publication number Publication date
CN112734828A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN112734828B (en) Method, device, medium and equipment for determining center line of fundus blood vessel
CN112734785B (en) Method, device, medium and equipment for determining sub-pixel level fundus blood vessel boundary
Sreelatha et al. Early detection of skin cancer using melanoma segmentation technique
WO2022063198A1 (en) Lung image processing method, apparatus and device
CN109190690B (en) Method for detecting and identifying cerebral microhemorrhage points based on SWI image of machine learning
CN112734774B (en) High-precision fundus blood vessel extraction method, device, medium, equipment and system
CN111340789A (en) Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN108133476B (en) Method and system for automatically detecting pulmonary nodules
CN112734773B (en) Sub-pixel-level fundus blood vessel segmentation method, device, medium and equipment
CN116188485A (en) Image processing method, device, computer equipment and storage medium
CN110060246B (en) Image processing method, device and storage medium
CN113781403B (en) Chest CT image processing method and device
KR20150059860A (en) Method for processing image segmentation using Morphological operation
Oprisescu et al. Automatic pap smear nuclei detection using mean-shift and region growing
US10943350B2 (en) Automated segmentation of histological sections for vasculature quantification
Cheng et al. Dynamic downscaling segmentation for noisy, low-contrast in situ underwater plankton images
CN115829980A (en) Image recognition method, device, equipment and storage medium for fundus picture
Yang et al. Detection of microaneurysms and hemorrhages based on improved Hessian matrix
CN114387209A (en) Method, apparatus, medium, and device for fundus structural feature determination
CN114387218A (en) Vision-calculation-based identification method, device, medium, and apparatus for characteristics of fundus oculi
Baglietto et al. Automatic segmentation of neurons from fluorescent microscopy imaging
CN115100178A (en) Method, device, medium and equipment for evaluating morphological characteristics of fundus blood vessels
Irshad et al. Automatic optic disk segmentation in presence of disk blurring
CN112734784A (en) High-precision fundus blood vessel boundary determining method, device, medium and equipment
Alazawee et al. Analyzing and detecting hemorrhagic and ischemic strokebased on bit plane slicing and edge detection algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant