CN116596950B - Retina fundus blood vessel tracking method based on feature weighted clustering - Google Patents

Retina fundus blood vessel tracking method based on feature weighted clustering

Info

Publication number
CN116596950B
CN116596950B
Authority
CN
China
Prior art keywords
blood vessel
feature
calculating
pixel points
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310631644.XA
Other languages
Chinese (zh)
Other versions
CN116596950A (en)
Inventor
谢怡宁
龙俊
郝明
匡洪宇
张宇明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Forestry University filed Critical Northeast Forestry University
Priority to CN202310631644.XA priority Critical patent/CN116596950B/en
Publication of CN116596950A publication Critical patent/CN116596950A/en
Application granted granted Critical
Publication of CN116596950B publication Critical patent/CN116596950B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a retinal fundus blood vessel tracking method based on feature-weighted clustering, relates to the technical field of computer analysis and processing of medical images, and aims to solve the problem that existing methods have difficulty dividing blood vessels at bifurcation points during retinal fundus blood vessel tracking. The method comprises the following steps: S1, preprocessing a fundus blood vessel segmentation data set, training a fundus blood vessel semantic segmentation model with the preprocessed images, and segmenting and extracting the fundus blood vessels with the trained model; S2, skeletonizing the extracted fundus blood vessels and segmenting the skeletonized vessels; S3, calculating the features of each skeletonized vessel segment and constructing feature vectors; S4, for the vessels at each bifurcation point, calculating feature similarity measurement weights from the feature vectors; and S5, calculating the similarity between the vessels at the bifurcation points for clustering, completing the vessel tracking. The method can accurately divide blood vessels at bifurcation points, thereby achieving accurate vessel tracking and providing a basis for calculating biomarkers.

Description

Retina fundus blood vessel tracking method based on feature weighted clustering
Technical Field
The invention relates to the technical field of computer analysis and processing of medical images, in particular to a retinal fundus blood vessel tracking method based on feature-weighted clustering.
Background
The number of diabetes mellitus patients in China is growing explosively. The core reason diabetes carries high mortality and disability rates is its vascular complications, among which coronary artery disease (CAD) is the most prominent macrovascular complication and diabetic retinopathy (DR) is one of the most prominent microvascular complications. The incidence of CAD among diabetic patients can reach 55%, and CAD is the most common cause of death in diabetic patients; DR is a major blinding disease; both entail enormous social and medical resource expenditure. Therefore, how to more accurately predict, prevent, and reduce the occurrence of diabetes and its vascular complications (CAD and DR) is key to the future prevention and control of diabetes and its complications. Prevention comes first: reducing the occurrence of diabetes at the source is the most fundamental, economical, and effective management strategy, but traditional diabetes risk prediction models suffer from poor efficiency, poor stability, high cost, and high implementation difficulty. It is therefore necessary to develop efficient, stable, noninvasive, and highly accessible predictive assessment models for diabetes, DR, and CAD.
A large body of research shows that the fundus blood vessel morphology of patients with diabetes and its vascular complications changes; by capturing these subtle morphological changes, diseases can be discovered in advance and early intervention and treatment can be carried out. Researchers currently measure fundus vessel morphological change mainly through fundus blood vessel biomarkers, a method that is objective, accurate, and highly reliable. Because biomarker calculation involves computing characteristics such as the diameter, direction, and curvature of individual vessels, blood vessel tracking is needed to divide the vasculature into individual vessels. Existing tracking methods, however, have difficulty dividing vessels at bifurcation points, which greatly restricts the application of retinal fundus blood vessel tracking and calls for further improvement.
Disclosure of Invention
The invention aims to solve the technical problems that:
the existing method has the problem that bifurcation point blood vessel division is difficult in retinal fundus blood vessel tracking.
The invention adopts the technical scheme for solving the technical problems:
the invention provides a retina fundus blood vessel tracking method of feature weighted clustering, which comprises the following steps:
s1, preprocessing a fundus blood vessel segmentation data set, training a fundus blood vessel semantic segmentation model by using a preprocessed image, and segmenting and extracting fundus blood vessels by using the fundus blood vessel semantic segmentation model;
s2, skeletonizing the extracted fundus blood vessel, and segmenting the skeletonized blood vessel;
s3, calculating the characteristics of each section of skeletonized blood vessel, and constructing a characteristic vector;
s4, calculating feature similarity measurement weight according to the feature vector for the blood vessel at each bifurcation point;
and S5, calculating the similarity among blood vessels at the bifurcation points to perform clustering, and completing the blood vessel tracking.
Further, preprocessing the data set in step S1 includes cropping, rotating, scaling, and flipping the data.
Further, the step S2 of skeletonizing the extracted fundus blood vessel and segmenting the skeletonized blood vessel includes the following steps:
s21, calculating the central line of each blood vessel to obtain a skeletonizing result;
s22, examining the 8-neighborhood of every vessel pixel point in the skeletonized vessel, taking each point whose 8-neighborhood contains more than 2 vessel pixel points as a bifurcation point, and disconnecting the vessel at that point.
Further, step S21 calculates the center line of each blood vessel using a refinement algorithm, including the steps of:
s211, creating a mark image with the same size as the blood vessel segmentation image and used for marking the pixel points to be deleted;
s212, traversing edge pixel points of a blood vessel segmentation image, checking 8 neighborhood pixel points of the edge pixel points, and determining whether to delete the pixel points according to a specific refinement rule, wherein the adopted refinement rule comprises:
endpoint refinement rules: endpoint pixels having only one neighboring pixel point are deleted,
bifurcation point refinement rules: branch point pixels having three or more adjacent pixels are deleted,
breaking point refinement rules: deleting pixels having two adjacent pixels but not conforming to the vessel connectivity rule,
intersection refinement rules: deleting intersection pixels having four or more adjacent pixel points;
s213, if the pixel point needs to be deleted, marking the pixel point in the mark image as to be deleted;
s214, continuously traversing all edge pixel points until one round of traversal is completed;
s215, deleting corresponding pixel points in the segmented image according to the marked pixel points to be deleted in the mark image, and updating the segmented image;
s216, iterating the process until no pixel points which can be deleted exist, and obtaining the blood vessel center line.
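As an illustration of the mark-and-delete iteration described in steps S211 to S216, the sketch below applies the classical Zhang-Suen thinning rules to a binary vessel mask; the deletion test shown is an assumed stand-in for the endpoint, bifurcation-point, breaking-point, and intersection refinement rules above, which would replace it in practice.

```python
import numpy as np

def zhang_suen_thinning(mask: np.ndarray) -> np.ndarray:
    """Iteratively thin a binary vessel mask to a one-pixel-wide centerline.

    Illustrative stand-in for the mark-and-delete loop of steps S211 to S216;
    the patent's own refinement rules are not reproduced here.
    """
    img = (mask > 0).astype(np.uint8).copy()

    def neighbours(y, x, im):
        # P2..P9, clockwise starting from the pixel directly above (y-1, x)
        return [im[y-1, x], im[y-1, x+1], im[y, x+1], im[y+1, x+1],
                im[y+1, x], im[y+1, x-1], im[y, x-1], im[y-1, x-1]]

    changed = True
    while changed:                               # iterate until nothing can be deleted (S216)
        changed = False
        for step in range(2):                    # two sub-iterations per round
            to_delete = []                       # plays the role of the mark image (S211/S213)
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    P = neighbours(y, x, img)
                    B = sum(P)                   # number of vessel neighbours
                    A = sum((P[i] == 0 and P[(i + 1) % 8] == 1) for i in range(8))
                    if step == 0:
                        cond = (P[0] * P[2] * P[4] == 0) and (P[2] * P[4] * P[6] == 0)
                    else:
                        cond = (P[0] * P[2] * P[6] == 0) and (P[0] * P[4] * P[6] == 0)
                    if 2 <= B <= 6 and A == 1 and cond:
                        to_delete.append((y, x))
            if to_delete:
                changed = True
                for y, x in to_delete:           # delete the marked pixels and update (S215)
                    img[y, x] = 0
    return img
```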
Further, the calculating the feature of each segment of blood vessel in step S3, and constructing a feature vector, includes the following steps:
The features are calculated as follows: the trend and the diameter of the vessel at the bifurcation point are taken as the basis for vessel attribution; the vessel is fitted by the least-squares method, the slope is calculated to represent the vessel trend, and the vessel diameter is also calculated as a feature;
let the fitted vascular equation be:
y=wx+b
wherein w is the slope of the fitting line and b is the intercept of the fitting line;
The least-squares line-fitting formulas are:
w = Σ_{i=1}^{m} (x_i − x̄)(y_i − ȳ) / Σ_{i=1}^{m} (x_i − x̄)², b = ȳ − w·x̄
where m is the number of pixel points of the vessel, x̄ and ȳ are the means of the vessel abscissas and ordinates, and x_i and y_i respectively denote the abscissa and ordinate of the i-th pixel point;
The vessel diameter is calculated as:
d = (1/m) · Σ_{i=1}^{m} d_i
where m is the number of pixel points of the vessel, d denotes the average diameter of the vessel, and d_i denotes the diameter at the i-th pixel point, obtained by counting the pixel points at which the perpendicular to the vessel at the i-th pixel point intersects the vessel.
Further, the calculating the feature similarity metric weight according to the feature vector for the blood vessel at each bifurcation in step S4 includes the following steps:
calculating the similarity measurement weight of each feature:
Features with strong independence and large distribution dispersion should receive larger weights; therefore CRITIC is adopted to measure the independence between features and the coefficient-of-variation method is adopted to measure the degree of dispersion of the features, and the two are combined to calculate the weights. The CRITIC weight formula is:
W_j^1 = C_j / Σ_{k=1}^{n} C_k
where W_j^1 denotes the CRITIC weight of feature j, C_j denotes the information amount of feature j, and n denotes the total number of features;
The information amount is calculated as:
C_j = σ_j · Σ_{k=1}^{n} (1 − r_jk)
where σ_j denotes the standard deviation of feature j and r_jk denotes the correlation coefficient between feature j and feature k;
The correlation coefficient is calculated as:
r_jk = Cov(j, k) / (σ_j · σ_k)
where Cov(j, k) denotes the covariance of feature j and feature k, and σ_j and σ_k denote the standard deviations of feature j and feature k;
The weight formula of the coefficient-of-variation method is:
W_j^2 = v_j / Σ_{k=1}^{n} v_k
where W_j^2 denotes the coefficient-of-variation weight of feature j and v_j denotes the coefficient of variation of feature j, given by:
v_j = σ_j / μ_j
where μ_j denotes the mean of feature j;
The coefficient-of-variation weight and the CRITIC weight are combined, and the feature weight is calculated as:
W_j = λ·W_j^1 + (1 − λ)·W_j^2
where W_j is the weight of feature j, W_j^1 denotes the CRITIC weight of feature j, W_j^2 denotes the coefficient-of-variation weight of feature j, and λ denotes the weight balance coefficient.
Further, the step S5 of calculating the similarity between blood vessels at the bifurcation point to perform clustering, and completing the blood vessel tracking, includes the following steps:
s51, traversing all bifurcation points and calculating the similarity between the vessel segments at each bifurcation point; the feature similarity is calculated as:
S(x, y) = ( Σ_{j=1}^{n} W_j · |f_j^x − f_j^y|^p )^{1/p}
where S(x, y) denotes the similarity score between vessel x and vessel y, n denotes the total number of features, f_j^x and f_j^y denote the j-th features of vessel x and vessel y respectively, W_j is the weight of feature j, and p is a constant;
s52, if the similarity between vessel x and vessel y is not greater than a threshold τ, the distances from the starting points of vessel x and vessel y to the optic disc center are respectively calculated, and vessel x and vessel y are joined head to tail into one vessel in order of their distance from the optic disc center; if the similarity between vessel x and vessel y is greater than the threshold τ, vessel x and vessel y are two independent vessels and are not merged;
and S53, deleting the merged blood vessels to obtain a blood vessel tracking result.
A retinal fundus blood vessel tracking system based on feature-weighted clustering, the system having program modules corresponding to the steps of any of the above technical solutions, the program modules executing, at run time, the steps of the above retinal fundus blood vessel tracking method based on feature-weighted clustering.
A computer readable storage medium storing a computer program configured to implement the steps of the retinal fundus vessel tracking method of feature weighted clustering of any of the above-described technical solutions when invoked by a processor.
Compared with the prior art, the invention has the beneficial effects that:
the invention relates to a retina fundus blood vessel tracking method of feature weighted clustering, which comprises the steps of firstly dividing blood vessels of retina fundus images by using a fundus blood vessel division model. And then skeletonizing the fundus blood vessel and segmenting the skeletonized blood vessel. And calculating the characteristics of the segmented blood vessel again, and constructing a characteristic vector. The trend and the diameter of the blood vessel are extracted as clustering features, then similarity measurement weights are calculated according to the distribution of the features, so that the features with strong independence and discrete distribution have larger weights, and finally the similarity among the blood vessels at bifurcation points is calculated to perform clustering, so that the blood vessel tracking is completed. Compared with the prior art, the invention has the following three advantages: firstly, a clustering algorithm is adopted to realize vessel tracking, data marking is not needed, and the marking quantity is greatly reduced; secondly, the algorithm performs vascular division by calculating the similarity of key features (such as diameter and trend), and has good generalization performance; finally, the feature similarity measurement weight is self-adaptively determined by utilizing the distribution condition and the independence of the blood vessel features, so that the accuracy of blood vessel tracking is improved.
Drawings
FIG. 1 is a flowchart of a retinal fundus blood vessel tracking method for feature weighted clustering in an embodiment of the present invention;
fig. 2 is a training flow chart of a fundus blood vessel semantic segmentation model in an embodiment of the invention;
FIG. 3 is a flow chart of fundus blood vessel segmentation in an embodiment of the present invention;
FIG. 4 is a flow chart of feature and weight calculation in an embodiment of the invention;
fig. 5 is a flowchart of blood vessel clustering in an embodiment of the present invention.
Detailed Description
In the description of the present invention, it should be noted that the terms "first", "second", and "third" mentioned in the embodiments of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first", "second", or "third" may explicitly or implicitly include one or more such features.
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
The retinal fundus blood vessel tracking method based on feature-weighted clustering provided herein, as shown in FIG. 1, comprises the following steps:
s1, a training flow chart of a fundus blood vessel semantic segmentation model shown in FIG. 2, which comprises the following steps of preprocessing a CHASEDB data set, training a UNet model by using a preprocessed image block, and segmenting and extracting fundus blood vessels by using the UNet model:
s11, randomly dividing the fundus blood vessel semantic segmentation data set into a training set, a validation set, and a test set at a ratio of 8:1:1;
s12, cropping the training, validation, and test sets into overlapping 64×64 image blocks, and then applying rotation, scaling, flipping, and similar operations to the training-set blocks for data augmentation;
s13, training the UNet model on the training set, tuning the training hyper-parameters, selecting the best-performing model on the validation set, and evaluating it on the test set to obtain the final model;
s14, inputting the retinal fundus image to be tracked by the blood vessel into a final UNet model to obtain a blood vessel segmentation result.
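A minimal sketch of the overlapped cropping and augmentation of steps s12 and s13 is given below; the function names and the stride value are assumptions (the text only states that the 64×64 crops overlap), and scaling augmentation is omitted for brevity.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch: int = 64, stride: int = 32) -> np.ndarray:
    """Cut an image into overlapping patch-by-patch blocks.

    The 64x64 size comes from step s12; the stride of 32 is an assumption,
    since the text only states that the crops overlap.
    """
    h, w = image.shape[:2]
    blocks = [image[y:y + patch, x:x + patch]
              for y in range(0, h - patch + 1, stride)
              for x in range(0, w - patch + 1, stride)]
    return np.stack(blocks)

def augment(block: np.ndarray):
    """Yield rotated and flipped copies of a training block (scaling omitted)."""
    yield block
    for k in (1, 2, 3):
        yield np.rot90(block, k)      # 90, 180, 270 degree rotations
    yield np.fliplr(block)            # horizontal flip
    yield np.flipud(block)            # vertical flip
```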
S2, as shown in the fundus blood vessel segmentation flow chart of FIG. 3, skeletonizing the extracted fundus blood vessels and segmenting the skeletonized vessels, which comprises the following steps:
s21, calculating the central line of each blood vessel by using a thinning algorithm, and then deleting boundary redundant pixels to obtain a blood vessel skeletonizing result;
the method for calculating the central line of each blood vessel by using the refinement algorithm comprises the following steps:
s211, creating a mark image with the same size as the blood vessel segmentation image and used for marking the pixel points to be deleted;
s212, traversing edge pixel points of a blood vessel segmentation image, checking 8 neighborhood pixel points of the edge pixel points, and determining whether to delete the pixel points according to a specific refinement rule, wherein the adopted refinement rule comprises:
endpoint refinement rules: endpoint pixels having only one neighboring pixel point are deleted,
bifurcation point refinement rules: branch point pixels having three or more adjacent pixels are deleted,
breaking point refinement rules: deleting the pixel points which have two adjacent pixel points but do not meet the rule of blood vessel connectivity,
intersection refinement rules: deleting intersection pixels having four or more adjacent pixel points;
s213, if the pixel point needs to be deleted, marking the pixel point in the mark image as to be deleted;
s214, continuously traversing all edge pixel points until one round of traversal is completed;
s215, deleting corresponding pixel points in the segmented image according to the marked pixel points to be deleted in the mark image, and updating the segmented image;
s216, iterating the process until no pixel points which can be deleted exist, and obtaining a blood vessel central line at the moment;
s22, traversing all pixel points in the skeletonized blood vessel, calculating the number of pixel points of 8 neighborhoods of each pixel point, taking all points with the number of 8 neighborhood pixel points being more than 2 as bifurcation points, and setting the pixel value at the bifurcation points to 0 so as to disconnect the blood vessel segments.
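A minimal sketch of the bifurcation search of step s22, assuming the skeleton is a binary NumPy array; the 8-neighbour count is obtained with a convolution kernel.

```python
import numpy as np
from scipy.ndimage import convolve

def break_at_bifurcations(skeleton: np.ndarray) -> np.ndarray:
    """Step s22: mark skeleton pixels with more than two vessel neighbours in
    their 8-neighbourhood as bifurcation points and set them to 0, so that the
    skeleton falls apart into separate vessel segments."""
    sk = (skeleton > 0).astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=np.uint8)    # counts the 8 neighbours only
    neighbour_count = convolve(sk, kernel, mode='constant', cval=0)
    bifurcations = (sk == 1) & (neighbour_count > 2)
    out = sk.copy()
    out[bifurcations] = 0              # disconnect the vessel at each bifurcation
    return out
```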
S3, calculating the characteristics of each section of blood vessel according to the characteristics and weight calculation flow chart shown in FIG. 4, and constructing a characteristic vector, wherein the method specifically comprises the following steps of:
The features are calculated as follows: the trend and the diameter of the vessel at the bifurcation point are used as the basis for vessel attribution, so the vessel is fitted by the least-squares method, the slope is calculated to represent the vessel trend, and the vessel diameter is calculated as a further feature;
let the fitted vascular equation be:
y=wx+b
wherein w is the slope of the fitting line and b is the intercept of the fitting line;
The least-squares line-fitting formulas are:
w = Σ_{i=1}^{m} (x_i − x̄)(y_i − ȳ) / Σ_{i=1}^{m} (x_i − x̄)², b = ȳ − w·x̄
where m is the number of pixel points of the vessel, x̄ and ȳ are the means of the vessel abscissas and ordinates, and x_i and y_i respectively denote the abscissa and ordinate of the i-th pixel point;
The vessel diameter is calculated as:
d = (1/m) · Σ_{i=1}^{m} d_i
where m is the number of pixel points of the vessel, d denotes the average diameter of the vessel, and d_i denotes the diameter at the i-th pixel point, obtained by counting the pixel points at which the perpendicular to the vessel at the i-th pixel point intersects the vessel.
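A small sketch of the feature computation of step S3, assuming the pixel coordinates and per-pixel diameters of one skeleton segment are already available; a vertical segment (zero variance in x) would need separate handling.

```python
import numpy as np

def vessel_features(xs: np.ndarray, ys: np.ndarray, diameters: np.ndarray):
    """Features of one skeleton segment per step S3: least-squares slope
    (vessel trend) and mean diameter. Returns ([w, d], b)."""
    x_mean, y_mean = xs.mean(), ys.mean()
    w = np.sum((xs - x_mean) * (ys - y_mean)) / np.sum((xs - x_mean) ** 2)
    b = y_mean - w * x_mean            # intercept of the fitted line y = w*x + b
    d = diameters.mean()               # average of the per-pixel diameters d_i
    return np.array([w, d]), b
```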
S4, calculating feature similarity measurement weights according to feature vectors for blood vessels at each bifurcation point according to a feature and weight calculation flow chart shown in FIG. 4, wherein the method comprises the following steps of:
the calculation method of the feature similarity measurement weight comprises the following steps:
Features with strong independence and large distribution dispersion should receive larger weights; therefore CRITIC is adopted to measure the independence between features and the coefficient-of-variation method is adopted to measure the degree of dispersion of the features, and the two are combined to calculate the weights. The CRITIC weight formula is:
W_j^1 = C_j / Σ_{k=1}^{n} C_k
where W_j^1 denotes the CRITIC weight of feature j, C_j denotes the information amount of feature j, and n denotes the total number of features;
The information amount is calculated as:
C_j = σ_j · Σ_{k=1}^{n} (1 − r_jk)
where σ_j denotes the standard deviation of feature j and r_jk denotes the correlation coefficient between feature j and feature k;
The correlation coefficient is calculated as:
r_jk = Cov(j, k) / (σ_j · σ_k)
where Cov(j, k) denotes the covariance of feature j and feature k, and σ_j and σ_k denote the standard deviations of feature j and feature k;
The weight formula of the coefficient-of-variation method is:
W_j^2 = v_j / Σ_{k=1}^{n} v_k
where W_j^2 denotes the coefficient-of-variation weight of feature j and v_j denotes the coefficient of variation of feature j, given by:
v_j = σ_j / μ_j
where μ_j denotes the mean of feature j;
The coefficient-of-variation weight and the CRITIC weight are combined, and the feature weight is calculated as:
W_j = λ·W_j^1 + (1 − λ)·W_j^2
where W_j is the weight of feature j, W_j^1 denotes the CRITIC weight of feature j, W_j^2 denotes the coefficient-of-variation weight of feature j, and λ denotes the weight balance coefficient.
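The combined CRITIC and coefficient-of-variation weighting of step S4 might be sketched as follows, assuming a feature matrix with one row per vessel segment and one column per feature; the default value of λ is an assumption, as no value is stated above.

```python
import numpy as np

def feature_weights(F: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Similarity-measurement weights of step S4.

    F has shape (num_segments, num_features); lam is the balance coefficient
    lambda (the value 0.5 is an assumption).
    """
    sigma = F.std(axis=0)                          # per-feature standard deviation
    mu = F.mean(axis=0)                            # per-feature mean
    R = np.corrcoef(F, rowvar=False)               # feature-feature correlations r_jk
    C = sigma * np.sum(1.0 - R, axis=1)            # information amount C_j
    w_critic = C / C.sum()                         # CRITIC weight W_j^1
    v = sigma / mu                                 # coefficient of variation v_j
    w_cv = v / v.sum()                             # coefficient-of-variation weight W_j^2
    return lam * w_critic + (1.0 - lam) * w_cv     # combined weight W_j
```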
S5, as shown in the blood vessel clustering flow chart of FIG. 5, calculating the similarity between the vessels at the bifurcation points for clustering and completing the vessel tracking, which comprises the following steps:
s51, traversing all bifurcation points and calculating the similarity between the vessel segments at each bifurcation point; the feature similarity is calculated as:
S(x, y) = ( Σ_{j=1}^{n} W_j · |f_j^x − f_j^y|^p )^{1/p}
where S(x, y) denotes the similarity score between vessel x and vessel y, n denotes the total number of features, f_j^x and f_j^y denote the j-th features of vessel x and vessel y respectively, W_j is the weight of feature j, and p is a constant, here set to 2;
s52, if the similarity between vessel x and vessel y is not greater than a threshold τ, the distances from the starting points of vessel x and vessel y to the optic disc center are respectively calculated, and vessel x and vessel y are joined head to tail into one vessel in order of their distance from the optic disc center; if the similarity between vessel x and vessel y is greater than the threshold τ, vessel x and vessel y are two independent vessels and are not merged;
and S53, deleting the merged blood vessels to avoid redundancy, and obtaining a blood vessel tracking result.
When merging vessels, the merging is carried out in order of the distance from the vessel end points to the optic disc center, so that the merging is consistent with the natural course of the vessels.
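A sketch of the similarity score of step s51 and the merge decision of step s52; the greedy pairing loop and the data layout (a feature vector and a start point per segment) are assumptions made for illustration only.

```python
import numpy as np

def similarity(fx: np.ndarray, fy: np.ndarray, W: np.ndarray, p: float = 2.0) -> float:
    """Weighted similarity score of step s51 (smaller means more alike)."""
    return float(np.sum(W * np.abs(fx - fy) ** p) ** (1.0 / p))

def merge_at_bifurcation(segments, W, tau, disc_center):
    """Pair up the vessel segments meeting at one bifurcation point (step s52).

    `segments` is a list of (feature_vector, start_point) tuples. A pair is
    merged when its score does not exceed tau, and the two segments are
    ordered head to tail by the distance of their start points from the
    optic disc center. Returns the merged index pairs.
    """
    merged, used = [], set()
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if i in used or j in used:
                continue
            fi, fj = segments[i][0], segments[j][0]
            if similarity(fi, fj, W) <= tau:
                pair = sorted([i, j],
                              key=lambda k: np.linalg.norm(
                                  np.asarray(segments[k][1]) - np.asarray(disc_center)))
                merged.append(tuple(pair))      # (closer to disc, farther from disc)
                used.update({i, j})
    return merged
```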
Although the present disclosure is disclosed above, the scope of the present disclosure is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the disclosure, and such changes and modifications would be within the scope of the disclosure.

Claims (7)

1. A retinal fundus blood vessel tracking method based on feature-weighted clustering, characterized by comprising the following steps:
s1, preprocessing a fundus blood vessel segmentation data set, training a fundus blood vessel semantic segmentation model by using a preprocessed image, and segmenting and extracting fundus blood vessels by using the fundus blood vessel semantic segmentation model;
s2, skeletonizing the extracted fundus blood vessel, and segmenting the skeletonized blood vessel;
s3, calculating the characteristics of each section of skeletonized blood vessel, and constructing a characteristic vector;
s4, calculating feature similarity measurement weight according to the feature vector for the blood vessel at each bifurcation point;
s5, calculating the similarity among blood vessels at bifurcation points to perform clustering, and completing blood vessel tracking;
the step S4 of calculating the feature similarity metric weight for the blood vessel at each bifurcation according to the feature vector includes the following steps:
calculating the similarity measurement weight of each feature:
features with strong independence and large distribution dispersion have larger weights; the independence between features is measured by CRITIC, the degree of dispersion of the features is measured by the coefficient-of-variation method, and the two are combined to calculate the weights; the CRITIC weight formula is:
W_j^1 = C_j / Σ_{k=1}^{n} C_k
where W_j^1 denotes the CRITIC weight of feature j, C_j denotes the information amount of feature j, and n denotes the total number of features;
The information amount is calculated as:
C_j = σ_j · Σ_{k=1}^{n} (1 − r_jk)
where σ_j denotes the standard deviation of feature j and r_jk denotes the correlation coefficient between feature j and feature k;
The correlation coefficient is calculated as:
r_jk = Cov(j, k) / (σ_j · σ_k)
where Cov(j, k) denotes the covariance of feature j and feature k, and σ_j and σ_k denote the standard deviations of feature j and feature k;
The weight formula of the coefficient-of-variation method is:
W_j^2 = v_j / Σ_{k=1}^{n} v_k
where W_j^2 denotes the coefficient-of-variation weight of feature j and v_j denotes the coefficient of variation of feature j, given by:
v_j = σ_j / μ_j
where μ_j denotes the mean of feature j;
The coefficient-of-variation weight and the CRITIC weight are combined, and the feature weight is calculated as:
W_j = λ·W_j^1 + (1 − λ)·W_j^2
where W_j is the weight of feature j, W_j^1 denotes the CRITIC weight of feature j, W_j^2 denotes the coefficient-of-variation weight of feature j, and λ denotes the weight balance coefficient;
step S5, calculating the similarity among blood vessels at the bifurcation point for clustering to finish the blood vessel tracking, comprising the following steps:
s51, traversing all bifurcation points and calculating the similarity between the vessel segments at each bifurcation point; the feature similarity is calculated as:
S(x, y) = ( Σ_{j=1}^{n} W_j · |f_j^x − f_j^y|^p )^{1/p}
where S(x, y) denotes the similarity score between vessel x and vessel y, n denotes the total number of features, f_j^x and f_j^y denote the j-th features of vessel x and vessel y respectively, W_j is the weight of feature j, and p is a constant;
s52, if the similarity between vessel x and vessel y is not greater than a threshold τ, the distances from the starting points of vessel x and vessel y to the optic disc center are respectively calculated, and vessel x and vessel y are joined head to tail into one vessel in order of their distance from the optic disc center; if the similarity between vessel x and vessel y is greater than the threshold τ, vessel x and vessel y are two independent vessels and are not merged;
and S53, deleting the merged blood vessels to obtain a blood vessel tracking result.
2. The method of claim 1, wherein preprocessing the dataset of step S1 includes cropping, rotating, scaling, and flipping the dataset.
3. The method for tracking retinal fundus blood vessels by feature weighted clustering according to claim 1, wherein the step S2 of skeletonizing the extracted fundus blood vessels and segmenting the skeletonized blood vessels comprises the steps of:
s21, calculating the central line of each blood vessel to obtain a skeletonizing result;
s22, examining the 8-neighborhood of every vessel pixel point in the skeletonized vessel, taking each point whose 8-neighborhood contains more than 2 vessel pixel points as a bifurcation point, and disconnecting the vessel at that point.
4. The feature weighted clustering retinal fundus blood vessel tracking method according to claim 3, wherein step S21 calculates a center line of each blood vessel using a refinement algorithm, comprising the steps of:
s211, creating a mark image with the same size as the blood vessel segmentation image and used for marking the pixel points to be deleted;
s212, traversing edge pixel points of the blood vessel segmentation image, checking 8 neighborhood pixel points of the edge pixel points, and determining whether to delete the pixel points according to a refinement rule, wherein the adopted refinement rule comprises:
endpoint refinement rules: endpoint pixels having only one neighboring pixel point are deleted,
bifurcation point refinement rules: branch point pixels having three or more adjacent pixels are deleted,
breaking point refinement rules: deleting pixels having two adjacent pixels but not conforming to the vessel connectivity rule,
intersection refinement rules: deleting intersection pixels having four or more adjacent pixel points;
s213, if the pixel point needs to be deleted, marking the pixel point in the mark image as to be deleted;
s214, continuously traversing all edge pixel points until one round of traversal is completed;
s215, deleting corresponding pixel points in the segmented image according to the marked pixel points to be deleted in the mark image, and updating the segmented image;
s216, iterating the process until no pixel points which can be deleted exist, and obtaining the blood vessel center line.
5. The method for tracking retinal fundus blood vessels by feature weighted clustering according to claim 1, wherein the calculating the features of each segment of blood vessels in step S3 constructs feature vectors, comprises the steps of:
the features are calculated as follows: the trend and the diameter of the vessel at the bifurcation point are taken as the basis for vessel attribution; the vessel is fitted by the least-squares method, the slope is calculated to represent the vessel trend, and the vessel diameter is also calculated as a feature;
let the fitted vascular equation be:
y=wx+b
wherein w is the slope of the fitting line and b is the intercept of the fitting line;
The least-squares line-fitting formulas are:
w = Σ_{i=1}^{m} (x_i − x̄)(y_i − ȳ) / Σ_{i=1}^{m} (x_i − x̄)², b = ȳ − w·x̄
where m is the number of pixel points of the vessel, x̄ and ȳ are the means of the vessel abscissas and ordinates, and x_i and y_i respectively denote the abscissa and ordinate of the i-th pixel point;
The vessel diameter is calculated as:
d = (1/m) · Σ_{i=1}^{m} d_i
where m is the number of pixel points of the vessel, d denotes the average diameter of the vessel, and d_i denotes the diameter at the i-th pixel point, obtained by counting the pixel points at which the perpendicular to the vessel at the i-th pixel point intersects the vessel.
6. A retinal fundus blood vessel tracking system based on feature-weighted clustering, characterized in that: the system has program modules corresponding to the steps of any one of claims 1 to 5, and the program modules, when run, execute the steps of the retinal fundus blood vessel tracking method based on feature-weighted clustering described above.
7. A computer-readable storage medium, characterized by: the computer readable storage medium stores a computer program configured to implement the steps of the retinal fundus vessel tracking method of feature weighted clustering of any of claims 1-5 when invoked by a processor.
CN202310631644.XA 2023-05-31 2023-05-31 Retina fundus blood vessel tracking method based on feature weighted clustering Active CN116596950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310631644.XA CN116596950B (en) 2023-05-31 2023-05-31 Retina fundus blood vessel tracking method based on feature weighted clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310631644.XA CN116596950B (en) 2023-05-31 2023-05-31 Retina fundus blood vessel tracking method based on feature weighted clustering

Publications (2)

Publication Number Publication Date
CN116596950A CN116596950A (en) 2023-08-15
CN116596950B true CN116596950B (en) 2023-11-17

Family

ID=87604409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310631644.XA Active CN116596950B (en) 2023-05-31 2023-05-31 Retina fundus blood vessel tracking method based on feature weighted clustering

Country Status (1)

Country Link
CN (1) CN116596950B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127849A (en) * 2016-05-10 2016-11-16 中南大学 Three-dimensional fine vascular method for reconstructing and system thereof
CN112233135A (en) * 2020-11-11 2021-01-15 清华大学深圳国际研究生院 Retinal vessel segmentation method in fundus image and computer-readable storage medium
CN113470102A (en) * 2021-06-23 2021-10-01 依未科技(北京)有限公司 Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision
CN113902689A (en) * 2021-09-24 2022-01-07 中国科学院深圳先进技术研究院 Blood vessel center line extraction method, system, terminal and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019013779A1 (en) * 2017-07-12 2019-01-17 Mohammed Alauddin Bhuiyan Automated blood vessel feature detection and quantification for retinal image grading and disease screening

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127849A (en) * 2016-05-10 2016-11-16 中南大学 Three-dimensional fine vascular method for reconstructing and system thereof
CN112233135A (en) * 2020-11-11 2021-01-15 清华大学深圳国际研究生院 Retinal vessel segmentation method in fundus image and computer-readable storage medium
CN113470102A (en) * 2021-06-23 2021-10-01 依未科技(北京)有限公司 Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision
CN113902689A (en) * 2021-09-24 2022-01-07 中国科学院深圳先进技术研究院 Blood vessel center line extraction method, system, terminal and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Retinal Vascular Network Topology Reconstruction and Artery/Vein Classification via Dominant Set Clustering; Yitian Zhao et al.; IEEE Transactions on Medical Imaging; full text *
Retinal vessel segmentation based on neutrosophic fuzzy C-means clustering in a local feature space; 黄木连; Information & Communications (No. 08); full text *
Mixed-attribute clustering algorithm based on the entropy weight method; 孙浩军; 高玉龙; 闪光辉; 袁婷; Journal of Shantou University (Natural Science Edition) (No. 04); full text *

Also Published As

Publication number Publication date
CN116596950A (en) 2023-08-15

Similar Documents

Publication Publication Date Title
Li et al. A large-scale database and a CNN model for attention-based glaucoma detection
CN112716446B (en) Method and system for measuring pathological change characteristics of hypertensive retinopathy
CN107369160A (en) A kind of OCT image median nexus film new vessels partitioning algorithm
Roy et al. Blood vessel segmentation of retinal image using Clifford matched filter and Clifford convolution
US11915428B2 (en) Method for determining severity of skin disease based on percentage of body surface area covered by lesions
Hao et al. Anterior chamber angles classification in anterior segment OCT images via multi-scale regions convolutional neural networks
Ataer-Cansizoglu et al. Analysis of underlying causes of inter-expert disagreement in retinopathy of prematurity diagnosis
Zhang et al. MC-UNet multi-module concatenation based on U-shape network for retinal blood vessels segmentation
CN111784641B (en) Neural image curvature estimation method and device based on topological structure
CN116596950B (en) Retina fundus blood vessel tracking method based on feature weighted clustering
CN112233742A (en) Medical record document classification system, equipment and storage medium based on clustering
Nneji et al. A Dual Weighted Shared Capsule Network for Diabetic Retinopathy Fundus Classification
CN112734769B (en) Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium
Qomariah et al. Exudate Segmentation for Diabetic Retinopathy Using Modified FCN-8 and Dice Loss.
CN112329640A (en) Facial nerve palsy disease rehabilitation detection system based on eye muscle movement analysis
CN112862761B (en) Brain tumor MRI image segmentation method and system based on deep neural network
CN118115783A (en) Cornea staining analysis method based on deep learning and related training method and system
Shen et al. Retinal Vessel Image Segmentation based on Attention Mechanism and U-net Model
CN113724857A (en) Automatic diagnosis device for eye ground disease based on eye ground image retina blood vessel
Ghorbani et al. Toward Keratoconus Diagnosis: Dataset Creation and AI Network Examination
Ying et al. Human Tissue Cell Image Segmentation optimization Algorithm Based on Improved U-net Network
Wei et al. Peripapillary atrophy segmentation in fundus images via multi-task learning
Tang et al. Sensitivity Detection of Retinal Nerve Fiber Layer in Glaucoma Based on High Level Semantic Image Fusion Algorithm
Moustari et al. Enhancement of Diabetic Retinopathy Classification using Attention Guided Convolutional Neural Network
Ramya et al. REVIEW ON IMAGE PROCESSING TECHNIQUES FOR OPTIC DISC SEGMENTATION

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant